Integrate 2016 – highlights from the Microsoft product team sessions

May 11-13 is #Integrate2016 in London. I’m here with four Motion10 colleagues. Tomasso Groenendijk, Azure MVP, will give a presentation later in the week.

After arriving at the hotel on Tuesday evening (great that Integrate 2016 is in London; it saves 9.5 hours of flying for us folks from The Netherlands), I soon found my colleagues Rob Fox and Eldert Grootenboer (thank you, WhatsApp group) and we teamed up with the rest of the folks already drinking some beers in the hotel bar. After that we went to a restaurant nearby, where I think 50% of the customers were integration geeks 🙂 Nice to talk to some old friends again! During dinner, Tom Canter convinced me that there’s not enough time in the universe to do full variation testing of functions. And as always, he came with the actual proof in real numbers. I’ll inform my current customer and have them allocate more budget 😉

Early Wednesday, Eldert, Rob, Paul Baars and I gathered at the ExCeL registration booths. Soon after registering, it was really time for coffee and some breakfast snacks. The room filled up quickly.

The following is my view on the first five sessions, which were delivered by the managers of the Microsoft product teams.

At 8:45 sharp, Saravana Kumar welcomed all 380 of us. Great to see such a massive turnout.

Jim Harrer (Principal Group Program Manager) delivered his keynote after that.

Good to hear that:

  • There is a robust vision again that actually feels good
  • The development teams are growing; they are hiring engineers for both Logic Apps and BizTalk Server
  • The vision is not “all in the cloud” anymore, but a more balanced view of integration
  • BizTalk and Logic Apps will work really well together and make hybrid integration scenarios much easier, each handling their own specific integration styles
  • There’s a big bet on Logic Apps; it’s actually the technology behind Flow
  • The Microsoft integration folks don’t have tunnel vision, but actually see the value of the other Azure services and building blocks as well, such as Machine Learning for predicting topics

During Jim’s talk, Jon Fancey came on stage a couple of times to demo some of the stuff Jim was talking about.

After Jim’s keynote, Jon did his session “BizTalk Server 2016: What’s New”.

Good to see that:

  • The BizTalk UI gets a refresh to look more like the Windows 10 UI
  • Logic Apps integration is getting more mature
  • The Enterprise Integration Pack will be there soon
  • BizTalk Server 2016 will GA at the end of 2016; so we finally get SQL Server AlwaysOn

Offline, I was told that hybrid connections will be consolidated into a new gateway; stay tuned.

After a short coffee break, Jeff Hollan and Kevin Lam got on stage to talk about “Powerful integration and workflow automation”.

Some good takeaways:

  • Logic Apps Designer in Visual Studio
  • The number of out-of-the-box connectors is growing fast
  • New control flow elements and scopes that make exception handling easier and make collections of actions possible (see the sketch after this list)
  • Debugging and tracking capabilities that tie into Operations Management Suite
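
The scope and exception-handling point deserves a small illustration. Below is a minimal, hypothetical sketch in plain Python (no Azure SDK) that builds a Logic Apps-style workflow fragment as a dictionary: a scope groups a collection of actions, and a separate action only runs when the scope has failed, which is roughly how the new control flow elements enable catch-like behaviour. All action names and URIs are invented for illustration.

```python
# Hypothetical sketch: a Logic Apps-style workflow fragment expressed as a
# Python dict. The real definition language is JSON; this just mirrors its
# shape. Action names and URIs are made up.
import json

workflow_actions = {
    "ProcessOrder": {                      # a scope groups a collection of actions
        "type": "Scope",
        "actions": {
            "ValidateOrder": {
                "type": "Http",
                "inputs": {"method": "POST", "uri": "https://example.org/validate"},
            },
            "StoreOrder": {
                "type": "Http",
                "inputs": {"method": "POST", "uri": "https://example.org/store"},
                "runAfter": {"ValidateOrder": ["Succeeded"]},
            },
        },
    },
    "NotifyOnFailure": {                   # catch-like branch: runs only if the scope failed
        "type": "Http",
        "inputs": {"method": "POST", "uri": "https://example.org/alert"},
        "runAfter": {"ProcessOrder": ["Failed", "TimedOut"]},
    },
}

print(json.dumps(workflow_actions, indent=2))
```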

Between the sessions, talking to the key product team leaders, it became clear that the different teams have started to work together much better, actually resulting in re-use of code for, for example, business rules: smarter investment in functions that can be used in different environments.

What’s also become clear is that the story for on-prem integration is not “Azure Stack” anymore. Actually, it has been removed from the roadmap document now. On-prem integration is BizTalk Server.

The last session before the lunch break was by Jeff Hollan on “Advanced Integration Scenarios”.

Key takeaways:

  • Integration patterns for Logic Apps are getting much deserved attention
  • Not only happy flows; scopes greatly help here
  • The Visual Studio experience is now also available for Logic Apps, which makes ALM much better
  • Release management is getting more mature

Directly after lunch, Jon Fancey and Kevin Lam did their talk on “Enterprise Functionality Roadmap (B2B/EDI)”.

Key takeaways:

  • EDI and other B2B artifacts are now much better thought through and integrated. Schemas, Maps, Partners, Certificates and (soon) Agreements are well addressed and nicely containerized.
  • The VETER pipeline is back, but now in the right PaaS architecture (a.k.a. NotMABS); see the sketch after this list
  • A flat file parser is incorporated
  • The good old BizTalk Mapper is the transformation tool (because we all know it so well)
  • Schemas and maps are compatible with BizTalk Server
  • Trading partners and Agreements will have backward compatibility (an import capability is being worked on)
  • The ERP and database systems supported by BizTalk Server will also be supported by the Enterprise Integration Pack, and will therefore become reachable from Logic Apps.
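
To make the VETER idea concrete, here is a minimal, hypothetical sketch in plain Python of what such a Validate, Enrich, Transform, Enrich, Route pipeline does conceptually. The stage implementations, message fields and routing rule are invented for illustration; the real pipeline is built from Logic Apps actions backed by Enterprise Integration Pack artifacts (schemas, maps, agreements).

```python
# Hypothetical sketch of a VETER-style pipeline: Validate, Enrich, Transform,
# Enrich, Route. Stages and fields are invented for illustration.
from typing import Callable

def validate(msg: dict) -> dict:
    if "order_id" not in msg:
        raise ValueError("message failed schema validation")
    return msg

def enrich_with_partner(msg: dict) -> dict:
    msg["partner"] = "ContosoRetail"        # e.g. looked up from a partner agreement
    return msg

def transform(msg: dict) -> dict:
    return {"OrderId": msg["order_id"],     # e.g. the output of a map (XSLT in practice)
            "Partner": msg["partner"],
            "Total": msg.get("total", 0)}

def enrich_with_routing(msg: dict) -> dict:
    msg["destination"] = "erp" if msg["Total"] > 1000 else "crm"
    return msg

def route(msg: dict) -> str:
    return f"sent {msg['OrderId']} to {msg['destination']}"

PIPELINE: list[Callable[[dict], dict]] = [
    validate, enrich_with_partner, transform, enrich_with_routing]

def run(msg: dict) -> str:
    for stage in PIPELINE:
        msg = stage(msg)
    return route(msg)

print(run({"order_id": "42", "total": 1500}))   # -> sent 42 to erp
```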

Microsoft Flow and API Management are the next sessions by the Microsoft product team members. Flow has already been covered extensively by the #FutureOfSharePoint bloggers; just check those blogs out. API Management is a topic by itself. If exciting things get announced during Vlad’s session, I’ll blog about that later.

Those were my personal highlights from the product team presentations so far. I hope you enjoyed them.

Cheers, Gijs

My take on the Gartner iPaaS MQ 2016

Yesterday, Gartner released its Magic Quadrant (MQ) on Enterprise Integration Platform as a Service 2016.

The strategic planning assumption this is based on reads “By 2019, iPaaS will be the integration platform of choice for new integration projects overtaking the annual revenue growth of traditional application integration suites on the way”.

I think they’re right.

Microsoft did not make the Leaders quadrant this time. This is mainly because the 2016 MQ is based on cloud services that are generally available (GA), and the only GA service from Microsoft in this regard today is Microsoft Azure BizTalk Services (MABS), which is of course far from complete, as we all know, and is based on an architecture that has by now been rendered obsolete by the arrival of Azure App Service.

The relatively good news is that Microsoft did make it to the Visionaries quadrant, but IBM, Oracle and SAP are still positioned ahead of them. That’s not so good.

My take on all this is:

  • Gartner correctly positioned Microsoft in this year’s MQ for iPaaS based on what’s actually available;
  • We should quickly forget about MABS and start looking forward (Microsoft can’t afford to make another architecture and delivery screw-up like with MABS);
  • Microsoft needs to quickly release to the public a stable first version of App Service. I really hope Q2 will indeed be GA time for App Service with Logic Apps and that it functionally delivers on its promise;
  • Microsoft needs to strongly position App Service as an Application Platform as a Service but at the same time strongly position Logic Apps with API Apps and the Enterprise Integration Pack (or whatever it will be called at GA time) as the Enterprise Integration Platform as a Service. Customers see them as two different things (although I think that will change in the future, see my earlier post on this).

I strongly believe in App Service, now let’s make sure that Microsoft and System Integrators nail the Ability to Execute as quickly as possible and kick some Mule, Dell Boomi, Informatica and SnapLogic @ss. The strong Azure (Integration) community should use its forces not only to make sure that the world knows about Azure and how to use it, but should also keep on providing the best real-world feedback to the product teams so that they continually make the right choices with regard to backlog prioritization. I want Microsoft to be in the upper right corner of the iPaaS MQ for 2017, and it should be, with all the effort being put into it right now.

We all know CIOs take Gartner seriously, so Microsoft (and System Integrators) should take Gartner seriously as well.

Cheers, Gijs

Azure Stack use cases

On January 29th, Azure Stack went into Technical Preview.

Update July 14th 2017: Azure Stack is now generally available.

Having discussed Azure Stack with several of my customers in the past weeks, I’ve come to the following list of potential use cases for it (in no particular order):

  • Private cloud environment (duh!): Host your own private cloud with (more or less) the same capabilities as the Microsoft Public Azure Cloud. You can maybe even organize visits to your cloud data centers 🙂
  • On-ramp to public cloud: Gently try Azure in your own private environment before you migrate (parts of) your solutions to the public cloud, without having to re-architect!
  • Capex development & test environment: At a fixed capex cost, give your development team an environment in which they can code and test. Then deploy to the public cloud (or private, or hybrid; whatever you want) without having to re-architect!
  • Hybrid cloud: Create hybrid cloud solutions, based on the same Azure architecture. Use the private cloud part of the hybrid architecture for stuff you don’t want in the public cloud. Use the public cloud for all the stuff that can go there. Mix and match, without having to re-architect!
  • Cloud bursting: Run things mainly in your private cloud, and use the public cloud to offload (parts of) your workloads when there are (seasonal) peaks in your load.
  • Exit strategy insurance: Have the comforting feeling and insurance that when you, for some reason or other, don’t like using the Microsoft Public Azure Cloud anymore, you can just migrate your solutions back to your private cloud without having to re-architect!

Just my $0.02 of course.

Cheers, Gijs

Anarchytecture

Cloud. A euphemism for the Internet. Internet. A virtual community without any governance. Anarchy! How can we still make sure that this keeps on working?

For many people this may come as a surprise, but for most of us it won’t: the physical world has been divided into countries, each with its own government / governance in place, while the Internet is one big global virtual world. The Internet gurus are all convinced that this should never be tampered with. The Internet is self-governing.

Of course there are some principles to which the Internet adheres and within whose boundaries it further develops. Global agreement on, for example, the domain architecture was necessary. The right choices made in the beginning turned out to be very valuable. Once in a while, however, we do come to a grinding halt. Think about the limitations of IP numbering. Once and for all, let’s not forget: when you, as a developer, think “uhm, that should be big enough”, just multiply it by at least 1,000 from now on. Just as a side note. 🙂

Architecture was invented to be able to build a foundation that can keep up with the future developments built on top of it. Otherwise, stuff implodes. But especially in IT, architecture means “being able to cope with change”. A good architecture can prove its value for years to come. Until the next possible paradigm shi(f)t.

So, is architecture needed? Yes. Can it work in an anarchistic environment like the Internet? For sure. At least, when you take into consideration that you can always be surprised by rebels who don’t care about your architecture and just spray your artwork with graffiti. Or build a terrace on your flat rooftop. By preparing for the worst, at acceptable cost, an architecture can hold up fine. But, like I said before, a next big invention in IT can for sure start shaking the foundations. For example, a bit that can be 1 and 0 at the same time is something we have to think about carefully before we start massively deploying solutions built on that paradigm.

I’m convinced that with the current methods of developing distributed solutions we are well on our way to building fine, architecturally sound applications that can be deployed in the anarchistic clouds. Microservices architecture is an excellent way to thrive in this chaos. Microservices architecture is anarchytecture! So, what is DevOps then, actually?

Cheers, Gijs

p.s. Thank you Skunk Anansie for the inspiration for this blog post and hence its title.

The importance of API design-time governance

We all know the importance of design-time governance in SOA architectures. But how about design-time governance in the new API economy? An economy that’s going to boom. That is already booming while you were sleeping.

People tend to forget about mistakes made in the past and make the same mistakes again during paradigm shifts. New paradigm, same mistakes. Are we all that stupid? Or is it a generational thing (nah, we know better)? Or cultural (let’s first concentrate on happy flows and get the stuff working asap; now let’s put it in production :-)).

The same is happening now. I foresee API spaghetti. And it’ll get far worse this time. You know why? APIs are for the business. SOA was for techies. Techies tend to at least think a little bit about longer-term behaviour. At least, some of them. The real developers, I call them. But business people, responsible for exposing and consuming APIs so they can build agile business processes that enable quick wins and sometimes strategic benefit? Will they worry about re-usability? Will they worry about the granularity of APIs? Sustainability? I don’t think so.

So, we will still need an ICC (integration competence center) that takes care of guidelines around how to build and use APIs, how to expose them, how to document them and how to make them re-usable and findable. And secure and sustainable. Maybe we shouldn’t call it ICC anymore, but call it “ACC”: the API Competence Center.

What would such an ACC do? Here are some guidelines:

  • Design, document and guide the use of design principles (still a must-read: “Principles of Service Design” by grandmaster Erl) for the development of APIs;
  • Help the business make the right, sustainable business process solution decisions;
  • Make sure that the right Application Platform as a Service functions are used;
  • Educate the organisation on the right development and use of APIs;
  • Make sure we indeed are building Microservices and not silos.

Let’s not put this stuff in documents, but in the API Management portal, with managed metadata, so that we can indeed find our APIs and have the information at our fingertips, not buried somewhere in a document.
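
To make the “managed metadata” idea a bit more tangible, here is a minimal, hypothetical sketch in plain Python of the kind of record an ACC could require for every published API so that APIs stay findable. All field names and example values are invented; a real setup would live in the API Management portal itself, not in code like this.

```python
# Hypothetical sketch: the kind of managed metadata an ACC could require for
# every published API, instead of burying guidance in documents. All fields
# and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ApiMetadata:
    name: str
    owner: str                      # accountable owner, not just a team mailbox
    business_capability: str        # which business capability the API supports
    granularity: str                # e.g. "microservice", "composite", "legacy facade"
    reuse_policy: str               # e.g. "open", "restricted", "deprecated"
    tags: list[str] = field(default_factory=list)

catalog: dict[str, ApiMetadata] = {}

def register(api: ApiMetadata) -> None:
    catalog[api.name] = api

def find_by_tag(tag: str) -> list[str]:
    return [api.name for api in catalog.values() if tag in api.tags]

register(ApiMetadata("orders-v1", "sales-ops", "Order management",
                     "microservice", "open", ["orders", "b2b"]))
print(find_by_tag("orders"))   # -> ['orders-v1']
```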

I’d say: if you’re serious about APIs, let’s get serious about the ACC.

Cheers, Gijs

Safe Harbor flip-thinking

European privacy laws turned out to be incompatible with the laws in the United States. Safe Harbor can therefore not be applied in European countries, and now we Europeans are all obliged to move our data to European clouds. Aren’t we forgetting about the really important stuff here? Aren’t there more important aspects to privacy-related issues? Such as identity theft?

Personal information gets stolen every day, and the incidents we hear about in the news are getting bigger and bigger. Millions of records are stolen each day. At the moment the amount of hacked data is so enormous that such stolen information is only worth about $1 per person. And in the meantime we worry about Safe Harbor…

One of the Safe Harbor privacy principles is:

  • SECURITY: Organizations creating, maintaining, using or disseminating personal information must take reasonable precautions to protect it from loss, misuse and unauthorized access, disclosure, alteration and destruction.

I think the main issue here is the word reasonable. Time and again it proves to be very easy to illegally retrieve information from companies’ computer systems. A number of aspects make this so easy:

  1. Forcing access to systems is quite simple
  2. Data has been stored in such a way that you can easily recognize it as personal data
  3. Organizations store too much information and store it in an unprotected way

Apart from this, most organizations have large issues with authentication. It turns out to be very easy to arrange important matters by just sending a copy of an ID, a credit card number or a social security number. We are all changing our complex passwords every month, but in the meantime your system can easily be hacked simply by getting access through badly protected services running in your domain, secured with default admin passwords that never get changed. We all make payments with the same 4-digit PIN codes belonging to our debit or credit cards. And access to important information like tax records is badly secured as well, using passwords that you probably haven’t changed in years.

We are fooling ourselves with regard to authentication. The illusion of great security. The mere fact that the NSA can confiscate European data because it has been stored in US clouds is not the real problem. The fact that the NSA can so easily be hacked is the real problem. And the fact that the data is so easily accessible and understandable. Besides encryption, data should also, at the very least, be obfuscated.
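
To illustrate the obfuscation point, here is a minimal sketch using only the Python standard library: a keyed hash of a personal identifier, so a leaked record no longer exposes the raw value. It is an illustration of the principle only, not a vetted security design; real pseudonymization needs proper key management and a threat model.

```python
# Minimal sketch of obfuscating a personal identifier before storage, so a
# leaked record does not expose the raw value. Illustration only; real
# tokenization/pseudonymization needs key management and a threat model.
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)   # in practice: kept in a key vault and rotated

def obfuscate(identifier: str) -> str:
    """Keyed hash of e.g. a social security number: stable per key, not reversible."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {
    "customer_ref": obfuscate("123-45-6789"),   # stored instead of the raw number
    "policy": "health-basic",
}
print(record["customer_ref"][:16], "...")
```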

Apart from that, it is not only important to know with which cloud provider you’re doing business, but more so with which “shop”, and how much of your private information you give away. When you register yourself with a website, and that website is hosted on Amazon or Microsoft cloud servers, that is no guarantee that your data is safe and stays private. The architecture of the web solution is of much greater interest. It can even be the case that your social security number or credit card number is stored in a cookie, which can very easily be accessed by unauthorized persons. If the web developer has coded it like that because he thought it was a good idea, nobody will find out (in time). At least not until it hits the newspapers…

As a citizen you really don’t have a clue where your data is stored and how safe it is. Safe Harbor is not going to change anything about that.

Maybe the best solution is the fact that stolen privacy-sensitive information is becoming less and less expensive for criminal organizations to buy. It’s almost not worth the bother of hacking a system anymore. If governments would only develop good laws to protect citizens, so that everyone can, for example, have guaranteed access to health insurance at standard fees no matter how they behave according to their timeline on Facebook, privacy-sensitive information is not relevant anymore. If obtaining a new bank account or mortgage can only be done by means of face-to-face authentication, leaking privacy-related information is not an issue anymore.

If you always have the legal right to be able to prove that you have not done something, for example transferring a large amount of money somewhere, then all is alright. In today’s world, it is very easy to do that, with all the digital breadcrumbs we leave behind all day.

The solution lies in immutable architecture and Big Data. If everything is stored, using distributed systems, and the relationships between all these data can be determined on an ad hoc basis instead of through data models created up front, the burden of proof cannot be falsified and everyone is always able to prove they have not done something.
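
Here is a minimal, hypothetical sketch of the “immutable” part of that idea: an append-only, hash-chained log in plain Python, where tampering with stored history breaks the chain and the absence of an event can be checked against the verified log. It only illustrates the principle; it is not a real distributed or Big Data system.

```python
# Hypothetical sketch of an append-only, hash-chained event log. Tampering with
# any stored event breaks the chain, and you can later check that a given kind
# of event (e.g. a large money transfer) does NOT appear in the verified history.
import hashlib
import json

log: list[dict] = []

def append(event: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify() -> bool:
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

append({"account": "NL00EXAMPLE", "action": "transfer", "amount": 50})
# Proving you did NOT do something = showing no matching event in a verified log.
big_transfers = [e for e in log if e["event"].get("amount", 0) > 10_000]
print(verify(), len(big_transfers))   # -> True 0
```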

The problem of leaking or confiscating privacy-sensitive data will solve itself… Devaluation of privacy information is the answer!

Cheers, Gijs

Atomic Integration

Atomic Integration: I like the term. And I like the concept. Of course it has disadvantages, but don’t the advantages outweigh them? Let’s explore.

When the Integration Monday session “The fall of the BizTalk architect” (by Mikael Sand) was announced, I was immediately triggered. It was the combination of the title and the presenter (Mikael often rocks) that made me create a note to self to watch it later on. And finally I did. And it triggered me to write this blog post.

For years we have been telling our customers that spaghetti is bad and lasagna is good. And we’ve also been telling them that re-use is the holy grail. And that by putting a service bus in the middle, everything becomes more manageable, integrations have a shorter time to market and the whole application landscape becomes more reliable.

But at the same time, we see that re-use is very hard to accomplish, that there are many dependencies between solutions, and that realizing this in an agile manner is a nightmare if not managed meticulously. Especially when we need to deliver business value quickly, talking about creating stuff for later re-use and having dependencies on other teams is a hard sell to the product owner and thus the business.

Another thing that is really hard with any integration architecture, and the accompanying middleware with its frameworks built on top of the actual middleware (and thus becoming a part of that middleware), is ownership. And that is a two-headed beast: first, ownership of the frameworks that run on top of the middleware, and second, ownership of the actual integrations.

The first one is already a hard nut to crack. Who pays for maintaining these frameworks? Does it get financed by projects? Very hard to sell. Does it have a separate (business) owner and funds? I’ve never seen that before, and it probably wouldn’t work, because the guy who pays the most gets his way, which doesn’t necessarily mean that the framework will be the most usable framework of all time.

The second one is even harder to manage. Who is typically the owner of an integration? The subscriber and thus the receiver of the information? Or is it the sender, a.k.a. the publisher of the information? Who pays for it? Is that the owner? And will he be managing it? And what happens if the integration makes use of all kinds of other, re-usable custom features that get changed over time and in which the actual owner of the integration is not interested at all?

Why indeed not do more copy-and-paste instead of inheriting or re-using? The owner of the specific integration is completely responsible for it and can change, fork and version whatever he likes. And as Mikael says in his presentation: TFS or Visual Studio Online is great for finding out who uses certain code and informing them if a bug has been solved in some copied code (segment). And of course we still design and build integrations according to the well-known integration patterns that have become our best friends. Only, we don’t worry that much about optimization anymore, because the platform will take care of that. Just like we had to get used to garbage collectors and developers not knowing what free() actually means, we need to get used to computing power and cheap storage galore, and therefore don’t need to bother about redundancy in certain steps of an integration anymore.

With the arrival of cloud computing (more specifically PaaS) and big data, I think we are now getting into an era in which this actually becomes possible and at the same time manageable. The PaaS offerings by Microsoft, specifically Azure App Service, are quickly becoming an interesting environment. Combined with big data (which to me in this scenario means: just save as much information about integration runs as possible, because we have cheap storage anyway and we’ll see what we do with this data later), runtime insight, correlation and debugging capabilities are a breeze. Runtime governance: check. We don’t need frameworks anymore and thus we don’t need an owner anymore.
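
The “save as much information about integration runs as possible” idea can be sketched as a small pattern: every run gets a correlation ID, every step appends a record, and insight, correlation and debugging become queries over that data. A hypothetical illustration in plain Python; it says nothing about how App Service tracking actually stores things.

```python
# Hypothetical sketch: store everything about each integration run, keyed by a
# correlation id, and treat insight/debugging as queries over that data.
import time
import uuid

run_store: list[dict] = []   # stand-in for cheap blob/table storage

def track(correlation_id: str, step: str, **data) -> None:
    run_store.append({"correlation_id": correlation_id, "step": step,
                      "timestamp": time.time(), **data})

def trace(correlation_id: str) -> list[dict]:
    return [r for r in run_store if r["correlation_id"] == correlation_id]

cid = str(uuid.uuid4())
track(cid, "received", source="webshop", order="42")
track(cid, "transformed", map="OrderToErp")
track(cid, "delivered", target="erp", status="ok")

for record in trace(cid):                 # correlation & debugging as a query
    print(record["step"], record.get("status", ""))
```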

Azure App Service, in combination with the big data facilities and the analytics services, is all we need. And it is a perfect fit for Atomic Integration.

Cheers, Gijs