Integration is just one of the skills needed

In modern enterprises, business solutions are built by agile teams. Agile teams are by design multi-disciplinary: the Product Owner is responsible for the product backlog, and the team builds and implements what's needed. In these modern enterprises, an innovation & differentiation layer on top of systems of record is an absolute necessity to enable shorter time-to-market for these solutions and, in some cases, digital transformation.

In the Microsoft world this innovation & differentiation layer is basically provided by Microsoft Azure and Office 365. The first is needed for the (big) data, business intelligence, integration, process execution & monitoring and web & mobile UX capabilities, and the latter for the collaboration and document handling capabilities. In Microsoft-centric application landscapes, Dynamics 365 will be the way to go for your systems of record for customer interaction (CRM) and resource planning (ERP). In the future, whenever you're ready as an enterprise, the whole two-speed or bi-modal approach will be a thing of the past, and the cloud infrastructure will enable just one-speed IT; full speed! But it will take several years before that becomes a reality most enterprises can deal with. Because it's not only an IT thing, but more an organizational thing (how do you keep on adopting these agile solutions; won't we become tech-fatigued?).

Many skills are needed in the agile teams we deploy today. And when scaling the teams, a layer on top of them is needed to manage the portfolio and program aspects in a better way. The Scaled Agile Framework (SAFe) is a good example (and one I personally have experience with; I'm a SAFe Agilist :-)). Quite a few enterprises have implemented it, or an alternative such as LeSS (Large-Scale Scrum). This also facilitates DevOps at a larger scale.

Even in agile environments, the enterprise architect is still needed 😉 He or she is responsible for the overall architecture of the solutions that get built. Business architecture, information architecture and technical architecture are all still very important. Together they define the frameworks within which the solutions should be built. The agile teams work within these frameworks.

We also see more and more that agile teams focus on certain business domains. Common practice within a microservices architecture is that we don't try to build stuff that can be reused by other teams as well, unless we have one or two specific teams working on re-usable, enterprise-wide functionality. It's all about business value first.

Is the integration competence center something that is helpful in such an environment? Well, the role of the ICC will change. The ICC (just like any other CC) will not be involved as a "sleeping policeman" (pun intended) anymore. The ICC will basically be part of the enterprise architecture role, focusing specifically on the frameworks (to which the agile teams should adhere; comply or explain) and the enterprise-wide functionality. Everything else will be solved by the agile teams, for their specific domains. That way we are far more flexible, and we still build re-usable, enterprise-wide stuff where absolutely needed.

Do integration projects still exist in the future?

I think not, or only in a very limited number of cases. Integration is just part of business solutions and should be treated like that. The API developer, the UX developer, the DBA, the security expert, etc.: they're all integration-capable team members. The commodity integration needs can be fully handled by the agile teams that way. And for the integration specials, like re-usable building blocks and patterns or the more difficult one-offs, we just involve the ICC, which in larger organizations is probably a separate agile team. Their domain is cross-enterprise. But they will no longer be the roadblocks that used to slow down business solution development.

Will this change the way system integrators work? Absolutely. The more we can do to deliver complete agile teams (including big data, UX and collaboration folks), the better we can serve our customers: helping them become agile too, shorten their time-to-market and maybe even transform and redefine their business models. As an integration specialist or integration team alone, you can't do that. The folks that understand and can implement the whole Microsoft platform to deliver real business solutions are going to be the ones that enterprises will turn to…

Cheers, Gijs

The business value of Microsoft Azure Logic Apps

I've written a paper on the business value of Microsoft Azure Logic Apps for integration. It is mainly useful for CIOs and IT managers considering Azure PaaS services for their integration needs.

It describes the components of Microsoft Azure, and then drills down into aPaaS and iPaaS to position Logic Apps, API Apps and API Management. Furthermore, it describes common integration needs in complex application landscapes, such as keeping data in sync, creating composite apps, involving supply and demand chains, and integrating apps and portals.

Next it describes the real business value, so that you can explain it to your business stakeholders as well. This includes creating value-add on top of commodity SaaS apps, leveraging investments in legacy applications (your systems of record), decreasing time-to-market, channel renewal and agility: basically, digital transformation using Gartner's pace-layered application model.

Lastly it describes the different integration themes that Logic Apps can help you with.

Enjoy!

Please share as you like.

Cheers, Gijs

Atomic Integration

Atomic Integration, I like the term. And I like the concept. Of course it has disadvantages, but don’t the advantages outweigh these? Let’s explore.

When the Integration Monday session “The fall of the BizTalk architect” (by Mikael Sand) was announced I was immediately triggered. It was the combination of the title and the presenter (Mikael often rocks) that made me create a note to self to watch it later on. And finally I did. And it triggered me to write this blog post.

For years we have been telling our customers that spaghetti is bad and lasagna is good. And we've also been telling them that re-use is the holy grail. And that by putting a service bus in the middle, everything becomes more manageable, integrations get a shorter time-to-market and the whole application landscape becomes more reliable.

But at the same time, we see that re-use is very hard to accomplish, that there are so many dependencies between solutions, and that realizing this in an agile manner is a nightmare if not managed meticulously. Especially when we need to deliver business value quickly, talking about creating stuff for later re-use and having dependencies on other teams is a hard sell to the product owner and thus the business.

Another thing that is really hard with any integration architecture, and with the frameworks that have been built on top of the actual middleware (and thus have become part of that middleware), is ownership. And that is a two-headed beast. First of all, ownership of the frameworks that run on top of the middleware, and second, ownership of the actual integrations.

The first one is already a hard nut to crack. Who pays for maintaining these frameworks? Is it financed by projects? Very hard to sell. Does it have a separate (business) owner and funds? I've never seen that before, and it probably wouldn't work, because the guy who pays the most gets his way, which doesn't necessarily mean the framework will be the most usable framework of all time.

The second one is even harder to manage. Who is typically the owner of an integration? The subscriber and thus the receiver of information? Or is it the sender, a.k.a. the publisher of the information? Who pays for it? Is that the owner? And will he be managing it? And what happens if the integration makes use of all kinds of other, re-usable custom features that get changed over time and in which the actual owner of the integration is not interested at all?

Why indeed not do more copy-and-paste instead of inherit or re-use? The owner of the specific integration is completely responsible for it and can change, fork and version whatever he likes. And as Mikael says in his presentation: TFS or Visual Studio Online is great for finding out who uses certain code and informing them when a bug has been fixed in some copied code (segment). And of course we still design and build integrations according to the well-known integration patterns that have become our best friends. Only, we don't worry that much about optimization anymore, because the platform will take care of that. Just like we had to get used to garbage collectors and developers not knowing what free() actually means, we need to get used to computing power and cheap storage galore, and therefore don't need to bother about redundancy in certain steps of an integration anymore.

With the arrival of cloud computing (more specifically PaaS) and big data, I think we are now getting into an era in which this actually becomes possible and, at the same time, manageable. The PaaS offerings by Microsoft, specifically Azure App Service, are quickly becoming an interesting environment. Combined with big data (which to me in this scenario means: just save as much information about integration runs as possible, because we have cheap storage anyway and we'll see what we do with this data later), runtime insight, correlation and debugging capabilities are a breeze. Runtime governance: check. We don't need frameworks anymore, and thus we don't need an owner anymore.

Azure App Service, in combination with the big data facilities and the analytics services, is all we need. And it is a perfect fit for Atomic Integration.

Cheers, Gijs

Is NoESB a euphemism for “more code in apps”?

Yesterday I tweeted the title of this blog post. But at that moment I didn't know yet that it would become the title of a blog post. And now it is :-).

I recommended the use of an ESB to a customer. Oh dear. Before I started working on that recommendation, I organized workshops with all kinds of folks within that organization and thoroughly analyzed their application landscape, information flows, and functional and non-functional requirements. The CIO had told me at the start that he was going to have the report reviewed by a well-known research firm. And so it happened.

The well-known research firm came back with a couple of comments. None too alarming. Except for one: the advice I gave, to implement an ESB, was "old fashioned", because ESBs are overly complex centralized solutions, and therefore we should consider using API Management. Whaaaaaat?

First of all, let me explain that I have found that when we talk about an ESB, quite a few people automatically assume it is about on-premises software. That's a wrong assumption. ESBs can run in the cloud as well. As a PaaS solution, indeed.

Second, an ESB can also be used to integrate external apps and not only apps that are within your own “enterprise”.

Third, API Management is not an integration tool. With regard to integration capabilities, it's merely a mediation tool, with strong virtualization, management and monitoring facilities for both admins and developers. Developers that use your APIs to create new apps.

The well known research firm also said that NoESB is a new thing that should be considered.

I think that we are just moving the complexity elsewhere. And the bad thing is that we are moving it to places (mostly apps) that are developed by people who don't understand integration and everything that comes with it. People who have trouble understanding design patterns most of the time, let alone integration patterns. People who most of the time don't have a clue about manageability and supportability. People who often say things like "I didn't expect that many items to be in that list." or "What do you reckon, he actually uses this app with minimum rights. We haven't tested that!" or "A 1000 should be enough.". Famous last words.

Of course the ESB has caused lots of trouble in the past, and it often still does. But that's not the fault of the ESB. That's the fault of the people implementing the ESB, using the wrong design patterns. Or using ESB technologies that do not (fully) support design patterns. Or ESBs that are misused for stuff that should have been implemented with other technologies and just bridged using federation.

I'm a bit worried about the new cloud offerings currently becoming popular rapidly, which are still in beta or even alpha stage. Cloud services that are offered for integration purposes can easily be misused to create horrific solutions. Not adhering to any design pattern at all. And not guiding the power users (DevOps) in applying them either. Do DevOps folks even know about design patterns? (Sorry if I offend anyone here.)

I’m afraid that in the near term, lots of new distributed apps will be created with brand new, alpha or beta stage technologies by people who are neither developers, nor operations folks. People who know little about architecture and patterns and who will create fat apps with lots of composite logic. And integrate with NoESB. Disaster is coming soon. Who’ll have to clean up that mess? Who you gonna call? FatApp Busters! 🙂

Cheers and happy hacking, Gijs

Integration in the SaaS and API era

A lot is happening in the IT space these days. Moore’s law is outdated, not relevant anymore. We need a new law. Gijs’ law (LOL) would sound something like this:

"IT paradigms will change every other day, but the underlying principles will remain more or less the same. Unfortunately we forget that each and every time."

So many new initiatives happen today and can become successful so quickly, that large IT companies with lots of smart developers, patents and market share can become irrelevant within months or even weeks.

Especially when it’s about social media tooling, market shares can very quickly become significant and a startup with 5 developers and a savvy marketing / business development person can beat billion dollar software companies so fast that they don’t even have a clue what’s happening to them.

Most of this is happening in the consumer space so far. But is the same going to happen in the enterprise space as well? Will that be possible? Or are enterprises more conservative?

Have a look at ifttt.com, zapier.com and appmachine.com. Just a couple of examples of smart initiatives that quickly became popular and even leaders in their space. Attracting loads of smart investor money in some cases. And lots of users. Most of the time, inventions and innovations are just smart combinations of already existing stuff. Just nobody cared enough or was smart enough to combine them until just then.

The paradigms used by these startups are very interesting not only for consumers but also for businesses. Consumerization of IT can actually happen in more places (not only the device or app you use) than we can imagine.

Isn't IFTTT, for example, an interesting concept for automating business integration? "If my post gets a Like then do add this person to my CRM". "If new person added to my CRM then do send him the latest newsletter". "If temperature is trending up then do adjust airco setting". Just some examples.
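The trigger-action idea behind these examples can be sketched in a few lines. The trigger names, the condition and the in-memory "CRM" below are purely hypothetical, just to show the shape of such a rule engine:

```python
# A minimal sketch of an IFTTT-style "if this then do that" rule engine
# for business integration. All trigger/action names are made up.

class Rule:
    def __init__(self, trigger, condition, action):
        self.trigger = trigger      # event name, e.g. "post.liked"
        self.condition = condition  # predicate over the event payload
        self.action = action        # callable executed on a match

class RuleEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule):
        self.rules.append(rule)

    def publish(self, trigger, payload):
        # Fire every registered rule whose trigger and condition match.
        for rule in self.rules:
            if rule.trigger == trigger and rule.condition(payload):
                rule.action(payload)

# "If my post gets a Like then do add this person to my CRM"
crm = []
engine = RuleEngine()
engine.register(Rule(
    trigger="post.liked",
    condition=lambda e: e.get("person") is not None,
    action=lambda e: crm.append(e["person"]),
))
engine.publish("post.liked", {"person": "alice"})
```

The business rules stay declarative (trigger, condition, action) while the plumbing stays generic, which is exactly what makes the IFTTT model attractive beyond the consumer space.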

What these kinds of consumer products miss is the mission-critical "stamp". But on the other hand, if the service doesn't provide the right service levels, consumers will go elsewhere (and take the migration effort for granted). They *have* to provide the right service levels. No matter what tooling they use. Otherwise they'll lose their market share and become irrelevant quickly.

For enterprise users it is different. They need to know up front what kind of service levels are provided. Otherwise they don't sign up for the service. No risks can be taken. And if they get disappointed a couple of times, they get mad at the vendor. And maybe get a couple of dollars in compensation. But most of the time it's a hell of a job to "exit" from that vendor's solutions and services. Especially if the underlying products are not based on open standards. Migration is most often a nightmare.

The larger vendors will probably simply take over some of these new technology companies (for billions of dollars) and integrate them into their software stacks, in that way quickly adapting to the changing market needs and giving the products the "enterprise treatment". The same will probably happen in the API, SaaS and Microservices integration space. Some of these new and really easy-to-use consumer SaaS solutions will quickly make it to the enterprise world. That's my vision at least.

What we should not forget, though, is that things like manageability, auditability and supportability are always going to be important in the enterprise market. The problem is that new products *never* start with these pain-in-the-neck "-ability" requirements in mind. And that not only applies to consumer products but also to enterprise products, unfortunately! Larger software and SaaS vendors are shamelessly releasing services into production without having catered (enough) for these "-abilities". And what also remains important, no matter what new IT paradigm we come up with: the underlying principles remain the same. When exposing an API or Microservice, make sure it ticks all the known and proven service design check boxes. That way, the chance that end-to-end solutions built with those services or APIs will actually be manageable, auditable and supportable is much bigger.
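As a tiny illustration of what ticking those "-ability" boxes can mean in practice, here is a sketch of propagating a correlation id through service calls so that an end-to-end flow stays traceable and auditable. The operation names and the log structure are invented for the example, not taken from any specific product:

```python
# Sketch: every service call carries a correlation id and is logged,
# so end-to-end flows can be traced and audited later.
import uuid

audit_log = []  # stands in for a real monitoring/audit sink

def traced_call(operation, payload, correlation_id=None):
    """Invoke a service operation, always recording a correlation id."""
    correlation_id = correlation_id or str(uuid.uuid4())
    audit_log.append({"op": operation, "correlation_id": correlation_id})
    # ... the actual service invocation would go here ...
    return {"result": "ok", "correlation_id": correlation_id}

# Two steps of one business flow share the same correlation id, so the
# whole chain can be reconstructed from the audit log afterwards.
r1 = traced_call("crm.create", {"name": "alice"}, correlation_id="req-1")
r2 = traced_call("newsletter.send", {"to": "alice"},
                 correlation_id=r1["correlation_id"])
```

It is a few lines of discipline when the service is built, and nearly impossible to retrofit once dozens of consumers depend on it.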

Why do we forget about these simple software design principles and standard patterns each and every time? Developers are either too lazy (the happy flow works; check in; done) or too modest (nobody's going to use this piece of software). Maybe it would help if the folks creating and posting sample code actually posted enterprise-grade samples. None of this is of course going to change, and hence my law will remain applicable to infinity and beyond. My endless loop, free of charge 🙂

Cheers, Gijs

IoT as the big driver of Microservices

The Internet of Things (IoT) market will grow tremendously in the coming years. The quantity and diversity of devices that are connected to the cloud and that will spit out data will explode. The generated data will have to be analyzed and these analyses will be used to optimize business processes. New architectures will be needed to accommodate all this and Microservices architecture is probably the way to go and here to stay.

The big hype around IoT has just started and we can expect that the largest bulk of IT investments will be made in this area. Who is not interested in optimizing processes and increasing the margins on delivered products and services? Granular insight into what exactly happens in the complete production or services chain, and being able to predict as reliably as possible what will happen in the (near) future, is worth a lot. We've got gold in our hands.

In an IoT architecture, the following components are important:

  • The Things; the devices that are connected to the cloud that, without human intervention, can generate (mostly) sensor data and which in some cases can handle instructions to take certain actions.
  • The (cloud) infrastructure and services to execute the following actions:
    • Receiving large amounts of data, generated by the Things.
    • Storing these data.
    • Analyzing these data in real-time, but also by means of predictive analysis.
    • Generating “business events” as a result of these analyses.
    • Orchestrating these business events by means of integration logic.
    • Potentially sending back data (instructions) to the Things, so that they can take some kind of action.
  • The (cloud) applications that can handle the business events and can in their turn generate new business events.
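The receive, store, analyze and act steps above can be sketched as a minimal pipeline. The threshold "model" is an illustrative stand-in for a real machine learning service, and the event names and field names are assumptions:

```python
# Sketch of the IoT pipeline: receive -> store -> analyze -> business event.

event_store = []      # stands in for cheap bulk storage of all raw readings
business_events = []  # events handed to the orchestration layer

def receive(sensor_reading):
    """Entry point for data generated by the Things."""
    store(sensor_reading)
    maybe_raise_business_event(sensor_reading)

def store(reading):
    # Keep everything; offline/predictive analysis can happen later.
    event_store.append(reading)

def maybe_raise_business_event(reading):
    # Real-time analysis: a trivial threshold stands in for the
    # machine learning service mentioned in the text.
    if reading["temperature"] > 80:
        business_events.append(
            {"type": "overheating", "device": reading["device"]})

for reading in [{"device": "d1", "temperature": 75},
                {"device": "d1", "temperature": 85}]:
    receive(reading)
```

Note how every reading lands in storage regardless of whether it triggers anything; only the analysis step decides what becomes a business event for the orchestration layer.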

The Things are, most of the time, small machines with their own processor, operating system and sensors, containing some (hard-wired) logic. They can be communicated with by means of lightweight APIs. They are in fact Microservices, because they are completely autonomous, run in their own process space and deliver a small service. A Thing is the ultimate incarnation of a Microservice!

In order to be able to process the enormous amounts of data at high speed and efficiently, we need an architecture that supports lightweight mediation; capable of distributing the generated events to the right receivers with as little overhead as possible. Such a receiver can for example be a NoSQL database, but also a machine learning service. The NoSQL database can store events for later processing and distribute them asynchronously to other interested parties. This enables offline analysis. The machine learning service can, by means of pre-defined models, determine if certain actions should be taken based on the current event and previous related events. This will then result in a business event. Which in its turn will be published to the lightweight mediation layer which will take care of it. All of this is very time critical and has to happen as soon as possible. Because the next events to be processed are already arriving.

The generated business event will have to be processed further by the orchestration layer. This may involve multiple business applications and services. This can no longer be called lightweight mediation; we're talking service bus or application integration middleware here. Time criticality is less important here. What is important in this layer, though, is guaranteed delivery and transactionality. Traceability is also of high importance in this layer. This orchestration layer is something we most of the time already have in place. Think of traditional integration middleware such as Microsoft BizTalk Server, Tibco or IBM WebSphere Message Broker (and of course many more). The orchestration layer can be hosted in the cloud or, as is still much more often the case, on-premises.

Lightweight mediation all revolves around the following tasks:

  • Lightweight transformation (for example JSON to XML and vice versa).
  • Routing.
  • Applying (security) policies.
  • Lightweight service composition.
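Two of these tasks, lightweight transformation and routing, can be sketched in a few lines. The route table and receiver names below are assumptions for the example:

```python
# Sketch of lightweight mediation: JSON-to-XML transformation and
# content-based routing. Routing keys and receivers are hypothetical.
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload, root_tag="event"):
    """Flat JSON object -> simple XML string (lightweight transformation)."""
    root = ET.Element(root_tag)
    for key, value in json.loads(payload).items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Hypothetical receivers: a NoSQL store and a machine learning scorer.
ROUTES = {"telemetry": "nosql-store", "alert": "ml-scorer"}

def route(event):
    """Content-based routing: pick a receiver from the event type."""
    return ROUTES.get(event.get("type"), "dead-letter")

msg = json.dumps({"type": "alert", "device": "d42"})
print(json_to_xml(msg))        # <event><type>alert</type><device>d42</device></event>
print(route(json.loads(msg)))  # ml-scorer
```

Nothing here batches, orchestrates or holds state; each message is transformed and dispatched with minimal overhead, which is the whole point of this layer.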

In my (humble, ahum) opinion, the lightweight mediation layer is not a layer where you handle (or even need) things like batching, heavy XSLT transformations and long-running transactions. In this layer, we should only think about Microservices. Microservices that are indeed lightweight, single-task entities that can be composed into solutions by means of lightweight composition.

I foresee a hybrid integration world that is here to stay. Not because EAI can only be hosted on-premises, or because we want to run integration solutions in the cloud so desperately, but because cloud integration in an IoT world is about Microservices and lightweight mediation. And this realm needs to be coupled to (often on-premises) infrastructure by means of transactions and orchestrations. Hybrid integration, in my opinion, is about connecting a Microservices architecture to traditional integration middleware, no matter where the latter is hosted. The big challenge is to provide end-to-end runtime and design-time governance. We're certainly going to need that.

IoT is the big driver of Microservices, and integration specialists will have to get to know both integration realms and be able to apply the right (combination of) patterns by means of the right (hybrid) technologies. This is a great era to live in, for us integration folks!

Cheers, Gijs

BizTalk Server, Microservices & APIs

This week I attended Integrate 2014 (a.k.a. The BizTalk Summit). It was great meeting up again with lots of folks I've got to know in the past 14 years (veterans like Tom Canter, Brian Loesgen, Bill Chesnut, Richard Broida and the "younger generation" like Mikael Sand, Saravana Kumar, Michael Stephenson, Kent Weare and many more, including lots of Dutchies and Belgians), since starting to work with BizTalk Server in 2000. It was also nice to meet some of the Microsoft product team architects (Guru Venkataraman, online and offline, and Evgeny Popov, offline) and get to understand their vision.

What keeps on surprising me however, is how bad Microsoft is at positioning their great stuff and how good they are at confusing us all, including the integration specialists and customers. The community will need to help them with bringing the right message. Customers deserve this. They need to be comforted. After all, they spend hundreds of thousands or even millions of dollars and euros on implementing, supporting and migrating their middleware solutions and their businesses are relying on it.

I think it’s quite simple. In integration, we have a couple of things to take care of, now and in the future:

  • Message based A2A integration
  • Message based B2B integration
  • API based B2B and B2C integration
  • The underlying SOA architecture. Because folks, that’s what it is. At least the SOA as I have always understood and applied it.

Thinking in (orchestrated) task, entity and utility services is crucial. How you expose and compose these services, and how granular they are (micro, nano or macro), is basically not that important; it depends on the consumers of these services.

BizTalk Server is integration middleware that you can use to create an on-prem SOA architecture and use for message-based A2A integration. It can also take care of "traditional", message-based B2B integration. We use SOAP, flat files, EDI and XML to exchange stuff in a very reliable and manageable way. We include MDM and other data services to take care of (master) data. And in order to expose (business) interfaces as APIs, we use the REST/JSON adapter. That way, we create hybrid buses using a federated bus architecture.

BizTalk Services (MABS) is the newer, cloud-based integration offering. It is basically a B2B (EDI) gateway. A very logical place to have that in the cloud, since the cloud is a big DMZ and B2B is about structured communication with customers and partners outside your own firewalls. It's a pity, though, that this has not been built on further since its first release two years ago (but we now know why: the team has shifted focus to the new microservices architecture and tooling).

Now we have two more things: APIs and Microservices.

APIs expose your business services to apps and portals in a managed way. Mashing up services. With granular security. And you can even monetize them that way, exposing them to affiliates and other partners. For an in-depth (IMO) view of API management and how it relates to this, please check this article.

Microservices are there to easily create solutions by combining functionalities exposed by these services. But again, these services are just plain old utility, entity and task services. But now exposed as lightweight JSON/REST. And probably more granular, so they can more easily be versioned and deployed independently. Check out Martin Fowler's vision on microservices here (also read his comment on SOA vs. Microservices). They can also be exposed through a gateway. See this article.
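For illustration, such a single-purpose entity service exposed as JSON/REST boils down to very little handler logic. The resource name, fields and data store below are hypothetical, and the HTTP plumbing is left out for brevity:

```python
# Sketch of a small entity service: JSON in, JSON out, one resource,
# independently versionable. Resource name and fields are made up.
import json

CUSTOMERS = {"42": {"id": "42", "name": "Contoso"}}  # stand-in data store

def handle_get_customer(customer_id):
    """Handler logic behind a hypothetical GET /v1/customers/{id}."""
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(customer)

status, body = handle_get_customer("42")
```

Because the service owns exactly one resource and one representation, versioning it (a v2 with extra fields, say) or redeploying it touches nothing else in the landscape.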

Service composition (micro, nano or macro, I don't care) is something you can do by means of integration patterns, implemented through some sort of pre-defined process, be it an itinerary, orchestration or workflow (though probably, in the not-too-near future, composition will also be taken care of by NoProcess, based on AI: the system just automatically finds out what to do best and which services it needs to do it, and constantly fine-tunes that automatically).

What's going to make or break your (hybrid) architecture is design-time and runtime governance of these services. In order to be able to compose services into solutions, you'll have to know what services are available and how you can use them. In order to be able to manage and monitor your solutions, they have to be instrumented in a standard way, feeding management and monitoring solutions in real time. API Management (by Azure API Management and Nevatech Sentinet) is doing that for APIs. SOA governance (by Nevatech Sentinet and SOA Software) is doing that for integration middleware. These somehow have to be tied together to provide end-to-end runtime and design-time governance (including ALM), tying in with your development environment. If I were Microsoft, I'd just buy the best solution and integrate it in the platform. Just like they did with Apiphany.

Now let’s hope that Microsoft:

  • Keeps the BizTalk Server brand for on-prem A2A and B2B integration and building a SOA.
  • Keeps the BizTalk Services brand for cloud based B2B integration.
  • Comes up with a new brand for the cloud based (micro)services architecture and tooling.
  • Versions and roadmaps these offerings separately.
  • Clearly depicts usage scenarios, including hybrid solutions. A couple of good architecture pictures should suffice.

On a final note, check out Next Generation SOA. Thomas Erl and his co-writers have done a great job on simply explaining it and it all fully applies to modern architectures including APIs and microservices. It’ll really help you think clearly about complex, services based solutions.

Cheers, Gijs