Safe Harbor flip-thinking

European privacy laws turned out to be incompatible with the laws in the United States. Safe Harbor therefore cannot be applied in European countries, and now we Europeans are all obliged to move our data to European clouds. But aren't we forgetting about the really important stuff here? Aren't there more important aspects to privacy-related issues, such as identity theft?

Personal information gets stolen every day, and the incidents we hear about in the news are getting bigger and bigger. Millions of records are stolen each day. At the moment the amount of hacked data is so enormous that such stolen information is only worth about $1 per person. And in the meantime we worry about Safe Harbor…

One of the Safe Harbor privacy principles is:

  • SECURITY: Organizations creating, maintaining, using or disseminating personal information must take reasonable precautions to protect it from loss, misuse and unauthorized access, disclosure, alteration and destruction.

I think the main issue here is the word reasonable. Time and again it proves to be very easy to illegally retrieve information from companies' computer systems. A number of aspects make this so easy:

  1. Forcing access to systems is quite simple
  2. Data is stored in such a way that you can easily recognize it as personal data
  3. Organizations store too much information and store it in an unprotected way

Apart from this, there are large issues with authentication at most organizations. It turns out to be very easy to arrange important matters by just sending a copy of an ID, a credit card number or a social security number. We all change our complex passwords every month, but in the meantime a system can easily be hacked simply by getting access through badly protected services running in its domain, secured with default admin passwords that never get changed. We all make payments with the same 4-digit PIN codes belonging to our debit or credit cards. And access to important information like tax records is badly secured as well, using passwords that you probably haven't changed in years.

We are fooling ourselves with regard to authentication. The illusion of great security. The mere fact that the NSA can confiscate European data because it has been stored in US clouds is not the real problem. The fact that the NSA itself can so easily be hacked is the real problem. And the fact that the data is so easily accessible and understandable. Besides being encrypted, data should at the very least also be obfuscated.
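
To make that last point concrete, here is a minimal sketch (in Python, purely illustrative) of what "obfuscated, not just encrypted" could mean: replace identifiers with a keyed hash before storing them, so a leaked table no longer contains readable personal data. The key handling and field names are my own assumptions, not a prescription.

```python
import hashlib
import hmac

SECRET_KEY = b"load-me-from-a-vault-not-from-source-code"  # illustrative; use a real secret store

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash so leaked rows don't expose it."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "ssn": "123-45-6789", "city": "Amsterdam"}
stored = {**record, "ssn": pseudonymize(record["ssn"])}  # lookups still work, no readable SSN
print(stored)
```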

Apart from that, it is not only important to know which cloud provider you're doing business with, but even more so which "shop". And how much of your private information you give away. When you register yourself with a website, and that website is hosted on Amazon or Microsoft cloud servers, that is no guarantee that your data is safe and stays private. The architecture of the web solution is of much greater interest. It can even be the case that your social security number or credit card number is stored in a cookie, which can very easily be accessed by unauthorized persons. If the web developer has coded it like that because he thought it was a good idea, nobody will find out (in time). Not until it hits the newspapers…

As a citizen you really don't have a clue where your data is stored and how safe it is. Safe Harbor is not going to change anything about that.

Maybe the best solution is the fact that stolen privacy-sensitive information is becoming less and less expensive for criminal organizations to buy. It's almost not worth the bother of hacking a system anymore. If governments would just develop good laws to protect citizens, so that everyone is for example guaranteed access to health insurance at standard fees no matter how they behave according to their Facebook timeline, privacy-sensitive information is not relevant anymore. If obtaining a new bank account or mortgage can only be done by means of face-to-face authentication, leaking privacy-related information is not an issue anymore.

If you always have the legal right to be able to prove that you have not done something, for example transferring a large amount of money somewhere, then all is alright. In today’s world, it is very easy to do that, with all the digital breadcrumbs we leave behind all day.

The solution lies in immutable architecture and Big Data. If everything is stored in distributed systems, and the relationships between all these data can be determined on an ad hoc basis instead of being modeled up front, the evidence cannot be falsified and everyone is always able to prove they have not done something.
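
As a toy illustration of that idea (the event shapes and names are assumptions of mine), here is an append-only log in Python where every event is chained to the previous one with a hash; rewriting history afterwards breaks the chain:

```python
import hashlib
import json

def append_event(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry via its hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute the chain; any tampering with history breaks it."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash}, sort_keys=True)
        if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_event(log, {"who": "gijs", "did": "transfer", "amount": 10})
print(verify(log))  # True; flip any field in log[0] and it becomes False
```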

The problem of leaking or confiscating privacy sensitive data will solve itself… Devaluation of privacy information is the answer!

Cheers, Gijs

Atomic Integration

Atomic Integration, I like the term. And I like the concept. Of course it has disadvantages, but don’t the advantages outweigh these? Let’s explore.

When the Integration Monday session “The fall of the BizTalk architect” (by Mikael Sand) was announced I was immediately triggered. It was the combination of the title and the presenter (Mikael often rocks) that made me create a note to self to watch it later on. And finally I did. And it triggered me to write this blog post.

For years we have been telling our customers that spaghetti is bad and lasagna is good. And we've also been telling them that re-use is the holy grail. And that by putting a service bus in the middle, everything becomes more manageable, integrations have a shorter time to market and the whole application landscape becomes more reliable.

But at the same time, we see that re-use is very hard to accomplish, that there are many dependencies between solutions and that realizing all this in an agile manner is a nightmare if not managed meticulously. Especially if we need to deliver business value quickly, talking about creating stuff for later re-use and having dependencies on other teams is a hard sell to the product owner and thus the business.

Another thing that is really hard with any integration architecture and the accompanying middleware, with its frameworks that have been built on top of the actual middleware (and thus become part of that middleware), is ownership. And that is a two-headed beast. First of all, ownership of the frameworks that run on top of the middleware, and second the ownership of the actual integrations.

The first one is already a hard nut to crack. Who pays for maintaining these frameworks? Is it financed by projects? Very hard to sell. Does it have a separate (business) owner and funds? I've never seen that, and it probably wouldn't work, because the guy who pays the most gets his way, which doesn't necessarily result in the most usable framework of all time.

The second one is even harder to manage. Who is typically the owner of an integration? The subscriber, and thus the receiver of information? Or is it the sender, a.k.a. the publisher of the information? Who pays for it? Is that the owner? And will he be managing it? And what happens if the integration makes use of all kinds of other, re-usable custom features that get changed over time and in which the actual owner of the integration is not interested at all?

Why indeed not do more copy-and-paste instead of inheritance or re-use? The owner of the specific integration is completely responsible for it and can change, fork and version whatever he likes. And as Mikael says in his presentation: TFS or Visual Studio Online is great for finding out who uses certain code and informing them when a bug has been fixed in some copied code (segment). And of course we still design and build integrations according to the well-known integration patterns that have become our best friends. Only, we don't worry that much about optimization anymore, because the platform will take care of that. Just like we had to get used to garbage collectors and developers not knowing what free() actually means, we need to get used to computing power and cheap storage galore, and therefore don't need to bother about redundancy in certain steps of an integration anymore.

With the arrival of cloud computing (more specifically PaaS) and big data, I think we are now entering an era in which this is actually becoming possible and at the same time manageable. The PaaS offerings by Microsoft, specifically Azure App Service, are quickly becoming an interesting environment. Combined with big data (which to me in this scenario means: just save as much information about integration runs as possible, because we have cheap storage anyway and we'll see what we do with this data later), runtime insight, correlation and debugging capabilities are a breeze. Runtime governance: check. We don't need frameworks anymore and thus we don't need an owner anymore.
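
To sketch what "just save as much as possible" could look like in practice (the storage target and field names are illustrative assumptions of mine), every step of an integration run becomes one JSON blob, correlated by a run id:

```python
import json
import time
import uuid

def log_run_step(store: list, run_id: str, step: str, payload: dict) -> None:
    """Persist everything we know about this step; schema questions come later."""
    store.append(json.dumps({
        "run_id": run_id,    # correlates all steps of one integration run
        "step": step,
        "ts": time.time(),
        "payload": payload,  # keep the full message; storage is cheap anyway
    }))

run_id = str(uuid.uuid4())
store: list = []             # stand-in for blob/table storage
log_run_step(store, run_id, "received", {"order": 42})
log_run_step(store, run_id, "transformed", {"order": 42, "currency": "EUR"})
log_run_step(store, run_id, "delivered", {"status": "ok"})
```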

Azure App Service, in combination with the big data facilities and the analytics services, is all we need. And it is a perfect fit for Atomic Integration.

Cheers, Gijs

Is Azure App Service about Integration?

Today and tomorrow is the BizTalk Summit in London (#BizTalkSummit2015). BizTalk is about integration. The summit is about the launch of BizTalk Services as part of the Azure App Service offering. Since the first talks about offering a Microservices platform during the Integration Summit in Redmond in December last year, I've been thinking about and struggling with integration versus application development. There has traditionally always been a fine line between application development and integration. When implementing pure, messaging-only transformation, routing and protocol adaptation, we're basically talking about integration. This is used to convey information from one application to the other, so they can work with less manual copying of information, or none at all. When adding orchestration and business rules, master data services, apps and portal access, we're more and more talking about application development. The application mostly being a composition of functions provided by siloed applications and SOA services. And sometimes more modern stuff like REST services is added to the composition.

Gartner released its latest Magic Quadrant on Enterprise Application Platform as a Service earlier this year. To them, Azure App Service is (part of) an aPaaS platform. They define aPaaS as:

“Application infrastructure functionality, enriched with cloud characteristics and offered as a service, is platform as a service (PaaS). Gartner refers to it more precisely as cloud application infrastructure services. Application platform as a service (aPaaS) is a form of PaaS that provides a platform to support application development, deployment and execution in the cloud. It is a suite of cloud services designed to meet the prevailing application design requirements of the time, and, in 2015, includes mobile, cloud, the Internet of Things (IoT) and big data analytics innovations.”

Especially the part "a platform to support application development, deployment and execution in the cloud" triggered me to write this blog post. When looking at the functionalities that should be provided by such platforms, integration is of course one of the capabilities. And integration comes in many forms. But are these capabilities features that should be used to recreate legacy ways of application integration? Or are these functionalities provided to build modern applications, based on – yes, I dare to use the term again, even though Microsoft stopped using it last month when talking about the Azure App Service – Microservices?

To me it is clear: Azure App Service is an application platform providing the tools and runtime services to quickly build modern, distributed solutions. Modern, because it uses APIs. APIs should be used because Data Services, 3rd party APIs and SaaS applications are built on that paradigm. And that paradigm facilitates agile solution development and a quick time to market. Creating a modern, distributed, composite solution means using all the different building blocks provided by the aPaaS platform and the SaaS applications, Data Services and 3rd party APIs (for example the ones listed on programmableweb.com or the Azure Marketplace).

In my opinion, we should only use legacy integration paradigms in the public cloud for hybrid purposes. That is, bridging on-prem SOAP services to cloud-based API integration layers. We should not bother with providing traditional EAI- and B2B-like integration paradigms. Forget about batching EDI messages, transaction scopes and stuff like that. I'd say, just boycott that in the public cloud. Just rely on the idempotency of the independently deployed APIs. And provide an on-premises server solution for the legacy integration stuff. We shouldn't be building ESBs in aPaaS. We should be building distributed solutions based on modern paradigms.
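
A minimal sketch of what relying on idempotency could look like (the request-id convention and names are my assumptions; a real API would keep this state in durable storage): the caller sends a request id, the API remembers which ids it has already processed, and retries become harmless:

```python
processed: dict = {}  # request_id -> result; a real API would use durable storage

def handle_payment(request_id: str, amount: int) -> dict:
    """Process a payment at most once, no matter how often the caller retries."""
    if request_id in processed:
        return processed[request_id]                      # duplicate delivery: return prior result
    result = {"status": "charged", "amount": amount}      # the actual side effect happens once
    processed[request_id] = result
    return result

print(handle_payment("req-001", 100))
print(handle_payment("req-001", 100))  # retry after a timeout: no double charge
```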

Let’s use this hammer (or nail gun, as Mikael Sand called it :-)) to handle nails. And let’s all figure out quickly what these nails can look like and what we can build with them. Maybe we should first find consensus on anti-patterns for Azure App Service within the large community we have.

Cheers, Gijs

Is NoESB a euphemism for “more code in apps”?

Yesterday I tweeted the title of this blog post. But at that moment I didn't know yet that it would become the title of a blog post. And now it is :-).

I recommended the use of an ESB to a customer. Oh dear. Before I started working on that recommendation, I organized workshops with all kinds of folks within that organization and thoroughly analyzed their application landscape, information flows, and functional and non-functional requirements. The CIO had told me at the start that he was going to have the report reviewed by a well-known research firm. And so it happened.

The well-known research firm came back with a couple of comments. None too alarming. Except for one: the advice I gave, to implement an ESB, was "old-fashioned", because ESBs are overly complex centralized solutions and therefore we should consider using API Management. Whaaaaaat?

First of all, let me explain that I have found that when we talk about an ESB, quite a few people automatically assume that it is about on-premises software. That's a wrong assumption. ESBs can run in the cloud as well. As a PaaS solution, indeed.

Second, an ESB can also be used to integrate external apps and not only apps that are within your own “enterprise”.

Third, API Management is not an integration tool. With regard to integration capabilities, it’s merely a mediation tool. With strong virtualization, management and monitoring facilities, for both admins and developers. Developers that use your APIs to create new apps.

The well-known research firm also said that NoESB is a new thing that should be considered.

I think we are just moving the complexity elsewhere. And the bad thing is that we are moving it into places (mostly apps) that are developed by people who don't understand integration and everything that comes with it. People who have trouble understanding design patterns most of the time, let alone integration patterns. People who most of the time don't have a clue about manageability and supportability. People who often say things like "I didn't expect that many items to be in that list." or "What do you reckon, he actually uses this app with minimum rights. We haven't tested that!" or "A thousand should be enough.". Famous last words.

Of course the ESB has caused lots of trouble in the past, and it often still does. But that's not the fault of the ESB. That's the fault of the people implementing the ESB, using the wrong design patterns. Or using ESB technologies that do not (fully) support design patterns. Or ESBs that are misused for stuff that should have been implemented with other technologies and just bridged using federation.

I'm a bit worried about the new cloud offerings rapidly becoming popular at the moment, which are still in beta or even alpha stage. Cloud services that are offered for integration purposes, which can easily be misused to create horrific solutions. Not adhering to any design pattern at all. And not guiding the power users (DevOps) in applying them either. Do DevOps folks even know about design patterns? (Sorry if I offend anyone here.)

I'm afraid that in the near term, lots of new distributed apps will be created with brand new, alpha- or beta-stage technologies by people who are neither developers nor operations folks. People who know little about architecture and patterns and who will create fat apps with lots of composite logic. And integrate with NoESB. Disaster is coming soon. Who'll have to clean up that mess? Who you gonna call? FatApp Busters! 🙂

Cheers and happy hacking, Gijs

Integration in the SaaS and API era

A lot is happening in the IT space these days. Moore’s law is outdated, not relevant anymore. We need a new law. Gijs’ law (LOL) would sound something like this:

"IT paradigms will change every other day, but the underlying principles will remain more or less the same. Unfortunately we forget that each and every time."

So many new initiatives happen today and can become successful so quickly, that large IT companies with lots of smart developers, patents and market share can become irrelevant within months or even weeks.

Especially when it’s about social media tooling, market shares can very quickly become significant and a startup with 5 developers and a savvy marketing / business development person can beat billion dollar software companies so fast that they don’t even have a clue what’s happening to them.

Most of this is happening in the consumer space so far. But is the same going to happen in the enterprise space as well? Will that be possible? Or are enterprises more conservative?

Have a look at ifttt.com, zapier.com and appmachine.com. Just a couple of examples of smart initiatives that quickly became popular and even leaders in their space. Attracting loads of smart investor money in some cases. And lots of users. Most of the time, inventions and innovations are just smart combinations of already existing stuff. Just nobody cared enough or was smart enough to combine them until then.

The paradigms used by these startups are very interesting not only for consumers but also for businesses. Consumerization of IT can actually happen in more places (not only the device or app you use) than we can imagine.

Isn't IFTTT, for example, an interesting concept for automating business integration? "If my post gets a Like, then add this person to my CRM." "If a new person is added to my CRM, then send him the latest newsletter." "If the temperature is trending up, then adjust the air-conditioning setting." Just some examples.
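
Such trigger/action rules are trivial to express in code. A toy sketch in Python (the events and actions are made up for illustration):

```python
from typing import Callable

Rule = tuple[Callable[[dict], bool], Callable[[dict], None]]

rules: list[Rule] = [
    # "If my post gets a Like, then add this person to my CRM"
    (lambda e: e["type"] == "like",
     lambda e: print(f"CRM: add contact {e['user']}")),
    # "If the temperature is trending up, then adjust the air-conditioning setting"
    (lambda e: e["type"] == "temperature" and e["value"] > 25,
     lambda e: print("Airco: lower setpoint")),
]

def dispatch(event: dict) -> None:
    """Run every action whose trigger matches the incoming event."""
    for trigger, action in rules:
        if trigger(event):
            action(event)

dispatch({"type": "like", "user": "alice"})
dispatch({"type": "temperature", "value": 27})
```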

What these kinds of consumer products miss is the mission-critical "stamp". But on the other hand, if the service doesn't provide the right service levels, consumers will go elsewhere (and take the migration effort for granted). They *have* to provide the right service levels. No matter what tooling they use. Otherwise they'll lose their market share and become irrelevant quickly.

For enterprise users it is different. They need to know up front what kind of service levels are provided. Otherwise they don't sign up for the service. No risks can be taken. And if they get disappointed a couple of times, they get mad at the vendor. And maybe get a couple of dollars in compensation. But it's usually a hell of a job to "exit" from that vendor's solutions and services. Especially if the underlying products are not based on open standards. Migration is most often a nightmare.

The larger vendors will probably simply take over some of these new technology companies (for billions of dollars) and integrate them into their software stack. In that way quickly adapting to the changing market needs and giving the products the "enterprise treatment". The same will probably happen in the API, SaaS and Microservices integration space. Some of these new and really easy to use consumer SaaS solutions will quickly make it to the enterprise world. That's my vision, at least.

What we should not forget, though, is that things like manageability, auditability and supportability are always going to be important in the enterprise market. The problem is that new products *never* start with these pain-in-the-neck "-ability" requirements in mind. And that applies not only to consumer products but also to enterprise products, unfortunately! Larger software and SaaS vendors are shamelessly releasing services into production without having catered (enough) for these "-abilities". And what also remains important, no matter what new IT paradigm we come up with: the underlying principles remain the same. When exposing an API or Microservice, make sure it ticks all the known and proven service design check boxes. That way, the chance that end-to-end solutions built with those services or APIs will actually be manageable, auditable and supportable is much bigger.
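
As a sketch of what "baking the -abilities in" could look like (purely illustrative; all names are assumptions of mine), here is a wrapper that gives every service call a correlation id, timing and a structured log line, so the operation itself stays oblivious to the plumbing:

```python
import functools
import json
import time
import uuid
from typing import Optional

def observable(handler):
    """Wrap a service operation with correlation, timing and structured logging."""
    @functools.wraps(handler)
    def wrapper(payload: dict, correlation_id: Optional[str] = None):
        cid = correlation_id or str(uuid.uuid4())
        start = time.time()
        status = "error"
        try:
            result = handler(payload)
            status = "ok"
            return result
        finally:
            print(json.dumps({"op": handler.__name__, "correlation_id": cid,
                              "status": status, "ms": round((time.time() - start) * 1000)}))
    return wrapper

@observable
def register_customer(payload: dict) -> dict:
    """The actual business operation; no logging code in sight."""
    return {"id": 1, **payload}

register_customer({"name": "Jane"})
```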

Why do we forget about these simple software design principles and standard patterns each and every time? Developers are either too lazy (the happy flow works; check in; done) or too modest (nobody's going to use this piece of software). Maybe it helps if the folks creating and posting sample code actually post enterprise-grade samples. None of this is of course going to change, and hence my law will remain applicable to infinity and beyond. My endless loop, free of charge 🙂

Cheers, Gijs

On the desperate need for (micro)services governance

Am I becoming an old integration cynic (or a Happy Camper as @mikaelsand so sarcastically called me a couple of weeks ago during an #IntegrationMonday)? I don’t think so, but correct me if I’m wrong, please!

My current view on the latest Microservices platform craze is the following: Old wine in new bottles. Yeah, yeah, yeah, we can now run on the latest SQL Server (but not SQL Azure and not AlwaysOn) and run things in different containers. And we have the 3rd or 4th incarnation of a business rules composer that does exactly the same but in a new frame. And yet another way of composing services and building transformations. And we store and exchange things using JSON. Great! 😉 But to me, this all still falls in the category “platform alignment” or “platform modernization”. This is not innovation. This is not adding great features that help our customers solve real life issues in a much better way. This is like adding airbags and anti-lock brakes (that any car has today), but not building a car that’s 60% software and really innovative (like Tesla). This is just making sure that you use the latest technologies to be able to run your basic functions, but even these are not providing the bare minimum.

We are still not fixing what we really crave: end-to-end design- and runtime governance. And better productivity. But those are a couple more bridges too far right now. I know that governance is a boring word. But I also know that my customers desperately need it. How else can you manage and monitor your increasingly complex integration architecture? When running only on-prem integration, it's relatively easy. But we already lack the real tools here. We need 3rd party tools to manage and monitor our environment. When adding SaaS, Mobile Apps and IoT to the mix, it becomes more and more complex. Let alone when adding Microservices.

My customers want the following. Today!

  1. An easy way to register and find services (a minimal sketch of such a registry follows this list)
  2. An easy way to compose services (aligned with business processes)
  3. A proper way to handle configuration management and deploy & manage those services (and really understand what’s deployed)
  4. Real end-to-end insight in composed services, including root cause analysis
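
For the first item, a registry does not have to be rocket science. A minimal sketch (names, fields and the example URL are my assumptions; a real registry would be a durable, shared store):

```python
registry: dict = {}  # (name, version) -> metadata

def register(name: str, version: str, endpoint: str, owner: str) -> None:
    """Register a service so developers (and tooling) can find it."""
    registry[(name, version)] = {"endpoint": endpoint, "owner": owner}

def find(name: str) -> list:
    """List all known versions of a service."""
    return [{"version": v, **meta} for (n, v), meta in registry.items() if n == name]

register("customer-service", "1.0", "https://api.example.com/customers", "team-crm")
register("customer-service", "2.0", "https://api.example.com/v2/customers", "team-crm")
print(find("customer-service"))
```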

For enterprises that have a "Microsoft unless" policy, it's impossible today to use only Microsoft technology for integration purposes. They need 3rd party tools for management and monitoring. Tools that only solve part of the design-time and part of the run-time governance issues we have. Tools that can't be used end-to-end. Tools built by companies that we don't know will still exist in two years' time. Tools that are not Microsoft tools. Tools of which we even need more than one to provide for end-to-end, fully manageable services.

When, for example, my on-prem BizTalk Server exposes a back-end SOAP or other legacy service as an externally facing REST service (this is called mediation), we can more or less manage and monitor that. In a quite limited way still, because there is no real repository in which my SOAP and REST services can be registered so that solution developers are able to find and start using them. And BAM needs quite some config. But the major problem is that as soon as the REST service is consumed by API Management, which exposes it to Apps, Portals and other consumers, we have no way to find out what happens anymore; no end-to-end tracking & tracing is possible. Unless we build it ourselves. By coding. Yes, you heard me: coding; a 20th century concept of building software solutions. So 5 minutes ago!
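
The kind of thing we end up coding ourselves today is correlation: pass one id along every hop so the trace can be stitched together afterwards. A sketch, with the hop names and the header name as assumptions of mine:

```python
import uuid

def receive(message: dict, hop: str, trace: list) -> dict:
    """Each hop reuses the incoming correlation id (or mints one at the edge)."""
    cid = message.setdefault("x-correlation-id", str(uuid.uuid4()))
    trace.append({"hop": hop, "correlation_id": cid})
    return message

trace: list = []
msg = receive({"body": "order 42"}, "biztalk-rest-endpoint", trace)
msg = receive(msg, "api-management", trace)
msg = receive(msg, "mobile-app-backend", trace)
print(trace)  # one id across all hops: end-to-end tracking becomes a query
```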

What we need is platform development (by any vendor by the way, including IBM, SAP, Google, etc.) that takes supportability seriously, first. This means: first think about how your customers are going to be able to manage & monitor this in production before you start building features. Really cater for management & monitoring. Don't build software that cannot be managed and monitored, or only poorly. And once we have that, we'll be more than happy to get better design and development productivity as well. But for now, being able to get rid of the bulk of the cost of any integration environment, namely supporting it on a day-to-day basis (the part of the iceberg that's under water), would be a good start indeed.

Thanks for listening. Please spread the word. 🙂

Cheers, Gijs

p.s. I think Microsoft is still way ahead of most other vendors. Pure-play integration vendors don't have a bright future. It's a platform game.

On Immutability, Microservices and SOA

Triggered by a tweet by @rseroter (thanks for that, Richard), I decided to write this little post.

One of the major challenges with the implementation of service-oriented architecture (SOA), and therefore Microservices, is the underlying data architecture. How can loosely coupled services, which cannot keep state, make sure that the business data remains consistent, reliable and trustworthy?

In every implementation of an (enterprise) service bus or other form of integration middleware, having to cope with, for example, back-end applications and services that are not idempotent is a nightmare.

What if we could make this all a “thing of the past”, and run our services on an immutable infrastructure? No more updates to existing records. No more keeping track of separate audit information in logs, to record who did what, when and why. The infrastructure just records every bit of information as new information, thereby “elaborating” on the current status. That way, auditability is built in. That way, restartability is not an issue anymore. That way, idempotency is not an issue any more.

Pat Helland writes in his paper "Immutability Changes Everything": many kinds of computing are "Append-Only". Observations are recorded forever (or for a long time). Derived results are calculated on demand (or periodically pre-calculated). And: normalization is for sissies!
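
A toy version of "observations are recorded forever, derived results are calculated on demand" (the field names are illustrative assumptions): updates become new appended facts, and the current balance is calculated over the full history, with the audit trail built in:

```python
events: list = []  # append-only; nothing is ever updated or deleted

def record(account: str, delta: int, who: str) -> None:
    """Every change is a new immutable fact; the audit trail is the data."""
    events.append({"account": account, "delta": delta, "who": who})

def balance(account: str) -> int:
    """Derived result, calculated on demand from the full history."""
    return sum(e["delta"] for e in events if e["account"] == account)

record("acc-1", +100, "jane")
record("acc-1", -30, "john")
print(balance("acc-1"))  # 70, and we still know exactly who did what
```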

Is immutable infrastructure the holy grail of service oriented architecture and microservices?

IoT as the big driver of Microservices

The Internet of Things (IoT) market will grow tremendously in the coming years. The quantity and diversity of devices that are connected to the cloud and that will spit out data will explode. The generated data will have to be analyzed and these analyses will be used to optimize business processes. New architectures will be needed to accommodate all this and Microservices architecture is probably the way to go and here to stay.

The big hype around IoT has just started, and we can expect the largest bulk of IT investments to be made in this area. Who is not interested in optimizing processes and increasing the margins on delivered products and services? Granular insight into what exactly happens in the complete production or services chain, and being able to predict as reliably as possible what will happen in the (near) future, is worth a lot. We've got gold in our hands.

In an IoT architecture, the following components are important:

  • The Things; the devices that are connected to the cloud that, without human intervention, can generate (mostly) sensor data and which in some cases can handle instructions to take certain actions.
  • The (cloud) infrastructure and services to execute the following actions:
    • Receiving large amounts of data, generated by the Things.
    • Storing these data.
    • Analyzing these data in real-time, but also by means of predictive analysis.
    • Generating “business events” as a result of these analyses.
    • Orchestrating these business events by means of integration logic.
    • Potentially sending back data (instructions) to the Things, so that they can take some kind of action.
  • The (cloud) applications that can handle the business events and can in their turn generate new business events.

The Things are usually small machines with their own processor, operating system and sensors, and contain some (hard-wired) logic. They can be communicated with by means of lightweight APIs. They are in fact Microservices, because they are completely autonomous, run in their own process space and deliver a small service. A Thing is the ultimate incarnation of a Microservice!

In order to be able to process the enormous amounts of data at high speed and efficiently, we need an architecture that supports lightweight mediation; capable of distributing the generated events to the right receivers with as little overhead as possible. Such a receiver can for example be a NoSQL database, but also a machine learning service. The NoSQL database can store events for later processing and distribute them asynchronously to other interested parties. This enables offline analysis. The machine learning service can, by means of pre-defined models, determine if certain actions should be taken based on the current event and previous related events. This will then result in a business event. Which in its turn will be published to the lightweight mediation layer which will take care of it. All of this is very time critical and has to happen as soon as possible. Because the next events to be processed are already arriving.
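
In its simplest form, such lightweight mediation is little more than a topic-based dispatch table. A sketch (the receivers and topic names are made-up stand-ins for the NoSQL store and the machine learning service):

```python
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = {
    "sensor.temperature": [
        lambda e: print(f"NoSQL store: persisted {e}"),  # kept for offline analysis later
        lambda e: print(f"ML scorer: evaluated {e}"),    # may emit a business event
    ],
}

def mediate(event: dict) -> None:
    """Route each event to all interested receivers with as little overhead as possible."""
    for handler in subscribers.get(event["topic"], []):
        handler(event)

mediate({"topic": "sensor.temperature", "device": "thing-007", "value": 21.5})
```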

The generated business event will have to be processed further by the orchestration layer. This may involve multiple business applications and services. This can no longer be called lightweight mediation. We're talking service bus or application integration middleware here. Time criticality is less important in this layer, but guaranteed delivery and transactionality are. Traceability is also of high importance here. This orchestration layer is something we most of the time already have in place. Think of traditional integration middleware such as Microsoft BizTalk Server, Tibco or IBM WebSphere Message Broker (and of course many more). The orchestration layer can be hosted in the cloud or, still much more often the case, on-premises.

Lightweight mediation all revolves around the following tasks:

  • Lightweight transformation (for example JSON to XML and vice versa).
  • Routing.
  • Applying (security) policies.
  • Lightweight service composition.

In my (humble, ahem) opinion, the lightweight mediation layer is not a layer where you handle (or even need) things like batching, heavy XSLT transformations and long-running transactions. In this layer, we should only think about Microservices. Microservices that are indeed lightweight, single-task entities that can be composed into solutions by means of lightweight composition.
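
For example, the "lightweight transformation" task from the list above can be as small as this sketch (a flat JSON-to-XML conversion; the element names are illustrative):

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: str, root_name: str = "message") -> str:
    """Flat JSON object -> XML: the kind of cheap transform mediation should do."""
    root = ET.Element(root_name)
    for key, value in json.loads(payload).items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(json_to_xml('{"device": "thing-007", "value": 21.5}'))
# <message><device>thing-007</device><value>21.5</value></message>
```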

I foresee a hybrid integration world that is here to stay. Not because EAI can only be hosted on-premises or because we want to run integration solutions in the cloud so desperately, but because cloud integration in an IoT world is about Microservices and lightweight mediation. And this realm needs to be coupled to (often on-premises) infrastructure by means of transactions and orchestrations. Hybrid integration, in my opinion, is about connecting a Microservices architecture to traditional integration middleware, no matter where the latter is hosted. The big challenge is to provide for end-to-end run- and design-time governance. We're certainly going to need that.

IoT is the big driver of Microservices, and integration specialists will have to get to know both integration realms and be able to apply the right (combination of) patterns by means of the right (hybrid) technologies. This is a great era to live in, for us integration folks!

Cheers, Gijs

BizTalk Server, Microservices & APIs

This week I attended Integrate 2014 (a.k.a. The BizTalk Summit). It was great meeting up again with lots of folks I've got to know since starting to work with BizTalk Server in 2000 (veterans like Tom Canter, Brian Loesgen, Bill Chesnut, Richard Broida and the "younger generation" like Mikael Sand, Saravana Kumar, Michael Stephenson, Kent Weare and many more, including lots of Dutchies and Belgians). It was also nice to meet some of the Microsoft product team architects (Guru Venkataraman both online and offline, and Evgeny Popov offline) and get to understand their vision.

What keeps on surprising me, however, is how bad Microsoft is at positioning their great stuff and how good they are at confusing us all, including the integration specialists and customers. The community will need to help them bring the right message. Customers deserve this. They need to be comforted. After all, they spend hundreds of thousands or even millions of dollars and euros on implementing, supporting and migrating their middleware solutions, and their businesses rely on them.

I think it’s quite simple. In integration, we have a couple of things to take care of, now and in the future:

  • Message based A2A integration
  • Message based B2B integration
  • API based B2B and B2C integration
  • The underlying SOA architecture. Because folks, that’s what it is. At least the SOA as I have always understood and applied it.

Thinking in (orchestrated) task, entity and utility services is crucial. How to expose and compose these services, and how granular they are (micro, nano or macro), is basically not that important. But it depends on the consumers of these services.

BizTalk Server is integration middleware that you can use to create an on-prem SOA architecture and use for message-based A2A integration. It can also take care of "traditional", message-based B2B integration. We use SOAP, flat files, EDI and XML to exchange stuff in a very reliable and manageable way. We include MDM and other data services to take care of (master) data. And in order to expose (business) interfaces as APIs, we use the REST/JSON adapter. That way, we create hybrid buses using a federated bus architecture.

BizTalk Services (MABS) is the newer, cloud-based integration offering. It is basically a B2B (EDI) gateway. A very logical thing to have in the cloud, since the cloud is a big DMZ and B2B is about structured communication with customers and partners outside your own firewalls. It's a pity, though, that this has not been built on further since its first release two years ago (but we now know why: the team has shifted focus to the new microservices architecture and tooling).

Now we have two more things: APIs and Microservices.

APIs expose your business services to Apps and Portals in a managed way. Mashing up services. With granular security. And you can even monetize them that way, exposing them to affiliates and other partners. For an in-depth (IMO) view of API management and how it relates to this, please check this article.

Microservices are there to easily create solutions by combining functionalities exposed by these services. But again, these services are just plain old utility, entity and task services. But now exposed as lightweight JSON/REST. And probably more granular, so they can more easily be versioned and deployed independently. Check out Martin Fowler's vision on microservices here (also read his comment on SOA vs. Microservices). They can also be exposed through a gateway. See this article.

Service composition (micro, nano or macro, I don't care) is something you can do by means of integration patterns, implemented through some sort of pre-defined process, be it an itinerary, orchestration or workflow (though probably in the not too distant future, composition will also be taken care of by NoProcess, based on AI: the system just automatically finds out what to do best and which services it needs to do that, and will constantly fine-tune that automatically).
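
An itinerary, in its barest form, is just an ordered list of services applied to a message. A sketch (the services are dummies for illustration):

```python
from typing import Callable

def run_itinerary(message: dict, itinerary: list[Callable[[dict], dict]]) -> dict:
    """A pre-defined process: apply each service in the itinerary in order."""
    for service in itinerary:
        message = service(message)
    return message

# Plain old utility/entity/task services, however granular we like
validate = lambda m: {**m, "valid": True}
enrich   = lambda m: {**m, "customer": "ACME"}
deliver  = lambda m: {**m, "delivered": True}

print(run_itinerary({"order": 42}, [validate, enrich, deliver]))
```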

What's going to make or break your (hybrid) architecture is design- and runtime governance of these services. In order to be able to compose services into solutions, you'll have to know what services are available and how you can use them. In order to be able to manage and monitor your solutions, they have to be instrumented in a standard way, feeding management and monitoring solutions in a real-time fashion. API Management (by Azure API Management and Nevatech Sentinet) does that for APIs. SOA governance (by Nevatech Sentinet and SOA Software) does that for integration middleware. These somehow have to be tied together to provide end-to-end runtime and design-time governance (including ALM), tying in with your development environment. If I were Microsoft, I'd just buy the best solution and integrate it into the platform. Just like they did with Apiphany.

Now let’s hope that Microsoft:

  • Keeps the BizTalk Server brand for on-prem A2A and B2B integration and building a SOA.
  • Keeps the BizTalk Services brand for cloud based B2B integration.
  • Comes up with a new brand for the cloud based (micro)services architecture and tooling.
  • Versions and roadmaps these offerings separately.
  • Clearly depicts usage scenarios, including hybrid solutions. A couple of good architecture pictures should suffice.

On a final note, check out Next Generation SOA. Thomas Erl and his co-writers have done a great job on simply explaining it and it all fully applies to modern architectures including APIs and microservices. It’ll really help you think clearly about complex, services based solutions.

Cheers, Gijs


BizTalk productivity myths

Fueled by competitive, aggressive marketing efforts and other fun reasons, I have personally run into a lot of customer engagements lately that revolve around assessing the quality of integration architectures and the productivity of the integration team when it comes down to the time-to-market of new or improved processes and composite services.

Sometimes we also run into bake-offs, where we have to complete a set of "integration tasks" or, worse, "build an integration between x and y", and the competitors have to do the same; the company then makes a decision based on the speed of delivery.

First of all, let me make a bold statement here: it does not matter which integration middleware you use; bare tool development productivity will be more or less the same, but more importantly it is often not relevant at all, because it comprises such a small portion of the work to be done.

Now let me try to explain that. In order to be able to compare development productivity, a fair comparison has to be made. Apples should be compared with apples, not with pears. This proves to be quite hard in practice. Of course, the effort put into creating an XSD schema, creating an XSLT transformation or setting up the endpoints can be compared quite easily. The same applies to routing rules. And maybe also to orchestrations. But that's often not how these comparisons or bake-offs are designed.

The problem most often is, that the difference lies in the integration patterns that are used. A fair comparison can only be made when comparing implementations that follow the exact same integration patterns. The familiar patterns we all know from Hohpe and Woolf (for reference: eaipatterns.com).

It is often said that the biggest competitor for BizTalk (or, for that matter, any integration middleware product) is the (.Net) developer. And that's so true. How easy is it to integrate two systems by just creating a peer-to-peer integration based entirely on code and direct database access? Sure, it works. But can you reuse it? Can you expose the component that fetches data from one system as a reusable service? So that you can create other compositions and processes? Building a SOA?
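
The difference in a nutshell, as a sketch (all names are illustrative): wrap the fetch as a small entity service, and suddenly more than one composition can use it, instead of every app reading the database directly:

```python
def get_customer(customer_id: int) -> dict:
    """Reusable entity service: the one sanctioned way to read customer data."""
    # stand-in for the real back-end call (not direct database access per app)
    return {"id": customer_id, "name": "ACME", "segment": "enterprise"}

def billing_integration(customer_id: int) -> dict:
    return {"invoice_for": get_customer(customer_id)["name"]}

def marketing_integration(customer_id: int) -> dict:
    return {"campaign_target": get_customer(customer_id)["segment"]}

print(billing_integration(7))    # both compositions re-use the same service
print(marketing_integration(7))
```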

Integration middleware like BizTalk Server guides or sometimes “forces” you to use the right integration patterns where other tools are sometimes not that strict. The main reason for adhering to proven patterns is that the middleware has to be scalable, reliable and secure. And of course we don’t want to end up with spaghetti-through-a-broker. We want an ESB pattern. It also has to be possible to bridge sync and async connections. Transactions have to be handled. Guaranteed delivery has to be possible. The 8 principles of service design have to be followed (see previous posts).

So, in short: yes, it may occur that building an integration between two systems takes more (or less) time in one product compared to another, but… make sure to compare apples to apples.

My advice for companies designing integration development productivity PoCs or bake-offs:

  1. Make sure that the parties implement the same integration patterns
  2. Make sure that the 8 principles of service design can be validated in a SMART way
  3. Make sure that the solutions they implement are supportable
  4. Always keep in mind that 75% of the effort lies in discussing and testing interfaces

Hope you find this post useful. Feel free to share and use (parts of) it when you need help in similar situations.

Cheers, Gijs