The end of monolithic platforms

With the rise of functionally rich PaaS services delivered by the so-called cloud mega vendors such as Microsoft, quite a lot has changed. At first, this resulted in confusion in the market, but right now we can see that organizations are really starting to understand the philosophy and embrace it.

Think in services

Especially in the early days, cloud services were difficult to understand for both consumers and suppliers. Data and integration products were the hardest to grasp. In years past, it was quite easy for organizations with a Microsoft-unless policy: if you wanted a solution for data you got SQL Server, and if you wanted a solution for integration you got BizTalk Server. It was very easy to compare features between those Microsoft products and the ones provided by other vendors such as Oracle or IBM. Work through a checklist, score and choose.

Nowadays, these kinds of data and integration functionality are no longer delivered as products but as sets of cloud services. At first, this resulted in great confusion. What do we compare with what? Especially when comparing a set of cloud services on the one hand with a monolithic platform on the other.

Say goodbye to platform “products”

Even for Microsoft this was difficult. On the one hand there is a very powerful message, because Microsoft Azure provides all the building blocks to build a mature integration platform; on the other hand Microsoft has to compete with pure-play vendors that provide monolithic products. Products that are very good at one thing, and in which the various components are 1:1 dependent on each other. Microsoft also provided one of those products: BizTalk Server. A fantastic integration product, but monolithic and not very service oriented. The processes in such a product are designed beforehand and therefore “baked in”. And therefore not very flexible.

However, we saw that competing with the pure-plays was (and still is) not easy. At some point in 2018, Microsoft chose to rebrand and bundle the Azure services needed to build an integration layer as “Azure Integration Services”. The same happened with “IoT Suite”, for example. Of course this was done to make it easier for potential customers to make product comparisons. But personally, I thought this was an admission of weakness by Microsoft. It basically does not make sense at all. It is precisely the power of the platform to provide a set of loosely coupled services with which you can build very flexible integration layers. And besides that, you get the whole Azure platform with it, which you can make part of your end-to-end processes, whether those are integration-, data-, app- or collaboration-oriented solutions.

The same goes for data solutions. These days we have relational data stores, table stores, document stores, blob stores, data lakes, etc. In the past, attempts were made to put all of that into SQL Server. In Microsoft Azure, these are all “loose” services that run in their own scalable containers and that can be connected to each other through (among others) Azure Data Factory, to create the data platform that suits your organization best. And on top of it you can deliver all kinds of analysis services, such as Databricks or Power BI, to turn the data into information.
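To make this concrete, below is a minimal sketch of what “connecting loose services” looks like in code, using the azure-mgmt-datafactory Python SDK. All the names (subscription, resource group, factory, datasets) are illustrative assumptions, and the two datasets are assumed to already exist in the factory; the pipeline simply copies data from a blob store into a relational store.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-datafactory are
# installed and two datasets ("BlobInput", "SqlOutput") already exist in the
# factory. All names below are illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSource, CopyActivity, DatasetReference, PipelineResource, SqlSink,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One copy activity: move data from a "loose" blob store into a "loose"
# relational store. Each store scales on its own; the pipeline just wires them.
copy = CopyActivity(
    name="CopyBlobToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="BlobInput")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SqlOutput")],
    source=BlobSource(),
    sink=SqlSink(),
)

client.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "CopyPipeline",
    PipelineResource(activities=[copy]),
)

# Kick off a run on demand.
run = client.pipelines.create_run("my-resource-group", "my-data-factory", "CopyPipeline")
print(run.run_id)
```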

Non-functional aspects

What is very interesting is that Microsoft has not only provided all these great features as services, but at the same time has invested a lot in architecture guidance and non-functional aspects. Where you previously had siloed applications such as BizTalk Server and SQL Server (and comparable products from other vendors), in which all the management and monitoring tools were built in, you now see that on top of all cloud services there is tooling for end-to-end management and monitoring. For example, for deploying workloads by means of infrastructure-as-code and for gaining insight into the relationships between services. Right now we can have loosely coupled platform solutions that at the same time are very manageable and quite easy to monitor. That’s real progress! I predict that monolithic integration, data, collaboration and app platforms will die a slow death. Long live PaaS!
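As an illustration of that end-to-end monitoring, here is a minimal sketch using the azure-monitor-query Python package, assuming your services send diagnostics to a Log Analytics workspace. One query spans whatever loosely coupled resources report into it; the workspace ID and the query itself are illustrative.

```python
# A minimal sketch, assuming azure-identity and azure-monitor-query are
# installed and the services log to a Log Analytics workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# One query across all services that report into the workspace: monitoring
# on top of the loosely coupled platform instead of baked into a silo.
response = client.query_workspace(
    workspace_id="<workspace-id>",
    query="AzureDiagnostics | summarize count() by ResourceProvider, Category",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```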

Cheers, Gijs

Serverless. Basta!

I was once a system programmer in a Unix world. Brilliant OS. The first version I got to know really well (and I mean, deep down really, really well) was System V. Later on, I worked with (Open)VMS and also spent quite some time porting stuff to Linux (Red Hat). I mean communication protocols, compilers, etc. The really hard stuff. And fun it was. Getting stuff to work was very fulfilling, especially in a time when you had to build your own “platforms”, like 4GLs, RDBMSs and integration middleware, in order to give your internal consumers better solution-building productivity. Building all this stuff was awesome!

And then I was introduced to Windows. Lipstick on a pig it was. And in quite some cases still is. We served 8 developers on a 486 running Red Hat Linux; with Windows 3.11 on the same machine we could serve only one. Aaaargh. But hey, developing really visual stuff was a nice change as well.

But the best thing today is: we don’t have to care anymore, because we’ve got PaaS now! Who cares about servers? We just want to run our workloads serverless. Let the hardcore developers build this cool platform stuff (and make it very, very easy for us), so that we ordinary folks can just use it to deploy everyday workloads.
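To show how little “server” is left in serverless, here is a minimal sketch of an HTTP-triggered Azure Function using the Python v1 programming model (the function.json binding file that normally accompanies it is omitted). No OS, no patching, no capacity planning; just the workload.

```python
# A minimal sketch of an HTTP-triggered Azure Function (Python v1 model).
# The platform provisions, scales and patches everything underneath.
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}. No server in sight.")
```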

But lately I was introduced to this relatively new phenomenon of containers. I think that’s a step back. At least for us people who just want to deploy common workloads. I understand that for architects and developers working for large-scale B2C companies (Facebook, Google, Amazon, etc.), containers and K8s and the like are great. But for the average company, it’s overkill. And overly complex. And back to virtual machines, just a bit smaller and more contained. And somewhat easier to deploy.

But, we don’t need that (in our platform). We just want less complexity. And more less. Serverless. Basta!

Just my 0.02 Q* of course.

Cheers, Gijs

*Want some for free as well? Just register here through my personal invite link.

The rise of the Ethereum blockchain frameworks

Lately I’ve done quite some research on blockchain. I’ve been involved in a number of inspiration sessions for our customers, trying to come up with good use cases for blockchain in their respective industries. We’re in the process of defining and executing some exciting PoCs (proofs of concept) right now, mainly in the logistics vertical.

The Ethereum blockchain seems to be(come) the dominant platform for all kinds of initiatives. Ethereum is also doing quite well from a token market value point of view at the moment, and that’s not hard to understand. It’s the go-to platform for anything that has to do with smart contracts. A lot of current ICOs (initial coin offerings) run their technologies on the Ethereum blockchain. Some of them are good and probably have a bright future, some of them are hyped but basically hot air, and some of them are outright shady and probably scams. But hey, “a new crypto sucker is born every day”, as Microsoft’s blockchain principal architect Marley Gray said during a keynote at a blockchain conference.

On the Microsoft Azure platform, it’s quite easy to set up an Ethereum blockchain. With the Coco Framework, Microsoft has built exciting preview stuff that can run on multiple blockchain platforms. Check out the paper here.

For me it’s clear that the blockchain technology itself is not the interesting part. Of course, having immutable records and a consensus model to cut out the middleman is *very* important, but the blockchain itself will become mainstream like any other database technology, like SQL or NoSQL. What makes it worthwhile is the concept of smart contracts. And that’s what the Ethereum blockchain is quite good at. It is, however, quite hard to develop and test smart contracts. I foresee that in the short term, lots of startups will come up with smart things around smart contracts.
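To give a feel for why developing and testing smart contracts is still hard, here is a minimal sketch using the web3.py package against a local development node. The contract, its abi/bytecode and the balanceOf/settle functions are all illustrative placeholders, not a real protocol.

```python
# A minimal sketch, assuming the web3 package and a local development node
# (e.g. Ganache) listening on port 8545. The contract is a toy example.
from web3 import Web3

abi = [...]         # output of the Solidity compiler (placeholder)
bytecode = "0x..."  # compiled contract bytecode (placeholder)

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Deploy the contract from the node's first unlocked account.
contract = w3.eth.contract(abi=abi, bytecode=bytecode)
tx_hash = contract.constructor().transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

# Interact with the deployed instance: a (hypothetical) read-only call and a
# (hypothetical) state-changing transaction.
deployed = w3.eth.contract(address=receipt.contractAddress, abi=abi)
print(deployed.functions.balanceOf(w3.eth.accounts[0]).call())
deployed.functions.settle().transact({"from": w3.eth.accounts[0]})
```

Every state change costs gas, every test needs a running node with funded accounts, and mistakes are immutable once on a real chain. That is exactly the gap these startups are trying to close.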

I’ve bumped into two of them that are worth mentioning, also because they are both legitimate and did their ICOs in North America:

  1. Blockmason. They have developed the Credit Protocol on top of Ethereum, which provides a badly needed smart contract for handling credit (on which this world turns), including the automatic settling of it between parties. They developed this technology before they did their ICO. And they are SEC compliant, which is a first in crypto land. They have interesting partnerships, like the one with Coral Health, who are doing a pilot with their technology on settling payments between doctors, patients and insurance companies. Without the need for a third party. Very interesting technology, for which they have applied for patents. I think lots of initiatives will use their technology to implement similar scenarios. Their token is named BCPT. Check out Blockmason.io for full details.
  2. Etherparty. They have created technology that makes the development of smart contracts easier. Basically, they do for smart contracts what Wix did for websites. Without any programming knowledge you can develop smart contracts that run on any compatible blockchain, the most used one obviously being Ethereum. I foresee that they will come up with lots of out-of-the-box templates for smart contracts, making the implementation of blockchain initiatives a lot quicker. Their token is named FUEL. Check out Etherparty.com for full details.

So, just like we had frameworks on top of SQL databases and integration software, we’re now seeing the rise of smart frameworks and templates on top of blockchain. We’re definitely coming out of the blockchain stone age. Exciting times!

Cheers, Gijs

BizTalk open source: a win-win?

Yesterday, Microsoft announced that its on-premises integration middleware product BizTalk Server will become partly open source. As a first step, all the 10K+ schemas (mostly B2B, EDI) have been released and are now available on GitHub.

The next step will be to make it possible for the community to contribute to the adapters, pipelines and tools.

My take on all this is that it has the potential to become a win-win for Microsoft, partners and customers alike, provided that a number of things are executed well. Let me try to explain how:

  1. Microsoft BizTalk Server is rapidly turning into the on-premises LOB proxy (or gateway) that makes it possible to bridge legacy on-premises applications to the Azure iPaaS (mainly Logic Apps and API Management, plus Service Bus, Event Grid, etc.). This is how Microsoft IT positioned it during Integrate 2017 in London. The bottom line of this (great!) architecture: BizTalk = legacy gateway, iPaaS = all the logic and modern integrations (a minimal sketch of this bridge pattern follows the list below).
  2. Becoming (partly) open source, means that the community can contribute to the quality and number of schemas, adapters, pipelines and tools. This makes the role of BizTalk as an on-prem LOB proxy even more relevant, enabling even more legacy applications to bridge the gap to the public cloud. BizTalk basically has the potential to become an even greater on-ramp to the public Azure cloud.
  3. Microsoft will remain focused on making sure the core BizTalk engine stays relevant, runs on the latest versions of their own platform (Windows, SQL Server, Visual Studio, .NET) and provides a terrific bridge to the public Azure cloud. This includes the non-functionals, like end-to-end hybrid monitoring and management.
  4. The community has to be supported and made enthusiastic about contributing to what we can basically call the “on-premises LOB adapters”. This is going to be the hard part of this open source endeavor, in my opinion. But, as we saw in the past with ISVs leveraging the popularity of BizTalk to position and sell their adapters, basically “using” BizTalk to become more successful themselves, open sourcing the adapters can have the same effect. But this time it’s not about leveraging BizTalk; it’s about leveraging the hybrid integration stack. Time will tell. In the meantime, Microsoft can stay focused on the core and the bridge to the public cloud, and can probably transfer a couple of engineers to the iPaaS teams.
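To make the bridge pattern from point 1 concrete, here is a minimal sketch of the on-premises side handing a message to the Azure iPaaS via Service Bus, using the azure-servicebus Python package. In the real architecture BizTalk’s adapters play this role; the queue name, connection string and payload are illustrative.

```python
# A minimal sketch of the on-prem-to-cloud bridge, assuming the azure-servicebus
# package, a queue named "legacy-orders" and a Logic App (or other iPaaS
# consumer) on the receiving end. In the architecture above, BizTalk's
# adapters do this job.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"


def forward_to_cloud(legacy_payload: str) -> None:
    """Hand a message from a legacy LOB application to the Azure iPaaS."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(queue_name="legacy-orders") as sender:
            sender.send_messages(ServiceBusMessage(legacy_payload))


forward_to_cloud('{"orderId": 42, "source": "legacy-erp"}')
```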

My $.02 only.

Cheers, Gijs

The can-you-do-that guys

Here at the #Integrate2017 event in London (26-28 June), I loved today’s keynote by Jim Harrer (Microsoft Pro Integration Group PM). During the last 5 minutes of his presentation, he nailed it!

As I wrote before in another blog post (“Integration is just one of the skills needed”), iPaaS is not just about integration; it’s about creating business apps. Integration is one of the disciplines in the multi-disciplinary teams that build solutions. And these solutions are more and more built using the 80+ Azure PaaS building blocks (see my most recent blog post, “iPaaS, what else?”). These building blocks are not just about moving information from one location to another (including from and to hundreds of SaaS apps), but more and more also include big data and AI (artificial intelligence) capabilities, making it possible to integrate things like cognitive services, machine learning, etc. Creating real end-to-end business apps that the business wants, now! With technology that, until a year ago, was simply not available (at a reasonable cost) to smaller companies.
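As a taste of how easily such a capability plugs in, here is a minimal sketch calling the Cognitive Services Text Analytics sentiment endpoint over plain REST. The resource name and key are illustrative assumptions; in a Logic App you would use the ready-made connector instead.

```python
# A minimal sketch, assuming a provisioned Text Analytics resource and the
# requests package. Endpoint name and key are placeholders.
import requests

endpoint = "https://my-text-analytics.cognitiveservices.azure.com"
key = "<subscription-key>"

payload = {"documents": [
    {"id": "1", "language": "en",
     "text": "The new integration platform is fantastic."},
]}

resp = requests.post(
    f"{endpoint}/text/analytics/v3.1/sentiment",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=payload,
)
resp.raise_for_status()

for doc in resp.json()["documents"]:
    print(doc["id"], doc["sentiment"], doc["confidenceScores"])
```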

Being an integration guy, you have a special role in these teams. You are the one who connects the building blocks and makes sure that the business app is actually resilient. And that you can properly monitor and manage the solution.
The time-to-market for these apps is phenomenal. Instead of weeks or months, you can create value in hours or days! And the speed at which Microsoft is adding not only the functional but, more importantly, the non-functional features is amazing. They build the platform, we build the solutions!

During the conference we’ll of course learn about new features that have just been released or will be released in the very near future. But to me, that is not the most important part anymore.

The IT world is clear to me now: integration folks have to become “the can-you-do-that guys”.

We need to show our customers what is actually possible by assembling all these great building blocks into very valuable business solutions. Just do a PoC or pilot and show the customer you’re working with what you can build in such a short time. Sooner or later, your customer will also be saying “iPaaS, what else!”. Our customers are all becoming software companies. We can help them do just that!

Cheers, Gijs

iPaaS should not become your Trojan Horse

I’m currently involved as a cloud architect at an insurance company, working on their hybrid cloud reference architecture. They are moving from a hosted environment to a hybrid cloud, and cloud engineers are working on the detailed designs for networking, storage, subscriptions, identity & access and integration. Integration between the private cloud and the public cloud, as well as enterprise integration and B2B integration.

The customer currently uses BizTalk Server for enterprise and B2B integration. The hosted environment is managed by a third party, which means that the customer’s IT department focuses only on the functional management of applications. This also applies to BizTalk Server. New services are provisioned by filling out a form, waiting a couple of days or weeks, and getting access to the new service. All is well so far, except for time-to-market, cost and access to the latest and greatest technologies.

Hybrid cloud is needed to shorten time-to-market and innovate business processes, and at the same time decrease IT spend on infrastructure. The whole point of (hybrid) cloud is its on-demand characteristics, and being able to move away from traditional solution development to agile solution development, deployment and operations. Because provisioning is so fast, solution development can become much faster as well.

This is not an easy endeavor, I can tell you.

Apart from the organizational aspects (DevOps is a topic of its own; is your organization ready for it?), the constantly evolving cloud and the constant pressure from the business (“hey, we now have shorter time-to-market; let’s see it!”) make life not exactly easy for the IT folks. We’re working on the reference architecture, but in the meantime several projects are underway to deliver cloud-based solutions. Some SaaS, some PaaS and some we-don’t-really-know-what-kind-of-aaS. Every day we run into issues with regard to security and governance. You’re really going to store sensitive customer data in a NoSQL database running on a public-facing Linux box? Let’s think a little more about that. Refactoring the (hybrid) cloud solutions that have been delivered so far is the first thing we have to do once the reference architecture and the detailed hybrid cloud designs are done. That, and making sure the organization can actually cope with hybrid cloud deployments, management and governance.

In the “good old hosting days”, security was designed, applied and governed. Processes were in place and everything worked fine. Today, however, checking or unchecking a single box in the Azure portal can have quite an impact. Suddenly, data leaks are possible. And we thought all was covered. Not.

Infrastructure as code (which applies not only to IaaS but also to PaaS) is mandatory. Clicking in a portal should be avoided. Cloud resources have to be deployed and managed by means of code. Versioned code. Code that has been reviewed and tested. Since full DTAP environments are rapidly becoming a thing of the past, your DTAP process has to be pretty well in place to prevent screw-ups with production data.
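What that looks like in practice: below is a minimal sketch of deploying a reviewed, versioned ARM template with the azure-mgmt-resource Python SDK. The subscription, resource group and template file are illustrative assumptions; the point is that nothing gets created by clicking around in the portal.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-resource are
# installed and template.json is a reviewed ARM template from source control.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("template.json") as f:
    template = json.load(f)

# Everything goes through code: versioned, reviewed, repeatable.
deployment = client.deployments.begin_create_or_update(
    "my-resource-group",
    "release-42",  # deployment name ties back to the release in version control
    {"properties": {"template": template, "parameters": {}, "mode": "Incremental"}},
).result()

print(deployment.properties.provisioning_state)
```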

Why is the title of this blog post about iPaaS (Azure Logic Apps, API Apps, Functions, API Management, Service Bus) and a Trojan horse specifically? Because integration is at the core of everything when it comes to hybrid cloud. All aspects of the hybrid cloud architecture are touched here. Before you know it, things are tried out and put into production with all the potential risks as a result. Let’s protect ourselves from that.

Four things are important here:

  1. Have a reference architecture for hybrid cloud. New rules apply here! Hybrid cloud is not private cloud. It should at the very least contain the architecture principles and high-level requirements that apply to all hybrid cloud solutions deployed. An example principle: “Passwords should be stored in a safe place”. The high-level requirement resulting from that: “Passwords used in scripts and code should be stored in Azure Key Vault” (see the sketch after this list).
  2. Document the Solution Building Blocks. Azure is a box of Lego bricks. Make sure that you know how to use all those building blocks, and make sure that everybody knows the rules about how to use them in which scenario. Solution Building Blocks are not evil, but necessary artifacts. When do you use SQL Database, Blob storage, DocumentDB? How does security relate to these choices?
  3. Hybrid cloud needs hybrid service management. Make sure your IT service management sees your private cloud, hosted cloud and public cloud as one hybrid cloud and is able to manage that.
  4. Design and apply the right level of governance. Architectures, principles, requirements and solution building blocks are completely worthless if you don’t make sure they are actually used (in the right way). Peer reviews. Signed-off solution designs. Random inspections. These are all necessary things that you should cater for in your DevOps teams.
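As promised under point 1, here is a minimal sketch of that Key Vault requirement in practice, using the azure-keyvault-secrets Python package. The vault name and secret name are illustrative; the script can live in version control because no password ever appears in it.

```python
# A minimal sketch, assuming azure-identity and azure-keyvault-secrets are
# installed and a vault named "my-vault" holds the secret.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# The script is checked into source control; the secret is not.
db_password = client.get_secret("sql-admin-password").value
```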

And remember, these things apply to your organization, but also to your IaaS, PaaS and SaaS solution vendors.

Let’s keep the Trojans out of your hybrid cloud!

Cheers, Gijs

Integration is just one of the skills needed

In modern enterprises, business solutions are built by agile teams. Agile teams are by design multi-disciplinary. The Product Owner is responsible for the product backlog, and the team(s) build and implement the stuff needed. In these modern enterprises, an innovation & differentiation layer on top of the systems of record is an absolute necessity to enable shorter time-to-market for these solutions and, in some cases, digital transformation.

In the Microsoft world, this innovation & differentiation layer is basically provided by Microsoft Azure and Office 365. The former is needed for the (big) data, business intelligence, integration, process execution & monitoring and web & mobile UX capabilities, the latter for the collaboration and document handling capabilities. In Microsoft-centric application landscapes, Dynamics 365 will be the way to go for your systems of record for customer interaction (CRM) and resource planning (ERP). In the future, whenever you’re ready as an enterprise, the whole two-speed or bi-modal approach will be a thing of the past, and the cloud infrastructure will enable just one-speed IT: full speed! But it will take several years before that becomes a reality most enterprises can deal with. Because it’s not only an IT thing, but even more an organizational thing (how do you keep adopting these agile solutions; won’t we become tech-fatigued?).

Many skills are needed in the agile teams we deploy today. And when scaling the teams, a layer on top of them is needed to manage the portfolio and program aspects in a better way. The Scaled Agile Framework (SAFe) is a good example (and one I have personal experience with; I’m a SAFe Agilist :-)). Quite a few enterprises have implemented it, or an alternative such as LeSS (Large-Scale Scrum). This also facilitates DevOps at a larger scale.

Even in agile environments, the enterprise architect is still needed 😉 He or she is responsible for the overall architecture of the solutions that get built. Business architecture, information architecture and technical architecture are all still very important. Together they define the frameworks within which the agile teams build their solutions.

We also see more and more that agile teams focus on certain business domains, and that common practice within microservices architecture is to not try to build stuff that can be reused by other teams as well, unless we have one or two specific teams working on reusable, enterprise-wide functionality. It’s all about business value first.

Is the integration competence center still helpful in such an environment? Well, the role of the ICC will change. The ICC (just like any other CC) will no longer act as a “sleeping policeman” (pun intended). The ICC will basically become part of the enterprise architecture role, focusing specifically on the frameworks (to which the agile teams should adhere; comply or explain) and on enterprise-wide functionality. Everything else will be solved by the agile teams, for their specific domains. That way we are far more flexible, and we still build reusable enterprise-wide stuff where absolutely needed.

Will integration projects still exist in the future?

I think not, or only in a very limited number of cases. Integration is just part of business solutions and should be treated as such. The API developer, the UX developer, the DBA, the security expert: they’re all integration-capable team members. Commodity integration needs can be fully handled by the agile teams that way. And for the integration specials, like reusable building blocks and patterns or the more difficult one-offs, we just involve the ICC, which in larger organizations is probably a separate agile team whose domain is cross-enterprise. But they will no longer be the roadblocks that used to slow down business solution development.

Will this change the way system integrators work? Absolutely. The more we can do to deliver complete agile teams (including big data, UX and collaboration folks), the better we can serve our customers: helping them become agile too, shorten their time-to-market and maybe even transform and redefine their business models. As an integration specialist or integration team alone, you can’t do that. The folks who understand and can implement the whole Microsoft platform to deliver real business solutions are going to be the ones that enterprises turn to…

Cheers, Gijs

The business value of Microsoft Azure Logic Apps

I’ve written a paper on the business value of Microsoft Azure Logic Apps for integration. It is mainly useful for CIOs and IT managers considering Azure PaaS services for their integration needs.

It describes the components of Microsoft Azure and then drills down into aPaaS and iPaaS to position Logic Apps, API Apps and API Management. Furthermore, it describes common integration needs in complex application landscapes, such as keeping data in sync, creating composite apps, involving supply and demand chains, and integrating apps and portals.

Next, it describes the real business value, so that you can explain it to your business stakeholders as well. This includes creating value-add on top of commodity SaaS apps, leveraging investments in legacy applications (your systems of record), decreasing time-to-market, channel renewal and agility: basically digital transformation, using Gartner’s pace-layered application model.

Lastly, it describes the different integration themes that Logic Apps can help you with.

Enjoy!

Please share as you like.

Cheers, Gijs

My take on the Gartner iPaaS MQ 2016

Yesterday, Gartner released its 2016 Magic Quadrant (MQ) for Enterprise Integration Platform as a Service.

The strategic planning assumption it is based on reads: “By 2019, iPaaS will be the integration platform of choice for new integration projects, overtaking the annual revenue growth of traditional application integration suites on the way”.

I think they’re right.

Microsoft did not make the Leaders quadrant this time. This is mainly because the 2016 MQ is based on cloud services that are generally available (GA), and the only such service available from Microsoft today is Microsoft Azure BizTalk Services (MABS). Which is of course far from complete, as we all know. And it is based on an architecture that has by now been rendered obsolete by the arrival of Azure App Service.

The relatively good news is that Microsoft did make it into the Visionaries quadrant, but it still has IBM, Oracle and SAP ahead of it. That’s not so good.

My take on all this:

  • Gartner correctly positioned Microsoft in this year’s MQ for iPaaS, based on what’s actually available;
  • We should quickly forget about MABS and start looking forward (Microsoft can’t afford another architecture and delivery screw-up like MABS);
  • Microsoft needs to quickly release a stable first version of App Service to the public. I really hope Q2 will indeed be GA time for App Service with Logic Apps, and that it functionally delivers on its promise;
  • Microsoft needs to strongly position App Service as the Application Platform as a Service, and at the same time strongly position Logic Apps with API Apps and the Enterprise Integration Pack (or whatever it will be called at GA time) as the Enterprise Integration Platform as a Service. Customers see them as two different things (although I think that will change in the future; see my earlier post on this).

I strongly believe in App Service, so now let’s make sure that Microsoft and system integrators nail the Ability to Execute as quickly as possible and kick some Mule, Dell Boomi, Informatica and SnapLogic @ss. The strong Azure (integration) community should use its forces not only to make sure that the world knows about Azure and how to use it, but also to keep providing the best real-world feedback to the product teams, so that they continually make the right choices in backlog prioritization. I want Microsoft to be in the upper-right corner of the iPaaS MQ for 2017, and it should be, with all the effort being put in right now.

We all know CIOs take Gartner seriously, so Microsoft (and system integrators) should take Gartner seriously as well.

Cheers, Gijs

Azure Stack use cases

On January 29th, Azure Stack went into Technical Preview.

Update July 14th 2017: Azure Stack is now generally available.

Having discussed Azure Stack with several of my customers in the past weeks, I’ve come to the following list of potential use cases for it (in no particular order):

  • Private cloud environment (duh!): Host your own private cloud with (more-or-less) the same capabilities as the Microsoft Public Azure Cloud. You can maybe even organize visits to your cloud data centers 🙂
  • On-ramp to public cloud: Gently try Azure in your own private environment before you migrate (parts of) your solutions to the public cloud, without having to re-architect!
  • Capex development & test environment: At a fixed capex cost, give your development team an environment in which they can code and test. Then deploy to the public cloud (or private, or hybrid; whatever you want) without having to re-architect!
  • Hybrid cloud: Create hybrid cloud solutions, based on the same Azure architecture. Use the private cloud part of the hybrid architecture for stuff you don’t want in the public cloud. Use the public cloud for all stuff that can go there. Mix-and-match, without having to re-architect!
  • Cloud bursting: Run things mainly in your private cloud, and use the public cloud to offload (parts of) your workloads when there are (seasonal) peaks in your load.
  • Exit strategy insurance: Have the comforting feeling and the insurance that, when for some reason or other you don’t like using the Microsoft public Azure cloud anymore, you can just migrate your solutions back to your private cloud, without having to re-architect!

Just my $0.02 of course.

Cheers, Gijs