Serverless. Basta!

I was once a system programmer in a Unix world. Brilliant OS. The first version I got to know really well (and I mean, deep down really really well) was System V. Later on, I worked with (Open)VMS and also spent quite some time porting stuff to Linux (Red Hat). I mean, communication protocols, compilers, etc. The really hard stuff. And fun it was. Getting stuff to work was really fulfilling, especially at a time when you had to build your own “platforms” (4GLs, RDBMSs, integration middleware) to give your internal consumers better solution-building productivity. Building all this stuff was awesome!

And then I was introduced to Windows. Lipstick on a pig it was. And in quite some cases it still is. We served 8 developers on a 486 running Red Hat Linux; with Windows 3.11 on the same machine, we could serve only 1. Aaaargh. But hey, developing really visual stuff was a nice change as well.

But the best thing today is: we don’t have to care anymore, because we’ve got PaaS now! Who cares about servers? We just want to run our workloads serverless. Let the hardcore developers build this cool platform stuff (and make it very, very easy for us), so that we ordinary folks can just use it to deploy everyday workloads.

But lately I was introduced to the relatively new phenomenon of containers. I think that’s a step back, at least for those of us who just want to deploy common workloads. I understand that for architects and developers working at large-scale B2C companies (Facebook, Google, Amazon, etc.), containers, K8s and the like are great. But for the average company, it’s overkill. And overly complex. And basically back to virtual machines, just a bit smaller, more contained and somewhat easier to deploy.

But, we don’t need that (in our platform). We just want less complexity. And more less. Serverless. Basta!

Just my 0.02 Q* of course.

Cheers, Gijs

*Want some for free as well? Just register here through my personal invite link.

A Darwinian view on algorithms

A common notion these days is that “data is the new gold”. My opinion differs. Data is just a common natural resource: not that scarce, and therefore not worth that much. But when combined in a smart way, it can become gold. The new gold, then, is algorithms.

“Algorithms are the new gold, not data.”

Facebook and Google are companies that found this out, after initially (and quite naively) just providing neat features to end-users. By now, they have become behavior-manipulation empires (see Jaron Lanier’s great TED talk on this). The algorithms these companies apply are opaque. We don’t have a clue what’s happening there, and to be frank, sometimes they probably don’t know themselves either.

As the news of the last couple of weeks has shown, these algorithms can sometimes turn into real Frankensteins. But I presume lots of smart data scientists work at these companies and know what they’re doing. Hopefully.

But what will happen when not only the data that is used as input can be manipulated (for example by generating fake data through the use of trolls, from Russia or wherever), but also the algorithms? What if these algorithms can be injected with malicious code? And produce results that better fit the people or organizations that put the code there?

We get manipulated by algorithms more than we know, and probably more than is good for us. The singularity is already here; we just don’t want to admit it yet. By far the best example: these days, probably more babies are born to couples matched by algorithms than to couples who found each other the good old-fashioned biological way.

What if evil people can influence these algorithms? And start generating “odd” (but “more desired”) couples by manipulating Tinder or the other platforms people use to find potential life partners? What if big data had been available during WW2 and the algorithms could have been manipulated? Would we all be speaking German now? That’s a pretty scary thought. And there are plenty of dictators, or dictator-like folks in high places, who could do this right now.

So, what’s next? How can we protect the algorithms? How do we find out more quickly (and openly) when systems have been hacked? Is open source the way to go? Should we stop these platforms from getting too big? Do they need more internal and external governance? These will be the major topics we have to deal with in the short term; otherwise evolution will no longer proceed slowly and gracefully, and Darwin’s insights will soon be obsolete.

Cheers, Gijs

The El Niño of the digital ecosystems

After digitizing current processes we started optimizing them. All purely focused on efficiency. Saving costs. Improving margins. Happier customers. Now is the time to start the real digital transformation. A revolution is needed.

Many organizations are working on process optimization. They are looking at eliminating human interaction by means of digitization, integration and robotization. Organizations working lean go a bit further and continuously try to improve the efficiency of their processes. Mostly, this is all based on current revenue streams and business models, and on the assumption that the organization has a relevant function in the ecosystem and will keep it.

What’s happening around us?

Every organization is part of an ecosystem, and these ecosystems are becoming more and more digital. As parties exchange information digitally, the ecosystems become more and more efficient. First we used EDI; today, in modern application landscapes, we increasingly use APIs. APIs that are becoming smarter and smarter and can find each other more or less automagically. APIs that are a direct interface to an existing application, most of the time a system of record. Not a very “interesting” application usually, only capable of handling basic administrative needs such as orders and ship notices.

The middleman in crisis

There are only a few systems of record that really deliver true added value for an organization. The real added value of an organization usually lies in its unique position in an ecosystem and the processes it delivers through a combination of all kinds of software systems, on-premises and in the cloud. But what happens when your combination of services suddenly isn’t relevant anymore? Because the ecosystem has found another, much better way to solve the issues? When your organization actually doesn’t have a function anymore, because you are, in fact, the middleman? Crisis!

We see every day that new ways of working together in ecosystems evolve rapidly. The most important unique selling point of blockchain is that it cuts out the middleman. This is possible because the system enables collaboration without trust having to be in place. You don’t have to trust each other: the trust has been digitized, by means of smart contracts. Look around you. Blockchains that enable peer-to-peer lending, that make wholesalers and banks obsolete, that replace marketplaces and trading systems, etc. etc.

Blockchain is a revolution

Blockchain is the El Niño of the digital ecosystems; it changes traditional collaborations between companies and disturbs the balance. We will probably see an Al Gore-like figure in the near future who will warn us of the dangers. But, just like in nature, it will take a while and then a new balance will evolve, after which the landscape will be thoroughly changed. There will be many victims, but sometimes that is needed to make the next step, as a whole.

Whether your organization will become a victim depends on how much added value you deliver. Often this comes down to the need for physical assets or infrastructure. For example: a supplier of green energy that owns no assets and only handles the trade and contracts can be replaced by a blockchain. The supplier that also owns the wind and solar farms and the infrastructure for homes and factories will be much harder to replace. Until the day you can generate 100% of your own energy needs, that is. But if surplus energy needs to go back into the grid, you’ll need infrastructure again. Uber can easily be replaced by a blockchain. Uber has no assets. And really no added value.

Blockchain is a revolution. It should be permanently on the agenda in every boardroom. Make sure you understand the technology and its impact. Don’t be like an American member of Congress who doesn’t understand Mark Zuckerberg’s business model and technology. Work out a solution and really start innovating!

Cheers, Gijs


Serverless bitten in the ass by relational databases

Thomas Erl wrote a wonderful book on SOA’s 8 principles of service design quite a while back. Together with Hohpe and Woolf’s Enterprise Integration Patterns, that’s basically all you need to know by heart when starting out as a rookie (integration) developer.

No matter what software architecture is used, the lessons from these books should always be applied. So the same goes for this new serverless era.

But the hardest part of every software architecture is the data architecture. You can modularize functionality fantastically and integrate the modules at will, but if the data behind the modules is one big blur of relational mess, good luck with your APIs, principles, patterns and DevOps!

Is SQL Server (or any other relational database) actually keeping us from creating truly serverless, reusable and autonomous APIs? Will graph databases, DocumentDB or Cosmos DB help us out? Is big data coming to the rescue? Schema on read? CQRS? Blockchain-based transaction ledgers?

Probably 80% of data in any given relational database is not relational. So why use a SQL Server database then? Why shoot ourselves in the foot right at the start?

If we want to go truly serverless, we not only need to adhere to the principles of service design and the integration patterns, but we also need to rethink our data architecture.

Eventual consistency looks like a nice concept, but in a real-time world it is quite hard to implement and make successful. Maybe it’s our only option, though? In any case, the use of relational data storage will, in my opinion, decline quickly. SQL Server will *have* to morph into something new, with less emphasis on the relational aspect. I know Microsoft has already started that transition.
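
To make this a bit more concrete: below is a minimal sketch (all names and types are illustrative, not from any particular library) of one common way to get there, an idempotent consumer that applies events to a denormalized read model, CQRS-style.

```typescript
// Minimal sketch of an idempotent event consumer building an eventually
// consistent read model. All names are illustrative.
interface OrderEvent {
  eventId: string;   // unique per event, used for deduplication
  orderId: string;
  type: "Created" | "Shipped" | "Cancelled";
}

const processedEvents = new Set<string>();                 // dedup store (durable in real life)
const readModel = new Map<string, { status: string }>();   // denormalized query-side view

function applyEvent(e: OrderEvent): void {
  // Idempotency: replaying the same event must not change the outcome.
  if (processedEvents.has(e.eventId)) return;
  processedEvents.add(e.eventId);

  // The read model lags the write side; consumers must tolerate stale reads.
  readModel.set(e.orderId, { status: e.type });
}

applyEvent({ eventId: "evt-1", orderId: "ord-1", type: "Created" });
applyEvent({ eventId: "evt-1", orderId: "ord-1", type: "Created" }); // replay: no effect
```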

Getting the right architecture guidance on serverless in combination with data storage is going to be crucial in the time ahead.

Cheers, Gijs


The rise of the Ethereum blockchain frameworks

Lately I’ve been doing quite a bit of research on blockchain. I’ve been involved in a number of inspiration sessions for our customers, trying to come up with good use cases for blockchain in their respective industries. We’re in the process of defining and executing some exciting PoCs (proofs of concept) right now, mainly in the logistics vertical.

The Ethereum blockchain seems to be(come) the dominant platform for all kinds of initiatives. Ethereum is also doing quite well from a token market value point of view at the moment, and that’s not hard to understand. It’s the go-to platform for anything that has to do with smart contracts. A lot of current ICOs (initial coin offerings) run their technologies on the Ethereum blockchain. Some of them are good and probably have a bright future, some of them are hyped but basically hot air, and some of them are outright shady and probably scams. But hey, a new crypto sucker is born every day, as Microsoft’s blockchain principal architect Marley Gray said during a keynote at a blockchain conference.

On the Microsoft Azure platform, it’s quite easy to set up an Ethereum blockchain. With the Coco Framework, Microsoft has built exciting preview stuff that can run on multiple blockchain platforms. Check out the paper here.

For me it’s clear that the blockchain technology itself is not the interesting part. Of course, having immutable records and a consensus model to cut out the middleman is *very* important, but the blockchain itself will become mainstream, just like any other database technology, SQL or NoSQL. What makes it worthwhile is the concept of smart contracts. And that’s what the Ethereum blockchain is quite good at. It is, however, quite hard to develop and test smart contracts. I foresee that in the short term, lots of startups will come up with smart things around smart contracts.
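
To give a feel for what using a smart contract looks like from ordinary application code, here is a minimal hedged sketch of calling a deployed Ethereum contract from TypeScript, assuming the ethers.js v6 library; the contract address, ABI fragment and settlement function are invented for illustration.

```typescript
import { ethers } from "ethers";

// Hypothetical contract details, purely for illustration.
const CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000";
const ABI = ["function settleCredit(address debtor, uint256 amount) returns (bool)"];

async function settle(): Promise<void> {
  // Connect to an Ethereum node (a local dev node here; any RPC endpoint works).
  const provider = new ethers.JsonRpcProvider("http://localhost:8545");
  const signer = await provider.getSigner();
  const contract = new ethers.Contract(CONTRACT_ADDRESS, ABI, signer);

  // A state-changing call sends a transaction; wait until it is mined.
  const tx = await contract.getFunction("settleCredit")(
    "0x1111111111111111111111111111111111111111", // debtor address (dummy)
    42n                                           // amount
  );
  await tx.wait();
}

settle().catch(console.error);
```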

I’ve bumped into two of them that are worth mentioning, also because they are both legitimate and did their ICOs in North America:

  1. Blockmason. They have developed the Credit Protocol on top of Ethereum, which provides a badly needed smart contract for handling credit (on which this world turns), including the automatic settling of it between parties. They developed this technology before they did their ICO. And they are SEC compliant, which is a first in crypto land. They have interesting partnerships, like the one with Coral Health, which is doing a pilot with their technology to settle payments between doctors, patients and insurance companies, without the need for a third party. Very interesting technology, for which they have applied for patents. I think lots of initiatives will use their technology to implement similar scenarios. Their token is named BCPT. Check out Blockmason.io for full details.
  2. Etherparty. They have created technology to make the development of smart contracts easier. Basically, they do for smart contracts what Wix did for websites. Without any programming knowledge, you can develop smart contracts that run on any compatible blockchain, the most used one obviously being Ethereum. I foresee that they will come up with lots of out-of-the-box templates for smart contracts, making the implementation of blockchain initiatives a lot quicker. Their token is named FUEL. Check out Etherparty.com for full details.

So, just like we had frameworks on top of SQL databases and integration software, we’re now seeing the rise of smart frameworks and templates on top of blockchain. We’re definitely coming out of the blockchain stone age. Exciting times!

Cheers, Gijs

BizTalk open source: a win-win?

Yesterday, Microsoft announced that its on-premises integration middleware product BizTalk Server will become partly open source. The first step is that all the 10K+ schemas (mostly B2B, EDI) have been released and are now available on GitHub.

Next step will be to make it possible for the community to contribute to the adapters, pipelines and tools.

My take on all this is that it has the potential to become a win-win for Microsoft, partners and customers alike, provided a number of things are executed well. Let me try to explain how:

  1. Microsoft BizTalk Server is rapidly turning into the on-premises LOB proxy (or gateway) that makes it possible to bridge legacy on-premises applications to the Azure iPaaS (mainly Logic Apps and API Management, plus Service Bus, Event Grid, etc.). This is how Microsoft IT has positioned it during Integrate 2017 in London. Bottom line in this (great!) architecture: BizTalk = legacy gateway and iPaaS = all the logic and modern integrations.
  2. Becoming (partly) open source, means that the community can contribute to the quality and number of schemas, adapters, pipelines and tools. This makes the role of BizTalk as an on-prem LOB proxy even more relevant, enabling even more legacy applications to bridge the gap to the public cloud. BizTalk basically has the potential to become an even greater on-ramp to the public Azure cloud.
  3. Microsoft will remain focused on making sure the core BizTalk engine stays relevant, runs on the latest versions of its own platform (Windows, SQL Server, Visual Studio, .NET) and provides a terrific bridge to the public Azure cloud. This includes the non-functionals, like end-to-end hybrid monitoring and management.
  4. The community has to be supported and made enthusiastic about contributing to what we can basically call the “on-premises LOB adapters”. This is going to be the hard part of this open source endeavor, in my opinion. But, just as we have seen in the past with ISVs leveraging the popularity of BizTalk to position and sell their adapters, basically “using” BizTalk to become more successful themselves, open sourcing the adapters can have the same effect. Only this time it’s not about leveraging BizTalk, but about leveraging the hybrid integration stack. Time will tell. Meanwhile, Microsoft can stay focused on the core and the bridge to the public cloud, and probably transfer a couple of engineers to the iPaaS teams.

My $.02 only.

Cheers, Gijs

The can-you-do-that-guys

Here at the #Integrate2017 event in London (26-28 June), I loved the keynote today by Jim Harrer (Microsoft Pro Integration Group PM). During the last 5 minutes of his presentation, he nailed it!

As I wrote before in another blog post (“Integration is just one of the skills needed”), iPaaS is not just about integration, it’s about creating business apps. Integration specialists are part of the multi-disciplinary teams that build solutions. And these solutions are more and more built using the 80+ Azure PaaS building blocks (see my most recent blog post “iPaaS, what else?”). These building blocks are not just about moving information from one location to another (including from and to hundreds of SaaS apps), but more and more also include big data and AI (artificial intelligence) capabilities, making it possible to integrate things like cognitive services, machine learning, etc. Creating real end-to-end business apps that the business wants, now! With technology that until a year ago was just not available (at a reasonable cost) to smaller companies.

Being an integration guy, you do have a special role in the teams. You are the guy that connects the building blocks and makes sure that the business app actually is resilient. And that you can properly monitor and manage the solution.
The time-to-market for these apps is phenomenal. Instead of weeks or months, you can create value in hours or days! And the speed at which Microsoft is adding not only the functionalities but, more importantly, the non-functional features is amazing. They build the platform, we build the solutions!

During the conference we’ll of course learn about new features that have just been released or will be released in the very near future. But to me, that is not the most important part anymore.

The IT world to me is clear now: integration folks have to become “the can-you-do-that guys”.

We need to show our customers what is actually possible by assembling all these great building blocks into very valuable business solutions. Just do a PoC or pilot and show the customer you’re working with what you can build in such a short time. Sooner or later, your customer will also be saying “iPaaS, what else!”. Our customers are all becoming software companies. We can help them do just that!

Cheers, Gijs

iPaaS, what else?

Integration is more important than ever, in all the systems of innovation we create.

With multi-disciplinary DevOps teams, we build and operate solutions that encompass all tiers; from UX to back-end and everything in between. Middleware specialists are just part of the team and share their knowledge on integration specifics through cross-team guilds. More and more, we build those solutions based on the service paradigm. If we don’t need endless scale (basically, if we’re not doing B2C), we just build it SOA-style using PaaS technology. If we do need enormous scale and continuous innovation (because we don’t want to lose the consumers that get bored easily), we do it microservices-style.

But we also still have to cope with existing systems. They can be legacy, but they can also be SaaS. Especially in a SaaS-before-PaaS-before-IaaS environment, migrating commodity applications to their SaaS counterparts is a quick win. But these SaaS solutions are still, most of the time, silo applications that we have to cope with. For these applications, on-prem legacy or more modern SaaS, we need to create wrappers, so they can expose their task and entity services, or APIs, which we can then compose into greater solutions. As a whole, we’re building agile solutions with a mix of silo applications, (micro)services applications and a set of building blocks provided by the iPaaS and aPaaS platforms we use. And in the Microsoft world, we’re talking about Azure.
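
As a minimal sketch of such a wrapper (assuming the Azure Functions v4 TypeScript programming model; the legacy URL, route and field names are invented for illustration), exposing a legacy silo application as a clean entity service:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Hypothetical legacy endpoint being wrapped; replace with the real system.
const LEGACY_BASE_URL = "http://legacy.internal/customers";

// Expose the legacy application as a clean, composable entity service.
app.http("getCustomer", {
  methods: ["GET"],
  route: "customers/{id}",
  handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
    const legacyResponse = await fetch(`${LEGACY_BASE_URL}/${req.params.id}`);
    if (!legacyResponse.ok) return { status: 404 };

    // Map the legacy payload onto the canonical entity model of the ecosystem.
    const legacy = (await legacyResponse.json()) as { CUST_NO: string; CUST_NM: string };
    return { jsonBody: { id: legacy.CUST_NO, name: legacy.CUST_NM } };
  },
});
```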

In those environments, to me it no longer makes sense to build new solutions with legacy middleware. Implementing BizTalk Server on-premises (or in IaaS, for that matter) just does not make sense anymore in a greenfield environment. Of course, when you have already invested lots of time and money in an on-prem BizTalk based ESB, you should leverage that. And those environments will keep on running for years to come. But when it comes to new integrations, why use BizTalk? I can really only come up with one reason: “I really don’t want my integration middleware to be running in the cloud” (for whatever reason). But that decision does not have anything to do with technical capabilities.

Let me try and clarify that.

First of all, we often hear that it doesn’t make sense to use cloud integration middleware (iPaaS) if most of your systems run on-prem. Uh, why not? With an On-prem Data Gateway and an ExpressRoute connection, latency is not an issue. And talking about latency: the biggest latency in any integration built with BizTalk Server comes from the freakin’ MessageBox hops!

Second, integrating SaaS applications is no different (from a location point of view) from integrating on-prem applications. Most SaaS applications don’t run on Azure. Even Office 365 doesn’t run on Azure. So, what is “the cloud” anyway? From an integration perspective, integrating Salesforce from within Logic Apps is the same as integrating an on-prem SAP system when it comes to location dependencies or preferences!

Third, legacy integration is not really that much easier with “out-of-the-box adapters”. It’s fairly easy to either use the On-prem Data Gateway or create a custom API app to talk to the legacy application. Most of the time, the number of interfaces is fairly limited. And in a modern API-based integration world, broad transaction scopes are not used anymore, so relying on idempotency and compensation logic is much more the norm. These types of interfaces are really easy to build as an API app.
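
A hedged sketch of what such an idempotent interface can look like (again on the Azure Functions v4 TypeScript model; the route, store and payload are invented). The point: repeating the same PUT yields the same end state, so no broad transaction scope is needed:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Illustrative in-memory store; a real implementation would use durable storage.
const orders = new Map<string, { status: string }>();

app.http("upsertOrder", {
  methods: ["PUT"],
  route: "orders/{id}",
  handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
    const body = (await req.json()) as { status: string };

    // PUT is naturally idempotent: calling it twice leaves the same state.
    const existed = orders.has(req.params.id);
    orders.set(req.params.id, { status: body.status });

    // If a downstream call fails later, a compensating action (for example a
    // cancelling PUT) undoes the work instead of a distributed rollback.
    return { status: existed ? 200 : 201, jsonBody: { id: req.params.id, status: body.status } };
  },
});
```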

Apart from these three very important points, the last point I want to make is this: we have been building monolithic ESBs. It is very hard to deploy individual orchestrated task services because of all the dependencies in BizTalk Server. BizTalk Server has in fact become a monolith (or at least makes it very hard not to implement monolithic solutions), which in most cases is very hard to manage and monitor. iPaaS makes it much easier to deploy, manage and monitor integration solutions. With ARM, Application Insights, Azure Monitor and of course the Azure portal, it has become really manageable. And as soon as Service Map also makes it to PaaS, we’ll really have evolved into a much more mature iPaaS than BizTalk Server has ever been. I’m really a fan and don’t see any reason anymore why we should implement new integrations on-prem!

iPaaS what else

Cheers, Gijs


iPaaS should not become your Trojan Horse

I’m currently involved as a cloud architect at an insurance company, working on their hybrid cloud reference architecture. They are moving from a hosted environment to a hybrid cloud, and cloud engineers are working on the detailed designs for networking, storage, subscriptions, identity & access and integration: integration between the private cloud and the public cloud, as well as enterprise integration and B2B integration.

The customer is currently using BizTalk Server for enterprise- and B2B integration. The hosted environment is managed by a 3rd party as well, which means that the customer’s IT department only focuses on the functional management of applications. This also applies to BizTalk Server. New services are provisioned by filling out a form, waiting a couple of days or weeks and getting access to the new service. All is well so far, except for time-to-market, cost and being able to make use of the latest and greatest technologies.

Hybrid cloud is needed to shorten time-to-market and innovate business processes, while at the same time decreasing IT spend on infrastructure. The whole point of (hybrid) cloud is its on-demand characteristics, and the ability to move away from traditional solution development to agile solution development, deployment and operations. Because provisioning is so fast, solution development can become much faster as well.

This is not an easy endeavor I can tell you.

Apart from the organizational aspects (DevOps is a topic of its own; is your organization ready for it?), the constantly evolving cloud and the constant pressure from the business (“hey, we now have shorter time-to-market; let’s see it!”) don’t make life easy for the IT folks. We’re working on the reference architecture, but in the meantime several projects are underway to deliver cloud based solutions. Some SaaS, some PaaS and some we-don’t-really-know-what-kind-of-aaS. Every day we run into issues with regard to security and governance. You’re really going to store sensitive customer data in a NoSQL database running on a public-facing Linux box? Let’s think a little more about that. Refactoring the (hybrid) cloud solutions that have been delivered so far is the first thing we have to do once the reference architecture and the detailed hybrid cloud designs are done. That, and making sure the organization can actually cope with hybrid cloud deployments, management and governance.

In the “good old hosting days”, security was designed, applied and governed. Processes were in place and everything worked fine. Today however, checking or unchecking a single box in the Azure portal can have quite the impact. Suddenly, data leaks are possible. And we thought all was covered. Not.

Infrastructure as Code (which applies not only to IaaS but also to PaaS) is mandatory. Clicking in a portal should be avoided. Cloud resources have to be deployed and managed by means of code. Versioned code. Code that has been reviewed and tested. Since full DTAP environments are rapidly becoming a thing of the past, your DTAP process has to be in place pretty well in order to prevent screw-ups with production data.
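
As a small illustration of deploying by code instead of by portal clicks (a sketch assuming the @azure/identity and @azure/arm-resources SDKs; the subscription ID, resource group and template are placeholders; in practice the template lives in source control and goes through review):

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

// Deploy a versioned, reviewed ARM template instead of clicking in the portal.
async function deploy(): Promise<void> {
  const credential = new DefaultAzureCredential();
  const client = new ResourceManagementClient(credential, "<subscription-id>");

  await client.deployments.beginCreateOrUpdateAndWait("my-resource-group", "deploy-v42", {
    properties: {
      mode: "Incremental",
      // Inline here only to keep the sketch self-contained.
      template: {
        $schema: "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        contentVersion: "1.0.0.0",
        resources: [],
      },
    },
  });
}

deploy().catch(console.error);
```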

Why is the title of this blog post specifically about iPaaS (Azure Logic Apps, API Apps, Functions, API Management, Service Bus) and a Trojan Horse? Because integration is at the core of everything when it comes to hybrid cloud. All aspects of the hybrid cloud architecture are touched here. Before you know it, things are tried out and put into production, with all the potential risks that entails. Let’s protect ourselves from that.

Four things are important here:

  1. Have a reference architecture for hybrid cloud. New rules apply here! Hybrid cloud is not private cloud. The reference architecture should at the very least contain the architecture principles and high-level requirements that apply to all hybrid cloud solutions deployed. An example principle: “Passwords should be stored in a safe place”. The high-level requirement resulting from that: “Passwords used in scripts and code should be stored in Azure Key Vault” (see the sketch after this list).
  2. Document the Solution Building Blocks. Azure is a box of Lego bricks. Make sure that you know how to use all those building blocks, and make sure that everybody knows the rules about which one to use in which scenario. Solution Building Blocks are not evil, but necessary artifacts. When do you use SQL Database, Blob storage or DocumentDB? How does security relate to these choices?
  3. Hybrid cloud needs hybrid service management. Make sure your IT service management sees your private cloud, hosted cloud and public cloud as one hybrid cloud and is able to manage that.
  4. Design and apply the right level of governance. Architectures, principles, requirements and solution building blocks are completely worthless if you don’t make sure they are actually used (in the right way). Peer reviews. Signed-off solution designs. Random inspections. These are all necessary things that you should cater for in your DevOps teams.
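
And the Key Vault requirement from point 1, as a minimal sketch (assuming the @azure/identity and @azure/keyvault-secrets SDKs; the vault URL and secret name are invented):

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

// Fetch a password from Key Vault at runtime instead of hardcoding it in code.
async function getDatabasePassword(): Promise<string> {
  const credential = new DefaultAzureCredential();
  const client = new SecretClient("https://my-vault.vault.azure.net", credential);

  const secret = await client.getSecret("database-password");
  return secret.value ?? "";
}
```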

And remember, these things apply to your organization, but also to your IaaS, PaaS and SaaS solution vendors.

Let’s keep the Trojans out of your hybrid cloud!

Cheers, Gijs

Integration is just one of the skills needed

In modern enterprises, business solutions are built by agile teams. Agile teams are by design multi-disciplinary. The Product Owner is responsible for the product backlog and the team(s) build and implement the stuff needed. In these modern enterprises, the need for an innovation & differentiation layer on top of systems of record is an absolute necessity to enable shorter time-to-market of these solutions and, in some cases, digital transformation.

In the Microsoft world, this innovation & differentiation layer is basically provided by Microsoft Azure and Office 365. The first is needed for the (big) data, business intelligence, integration, process execution & monitoring and web & mobile UX capabilities; the latter for the collaboration and document handling capabilities. In Microsoft-centric application landscapes, Dynamics 365 will be the way to go for your systems of record for customer interaction (CRM) and resource planning (ERP). In the future, whenever you’re ready as an enterprise, the whole two-speed or bi-modal approach will be a thing of the past, and the cloud infrastructure will enable just one-speed IT: full speed! But it will take several years before that becomes a reality most enterprises can deal with. Because it’s not only an IT thing, but more an organizational thing (how do you keep on adopting these agile solutions; won’t we become tech-fatigued?).

Many skills are needed in the agile teams we deploy today. And when scaling the teams, a layer on top of them is needed to manage the portfolio and program aspects in a better way. The Scaled Agile Framework (SAFe) is a good example, and one I have personal experience with (I’m a SAFe Agilist :-)). Quite a few enterprises have implemented it, or an alternative such as LeSS (Large-Scale Scrum). This also facilitates DevOps at a larger scale.

Even in agile environments, the enterprise architect is still needed 😉 He or she is responsible for the overall architecture of the solutions that get built. Business architecture, information architecture and technical architecture are all still very important. Together they define the frameworks within which the solutions should be built. The agile teams work within these frameworks.

We also see more and more that agile teams focus on certain business domains, and that common practice within microservices architecture is to not try to build stuff that can be reused by other teams as well, unless we have one or two specific teams working on reusable, enterprise-wide functionality. It’s all about business value first.

Is the integration competence center still helpful in such an environment? Well, the role of the ICC will change. The ICC (just like any other CC) will no longer be involved as a “sleeping policeman” (pun intended). The ICC will basically become part of the enterprise architecture role, focusing specifically on the frameworks (to which the agile teams should adhere; comply or explain) and on enterprise-wide functionality. Everything else will be solved by the agile teams, for specific domains. That way we are far more flexible, and we still build reusable enterprise-wide stuff where absolutely needed.

Will integration projects still exist in the future?

I think not, or only in a very limited number of cases. Integration is just part of business solutions and should be treated like that. The API developer, the UX developer, the DBA, the security expert, etc.: they’re all integration-capable team members. Commodity integration needs can be fully handled by the agile teams that way. And for the integration specials, like reusable building blocks and patterns or the more difficult one-offs, we just involve the ICC, which in larger organizations is probably a separate agile team. Their domain is cross-enterprise. But they will no longer be the roadblocks that used to slow down business solution development.

Will this change the way system integrators work? Absolutely. The more we can do to deliver complete agile teams (including big data, UX and collaboration folks), the better we can serve our customers: helping them become agile too, shorten their time-to-market and maybe even transform and redefine their business models. As an integration specialist or integration team alone, you can’t do that. The folks who understand and can implement the whole Microsoft platform to deliver real business solutions are going to be the ones that enterprises turn to…

Cheers, Gijs