Posts by gindafield

CTO @Motion10, former 7-time Microsoft MVP. Innovate, don’t imitate!

Azure Stack use cases

On January 29th, 2016, Azure Stack went into Technical Preview.

Update July 14th 2017: Azure Stack is now generally available.

Having discussed Azure Stack with several of my customers in the past weeks, I’ve come to the following list of potential use cases for it (in no particular order):

  • Private cloud environment (duh!): Host your own private cloud with (more or less) the same capabilities as the Microsoft public Azure cloud. You can maybe even organize visits to your cloud data centers 🙂
  • On-ramp to the public cloud: Gently try Azure in your own private environment before you migrate (parts of) your solutions to the public cloud, without having to re-architect! (see the sketch after this list)
  • Capex development & test environment: At a fixed capex cost, give your development team an environment in which they can code and test. Then deploy to the public cloud (or private, or hybrid; whatever you want) without having to re-architect!
  • Hybrid cloud: Create hybrid cloud solutions based on the same Azure architecture. Use the private cloud part of the hybrid architecture for stuff you don’t want in the public cloud. Use the public cloud for all stuff that can go there. Mix and match, without having to re-architect!
  • Cloud bursting: Run things mainly in your private cloud, and offload (parts of) your workloads to the public cloud when there are (seasonal) peaks in your load.
  • Exit strategy insurance: Have the comforting feeling and assurance that when, for some reason or other, you don’t like using the Microsoft public Azure cloud anymore, you can just migrate your solutions back to your private cloud, without having to re-architect!
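To make the “without having to re-architect” point concrete, here is a minimal, purely illustrative Python sketch: the same ARM REST call can target public Azure or an Azure Stack stamp just by switching the management endpoint. The Azure Stack URL below (the usual development-kit default) and the api-version are assumptions, and token acquisition is stubbed out.

```python
# Sketch: one function, two clouds. Only the management endpoint differs.
import requests

PUBLIC_AZURE = "https://management.azure.com"
AZURE_STACK = "https://management.local.azurestack.external"  # assumed ASDK default

def list_resource_groups(arm_endpoint: str, subscription_id: str, token: str) -> list:
    """List resource groups via the ARM REST API; works unchanged on both clouds."""
    url = f"{arm_endpoint}/subscriptions/{subscription_id}/resourcegroups"
    resp = requests.get(
        url,
        params={"api-version": "2016-09-01"},  # assumption: a version both clouds accept
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["value"]

# Same code, different cloud -- no re-architecting:
# groups = list_resource_groups(PUBLIC_AZURE, "<subscription-id>", "<token>")
# groups = list_resource_groups(AZURE_STACK, "<subscription-id>", "<token>")
```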

Just my $0.02 of course.

Cheers, Gijs

Anarchytecture

Cloud. A euphemism for the Internet. Internet. A virtual community without any governance. Anarchy! How can we make sure this all keeps working?

For many people this may come as a surprise, but for most of us it won’t: the physical world is divided into countries, each with its own government and governance in place, while the Internet is one big, global virtual world. The Internet gurus are all convinced that this should never be tampered with. The Internet is self-governing.

Of course there are some principles to which the Internet adheres and within whose boundaries it further develops. Global agreement on, for example, the domain name architecture was necessary. The right choices made in the beginning turned out to be very valuable. Once in a while we do come to a grinding halt, however. Think about the limitations of IP numbering. Once and for all, let’s not forget: when you, as a developer, think “uhm, that should be big enough”, multiply it by at least a thousand from now on. Just as a side note. 🙂
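As a quick back-of-the-envelope illustration of why “big enough” rarely is, compare IPv4’s 32-bit address space with IPv6’s 128-bit one (plain Python arithmetic):

```python
# "Uhm, that should be big enough": IPv4's 32-bit space ran out in practice.
ipv4_addresses = 2 ** 32    # ~4.3 billion
ipv6_addresses = 2 ** 128   # ~3.4 * 10**38

print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:,}")
print(f"growth factor: {ipv6_addresses // ipv4_addresses:,}")  # 2**96, far more than 1000x
```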

Architecture was invented so that we can build a foundation that can keep up with future developments on top of it. Otherwise, stuff implodes. But especially in IT, architecture means “being able to cope with change”. A good architecture can prove its value for years to come. Until the next possible paradigm shi(f)t.

So, is architecture needed? Yes. Can it work in an anarchistic environment like the Internet? For sure. At least, when you take into consideration that you can always be surprised by rebels who don’t care about your architecture and just spray your artwork with graffiti. Or build a terrace on your flat rooftop. By preparing for the worst, at acceptable cost, an architecture can hold up fine. But, like I said before, the next big invention in IT can certainly start shaking the foundations. For example, a bit that can be 1 and 0 at the same time is something we have to think about carefully before we massively start deploying solutions built on that paradigm.

I’m convinced that with the current methods of developing distributed solutions, we are well on our way to building fine, architecturally sound applications that can be deployed in the anarchistic clouds. Microservices architecture is an excellent way to thrive in this chaos. Microservices architecture is anarchytecture! So, what is DevOps then, actually?

Cheers, Gijs

p.s. Thank you, Skunk Anansie, for the inspiration for this blog post and hence its title.

The importance of API design-time governance

We all know the importance of design-time governance in SOA architectures. But how about design-time governance in the new API economy? An economy that’s going to boom. One that was already booming while you were sleeping.

People tend to forget the mistakes made in the past and make the same mistakes again during paradigm shifts. New paradigm, same mistakes. Are we all that stupid? Or is it a generational thing (nah, we know better)? Or is it cultural (let’s first concentrate on happy flows and get the stuff working ASAP; now let’s put it in production :-))?

The same is happening now. I foresee API spaghetti. And it’ll get far worse this time. You know why? APIs are for the business. SOA was for techies. Techies tend to at least think a little bit about longer-term behaviour. At least, some of them do. The real developers, I call them. But business people, responsible for exposing and consuming APIs so that they can build agile business processes that enable quick wins and sometimes strategic benefit? Will they worry about re-usability? Will they worry about the granularity of APIs? Sustainability? I don’t think so.

So, we will still need an ICC (integration competence center) that takes care of guidelines around how to build and use APIs, how to expose them, how to document them, and how to make them re-usable and findable. And secure and sustainable. Maybe we shouldn’t call it an ICC anymore, but an “ACC”: the API Competence Center.

What would such an ACC do? Here are some guidelines:

  • Design, document and guide the use of design principles (still a must-read: “Principles of Service Design” by grandmaster Erl) for the development of APIs;
  • Help the business make the right, sustainable business process solution decisions;
  • Make sure that the right Application Platform as a Service functions are used;
  • Educate the organisation on the right development and use of APIs;
  • Make sure we are indeed building Microservices and not silos.

Let’s not put this stuff in documents, but in the API Management portal, with managed metadata. So that we can indeed find our APIs, and use the information at our fingertips instead of having it buried somewhere in a document.
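As a purely hypothetical sketch of what “managed metadata” could look like, here is a toy API catalog in Python. A real ACC would of course keep this in the API Management portal itself; every name and field below is made up.

```python
# Sketch: API metadata as queryable data rather than prose in a document.
from dataclasses import dataclass, field

@dataclass
class ApiCatalogEntry:
    name: str
    version: str
    owner: str        # who answers for this API
    docs_url: str     # where the contract lives
    tags: list = field(default_factory=list)

CATALOG = [
    ApiCatalogEntry("customer-lookup", "v2", "crm-team",
                    "https://apis.example.com/docs/customer-lookup",
                    ["customer", "read-only"]),
    ApiCatalogEntry("invoice-submit", "v1", "finance-team",
                    "https://apis.example.com/docs/invoice-submit",
                    ["finance", "write"]),
]

def find_apis(tag: str) -> list:
    """Findability at your fingertips: filter the catalog by tag."""
    return [entry for entry in CATALOG if tag in entry.tags]

print([entry.name for entry in find_apis("customer")])  # ['customer-lookup']
```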

I’d say: if you’re serious about APIs, let’s get serious about the ACC.

Cheers, Gijs

Safe Harbor flip-thinking

European privacy laws turned out to be incompatible with the laws in the United States. Safe Harbor can therefore not be applied in European countries, and now we Europeans are all obliged to move our data to European clouds. Aren’t we forgetting the really important stuff here? Aren’t there more important aspects to privacy-related issues? Such as identity theft?

Personal information gets stolen every day, and the incidents we hear about in the news are getting bigger and bigger. Millions of records are stolen each day. At the moment the amount of hacked data is so enormous that such stolen information is only worth about $1 per person. And in the meantime we worry about Safe Harbor…

One of the Safe Harbor privacy principles is:

  • SECURITY: Organizations creating, maintaining, using or disseminating personal information must take reasonable precautions to protect it from loss, misuse and unauthorized access, disclosure, alteration and destruction.

I think the main issue here is the word reasonable. Time and again it proves to be very easy to illegally retrieve information from companies’ computer systems. A number of aspects make this so easy:

  1. Forcing access to systems is quite simple
  2. Data has been stored in such a way that you can easily recognize it as personal data
  3. Organizations store too much information and store it in an unprotected way

Apart from this, most organizations have large issues with authentication. It turns out to be very easy to arrange important matters by just sending a copy of an ID, a credit card number or a social security number. We all change our complex passwords every month, but in the meantime a system can easily be hacked through badly protected services running in its domain, secured with default admin passwords that never get changed. We all pay with the same 4-digit PIN codes belonging to our debit or credit cards. And access to important information like tax records is badly secured as well, using passwords that you probably haven’t changed in years.

We are fooling ourselves with regard to authentication. The illusion of great security. The mere fact that the NSA can confiscate European data because it has been stored in US clouds is not the real problem. The fact that the NSA can so easily be hacked is the real problem. And the fact that the data is so easily accessible and understandable. Besides being encrypted, data should at the very least also be obfuscated.
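As a minimal sketch of what such obfuscation could look like, assume a keyed hash (pseudonymization) applied before storage, on top of whatever encryption is in place. The key handling below is deliberately naive and purely illustrative; a real system would keep the key in a secrets store.

```python
# Sketch: store a pseudonym instead of the raw identifier.
import hmac
import hashlib

PSEUDONYM_KEY = b"keep-this-in-a-key-vault"  # assumption: managed elsewhere

def pseudonymize(ssn: str) -> str:
    """Stored data no longer reveals the SSN, but lookups by SSN still work."""
    return hmac.new(PSEUDONYM_KEY, ssn.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer": pseudonymize("123-45-6789"), "balance": 1000}
print(record)  # the raw SSN never reaches the database or a cookie
```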

Apart from that, it is not only important to know with which cloud provider you’re doing business, but more so with which “shop”, and how much of your private information you give away. When you register with a website, and that website is hosted on Amazon or Microsoft cloud servers, that is no guarantee that your data is safe and stays private. The architecture of the web solution is of much greater interest. It can even be the case that your social security number or credit card number is stored in a cookie, which can very easily be accessed by unauthorized persons. If the web developer has coded it like that because he thought it was a good idea, nobody will find out in time. Only when it hits the newspapers…

As a citizen you really don’t have a clue where your data is stored and how safe it is. Safe Harbor is not going to change anything about that.

Maybe the best solution is the fact that stolen privacy-sensitive information is becoming less and less expensive for criminal organizations to buy. It’s almost not worth the bother of hacking a system anymore. If governments would only develop good laws to protect citizens, so that everyone, for example, has guaranteed access to health insurance at standard fees no matter how they behave according to their Facebook timeline, privacy-sensitive information is not relevant anymore. If obtaining a new bank account or mortgage can only be done by means of face-to-face authentication, leaking privacy-related information is not an issue anymore.

If you always have the legal right to be able to prove that you have not done something, for example transferring a large amount of money somewhere, then all is alright. In today’s world that is very easy to do, with all the digital breadcrumbs we leave behind all day.

The solution lies in immutable architecture and Big Data. If everything is stored in distributed systems, and the relationships between all these data can be determined on an ad hoc basis instead of being modelled up front, the burden of proof cannot be falsified and everyone is always able to prove they have not done something.
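A toy sketch of that idea: an append-only, hash-chained event log in which the breadcrumbs cannot be rewritten after the fact. The schema is made up, and a real implementation would sit on a distributed store rather than an in-memory list.

```python
# Sketch: immutable architecture in miniature -- append-only, tamper-evident.
import hashlib
import json
import time

log: list[dict] = []

def append_event(actor: str, action: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "genesis"
    event = {"actor": actor, "action": action, "ts": time.time(), "prev": prev_hash}
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(event)
    return event

append_event("gijs", "logged in")
append_event("gijs", "viewed balance")
# Tampering with an earlier event breaks every later "prev" link,
# which is what lets you prove what did -- and did not -- happen.
```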

The problem of leaking or confiscating privacy-sensitive data will solve itself… Devaluation of privacy information is the answer!

Cheers, Gijs

Atomic Integration

Atomic Integration, I like the term. And I like the concept. Of course it has disadvantages, but don’t the advantages outweigh them? Let’s explore.

When the Integration Monday session “The fall of the BizTalk architect” (by Mikael Sand) was announced, I was immediately intrigued. It was the combination of the title and the presenter (Mikael often rocks) that made me create a note to self to watch it later on. And finally I did. And it triggered me to write this blog post.

For years we have been telling our customers that spaghetti is bad and lasagna is good. And we’ve also been telling them that re-use is the holy grail, and that by putting a service bus in the middle, everything becomes more manageable, integrations have a shorter time to market, and the whole application landscape becomes more reliable.

But at the same time, we see that re-use is very hard to accomplish, that there are many dependencies between solutions, and that realizing all this in an agile manner is a nightmare if not managed meticulously. Especially when we need to deliver business value quickly, talking about creating stuff for later re-use and having dependencies on other teams is a hard sell to the product owner and thus the business.

Another thing that is really hard with any integration architecture, and the accompanying middleware with its frameworks built on top of the actual middleware (and thus becoming part of that middleware), is ownership. And that is a two-headed beast: first, ownership of the frameworks that run on top of the middleware, and second, ownership of the actual integrations.

The first one is already a hard nut to crack. Who pays for maintaining these frameworks? Does it get financed by projects? Very hard to sell. Does it have a separate (business) owner and funds? I’ve never seen that, and it probably wouldn’t work, because the guy who pays the most gets his way, and that doesn’t necessarily produce the most usable framework of all time.

The second one is even harder to manage. Who is typically the owner of an integration? The subscriber, and thus the receiver of the information? Or is it the sender, a.k.a. the publisher of the information? Who pays for it? Is that the owner? And will he be managing it? And what happens if the integration makes use of all kinds of other, re-usable custom features that get changed over time and in which the actual owner of the integration is not interested at all?

Why indeed not do more copy-and-paste instead of inherit or re-use? The owner of a specific integration is completely responsible for it and can change, fork and version whatever he likes. And as Mikael says in his presentation: TFS or Visual Studio Online is great for finding out who uses certain code and informing them when a bug has been fixed in some copied code (segment). And of course we still design and build integrations according to the well-known integration patterns that have become our best friends. Only, we don’t worry that much about optimization anymore, because the platform will take care of that. Just like we had to get used to garbage collectors and developers not knowing what free() actually means, we need to get used to computing power and cheap storage galore, and therefore don’t need to bother about redundancy in certain steps of an integration anymore.
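To make the copy-and-paste idea concrete, here is a minimal, purely illustrative Python sketch of one “atomic” integration: everything the flow needs (receive, transform, send) lives in a single file, with no shared framework to own or version. The endpoints and field names are invented.

```python
# Sketch: one self-contained integration, no shared framework.
import requests

SOURCE_URL = "https://erp.example.com/api/orders"        # hypothetical
TARGET_URL = "https://crm.example.com/api/sales-orders"  # hypothetical

def transform(order: dict) -> dict:
    """Message-translator step, inlined instead of pulled from a shared library."""
    return {
        "externalId": order["orderNumber"],
        "amount": order["totalIncVat"],
        "customer": order["customerCode"],
    }

def run_once() -> None:
    for order in requests.get(SOURCE_URL, timeout=30).json():
        requests.post(TARGET_URL, json=transform(order), timeout=30).raise_for_status()

# Need a slightly different flow elsewhere? Copy this file and change it;
# the owner of the copy is fully responsible for it.
```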

With the arrival of cloud computing (more specifically PaaS) and big data, I think we are now entering an era in which this actually becomes possible and, at the same time, manageable. The PaaS offerings by Microsoft, specifically Azure App Service, are quickly becoming an interesting environment. Combined with big data (which to me, in this scenario, means: just save as much information about integration runs as possible, because we have cheap storage anyway and we’ll see what we do with this data later), runtime insight, correlation and debugging capabilities are a breeze. Runtime governance: check. We don’t need frameworks anymore, and thus we don’t need an owner anymore.
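As a sketch of the “just save everything about each run” side, assume a local JSON-lines file as a stand-in for a proper big-data store; the record layout is made up.

```python
# Sketch: one JSON record per integration run, keyed by a correlation id.
import json
import time
import uuid

def record_run(integration: str, payload: dict, outcome: str) -> str:
    correlation_id = str(uuid.uuid4())
    with open("runs.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "correlationId": correlation_id,
            "integration": integration,
            "ts": time.time(),
            "payload": payload,  # cheap storage: keep the full message
            "outcome": outcome,
        }) + "\n")
    return correlation_id

cid = record_run("orders-to-crm", {"orderNumber": "42"}, "success")
print(f"debug later with correlation id {cid}")
```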

Azure App Service, in combination with the big data facilities and the analytics services, is all we need. And it is a perfect fit for Atomic Integration.

Cheers, Gijs

Is Azure App Service about Integration?

Today and tomorrow the BizTalk Summit takes place in London (#BizTalkSummit2015). BizTalk is about integration. The summit is about the launch of BizTalk Services as part of the Azure App Service offering. Since the first talks about offering a Microservices platform during the Integration Summit in Redmond in December last year, I’ve been thinking about, and struggling with, integration versus application development. There has traditionally always been a fine line between application development and integration. When implementing pure, messaging-only transformation, routing and protocol adaptation, we’re basically talking about integration. This is used to convey information from one application to another so that they can work with less, or completely without, manual copying of information. When we add orchestration and business rules, master data services, apps and portal access, we’re more and more talking about application development. The application is mostly a composition of functions provided by siloed applications and SOA services, sometimes with more modern stuff like REST services added to the composition.

Earlier this year, Gartner released its latest Magic Quadrant for Enterprise Application Platform as a Service. To them, Azure App Service is (part of) an aPaaS platform. They define aPaaS as:

“Application infrastructure functionality, enriched with cloud characteristics and offered as a service, is platform as a service (PaaS). Gartner refers to it more precisely as cloud application infrastructure services. Application platform as a service (aPaaS) is a form of PaaS that provides a platform to support application development, deployment and execution in the cloud. It is a suite of cloud services designed to meet the prevailing application design requirements of the time, and, in 2015, includes mobile, cloud, the Internet of Things (IoT) and big data analytics innovations.”

Especially the part “a platform to support application development, deployment and execution in the cloud” triggered me to write this blog post. When looking at the functionalities that such platforms should provide, integration is of course one of the capabilities. And integration comes in many forms. But are these capabilities meant to be used to recreate legacy ways of application integration? Or are they provided to build modern applications, based on (yes, I dare to use the term again, even though Microsoft stopped using it last month when talking about Azure App Service) Microservices?

To me it is clear: Azure App Service is an application platform providing the tools and runtime services to quickly build modern, distributed solutions. Modern, because it uses APIs. APIs should be used because data services, 3rd-party APIs and SaaS applications are built on that paradigm. And that paradigm facilitates agile solution development and quick time to market. Creating a modern, distributed, composite solution means using all the different building blocks provided by the aPaaS platform together with the SaaS applications, data services and 3rd-party APIs (for example the ones listed on programmableweb.com or in the Azure Marketplace).

In my opinion, we should only use legacy integration paradigms in the public cloud for hybrid purposes, that is, bridging on-premises SOAP services to cloud-based API integration layers. We should not bother providing traditional EAI- and B2B-like integration paradigms. Forget about batching EDI messages, transaction scopes and stuff like that. I’d say, just boycott that in the public cloud. Just rely on the idempotency of the independently deployed APIs, and provide an on-premises server solution for the legacy integration stuff. We shouldn’t be building ESBs in aPaaS. We should be building distributed solutions based on modern paradigms.
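To illustrate what relying on idempotency instead of transaction scopes could look like, here is a minimal, purely illustrative sketch; the in-memory store and the message shape are assumptions, stand-ins for a durable deduplication store.

```python
# Sketch: an idempotent receiver -- redelivery is safe, no transaction scope needed.
processed: dict[str, dict] = {}  # message id -> stored result (in-memory stand-in)

def handle(message_id: str, body: dict) -> dict:
    if message_id in processed:  # duplicate delivery: return the prior result
        return processed[message_id]
    result = {"status": "created", "order": body["orderNumber"]}  # do the work once
    processed[message_id] = result
    return result

print(handle("msg-1", {"orderNumber": "42"}))
print(handle("msg-1", {"orderNumber": "42"}))  # safe to retry, same outcome
```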

Let’s use this hammer (or nail gun, as Mikael Sand called it :-)) to handle nails. And let’s all figure out quickly what these nails can look like and what we can build with them. Maybe we should first find consensus on anti-patterns for Azure App Service within the large community we have.

Cheers, Gijs

Is NoESB a euphemism for “more code in apps”?

Yesterday I tweeted the title of this blog post. At that moment I didn’t know yet that it would become the title of a blog post. And now it is :-).

I recommended the use of an ESB to a customer. Oh dear. Before I started working on that recommendation, I organized workshops with all kinds of folks within that organization and thoroughly analyzed their application landscape, information flows, and functional and non-functional requirements. The CIO had told me at the start that he was going to have the report reviewed by a well-known research firm. And so it happened.

The well-known research firm came back with a couple of comments. None too alarming. Except for one: the advice I gave, to implement an ESB, was “old-fashioned”, because ESBs are overly complex, centralized solutions, and therefore we should consider using API Management instead. Whaaaaaat?

First of all, let me explain that I have found that when we talk about an ESB, quite a few people automatically assume that it is about on-premises software. That’s a wrong assumption. ESBs can run in the cloud as well. As a PaaS solution, indeed.

Second, an ESB can also be used to integrate external apps and not only apps that are within your own “enterprise”.

Third, API Management is not an integration tool. With regard to integration capabilities, it’s merely a mediation tool, with strong virtualization, management and monitoring facilities for both admins and developers. Developers who use your APIs to create new apps.

The well-known research firm also said that NoESB is a new thing that should be considered.

I think we are just moving the complexity elsewhere. And the bad thing is that we are moving it to places (mostly apps) that are developed by people who don’t understand integration and everything that comes with it. People who have trouble understanding design patterns most of the time, let alone integration patterns. People who most of the time don’t have a clue about manageability and supportability. People who often say things like “I didn’t expect that many items to be in that list.”, or “What do you reckon, he actually uses this app with minimum rights. We haven’t tested that!”, or “A 1000 should be enough.”. Famous last words.

Of course the ESB has caused lots of trouble in the past, and it often still does. But that’s not the fault of the ESB. That’s the fault of the people implementing the ESB using the wrong design patterns. Or using ESB technologies that do not (fully) support the design patterns. Or ESBs that are misused for stuff that should have been implemented with other technologies and just bridged using federation.
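For reference, here is what one of those well-known patterns, the content-based router, boils down to in a minimal, purely illustrative sketch; the routes are invented. Under NoESB, every app ends up re-implementing (or forgetting) something like this for itself.

```python
# Sketch: a content-based router, normally one governed piece of an ESB.
ROUTES = {
    "invoice": "https://finance.example.com/in",  # hypothetical endpoints
    "order": "https://sales.example.com/in",
}

def route(message: dict) -> str:
    """Pick a destination from message content; unknown types go to a dead letter."""
    return ROUTES.get(message.get("type"), "https://deadletter.example.com/in")

print(route({"type": "invoice", "id": 7}))
print(route({"type": "typo"}))  # lands in the dead-letter channel, not lost
```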

I’m a bit worried about the new cloud offerings that are rapidly becoming popular while still in beta or even alpha stage. Cloud services that are offered for integration purposes can easily be misused to create horrific solutions, not adhering to any design pattern at all, and not guiding the power users (DevOps) in applying them either. Do DevOps folks even know about design patterns? (Sorry if I offend anyone here.)

I’m afraid that in the near term, lots of new distributed apps will be created with brand-new, alpha- or beta-stage technologies by people who are neither developers nor operations folks. People who know little about architecture and patterns, and who will create fat apps with lots of composite logic. And integrate with NoESB. Disaster is coming soon. Who’ll have to clean up that mess? Who you gonna call? FatApp Busters! 🙂

Cheers and happy hacking, Gijs