Why blockchain should fail

It’s rather amazing what has happened over the last couple of years when it comes to blockchain. It all started with Bitcoin, of course. At the time of writing, the total market cap of Bitcoin is $1 trillion. Looking at Coinmarketcap today, more than 80 coins have a market cap above $1 billion, and more than 360 coins have a market cap above $100 million. Lots of fiat money is locked into these “projects”, most of which are basically Ponzi schemes: the technology underneath them is mostly crap, and the only ones making money are the ones that bought during the ICO (initial coin offering) or the ones that initiate “pump-and-dump” events. A new crypto sucker is born every day.

The whole idea behind Bitcoin and other blockchain-based cryptos is “decentralization”: there’s no central organization responsible for how things are handled. Except, in most cases, when it comes to tokenomics (how many coins are in circulation, how many there can be at most, and how many will be printed per period; the inflation). Most coins have no maximum supply, which means inflation is potentially infinite. Bitcoin is different, with a hard cap of 21 million coins. But what keeps evil people from just changing that? And what can you do about it?
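
That cap, by the way, isn’t magic; it’s just the sum of a halving schedule. A minimal sketch of the arithmetic (using the well-known 50 BTC initial block reward and the 210,000-block halving interval):

    # Bitcoin's issuance: the block subsidy starts at 50 BTC and halves
    # every 210,000 blocks until it rounds down to zero.
    SATOSHI = 100_000_000        # satoshis per BTC
    HALVING_INTERVAL = 210_000   # blocks between halvings

    def total_supply_btc() -> float:
        subsidy = 50 * SATOSHI   # initial block reward, in satoshis
        total = 0
        while subsidy > 0:
            total += HALVING_INTERVAL * subsidy
            subsidy //= 2        # integer halving, as the reference client does
        return total / SATOSHI

    print(total_supply_btc())    # ~20,999,999.98 -- just under 21 million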

As we all know by now, the “rareness” of coins is based on the difficulty of mining new ones. The way coins fluctuate (unlike “stablecoins” such as $usdc and $usdt) will make it very difficult for them to ever be used as digital money (except of course for buying a Tesla, according to Elon). Basically, people just hoard them, hoping they will increase in value. But do you actually own a piece of the company behind the crypto? If you effectively do, the token starts to look a lot like a share, which is why the SEC’s case against Ripple over the question “is XRP a security?” is something to be watched carefully.

In the meantime, bots rule the buying and selling of crypto. It’s quite a coincidence that coins go up and down in groups, and lately crypto even moves along with the stock markets. Because of all the derivatives, futures and other “smart” products, including the “shorting” we know from traditional stock markets, there’s really no way the price movements have anything to do with underlying fundamentals. It’s all FUD or hype.

Also in the meantime, many cryptos, most notoriously $eth, are used as a platform by other cryptos and tokens, and because the price of the “platform” coin goes up, transaction prices go through the roof. That makes executing transactions far more expensive than plain old credit card transactions.
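
To make that concrete, here’s some back-of-the-envelope fee math (the 21,000 gas for a plain ETH transfer is fixed by the protocol; the gas price and ETH price below are illustrative assumptions, not quotes):

    # Back-of-the-envelope cost of a plain ETH transfer.
    GAS_PER_TRANSFER = 21_000          # fixed by the protocol for a simple transfer
    gas_price_gwei = 100               # assumed gas price during congestion
    eth_price_usd = 2_000              # assumed ETH/USD price

    fee_eth = GAS_PER_TRANSFER * gas_price_gwei * 1e-9   # gwei -> ETH
    fee_usd = fee_eth * eth_price_usd
    print(f"{fee_eth:.4f} ETH ~= ${fee_usd:.2f}")        # 0.0021 ETH ~= $4.20
    # A token transfer or smart contract call burns several times more gas.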

The way I look at it: just like in the gold rushes of the 19th century, the only people really making money on crypto in a predictable and stable way are the “tool suppliers”, the commercial exchanges such as Binance, FTX, BitMEX and quite a few others.

But that’s all about crypto. How about the underlying blockchain technology, including the “smart contracts” running on it? When a smart contract fucks things up (because as a user I didn’t really understand it, or the “independent” auditor did a bad job auditing it), you are basically fucked. There’s nowhere to turn. That’s the downside of decentralization. Let’s say I want to borrow some $eth against my $btc balance, and I enter “90” as the number of days within which I will pay back. What if the smart contract just automatically wires the $btc collateral to the other party after 80 days because I haven’t paid back yet? Nowhere to turn.
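
To show how unforgiving that is, here’s a purely hypothetical sketch of such a loan contract’s logic, with Python standing in for contract code; every name and term in it is made up:

    from dataclasses import dataclass

    @dataclass
    class CollateralizedLoan:
        """Hypothetical contract: BTC collateral locked against borrowed ETH."""
        collateral_btc: float
        deadline_days: int            # what the contract actually enforces
        repaid: bool = False

        def settle(self, days_elapsed: int) -> str:
            # The contract executes exactly as written: no helpdesk, no appeal.
            # If the enforced deadline differs from what the borrower thought
            # they entered, the collateral is simply gone.
            if not self.repaid and days_elapsed >= self.deadline_days:
                return f"{self.collateral_btc} BTC wired to counterparty"
            return "collateral still locked"

    loan = CollateralizedLoan(collateral_btc=1.0, deadline_days=80)  # borrower assumed 90
    print(loan.settle(days_elapsed=80))   # collateral seized; nowhere to turn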

Of course more and more “governance” tokens are being implemented, which are basically used to democratically determine when and how things are implemented and executed. But we all know that democracy doesn’t always work well. If we held a referendum right now about getting rid of the Corona lockdown, it would probably pass with a great majority. But would it be a good solution? And who guarantees that the things voted on are actually implemented as voted? Plenty of exit scams have already happened; the “rug pull” has become quite common lately. Gone are the coins, or at least gone down in value to (almost) zero.

And the blockchain itself? It’s an energy-consuming monster that replicates everything to every node. And it never forgets anything. Because blockchains are so inefficient, lots of important data is in fact stored “outside” the blockchain, with only the metadata on-chain. That does not make sense at all. And how about GDPR / AVG? No one cares.

Apart from that, there are issues with the tokenization of digital assets, let alone “real-world” assets. The way we put things on the blockchain should also be certified. But how can you certify without independent external auditors? For digital assets, the whole digital process of getting things onto the blockchain should be 100% reliable. What if we log certain access to information on the blockchain, but we cannot guarantee identity? What if we log the sale of a house or a piece of land, but we cannot digitally guarantee that the identity of the asset is correct? There are so many things we still cannot prove digitally with 100% certainty. So the same old “garbage in, garbage out” still applies, even on a blockchain.

Of course, this is all about public blockchains. And I just don’t see use cases for consortium or private blockchains.

Correct me if I’m wrong, but the current ways of thinking of the blockchain gurus are just plain stupid, or at least misleading to the dumb folks (you and me) actually using it. I’ve read quite a few “white papers” and “roadmaps” of lots of projects; 99.9% are plain stupid, and the other 0.1% forgot some important stuff that still “needs to be taken care of” but in fact just cannot be. We need a new ledger paradigm, including new ways of guaranteeing smart contract behavior, AND we need 100% reliable tokenization and oracles. But is that actually possible without third-party (government controlled) institutions? I’m not sure. Maybe open source AI can help? For now, I just don’t trust it at all. Decentralized or not, they are all just underworld casinos, and the house always wins.

iPaaS should not become your Trojan Horse

I’m currently involved as a cloud architect at an insurance company, working on the hybrid cloud reference architecture. They are moving from a hosted environment to a hybrid cloud, and cloud engineers are working on the detailed designs for networking, storage, subscriptions, identity & access, and integration: integration between the private cloud and the public cloud, as well as enterprise integration and B2B integration.

The customer currently uses BizTalk Server for enterprise and B2B integration. The hosted environment, too, is managed by a third party, which means that the customer’s IT department focuses only on the functional management of applications. This also applies to BizTalk Server. New services are provisioned by filling out a form, waiting a couple of days or weeks, and getting access to the new service. All is well so far, except for time-to-market, cost, and being able to use the latest and greatest technologies.

Hybrid cloud is needed to shorten time-to-market and innovate business processes while decreasing IT spend on infrastructure. The whole point of (hybrid) cloud is its on-demand characteristics and the ability to move away from traditional solution development to agile solution development, deployment and operations. Because provisioning is so fast, solution development can become much faster as well.

This is not an easy endeavor, I can tell you.

Apart from the organizational aspects (devops is a topic on its own; is your organization ready for it?), the constantly evolving cloud and the constant pressure from the business (“hey, we now have shorter time-to-market; let’s see it!”) don’t make life easy for the IT folks. We’re working on the reference architecture, but in the meantime several projects are underway to deliver cloud-based solutions. Some SaaS, some PaaS and some we-don’t-really-know-what-kind-of-aaS. Every day we run into issues regarding security and governance. You’re really going to store sensitive customer data in a NoSQL database running on a public-facing Linux box? Let’s think a little more about that. Refactoring the (hybrid) cloud solutions delivered so far is the first thing we have to do once the reference architecture and the detailed hybrid cloud designs are done. That, and making sure the organization can actually cope with hybrid cloud deployments, management and governance.

In the “good old hosting days”, security was designed, applied and governed. Processes were in place and everything worked fine. Today, however, checking or unchecking a single box in the Azure portal can have quite the impact. Suddenly, data leaks are possible. And we thought all was covered. Not.

Infrastructure as Code (which applies not only to IaaS but also to PaaS) is mandatory. Clicking in a portal should be avoided. Cloud resources have to be deployed and managed by means of code. Versioned code. Code that has been reviewed and tested. Since full DTAP (development, test, acceptance, production) environments are rapidly becoming a thing of the past, your DTAP process has to be solid in order to prevent screw-ups with production data.
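
As an illustration, a deployment then looks something like this; a minimal sketch using the azure-identity and azure-mgmt-resource Python SDKs, with a placeholder subscription ID and a hypothetical resource group name:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    # Code instead of portal clicks: this script lives in version control,
    # gets reviewed and tested, and produces the same resource every run.
    credential = DefaultAzureCredential()
    client = ResourceManagementClient(credential, "<subscription-id>")

    client.resource_groups.create_or_update(
        "rg-integration-prod",   # hypothetical resource group name
        {"location": "westeurope", "tags": {"owner": "integration-team"}},
    )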

Why is the title of this blog post about iPaaS (Azure Logic Apps, API Apps, Functions, API Management, Service Bus) and a Trojan Horse specifically? Because integration is at the core of everything when it comes to hybrid cloud. All aspects of the hybrid cloud architecture are touched here. Before you know it, things are tried out and put into production, with all the potential risks as a result. Let’s protect ourselves from that.

Four things are important here:

  1. Have a reference architecture for hybrid cloud. New rules apply here! Hybrid cloud is not private cloud. The reference architecture should at the very least contain the architecture principles and high-level requirements that apply to all hybrid cloud solutions deployed. Example principle: “Passwords should be stored in a safe place”. High-level requirement resulting from that: “Passwords used in scripts and code should be stored in Azure Key Vault” (see the sketch after this list).
  2. Document the Solution Building Blocks. Azure is a box of Lego bricks. Make sure that you know how to use all those building blocks, and make sure that everybody knows the rules about which one to use in which scenario. Solution Building Blocks are not evil, but necessary artifacts. When do you use SQL Database, Blob storage, DocumentDB? How does security relate to these choices?
  3. Hybrid cloud needs hybrid service management. Make sure your IT service management sees your private cloud, hosted cloud and public cloud as one hybrid cloud and is able to manage that.
  4. Design and apply the right level of governance. Architectures, principles, requirements and solution building blocks are completely worthless if you don’t make sure they are actually used (in the right way). Peer reviews. Signed-off solution designs. Random inspections. These are all necessary things that you should cater for in your devops teams.
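
To make point 1 tangible: the Key Vault requirement translates into code along these lines (a sketch using the azure-identity and azure-keyvault-secrets Python SDKs; the vault URL and secret name are placeholders):

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # No passwords in scripts or config files: fetch them at runtime from
    # Key Vault, authenticating with the identity the script runs under.
    client = SecretClient(
        vault_url="https://<your-vault>.vault.azure.net",
        credential=DefaultAzureCredential(),
    )
    db_password = client.get_secret("sql-connection-password").value  # hypothetical secret name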

And remember, these things apply to your organization, but also to your IaaS, PaaS and SaaS solution vendors.

Let’s keep the Trojans out of your hybrid cloud!

Cheers, Gijs

Atomic Integration

Atomic Integration, I like the term. And I like the concept. Of course it has disadvantages, but don’t the advantages outweigh them? Let’s explore.

When the Integration Monday session “The fall of the BizTalk architect” (by Mikael Sand) was announced, I was immediately intrigued. It was the combination of the title and the presenter (Mikael often rocks) that made me create a note to self to watch it later. And finally I did. And it triggered me to write this blog post.

For years we have been telling our customers that spaghetti is bad and lasagna is good. We’ve also been telling them that re-use is the holy grail, and that by putting a service bus in the middle, everything becomes more manageable, integrations get a shorter time to market, and the whole application landscape becomes more reliable.

But at the same time, we see that re-use is very hard to accomplish, that there are many dependencies between solutions, and that realizing all this in an agile manner is a nightmare if not managed meticulously. Especially when we need to deliver business value quickly, talking about creating stuff for later re-use and having dependencies on other teams is a hard sell to the product owner, and thus to the business.

Another thing that is really hard with any integration architecture, and the accompanying middleware with its frameworks built on top of the actual middleware (and thus becoming part of that middleware), is ownership. And that is a two-headed beast: first, ownership of the frameworks that run on top of the middleware, and second, ownership of the actual integrations.

The first one is already a hard nut to crack. Who pays for maintaining these frameworks? Is it financed by projects? Very hard to sell. Does it have a separate (business) owner and funding? I’ve never seen that, and it probably wouldn’t work, because the guy who pays the most gets his way, and that doesn’t necessarily mean the framework will be the most usable framework of all time.

The second one is even harder to manage. Who is typically the owner of an integration? The subscriber, and thus the receiver of the information? Or is it the sender, a.k.a. the publisher of the information? Who pays for it? Is that the owner? And will he be managing it? And what happens if the integration makes use of all kinds of other, re-usable custom features that get changed over time and that the actual owner of the integration is not interested in at all?

So why not indeed do more copy-and-paste instead of inheriting or re-using? The owner of a specific integration is completely responsible for it and can change, fork and version whatever he likes. And as Mikael says in his presentation: TFS or Visual Studio Online is great for finding out who uses certain code and informing them when a bug has been solved in some copied code (segment). And of course we still design and build integrations according to the well-known integration patterns that have become our best friends. Only, we don’t worry that much about optimization anymore, because the platform will take care of that. Just like we had to get used to garbage collectors, and to developers not knowing what free() actually means, we need to get used to computing power and cheap storage galore, and therefore no longer need to bother about redundancy in certain steps of an integration.

With the arrival of cloud computing (more specifically PaaS) and big data, I think we are now entering an era in which this actually becomes possible and, at the same time, manageable. The PaaS offerings by Microsoft, specifically Azure App Service, are quickly becoming an interesting environment. Combined with big data (which to me, in this scenario, means: just save as much information about integration runs as possible, because we have cheap storage anyway and we’ll see what we do with the data later), runtime insight, correlation and debugging capabilities are a breeze. Runtime governance: check. We don’t need frameworks anymore, and thus we don’t need an owner anymore.
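
In practice, “save as much as possible” can be as blunt as appending one record per integration run to cheap storage and analyzing it later. A minimal sketch (the local file stands in for whatever blob or data lake store you’d actually use):

    import json, time, uuid

    def log_run(integration: str, status: str, details: dict) -> None:
        """Append one record per integration run; decide later what to do with it."""
        record = {
            "run_id": str(uuid.uuid4()),   # lets you correlate steps across systems
            "integration": integration,
            "timestamp": time.time(),
            "status": status,
            "details": details,
        }
        # Stand-in for blob / data lake storage: storage is cheap, keep it all.
        with open("integration-runs.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    log_run("order-to-invoice", "succeeded", {"orders_processed": 12})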

Azure App Service, in combination with the big data facilities and the analytics services, is all we need. And it is a perfect fit for Atomic Integration.

Cheers, Gijs

On the desperate need for (micro)services governance

Am I becoming an old integration cynic (or a Happy Camper as @mikaelsand so sarcastically called me a couple of weeks ago during an #IntegrationMonday)? I don’t think so, but correct me if I’m wrong, please!

My current view on the latest Microservices platform craze is the following: old wine in new bottles. Yeah, yeah, yeah, we can now run on the latest SQL Server (but not SQL Azure, and not AlwaysOn) and run things in different containers. We have the 3rd or 4th incarnation of a business rules composer that does exactly the same, but in a new frame. And yet another way of composing services and building transformations. And we store and exchange things using JSON. Great! 😉 But to me, this all still falls in the category “platform alignment” or “platform modernization”. This is not innovation. This is not adding great features that help our customers solve real-life issues in a much better way. This is like adding airbags and anti-lock brakes (which any car has today), instead of building a car that’s 60% software and really innovative (like Tesla). This is just making sure you use the latest technologies to run your basic functions, and even those don’t provide the bare minimum.

We are still not fixing what we really crave: end-to-end design-time and runtime governance. And better productivity, but that’s a couple more bridges too far right now. I know that governance is a boring word. But I also know that my customers desperately need it. How else can you manage and monitor your ever more complex integration architecture? When running only on-prem integration, it’s relatively easy, yet even there we lack the real tools; we need 3rd-party tools to manage and monitor our environment. When adding SaaS, Mobile Apps and IoT to the mix, it becomes more and more complex. Let alone when adding Microservices.

My customers want the following. Today!

  1. An easy way to register and find services
  2. An easy way to compose services (aligned with business processes)
  3. A proper way to handle configuration management and deploy & manage those services (and really understand what’s deployed)
  4. Real end-to-end insight into composed services, including root cause analysis

For enterprises that have a “Microsoft unless” policy, it’s impossible today to use only Microsoft technology for integration purposes. They need 3rd-party tools for management and monitoring. Tools that solve only part of the design-time and part of the run-time governance issues we have. Tools that can’t be used end-to-end. Tools built by companies that we don’t know will still exist in two years’ time. Tools that are not Microsoft tools. Tools of which we even need more than one to provide end-to-end, fully manageable services.

When, for example, my on-prem BizTalk Server exposes a back-end SOAP or other legacy service as an externally facing REST service (this is called mediation), we can more or less manage and monitor that. Still in a quite limited way, because there is no real repository where my SOAP and REST services can be registered so that solution developers can find and start using them. And BAM needs quite some configuration. But the major problem is that as soon as the REST service is consumed by API Management, which exposes it to Apps, Portals and other consumers, we have no way to find out what happens anymore; no end-to-end tracking & tracing is possible. Unless we build it ourselves. By coding. Yes, you heard me: coding, a 20th-century concept of building software solutions. So 5 minutes ago!
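
The do-it-yourself version of that tracing boils down to generating a correlation ID at the edge and forwarding it on every hop. A minimal sketch (the header name is a common convention, not a Microsoft feature):

    import uuid

    CORRELATION_HEADER = "X-Correlation-ID"    # conventional, not standardized

    def handle_inbound(headers: dict) -> dict:
        # Reuse the caller's ID if present; otherwise this hop starts the trace.
        cid = headers.get(CORRELATION_HEADER) or str(uuid.uuid4())
        headers[CORRELATION_HEADER] = cid
        print(f"[{cid}] request received")     # stand-in for a real log sink
        return headers

    # Every hop (API Management policy, BizTalk pipeline, back-end service)
    # must forward the same header, or the end-to-end trace breaks.
    outbound_headers = handle_inbound({})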

What we need is platform development (by any vendor, by the way, including IBM, SAP, Google, etc.) that puts supportability first. This means: before you start building features, think about how your customers are going to manage & monitor this in production. Really cater for management & monitoring; don’t build software that cannot be managed and monitored, or only poorly. Once we have that, we’ll be more than happy to get better design and development productivity as well. But for now, getting rid of the bulk of the cost of any integration environment, namely supporting it on a day-to-day basis (the part of the iceberg that’s under water), would be a good start indeed.

Thanks for listening. Please spread the word. 🙂

Cheers, Gijs

p.s. I think Microsoft are still way ahead of most other vendors. Pure-play integration vendors don’t have a bright future. It’s a platform game.