A Darwinian view on algorithms

A common notion these days is that “data is the new gold”. My opinion is different. Data is just a common natural resource: not that scarce and therefore not worth that much. But when combined in a smart way, it can become gold. So the new gold is not data, it is algorithms.

“Algorithms are the new gold, not data.”

Facebook and Google are companies that figured this out, after initially, and quite naively, just providing neat features to end users. By now they have become behavior-manipulation empires (see Jaron Lanier’s great TED talk on this). The algorithms these companies apply are opaque. We don’t have a clue what’s happening there and, to be frank, sometimes they probably don’t know themselves either.

As has been seen in the news over the last couple of weeks, these algorithms can sometimes turn into real Frankensteins. But I presume plenty of smart data scientists work at these companies and know what they are doing. Hopefully.

But what will happen when not only the input data can be manipulated (for example by generating fake data through trolls, from Russia or wherever), but also the algorithms themselves? What if these algorithms can be injected with malicious code, and produce results that better suit the people or organizations that put the code there?

We get manipulated by algorithms more than we know, and probably more than is good for us. The singularity is already here, but we don’t want to admit it yet. By far the best example: these days, probably more babies are born to couples matched by algorithms than to couples who met the good old-fashioned biological way.

What if evil people could influence these algorithms? And start generating “odd” (but “more desired”) couples by manipulating Tinder or the other platforms people use to find potential life partners? What if big data had been available during WW2 and the algorithms could have been manipulated? Would we all be speaking German now? That’s a pretty scary thought. And there are plenty of dictators, or dictator-like figures in high places, right now who could do this.

So, what’s next? How can we protect the algorithms? How can we find out more quickly (and openly) when systems have been hacked? Is open source the way to go? Should we prevent these platforms from getting too big? Do they need more internal and external governance? These will be the major topics we have to deal with in the short term; otherwise evolution will no longer proceed slowly and gracefully, and Darwin’s insights will soon be obsolete.

Cheers, Gijs