A big issue in connection with digitization is algorithms that control processes and shape opinions in the background. Can algorithms be prejudiced? What do you think?
Ranga Yogeshwar: We need to understand that we are moving into an era in which many decisions that we – as humans – used to make are being transferred to mathematical models that decide by correlating data. Why do I say that? The basic idea of the Enlightenment is causality, cause and effect. A great deal rests on this; our legal system, for example. When someone is found guilty, it is a matter of determining the causality: Why did you do that? Who did it? Who is to blame?
What we are seeing today is a transition toward clouds of data in which algorithms are starting to make decisions based on correlations. And the decisions they make can be very important. One example: a number of lenders, such as Kreditech, now assess your creditworthiness on the basis of your personal data as gathered from social networks and e-mails. Interestingly enough, the data tells them what they need to know via correlations. If you are refused a loan, you have no way of asking what the reason was. As the software developer I spoke with told me, "We don't know the reason, either." The algorithm simply reaches that conclusion.
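To make the mechanism concrete, here is a minimal sketch of a correlation-based scorer. It is not Kreditech's actual system; the features, labels, and numbers are invented for illustration. The point is that the model learns whatever correlations happen to be in the data and outputs a score, and asking "why?" only returns coefficients, i.e. correlations, never causes.

```python
# Minimal sketch of a correlation-based credit scorer (hypothetical data,
# not any real lender's system). The model picks up whatever correlations
# exist between behavioural features and past repayment outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per applicant: [number of social contacts,
# average e-mail length, share of messages sent at night].
# None of these is a causal reason for repaying or not repaying a loan.
X = rng.normal(size=(1000, 3))
# Synthetic "repaid loan" labels that merely correlate with the features.
y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0

model = LogisticRegression().fit(X, y.astype(int))

applicant = np.array([[-1.2, 0.3, 1.5]])          # one new applicant
score = model.predict_proba(applicant)[0, 1]      # predicted probability of repayment
decision = "approve" if score >= 0.5 else "refuse"

# The output is only a number; there is no causal explanation to give.
print(f"score={score:.2f} -> {decision}")
```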
You can think of many other areas – deciding a person's creditworthiness, yes, but also, for example, estimating the probability that someone will commit a crime – where we are already using correlations, with the help of algorithms, to make decisions. And then two things happen. The first is that we replace causality with correlation; we have quietly moved away from the very substance of the Enlightenment. The second is that we reverse the sequence of events. Take the police, for example. In the past one first had to commit a crime and only then would the police come into the picture. Today, the police can take action on the basis of a possibly higher future probability. Or think of medicine and healthcare: a potential future risk can be predicted for you on the basis of big data, and that can lead – and this is already happening – to concrete medical interventions.
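As an illustration of this reversed sequence, the following sketch uses invented weights and a hypothetical policy threshold. It is not any real predictive-policing or medical system; it only shows the decision pattern being described: an intervention is triggered by a predicted probability crossing a threshold, before anything has actually happened.

```python
# Sketch of the "reversed sequence": action is taken on a predicted future
# probability, not on something that has actually occurred. All weights,
# features, and thresholds here are invented for illustration.
import math

RISK_THRESHOLD = 0.7  # hypothetical policy: intervene above this predicted risk

def predicted_risk(features: dict[str, float]) -> float:
    """Stand-in for a big-data risk model: a weighted sum of correlated
    features squashed into [0, 1]. The weights encode correlations, not causes."""
    weights = {"age": -0.02, "prior_incidents": 0.9, "neighbourhood_score": 0.4}
    z = sum(weights.get(k, 0.0) * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

person = {"age": 30, "prior_incidents": 0, "neighbourhood_score": 4.0}
risk = predicted_risk(person)

# Nothing has happened yet; the decision rests entirely on a correlation-based forecast.
if risk > RISK_THRESHOLD:
    print(f"predicted risk {risk:.2f}: intervention triggered before any event")
else:
    print(f"predicted risk {risk:.2f}: no action")
```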
This is dangerous because in the long run it makes us almost dependent on algorithms. Algorithms become, as it were, more important than human causality. And people, individuals, have no chance of determining what is right or wrong, because that is basically impossible in the world of correlations, with its countless different data streams.
And then we would experience something very contradictory: modern technology paired with a backward-looking perspective in our culture, a perspective that reaches back into the Middle Ages and makes algorithms a matter of fate. We need to resolve this contradiction. Many people are saying we need decisions now. The power of algorithms, the fact that processes on the Internet can be scaled up very rapidly and exponentially (which is part and parcel of the magic of the digital world), incredibly short times to market – all this means that we urgently need to make these decisions. Because if we don't make them, we will make ourselves dependent. And that would be unfortunate.