An article by Claudia Nemat, Member of the Deutsche Telekom AG Board of Management, Technology and Innovation.
What do I most like to talk about when I meet opinion leaders in society? That we can shape the ongoing process of digitalization. We are not victims, we are architects. But we have to wake up. We need to discuss in detail, and actively shape, what artificial intelligence and robots mean for our work, education, health and social cohesion. The time is now. We need to deal in facts. We need to take responsibility. And we have to give it our all. I’m speaking as a manager at Deutsche Telekom and as a citizen, a mother, a committed European and a former scientist.
At the same time, we continuously have to outwit our own filter bubble by talking to different people. We need different opinions. A trip by Germany’s President to the United States should offer ample opportunity to gather them, whether from members of his delegation or at events in California addressing the opportunities and risks of digitalization. There will be no shortage of prominent representatives from the worlds of science, business and journalism.
Debating technological innovation always means debating ethics, too
Technology and innovation must always, and above all, serve people. And it is people who decide what technology should be able to do and what it definitely should not. Or, to put it in the words of Frank-Walter Steinmeier: “The future of democracy cannot be won without an idea of the democracy of the future.” Germany’s President is certain that the democracy of the future will be shaped in particular by rapid technological progress, and that new technology must enable people, not incapacitate them.
Deutsche Telekom recently became the first DAX company to present nine guidelines for artificial intelligence. They are not the product of an echo chamber, but the result of countless discussions with other companies, experts and institutions working on AI in Germany, the United States and Israel. They set out how we at Deutsche Telekom want to deal with AI and how we want to develop our AI-based products and services in the future. After all, as Stanford professor Fei-Fei Li very neatly puts it: “No technology is more reflective of its creators than A.I.” Indeed, algorithms can be used to benefit people (for instance by identifying cancer cells or detecting diseases early on) and they can also do them harm (for instance when scoring algorithms are used to deprive people of their civil rights).
Artificial intelligence must not become a black box
As explained above, this conversation was, and remains, important. These guidelines, which Deutsche Telekom has voluntarily undertaken to follow, are not carved in stone: they are a start, a basis for discussion and further development with others. I always carry the guidelines with me. One thing is certainly clear: we do not want artificial intelligence to become a black box that makes decisions we can no longer comprehend.
That brings us to another salient point: algorithms can embody preconceptions. If only one group of people, such as white, middle-aged men, does the programming, then others can be disadvantaged. That is because one particular world view is given preferential treatment, because connections are unwittingly made that lead to discrimination, and because only selected problems rise to the surface. These preconceptions must be addressed, and addressed by diverse teams. Ethical and philosophical principles also need to be incorporated into IT training, and the technical basics need to be covered in the humanities, too.
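To make the mechanism concrete, here is a deliberately simplified sketch in Python. It is not Deutsche Telekom code and uses entirely invented data and group labels; it only illustrates how a naive scoring model trained on biased historical decisions quietly inherits that bias.

```python
# Illustrative sketch only: a toy "scoring algorithm" trained on historical
# decisions that already disadvantaged one group. All data is invented.
import random

random.seed(0)

# Hypothetical history: (group, qualification, approved)
# Equally qualified applicants, but past decisions demanded more of group "B".
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualification = random.random()            # same distribution for both groups
    biased_bar = 0.5 if group == "A" else 0.7  # group "B" faced a higher bar
    history.append((group, qualification, qualification > biased_bar))

# A naive model that simply learns the historical approval rate per group
approval_rate = {}
for g in ("A", "B"):
    decisions = [approved for group, _, approved in history if group == g]
    approval_rate[g] = sum(decisions) / len(decisions)

print(approval_rate)  # roughly {'A': 0.5, 'B': 0.3} – the old bias lives on
```

The point of the sketch is that nobody wrote “disadvantage group B” anywhere; the discrimination enters through the data and the unexamined assumptions behind it, which is exactly why diverse teams and transparent models matter.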
Intolerance cannot be tolerated
“Do not forget that tolerance becomes a crime when applied to evil,” wrote Thomas Mann in his 1924 novel The Magic Mountain. The sentence still applies today, now that the house in Los Angeles where he spent his exile has been opened to the public. It is a kind of operating manual for how we must deal with fake news, hate speech and marginalization: we must not tolerate them, but speak out against them and act. In the age of digitalization, the fight for values needs to become louder. Artificial intelligence can help with that, as long as we work together to shape how it is used.