Blog.Telekom

Verena Fulde

Not my problem – it’s so far away: a remote dystopia?

Once people start talking about artificial intelligence (AI) and China, it’s not long before the term “dystopia” comes up too. And indeed, the Chinese Social Credit System does sound like the work of a state that has perfected the surveillance of its own citizens. The People’s Republic intends to make this system obligatory for all residents by 2020.

China and the algorithms

The Chinese "social credit score" shows how artificial intelligence can change our lives.

The Chinese government wants to know how trustworthy its citizens are. This would involve recording their behavior and steering it in a desired direction. Well-behaved citizens would then receive benefits such as preferential treatment in hospitals or permission to send their children to a prestigious private school.

Those who behave inappropriately have points deducted. This can mean being barred from booking plane or train tickets, or being restricted to lower-grade hotel rooms, for up to a year.

A bad score can even affect friends and family. Anyone who calls a defaulting debtor hears an automated message informing them of the person’s outstanding debts. That alone could make some people think twice about staying in contact with such a person.

Can this happen in Europe? 

It sounds shocking, and when we hear about it in Europe, we feel safe on the foundation of human rights and democracy we have here. We dismiss such systems, certain that “nothing like that can happen to us!” But is that actually the case? In many European states, totalitarian ideas are becoming more and more popular. Nor does it take long to find projects in which AI has been put to supposedly practical use without a second thought.

AlgorithmWatch and the Bertelsmann Stiftung have conducted a study listing various uses of software in administrative decision-making.

Here are a few examples:

  • Hungary, Greece, and Latvia are now testing a virtual border official intended to detect when migrants are telling false stories.
  • In Finland, private emails from job seekers are analyzed in order to create a character profile.
  • Machines are used in Italy to help decide who will receive medical treatment.
  • There are automated systems in Denmark intended to help identify children who are being neglected.

All of these things are based on AI. We as a society therefore urgently need to discuss what an ethical framework for AI should entail and whether corresponding legal regulations are needed, so that certain applications can be prohibited.

It’s not only state bodies that use AI; private companies are implementing it for their own purposes too.

Just think of credit bureaus like Schufa in Germany. It is long established, and its score is decisive when German banks grant credit, yet the company keeps quiet about how it calculates its scores. Scoring systems are therefore well established here too.

The Advisory Council for Consumer Affairs (SVRV) – a panel of scientists, consumer watchdogs, and industry representatives – sees a danger in comprehensive scoring systems in the private sector: “The development of super scores by international commercial providers may also become an issue in Germany. Law makers and supervisory authorities ought to prepare themselves to check whether measures should and can be taken so that super scores do not become commercially available in Germany too.”

Any form of scoring must be transparent

Federal Minister of Justice and Consumer Protection Katarina Barley wrote on Twitter: “Any form of scoring must be transparent and eliminate the possibility of discrimination and fraud. We will therefore examine the Advisory Council’s suggestions in depth and without undue delay.”

One recommendation I found very enlightening is that consumers should be able to find out how scores are distributed across groups with different protected characteristics. Such a comparison could reveal discrimination within the algorithms. If that’s technically feasible, I think it would be very good.
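
To make the idea concrete, here is a minimal sketch in Python of what such a comparison could look like. Everything in it – the records, the group labels, the scores – is made up for illustration; a real audit would need representative data and proper statistical testing.

    # Minimal sketch: compare how a score is distributed across groups.
    # All records, group names, and scores are hypothetical.
    import statistics

    # Hypothetical data: (protected_group, score)
    records = [
        ("group_a", 92), ("group_a", 85), ("group_a", 78),
        ("group_b", 71), ("group_b", 64), ("group_b", 80),
    ]

    # Bucket the scores by group.
    by_group = {}
    for group, score in records:
        by_group.setdefault(group, []).append(score)

    # Print summary statistics per group; a large, persistent gap between
    # otherwise comparable groups would be a red flag worth investigating.
    for group, scores in sorted(by_group.items()):
        print(f"{group}: mean={statistics.mean(scores):.1f}, "
              f"median={statistics.median(scores):.1f}")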

I’m very curious to see how the discussion develops and how Europe positions itself on the matter.

Dr. Mareike Ohlberg of the Mercator Institute for China Studies takes a closer look at China’s social scoring system in our video interview. She explains how far along the system really is and what the Chinese think of it themselves.
