Artificial intelligence works with algorithms that draw on a wide range of data. Problems can arise when such data contain gender bias, as AI expert Kenza Ait Si Abbou Lyadini explains.
They go by names like Alexa, Siri, Sophia, Roberta ... AI-powered smart speakers often have female names or, like "Hello Magenta," Deutsche Telekom's virtual assistant in smart speaker form, female voices. Just recently, when colleagues were about to "incarnate" a chatbot's avatar as a shapely woman (nicely drawn!), Lyadini felt she had to intervene. She reminded everybody that there is simply no reason why robots and AI-based systems should have a gender at all. "Virtual assistants are there to serve. So, if we always give them a female appearance, we're tapping into gender clichés."
More diversity in developer teams
Kenza Ait Si Abbou Lyadini is a Senior Manager at the Robotics and Artificial Intelligence Hub at Telekom IT. "We build AI solutions, and offer AI-related consulting, for other Group divisions." The team is spread over five countries: Germany, Slovakia, Poland, Hungary and Russia. Its gender balance is good. "The proportion of women in our team is relatively high, thanks to all the Eastern European female IT specialists we have." Her focus with regard to diversity in developer teams, however, goes beyond gender balance. "An ideal AI-developer team would have both women and men, and it would be highly diverse in numerous other respects as well. For example, it would span different age groups, skin colors, sexual identities and social backgrounds." What's more, its developers and data scientists would work alongside other kinds of experts, such as anthropologists, psychologists, sociologists and linguists. Why that kind of diversity? "Because we're using AI to build the world of tomorrow. And for that task, we need to take account of as many different perspectives as possible. We shouldn't be relying solely on white 25-year-old men in hoodies."
As Lyadini explains, algorithms are shaped by their developers and their users, who choose the data that go into them. Since computer programs cannot identify and question bias, algorithms adopt whatever bias is present in their training data. This is not just a theoretical problem. Career portals, for example, tend to show top jobs to female users less often than to male users, reports the newspaper Süddeutsche Zeitung. Medical AI systems are more often trained with data from male patients – with the result that their diagnostic performance is considerably poorer for female patients than for male patients. Facial recognition systems achieve 99 percent accuracy when their subjects are white men, but only 75 percent when their subjects are black women. Why do such biased machine-learning systems even make it to market? "Very simple: They are usually tested by white men – and they work perfectly for white men."
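Gaps like these only become visible when test results are broken down by group rather than reported as a single overall number. A minimal sketch of such a per-group evaluation (the function and group labels are illustrative, not taken from any of the systems mentioned):

```python
# Illustrative sketch: report accuracy per subgroup instead of one overall
# figure, so disparities like the 99% vs. 75% gap above show up in testing.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for each distinct group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Toy usage with made-up labels and predictions:
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0]
groups = ["white_male", "white_male", "white_male",
          "black_female", "black_female", "black_female"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'black_female': 0.0, 'white_male': 1.0} -> per-group reporting exposes the gap
```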
Algorithms adopt unconscious biases
An example from Amazon illustrates how algorithms can adopt and reinforce unconscious biases. The company discovered that its in-house algorithm for automatically screening job applicants was systematically disadvantaging women. The reason: the algorithm had been trained with data records from previous hires. From patterns in those records, the AI system was supposed to learn which characteristics Amazon especially values in job applicants. Because the majority of Amazon's previous hires had been male, the system concluded, logically enough, that being male is an important selection criterion. "As this illustrates, the data going into the algorithm, and not the algorithm itself, may well be the problem," Lyadini notes. But wait: Wouldn't a highly diverse developer team have trained the algorithm with the same data records? "Perhaps," Lyadini concedes, "but then people would have noticed more quickly that the software was unusable in that form. And, at any rate, diverse teams tend to screen their algorithm data more closely in advance."
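To make the mechanism concrete, here is a small illustrative sketch on synthetic data (not Amazon's actual system): a screening model trained on historical decisions that favored men ends up assigning predictive weight to gender itself.

```python
# Illustrative sketch only (synthetic data, not Amazon's system): a toy
# screening model trained on historical hiring decisions that favored men.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                 # genuinely job-relevant feature
gender = rng.integers(0, 2, size=n)        # 1 = male, 0 = female (synthetic)
# Historical decisions: skill mattered, but past hiring also favored men.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("learned weights [skill, gender]:", model.coef_[0])
# The clearly positive weight on "gender" shows the model reproducing the
# historical bias instead of judging candidates on skill alone.
```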
This matters because the data a machine learns from always reflect a certain world view. If you don't believe this, just check what kinds of hits come up on a Google image search for "mathematics professors." Photos of white men predominate. You have to scroll a long way before any picture of a female mathematics professor appears. The exercise becomes even more interesting when you search for pictures of "female mathematics professors": yes, the first picture shows a woman, but the second is already back to men. "One could argue that such results simply reflect the real world as it is," Lyadini says. It is indeed true that there are more male mathematics professors than female ones. But if you don't want cyberspace to propagate our real-world biases, and if it bothers you that the term "math whiz" simply "sounds male" to so many people, then you should take a second look at the data you are using for AI training.
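Taking that second look can start with something as simple as profiling the training data before any model is fitted. A minimal sketch, assuming a tabular dataset with hypothetical "gender" and "label" columns:

```python
# Minimal pre-training data audit (file and column names are assumptions):
# check how well each group is represented and how outcomes differ per group.
import pandas as pd

df = pd.read_csv("training_data.csv")
print(df["gender"].value_counts(normalize=True))   # share of each group in the data
print(df.groupby("gender")["label"].mean())        # positive-outcome rate per group
# Large gaps in either figure are a signal to rebalance, reweight,
# or collect more data before training a model on this set.
```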
"In sum, we need to make developers and data scientists more aware of data bias, to keep AI software from reinforcing and propagating injustice, sexism and racism." As it happens, Deutsche Telekom is working on this issue. It is currently developing an AI-ethics course and relevant rules for its own data experts. Lyadini is convinced that good – bias-free – AI can be a powerful tool for fairness. "We human beings haven't been very good at making our interactions fair and just. I sincerely hope that AI will help us make progress in this respect."
How AI can help de-bias job advertisements
"Assertive software engineer (m/f/d) wanted." That kind of language might well turn a woman off – especially, perhaps, a woman who sees herself as oriented to collaboration, and as someone who values teamwork. "Currently, we are developing a prototype of an AI system that will support our recruiters and departments in wording job advertisements."
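A minimal sketch of the underlying idea (tiny illustrative word lists and a made-up function name, not Telekom's actual prototype): flag gender-coded wording in an ad so recruiters can rephrase it.

```python
# Illustrative sketch, not Telekom's prototype: flag words in a job ad that
# research on gender-coded language associates with a masculine or feminine tone.
import re

MASCULINE_CODED = {"assertive", "competitive", "dominant", "driven", "rockstar"}
FEMININE_CODED = {"collaborative", "supportive", "interpersonal", "committed"}

def flag_coded_words(ad_text: str) -> dict:
    """Return the coded words found in the ad, grouped by tone."""
    words = set(re.findall(r"[a-zäöüß']+", ad_text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

print(flag_coded_words("Assertive software engineer (m/f/d) wanted for a competitive team."))
# -> {'masculine_coded': ['assertive', 'competitive'], 'feminine_coded': []}
```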
The aim, Lyadini adds, is to make the company's application process as bias-free as possible, with job advertisements that reach out to all kinds of people – men, women and people across a highly diverse spectrum of characteristics. The AI system is being designed to identify words and phrases that could discourage certain people from applying. The idea for it emerged at the first AI Hackathon for women, which Lyadini initiated last year in cooperation with the Women@Telekom network. At this year's Hannover Messe exhibition, Kenza Ait Si Abbou Lyadini's commitment in this area was honored with the Engineer Powerwoman Award 2019 (https://www.telekom.com/de/konzern/details/kenza-als-mint-powerwoman-ausgezeichnet-568384). Last year, she received the Digital Female Leader Award 2018.