Disinformation increasingly challenges our society. It is becoming easier to create and harder to detect. Artificial Intelligence (AI) is part of the problem, but, more importantly, part of the solution. With X-Creation, its innovation program for sustainable solutions, Telekom is working on ways to combat disinformation and protect our democracy. A dedicated episode of the ExplAIned podcast addresses this topic.
The Pope in a white down jacket, short emotional texts, sensational headlines – disinformation comes in many forms. Disinformation is intentionally spread, misleading, or false information that aims to deceive or manipulate people in order to serve particular interests. One example is deepfakes: realistic-looking but fabricated images, audio, and video content. AI makes it increasingly easy to create and spread disinformation – and increasingly hard to distinguish real from manipulated content.
AI and Disinformation – Between Deepfakes, Chatbots, and Algorithms
The Council of Europe defines AI as the "use of digital technologies to create systems that perform tasks generally believed to require human intelligence." AI is not a new technology; research has been ongoing for decades. But with applications like ChatGPT, AI is now becoming accessible to the general public.
Generative AI is particularly in focus. These models learn from large datasets and generate new content by imitating patterns based on probabilities. Increasingly, generative AI is being misused to create deepfakes to deceive society, groups, or individuals for political or economic interests.
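To make "imitating patterns based on probabilities" concrete, here is a deliberately tiny sketch – a toy with an invented sample text, nothing like a modern language model in scale or method. It counts which word follows which in the sample and then picks each next word according to those observed frequencies:

```python
# Toy illustration of generating text "by probabilities" (not a real
# language model): sample each next word according to how often it
# followed the previous word in a tiny, invented training text.
import random
from collections import defaultdict

corpus = "the pope wears a jacket the pope smiles the jacket is white".split()

# Count which word follows which (a simple bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Extend `start` word by word, sampling from observed successors."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # dead end: this word never had a successor
        # Choosing uniformly from the successor list reproduces the
        # frequencies seen in the corpus.
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

The output is always locally plausible (every word pair occurred in the sample text), which is exactly why statistically generated content can read convincingly without being true.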
AI is also misused to spread disinformation. Chatbots, for example, are programs that simulate human conversations based on AI. They can be used to deliberately spread disinformation and manipulate opinions. Algorithms also play a central role: they determine which posts users see on social media platforms and are tuned to maximize interaction rates.
Challenges for Democracy
Public awareness of deepfakes does exist. According to a Bitkom study from last year, 60% of respondents see deepfakes as a threat to our democracy. At the same time, 81% admit they cannot recognize deepfakes. Leading experts worldwide agree: in the Global Risks Report 2024, they rank AI-generated misinformation and disinformation as the most severe global risk over the next two years.
The threat disinformation poses to democracy is real – and being aware of it is especially important in a super election year like 2024. Telekom is convinced that AI is also part of the solution. Where it becomes ever harder for the human eye to distinguish genuine from manipulated content, AI can step in: using pattern recognition, it helps detect irregularities and inconsistencies in videos and images. Forensic algorithms already analyze image metadata and uncover anomalies. AI can also support people in monitoring social networks and spotting suspicious activity that indicates the spread of deepfakes. Various solutions are currently being tested.
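As an illustration of what metadata forensics can look like, here is a minimal sketch using the Pillow imaging library. The specific heuristics (missing camera make/model, a software tag) are simplified assumptions of our own – real detectors combine far more signals, and missing EXIF data alone proves nothing, since many platforms strip metadata on upload:

```python
# Minimal sketch of one forensic heuristic: inspecting image EXIF
# metadata for anomalies. Illustrative only – absence of camera data
# is a weak signal, not proof of manipulation.
from io import BytesIO
from PIL import Image

def exif_anomalies(image_bytes: bytes) -> list[str]:
    """Return a list of suspicious findings in the image's EXIF data."""
    findings = []
    img = Image.open(BytesIO(image_bytes))
    exif = img.getexif()
    # EXIF tag 271 = Make, 272 = Model: genuine camera photos
    # usually carry these fields.
    if 271 not in exif and 272 not in exif:
        findings.append("no camera make/model (typical of generated images)")
    # EXIF tag 305 = Software: editing tools often leave their name here.
    software = exif.get(305)
    if software:
        findings.append(f"processed with software: {software}")
    return findings

# Demo: an image created purely in memory carries no camera EXIF.
buf = BytesIO()
Image.new("RGB", (8, 8), "white").save(buf, format="JPEG")
print(exif_anomalies(buf.getvalue()))
```

In practice, such metadata checks are only one layer; pixel-level pattern analysis and provenance signals carry more weight.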
Combating Disinformation with X-Creation – ExplAIned Podcast Episode
Last year, Telekom's subsidiary T-Systems launched the innovation program X-Creation to address sustainability challenges – with support from the UN and the EU. The special aspect: partners, customers, service providers, and experts – from inside and outside Telekom – jointly develop innovative solutions that benefit society and the environment. Design teams work on so-called challenges for a set period of about three months, sometimes longer.
Telekom's sustainability division was also able to contribute a challenge as part of the #NoHateSpeech initiative. Together with partners such as CORRECTIV, Das NETTZ, T-Systems, AWS, MI4People, Google, and other communication experts, the team worked intensively on "protecting our democracy from the risks of AI-supported disinformation." The first step was defining the problem: Who is susceptible to disinformation? How do we reach this group? What approaches already exist? How can AI be put to positive use? What does the research say?
Based on this, various solution approaches were developed. One concept particularly dear to us is the News-Profi-App, an app for quickly and easily checking suspected disinformation. Its core is sharing: following the motto "Share it with the app first, then with the world," users submit questionable content to the app, which uses AI to find trustworthy information such as fact-checks, and then share the result back to the source of the disinformation. Further features, such as content in simple language, are planned. The app is aimed at everyone, but by letting users share results back to the original source, it also seeks to reach the group that is distant from socio-political topics, highly receptive to disinformation, and quick to share it without questioning. The core team is currently looking for partners and sponsors to help expand and realize the app.
Those interested in actively contributing to the app development, for example, with news content, professional or technical expertise, or funding, can directly contact the core team via the contact form here.
In the podcast ExplAIned – Human Views on AI, Sabrina Haag from Telekom's sustainability division and Jens Weidemann from X-Creation provide insights into the implementation of the challenge, the idea, and the development of an app against disinformation. Listen to the podcast here (Podcast available in German):
🎧 Listen on Spotify:
https://open.spotify.com/episode/7Fxd6JPL8P3G4kcjF97ZPB
🎧 Listen on Apple Podcasts:
https://podcasts.apple.com/us/podcast/ki-im-esg-bereich-hilfe-und-problemfeld-zugleich-ai/id1699738086?i=1000661995771
🎧 Listen on Telekom.com:
https://www.telekom.com/de/medien/podcasts/explained-human-views-on-ai
➡️ Links to other important podcast platforms:
https://explained-human-views.podigee.io/33-new-episode