Artificial intelligence (AI) is a key driver of innovation for Deutsche Telekom. We use AI to optimize customer experiences, solve business problems, increase productivity, and develop innovative products. We stand for the safe, transparent, and ethical use of AI and dutifully implement the EU AI Act.
The EU AI Act came into force on August 1, 2024. It is the first comprehensive legal framework for AI in the European single market. Its goal is to promote innovation while minimizing potential risks for consumers and businesses. To this end, the AI Act creates a clear regulatory framework and binding standards for the development and use of artificial intelligence, differentiating AI systems according to their risk potential. Depending on that risk potential, the requirements come into force in stages:
- February 2025: AI practices with unacceptable risk that jeopardize people's safety and rights (for example, manipulative systems or social scoring systems) are prohibited.
- August 2025: Documentation and information obligations for general-purpose AI (GPAI) models apply.
- August 2026: High-risk AI applications (for example, in healthcare or human resources management) must fulfill specific requirements.
- August 2027: Requirements for high-risk AI applications that are already subject to third-party conformity assessment apply.
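The staged timeline above can be sketched as a simple lookup. This is purely an illustration: the tier names, the mapping, and the `obligations_apply` function are hypothetical, not part of the AI Act or any Deutsche Telekom tooling; the dates come from the list above.

```python
from datetime import date

# Illustrative mapping of EU AI Act risk tiers to the dates on which
# their obligations begin to apply (per the staged timeline above).
AI_ACT_MILESTONES = {
    "unacceptable_risk": date(2025, 2, 1),      # prohibited practices
    "gpai": date(2025, 8, 1),                   # general-purpose AI models
    "high_risk": date(2026, 8, 1),              # high-risk applications
    "high_risk_third_party": date(2027, 8, 1),  # already under third-party assessment
}

def obligations_apply(tier: str, on: date) -> bool:
    """Return True if the obligations for `tier` are in force on date `on`."""
    return on >= AI_ACT_MILESTONES[tier]
```

For example, on January 1, 2026, the GPAI obligations are already in force, while the general high-risk requirements are not yet.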
Implementation of the EU AI Regulation
At Deutsche Telekom, an interdisciplinary team of experts is responsible for implementing the AI Act. A key milestone in our compliance strategy was the timely, EU-wide review of all AI applications for prohibited systems. This audit was successful: there is no evidence of the use of prohibited AI within Deutsche Telekom or in our products.
All technologies, products, and platforms that Deutsche Telekom uses or sells must go through a process in which security and data protection aspects are examined. Since 2020, this Privacy and Security Assessment has included a specific digital ethics assessment for AI technologies, ensuring effective control mechanisms for the responsible use of AI. The process includes clear rules for the risk classification, evaluation, and assessment of our AI systems.
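A risk-classification step like the one such an assessment process includes could be sketched as follows. The field names and decision rules below are hypothetical illustrations of the AI Act's risk tiers, not Deutsche Telekom's actual criteria or tooling.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, simplified description of an AI system under review."""
    name: str
    manipulative: bool      # e.g. manipulative or social scoring systems
    high_risk_domain: bool  # e.g. healthcare or HR management
    general_purpose: bool   # general-purpose AI (GPAI) model

def classify(system: AISystem) -> str:
    """Assign an illustrative AI Act risk tier, from strictest to least strict."""
    if system.manipulative:
        return "prohibited"
    if system.high_risk_domain:
        return "high-risk"
    if system.general_purpose:
        return "gpai"
    return "minimal-risk"
```

In a real assessment, each tier would then trigger the corresponding obligations: prohibited systems may not be deployed, high-risk systems must meet specific requirements, and GPAI models carry documentation and information duties.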
In addition, the AI literacy of employees is specifically promoted and awareness is raised for the conscious use of AI. Since 2018, a comprehensive and innovative range of training courses on the potentials, functionality and risks of the technology has been available. These include, for example, a gamification application on "Digital Ethics in Action" or an eLearning on the EU AI Act. Prompt-a-thons, in which employees solve real business problems using generative AI, and AI communities also contribute to greater awareness of AI. In addition, the ICARE Check supports employees in dealing with artificial intelligence.
We are convinced that responsible and legally compliant technology development requires clear standards – for ourselves and for our business partners. We assume this responsibility as part of AI governance at Deutsche Telekom.
Background
Under the EU AI Pact, participating companies voluntarily commit to at least three core actions:
1. Development of an AI governance strategy: Ensuring responsible AI use in the organization and working towards future compliance with the requirements of the EU AI Act.
2. Mapping of high-risk AI systems: Systematic identification of AI systems that are classified as high-risk under the EU AI Act.
3. Promotion of AI literacy: Continuously fostering AI skills and awareness among all employees to ensure ethical standards and responsible AI development.