Artificial intelligence is shaping the technological development of our time. ChatGPT has brought AI into people's living rooms, and every week we read about new products and services that incorporate it. What does this mean for data protection? In my view, the perfect AI formula is more than just an algorithm: AI and data protection must go hand in hand and be considered together from the outset. We need a new approach here: privacy by strategy.
Back in 2019, I asked myself whether artificial intelligence (AI) could be a data protection offender. It is not per se, of course, but it can become one depending on how it is designed and used. The basis of any AI application is and remains data, together with the code that processes it. Many models also process personal data, in some cases very extensively. This is why, alongside technical feasibility, data protection is a key issue for AI, and vice versa.
When industry leaders present new AI technologies, one of their hallmarks is that they can handle ever larger amounts of data. Particularly striking examples are large language models (LLMs) such as ChatGPT and image generators such as DALL-E. These systems learn from huge amounts of data to generate their own content, which can sometimes look or sound deceptively real. With such generative AIs, several questions quickly come into focus: What data is used to train the models? Is it handled correctly? Is the data used for more than the user might assume? Depending on the answers and the practical application, far-reaching legal consequences, not least in data protection, can quickly follow.
Amid the hype surrounding AI, we too often marvel only at the result. Yet the closer we look at the computation path and the source material used to train an AI, the clearer it becomes that an AI and its creators must work cleanly in every respect from the start of a project. Only those who master the balance between technical feasibility on the one hand and legality (not just in terms of data protection) on the other will succeed with AI solutions and the business models built on them. Privacy by design, however, is no longer enough. An even earlier approach is needed: privacy by strategy.
This means incorporating data protection strategically much earlier. The data protection organization must be aware of the digital trends emerging on the market and in the specialist communities so that it can support the company's strategic approaches from the outset. Management and the responsible operational units can then determine at an early stage whether an approach can be implemented at all. In the best case, the result is a viable AI strategy, or at least initial guidelines for a feasible path to an IT solution.
The "classic" privacy-by-design approach takes effect too late for a company to pick up new digital trends in a smart, resource-efficient way. The new approach does not make privacy by design obsolete, however: it remains important to keep data protection in mind when designing the technology or IT solution itself.
As Group Privacy, we are looking at AI in all its facets and keeping a close eye on how it develops. It is important to us to convey data protection awareness, which matters so much today, to all our colleagues; this helps our product developers and programmers find the best way to build new IT solutions. In our Privacy Security Assessment (PSA) process, which all new products and services must undergo, the new Privacy Requirements Artificial Intelligence now apply. They supplement the already very extensive requirements for product and IT development with a catalog of questions specifically tailored to AI.
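To give a feel for what such an AI-specific question catalog could look like when wired into an assessment workflow, here is a minimal, purely hypothetical sketch in Python. The questions are drawn from those raised above; the structure, the requirement IDs, and the assess helper are illustrative assumptions, not the actual PSA catalog, which is defined in the published requirements document.

```python
# Hypothetical sketch of an AI-specific privacy question catalog.
# Names, IDs, and structure are illustrative assumptions only;
# the real PSA catalog is defined in the published requirements.
from dataclasses import dataclass


@dataclass
class Requirement:
    id: str
    question: str
    blocking: bool  # must be answered "yes" before release


AI_PRIVACY_CATALOG = [
    Requirement("AI-01", "Is the origin of all training data documented?", True),
    Requirement("AI-02", "Is personal data in the training set minimized or anonymized?", True),
    Requirement("AI-03", "Is data used only for purposes the user would reasonably assume?", True),
    Requirement("AI-04", "Is the model's behavior monitored for drift during operation?", False),
]


def assess(answers: dict[str, bool]) -> bool:
    """Return True only if every blocking requirement is answered positively."""
    return all(answers.get(r.id, False) for r in AI_PRIVACY_CATALOG if r.blocking)


if __name__ == "__main__":
    # AI-03 is unanswered/negative, so the assessment fails.
    print(assess({"AI-01": True, "AI-02": True}))  # False
```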
We are thus continuing the earlier AI guidelines while making their content more concrete in many places. The data protection requirements were developed on the basis of the GDPR, the "Binding Corporate Rules Privacy" Group guideline, and other elements. For us in Data Protection and Security, it is important that uniform standards apply throughout the Group when AI systems are evaluated and developed.
The guidelines also reflect the early strategic discussions we hold with key stakeholders within and outside the Group as part of our "privacy by strategy" advisory realignment. This enables us to proactively design and communicate suitable requirements for future products and IT solutions.
The new data protection requirements provide clear guidelines and thus facilitate development work in the field of artificial intelligence. We do everything we can to ensure that artificial intelligence does not become a data protection offender.
And we really do have everyone in mind: not just the developers, but also the users. The example of ChatGPT shows how quickly artificial intelligence can be turned into a data protection offender, for instance when users enter personal data into prompts without considering what happens to it. That makes it all the more important for the developers and operators of an AI solution to keep a constant eye on how it evolves during operation. At the same time, users must be continuously sensitized and reminded of their personal responsibility.
From a data protection perspective, we see the many opportunities this technology offers and which we use for the benefit of our customers and employees. Protecting personal data is a prerequisite for doing so, and our new data protection requirements provide the basis. We will closely monitor further developments in the field of AI and keep our requirements and awareness measures up to date. We are also happy to make our requirements transparent and put them up for discussion; constructive professional criticism is welcome. Please feel free to contact us.