"I want to supervise this study," Dr. Steffen de Sombre thought immediately when he heard the title and description of the study "AI and Us. Fast Food Knowledge and Virtual Love." As a project manager at the Allensbach Institute, he oversees a wide range of study topics, from health and sustainability to religion. Until three years ago, there was little demand for studies on AI; today, many studies include questions about it. However, he had not yet encountered an empirical study that specifically addressed the societal, cognitive, and social impacts of generative AI. Here, he shares what particularly attracted him to the study, what surprised him, and what did not.
Mr. de Sombre, thank you for taking the time to speak with me briefly. You were eager to supervise this study. Why do you think this topic is currently engaging so many people – including Deutsche Telekom?
Telekom stands for digital communication, so the topic fits the client very well. There is no doubt about its relevance: I only have to open a newspaper or look online to see its many facets, from regulation in the form of the AI Act to the societal consequences of using AI. I found the research questions very exciting, also because I now often use chatbots myself, both professionally and privately.
Speaking of which: How much did you see yourself in the results?
I saw a lot of myself in them. Like the respondents, I felt a sense of fascination the first time I used it. The way the bot expresses itself linguistically amazed me at first; a few years ago, I would have thought that absolutely impossible. After the wow effect, however, came a sense of disillusionment. In some respects, AI is like a charmer that knows how to phrase things smoothly. The way it expresses itself and the phrases it chooses instill trust in users. But it is worth checking carefully, because behind this polished surface there is sometimes a lot of emptiness. Still, with healthy skepticism, the fascination prevails for me.
You are addressing a result that the study also highlighted. Two opposing forces are at play: convenience and skepticism. When there is initial suspicion, users double-check the information provided. But over time, the inner laziness might win, and any information could be taken at face value, right?
Yes, absolutely. I think it always depends on how much the accuracy of the information matters. In a professional context, for example, the information usually has to be correct, so I can imagine that users check it more often there. In a private context, information might be accepted more quickly and then passed on. As the experts also say, this can indeed pose a threat to democracy. Keyword: opinion bubbles. A very selective slice of reality is passed on as fact.
What surprised you the most about the results? What surprised you the least?
I was not surprised that the use of generative AI is more pronounced among younger participants and those with higher education. That was to be expected. What surprised me, however, was the dynamism of the development: a quarter of the population already uses AI chatbots, and another quarter can imagine doing so, meaning they will likely become users before long. And those who already use chatbots expect their usage to increase significantly. I was also surprised that large parts of the population demand bans on AI applications in areas where I expected high approval. For example, nearly half of users say it should not be used for diagnosing illnesses. This shows me that beneath the surface, alongside the fascination, there is still a lot of fear of the technology. Wherever AI can have significant personal consequences beyond one's own control, such as diagnosing illnesses, treating mental health issues, or selecting job applicants, users remain very skeptical.
At the end, a look into the future: What is your forecast? More human or more machine?
Definitely more machine. It will gain ground in many fields of application and assist us in many areas. There is a risk that certain cognitive abilities in society will be lost or weakened, but the experts in our study do not necessarily see this as a threat. If AI is used to outsource one's thinking, that amounts to mental regression. But if I use AI with my own analytical mind and knowledge and seek targeted support, artificial intelligence can extend my cognitive abilities. In the end, the way it is used determines whether it is a step back or a step forward: avoiding thinking out of convenience is regression; using AI to support one's cognitive abilities is progress.
The interview was conducted by Kathrin Langkamp.