
Verena Fulde


On a first-name basis with AI - how does that impact us?

  • First major empirical AI study focusing on possible social and psychological effects in Germany
  • Allensbach Institute surveyed on the relationship between humans and digital assistants
    • 63 per cent of AI users are fascinated by the performance of generative AI programs
    • One in four people use AI chatbots, another 24 per cent could imagine doing so
    • But artificial intelligence is not yet a substitute for friends
  • Are we becoming a first-response society?
    • Majority of users trust the output of AI
    • Half check the answers occasionally
    • Analytical thinking and personal knowledge remain important
  • Me. Me. Me. AI does not contradict.
    • AI gives the feeling of a "real" dialogue, just like with a real person: One in five users forgets they're talking to a machine
    • AI as a psychotherapist or confessor – rather not (yet)
Use of AI chatbots © Deutsche Telekom

Only a few people in Germany can imagine having a conversation with an AI chatbot as they would with a human. Among 30- to 44-year-olds, however, one in ten can. These are the findings of the first empirical study on the societal, cognitive, and social impact of generative AI (artificial intelligence) in Germany. The Allensbach Institute for Public Opinion Research was commissioned by Deutsche Telekom to conduct a representative survey on the use and potential impact of digital assistants and social bots on everyday life and society. Among other things, the researchers identified a tension between convenience and skepticism among many AI users: it is convenient to trust AI results, even when one is aware of their susceptibility to error.

A dialogue partner that is always there, never gets sick and usually politely agrees - this is how generative AI presents itself. It also seems to know almost everything. This makes it attractive: one in four people in Germany over the age of 16 already uses generative AI, for example in the form of AI chatbots for research, text creation, or translations. 39 per cent of users even use AI chatbots daily or at least once a week. And the trend is rising. This is astonishing for a technology that is just two years old. No technology has ever spread so quickly or been adopted with so little hesitation.

The study "AI assistants and us. Fast food knowledge and virtual love." takes a detailed look at the impact of the technology. It is based on data from a quantitative representative survey of more than 1,000 people over the age of 16 across Germany. A qualitative survey of AI experts and tech-savvy consumers rounds off the picture.

63 per cent of AI users are fascinated by what generative AI can already do today. "The fact that users of generative AI are so enthusiastic about its performance also explains why more than two thirds assume that they will use these tools even more frequently in the future," says Dr Steffen de Sombre, head of the Allensbach study. The judgement as to whether this technology is more of an opportunity or a risk depends heavily on the individual's own usage. Frequent users tend to see the opportunities. Non-users see the risks.

Smart companions: are machines better friends?

Concerns due to the difficult distinction between humans and machines © Deutsche Telekom

Dialogue with digital assistants already often creates the feeling of an exchange with a real person. 22 per cent - and therefore more than one in five - of frequent users have already forgotten that they are talking to a machine during a dialogue. This difficulty in distinguishing between a human dialogue partner and a machine worries most users.

Nevertheless, AI is no substitute for friends. The users surveyed in the qualitative interviews still miss human charisma, personality, empathy, and the full range of emotional nuance. The experts surveyed also pointed out that, above all, there is a lack of shared real-life experiences, which are an important component of a friendship. Deep loneliness can be alleviated by communicating with an AI chatbot. However, the experts do not consider this to be a sustainable solution. Compared to real human encounters, this form of communication is inadequate.

AI as a psychotherapist or confessor - not (yet).

Ban on AI in the treatment of mental health issues © Deutsche Telekom

An interesting contradiction arises when it comes to the question of whether digital chatbots can take on the role of a psychotherapist. This idea was overwhelmingly rejected in the quantitative survey. Around two thirds of people who have at least heard of AI chatbots would even generally prohibit the use of AI for this purpose. On the other hand, the experts surveyed can imagine using them in the future to support treatment, to bridge long waiting times for a therapy place, or as a diagnostic tool. However, experts also reject AI chatbots as a stand-alone tool for treating mental health problems.

However, users also see an advantage: 29 per cent say that they could talk to an AI chatbot about anything without it becoming embarrassing. Experts confirm that this can also be an advantage in psychotherapy.

However, when it comes to sensitive issues such as relationship problems, falling in love, serious illnesses and religious beliefs, people still prefer to confide in their friends rather than a bot, if at all. Only 0.6 per cent of generative AI users have ever asked the artificial assistant for advice on private problems such as heartbreak or loneliness. On the other hand, around one in ten can imagine using it for relationship advice, questions of faith and conscience or simply as a conversation partner.

AI chatbots are currently most frequently used to search for information, to translate, create or revise texts and to have something explained or summarized. Around two thirds of users find it particularly helpful that such programs make a lot of knowledge accessible in an uncomplicated and easy-to-understand way.

Me. Me. Me. AI does not contradict.

Concerns about effects on interpersonal interactions © Deutsche Telekom

52 per cent of users fear that increasing communication with AI programs will have an impact on personal interaction between people. A critical point raised by the experts is the potential loss of conflict-resolution, communication, and interaction skills. Communication with an AI is always unilaterally centered on the needs of the user. In real relationships, this could lead to people forgetting how to endure contradiction and resolve conflicts, because a real counterpart - unlike an AI - also has needs and desires of their own.

However, there is also a major advantage in the "human-like" design of communication with AI. It offers simple and intuitive handling. This can reduce inhibitions and thus facilitate access to new technologies.

"AI offers us new opportunities, but it doesn't solve every problem by itself. This technology allows us to address known challenges more quickly. And more efficiently. Chatbots with generative AI, for example, provide us with ideas and perspectives at the touch of a button," says Claudia Nemat, Board Member for Technology and Innovation at Deutsche Telekom. "People must always be at the center of our technological advances. It's about practical solutions for the challenges of our time."

Fast food knowledge: What sounds so eloquent must be right.

Tools such as Perplexity, You.com and SearchGPT are changing the way we search for information and deal with knowledge. We no longer have to trudge through long lists of links. We find answers. Well-written, plausible-sounding and concise. Will we become a society of first answers? Or do we keep on researching?

Trustworthiness of search results © Deutsche Telekom

More than half of users (55 per cent) consider the output of AI assistants to be trustworthy. Among users who use chatbots frequently, the figure is even higher at 64 per cent.

Answers are typically checked only when something arouses initial suspicion; convenience beats skepticism here. However, half of users (48 per cent) at least occasionally check whether the answers are correct. The experts surveyed warn that the polished wording of the answers can create an impression of correctness and completeness.

Media literacy is becoming increasingly important: be brave and use your own mind.

Teaching media skills will become even more important in the future, say the experts. Not accepting AI results uncritically is an aspect that should be addressed explicitly. Otherwise, these systems could even become a threat to democracy.

Concerns about manipulation by AI chatbots © Deutsche Telekom

Almost two thirds of people who have at least heard of AI chatbots are worried that their views and opinions will be manipulated by these programs. Experts believe that the increasingly individualized tailoring of information raises the risk of manipulation. This in turn can reinforce opinion bubbles. Another contributing factor is that large generative AI systems are essentially trained on data from the freely available internet, so discriminatory views are incorporated despite filters.

"Be brave and use your own mind, especially after the first AI response. This motto applies more than ever," comments Claudia Nemat.

This makes analytical thinking and personal knowledge all the more important, as they are the basis for being able to check the accuracy of AI results. For the same reason, references, such as those offered by tools like Perplexity, are extremely valuable.

Why is Deutsche Telekom addressing these issues?

Our networks connect people and open up access to digital technologies. One of the biggest trends of our time is generative AI. We utilize it in many different ways within the company. And we offer our customers numerous solutions for business and everyday life. As Deutsche Telekom, we also see it as part of our digital responsibility to accompany and scrutinize the AI trend. With our study, we want to raise awareness, lower inhibitions, and make it easier for people to get to grips with the increasingly complex topic of AI.

The study is in line with our digital responsibility initiative, and with the binding AI guidelines that Deutsche Telekom adopted back in 2018. The Deutsche Telekom Foundation is also committed to education - particularly in the MINT subjects (mathematics, IT, natural sciences, technology) and in dealing with digital media.

Study design

Two-stage design:

  • Qualitative survey with in-depth oral interviews with consumers and selected experts
  • Representative survey for which 1,040 people over the age of 16 were interviewed across Germany in face-to-face interviews from 16 to 27 September 2024

About Deutsche Telekom: Deutsche Telekom at a glance
