The Digital Services Act (DSA) has been fully applicable since February 2024. This EU law requires platforms to provide greater online safety and thereby strengthens users' rights. In this interview, Josephine Ballon, managing director at HateAid, explains how the new rules against hate speech and disinformation are being implemented and what role the Bundesnetzagentur and trusted flaggers play in this.
The Digital Services Act is designed to better protect us from illegal content, hate speech and disinformation while also strengthening users' rights. What does the Bundesnetzagentur have to do with it?
The Bundesnetzagentur is a German regulatory authority that protects the interests of consumers and businesses. It monitors the electricity, gas, telecommunications, postal and railway markets and ensures that they operate fairly and transparently. Under the Digital Services Act (DSA), the Bundesnetzagentur also assumes the role of Digital Services Coordinator (DSC). In this role, it ensures that online services in Germany fulfill their obligations to protect users against digital violence such as hate speech and disinformation.
You are a member of the Digital Services Coordinator (DSC) advisory board. How does the DSC work and how does society benefit from it?
The Digital Services Coordinator (DSC) acts as a central point of contact for users who wish to report violations of the Digital Services Act (DSA): for example, if an operator fails to give a comprehensible reason for its decision to remove content or leave it online, or if you want to report illegal content to an online provider but cannot because no reporting channel exists.
In addition, the DSC decides on applications from institutions and organizations in Germany that want to become trusted flaggers. The advisory board has no authority over the digital services; its role is to advise the DSC.
The Digital Services Act (DSA) stipulates that each EU member state can appoint trusted flaggers. Who or what is behind this?
Trusted flaggers are organizations that have special expertise in identifying illegal content. Content that is defamatory or libelous or that incites violence, for example, is illegal. Trusted flaggers are supposed to point out such content to platforms and thus help them identify and remove violations more quickly. Their reports are given priority over those of individual users.
The DSC reviews the applications of organizations and institutions seeking trusted flagger status against the criteria set out in the DSA: independence, proof of special expertise, objectivity and diligence. Trusted flaggers must also publish a report once a year on the notices they have submitted. Alongside information on the organization's transparency, the report must include the number, type and outcome of those notices.
Why are trusted flaggers from civil society helpful?
It is important to note once again: operators are generally not liable for illegal content uploaded by users. Under the DSA, they only have to take action once they are made aware of such content. Trusted flaggers receive privileged reporting channels so that the reports they submit are reviewed more quickly by the platforms.
We at HateAid have also applied to become a trusted flagger and hope to be accepted so that we can provide even better support to those affected by digital violence. They often turn to us after unsuccessfully reporting illegal content such as insults, defamation or incitement to hatred. As a trusted flagger, we hope to offer effective support in such cases.
We have also been part of YouTube's own trusted flagger program since 2020. Civil society organizations like HateAid bring unique real-life experience to the table. We act from the users' perspective and can report problems on and with platforms very quickly. We can also expose where platforms find and exploit gaps in the DSA's rules in order to circumvent them.
There are concerns that trusted flaggers could restrict freedom of expression. How do you assess this risk?
In my opinion, the discussion about trusted flaggers is an artificial debate driven by political motives. Trusted flaggers don't have a magic delete button for content; all they do is report content on online platforms. The platforms then have to review it and decide for themselves. Depicting the work of trusted flaggers as censorship not only completely misses the point, it is also dangerous: it can intimidate and deter organizations that want to apply, since they are the ones most exposed to public attention, hateful statements and disinformation.
In your opinion, can the spread of hate speech and disinformation be stopped?
It's an enormous challenge. In the age of generative AI, you can hardly believe anything you see on the internet without checking it. Digital violence is everywhere. This development is destabilizing us as a society and undermining our democracy. That is why we must all have an interest in not standing idly by. And that includes stricter regulation of social networks and their business models: as long as they continue to profit from the millions of posts and shares of hate speech and disinformation, and the associated advertising revenue, they will retain their power over public discourse.
It is incredibly difficult to change that through supervision of the platforms alone. Above all, creating transparency about what really happens on online platforms is a challenge. A platform is not a pill you can take to the laboratory and examine. Politicians and regulatory authorities depend on what the platforms show them; to verify or refute it, they usually have to rely on chance findings from civil society or academia.
What steps can our readers take themselves to actively combat disinformation?
Report content to trusted flaggers so that they can submit it to the platforms, and don't let false claims about the organizations that work as trusted flaggers go unchallenged. In our user guide, you can read how to enforce your rights on social media platforms under the DSA, including what you can do if platforms do not respond to your reports. If you want to become a trusted flagger on YouTube yourself or support our work there, you can find more information here (available in German).
About HateAid
The organization advocates for human rights in the digital space. HateAid provides comprehensive advice and support to anyone affected by digital violence. The non-profit educates politicians, the judiciary and businesses about hate on the internet and proposes concrete solutions for an internet in which freedom of expression is preserved and participation is possible for all. HateAid is a partner of the Telekom initiative #NoHateSpeech.