Dominik Mohilo is an editor and IT expert at the Munich-based communications consultancy PR-COM, which mainly supports tech companies. (Source: PR-COM)
26. Apr 2024



“The internet is not a legal vacuum.” With this oft-mocked quote, former Federal Minister of Justice Christine Lambrecht may well have convinced an unsuspecting public. The fact remains, however, that it is de facto impossible for operators of websites that host third-party content to enforce German law across the board. The German law implementing the EU Digital Services Act (DSA), which has now been passed by the Bundesrat after clearing the Bundestag, does nothing to change this. On the contrary: it creates new problems that will do society even greater harm.

On social networks such as X (formerly Twitter), on sales platforms with marketplaces such as Amazon, and on video portals such as YouTube, billions of users produce a steady stream of questionable content. This ranges from so-called hate speech and incitement to hatred to offers of counterfeit fashion and technology items. But how is a social network such as X, where users are estimated to publish at least 500 million messages, videos, or images every day, ever supposed to detect and delete all of it? The sheer volume of potentially legally relevant content makes manual detection and deletion practically impossible. Nor will the communities be able to help, despite the “notice and action procedures” the law now requires. Operators must review and process the resulting reports “without discrimination and in compliance with fundamental rights, including the right to freedom of expression and data protection”. That is at least as time-consuming as searching for violations themselves, since the volume of reports will almost certainly grow in step with the volume of posts.

At first glance, AI seems to be the solution. The danger, however, is so-called overblocking: the proactive filtering and deletion of content by algorithms, regardless of whether it is actually illegal. The use of artificial intelligence is therefore problematic at the very least and may even end up restricting freedom of expression. More generally, the question is whether the DSA itself endangers freedom of expression: the Digital Services Act not only obliges platform operators to delete illegal content. They must also review content that has an “actual or foreseeable adverse impact on social debate and electoral processes and public safety” – whatever that means. Faced with the threat of painful penalties, platform operators will prefer to delete too much controversial content rather than too little. Incidentally, the law also includes the option of shutting down platforms such as social media outright in a situation classified as a crisis. Anyone can imagine what that means for free social discourse and freedom of expression.

The dream of a stronger, better-regulated internet is thus taking shape on paper, but it brings with it a whole host of implications that could threaten our liberal society – not to mention the question of enforceability by an already overburdened judiciary. In principle, of course, the approach is the right one: large platforms in particular must be aware of their responsibility to society and restrict illegal content. Whether the Digital Services Act has done us a favor or a disservice, however, only time will tell.