Uncertainties and Good Intentions: The European Commission's Guidelines for Mitigating Systemic Risks in the Context of Electoral Processes
I. Introduction
In 2024, more than eighty countries around the world will hold elections. Over two and a half billion people – more than a quarter of the global population – will head to the polls this year. This is an unprecedented moment for democracy: the largest electoral year in history. Several member states of the European Union will hold national elections this year and, between June 6 and 9, they will also elect the members of the European Parliament who will represent them for the next five years.
With the European elections just around the corner, on March 26, 2024, the European Commission published the “Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes.” The Guidelines apply solely to electoral processes at the national level in the Union’s member states and to the elections to the European Parliament.
II. Guidelines as Risk Mitigation Measures
The Guidelines were issued by the European Commission within the framework of the risk detection, analysis, and mitigation system created by the European Union’s Digital Services Act (hereinafter, DSA). Article 34 of the DSA obliges very large online platforms and very large online search engines (hereinafter, VLOPs and VLOSEs) to detect, analyze, and assess systemic risks arising from the design, use, or operation of their services. Among the risks to be assessed is the “actual or foreseeable negative impact on civic discourse and electoral processes.” Article 35, in turn, requires these providers to implement “reasonable, proportionate, and effective measures tailored to the specific systemic risks identified in accordance with Article 34,” and empowers the Commission to publish guidelines for risk mitigation containing best practices and recommendations for possible measures. Although adoption of these guidelines is not mandatory, the European Commission warned providers that if they do not follow them, they must “prove to the Commission that the alternative measures taken are equally effective in mitigating risks.” It is important to note that, because they apply the general rules of Articles 34 and 35 of the DSA, the Guidelines are not exhaustive: other measures not specified in the document could be adopted, based on the risks each platform identifies in the products and services it offers.
Article 35(3) of the DSA stipulates that, prior to publishing guidelines on systemic risk mitigation, the European Commission must conduct public consultations. The consultation, to which CELE contributed comments, took place between February 8 and March 7, 2024. The Commission published a draft of the Guidelines together with a questionnaire that could be answered in full or in part, and the official summary of the contributions received is publicly available.
III. On the Content of the Guidelines
The Guidelines, published on March 26, 2024, describe platforms as “important forums for shaping public debate and voter behavior.” They warn about a series of phenomena related to these platforms that create an “increased risk” to the integrity of electoral processes. Among these risks, they identify threats related to Foreign Information Manipulation and Interference (FIMI), the proliferation of hate speech, disinformation, extremist content intended to radicalize individuals, and content created with generative artificial intelligence. In response to this landscape, the Guidelines propose a series of recommendations for VLOP and VLOSE providers to adopt before, during, and after elections.
In general terms, the Commission recommends that platforms consider the impact of risk mitigation measures on fundamental rights, especially freedom of expression, including political expression, parody, and satire.
More concrete recommendations include: disseminating official information about the elections; media literacy initiatives; measures to contextualize posts; increased transparency regarding platforms’ recommender systems and greater user control over their feeds; more transparency and information about political advertising; cooperation with national authorities (limited to sharing official information and to exchanging information for risk analysis and mitigation), independent experts, and civil society organizations; collaboration with independent media, fact-checkers, academia, and other relevant stakeholders on initiatives to improve the identification of reliable information; and the implementation of internal incident response mechanisms involving the platforms’ senior management. One important recommendation is to ensure content moderation resources in the local language and with knowledge of national and/or regional specificities, and to designate dedicated teams ahead of each election, resourced in proportion to the risks identified for the election in question and equipped with the relevant expertise in content moderation, fact-checking, cybersecurity, disinformation and FIMI, fundamental rights, and related areas.
Regarding specific content, the Guidelines recommend that platforms take measures to address illegal content, such as incitement to violence and hatred, since such content can itself affect freedom of expression by inhibiting or silencing voices in the democratic debate. They also recommend establishing measures to reduce the prominence of disinformation, such as demonetizing content that has been declared “false” after a fact-checking process or that originates from accounts that have previously published false information, and they suggest preventing advertising systems from creating incentives for the dissemination of disinformation, FIMI content, hate speech, and extremist or radicalizing discourse. Furthermore, the Guidelines propose setting up procedures to detect and stop manipulation of the service through the creation of “inauthentic” accounts or bot networks, or through the deceptive use of a service, and implementing risk mitigation measures related to the creation and dissemination of content generated with generative AI.
IV. Analysis
We agree with the European Commission on the role and relevance it ascribes to internet platforms as one of the main forums for the circulation of political ideas and disputes over meaning. In Europe, more and more people use them as one of their primary sources of information. For this reason, we particularly value the Guidelines’ reference to the importance of protecting freedom of expression and pluralism when implementing these measures, as well as the inclusion of the recommendation to carry out human rights impact assessments, as suggested by CELE during the consultation process.
The European Commission, however, operates on two assumptions that are not fully proven. On the one hand, it assumes that misinformation is an efficient cause of polarization, despite academic studies being inconclusive on this point; some studies suggest that misinformation is a consequence of polarization rather than its cause. On the other hand, the adoption of the recommended measures would lead, in some cases, to especially strict controls over speech during electoral periods, and the concerning assumption underlying them is that greater controls over speech should apply precisely in electoral contexts. On the contrary, vigorous political communities need a broad, robust, and uninhibited public debate that creates the conditions for all opinions to be heard, and this is only achieved through a greater circulation of ideas. In any case, during election periods political speech must be especially protected (cf. OHCHR, “Monitoring Human Rights in the Context of Elections,” p. 9). It would have been valuable to see a provision requiring a higher degree of certainty than usual before moderating content on matters of public interest concerning the electoral process, public figures, or candidates for elective office, for the duration of the campaign period.
Strictly speaking, these two errors are not unique to the Guidelines; they are inherited from the DSA’s “systemic risks” approach. According to Articles 34 and 35, the greater the risks, the more intense the mitigation measures implemented must be. This interpretation would constitute an explicit deviation from the provisions of Article 10 of the European Convention on Human Rights and Article 19 of the International Covenant on Civil and Political Rights, which stipulate that state limitations on freedom of expression must be prescribed by law and necessary in a democratic society. Under the DSA, the proportionality test that limitations on freedom of expression must pass under international human rights standards is altered in two ways: first, the scrutiny is “outsourced” to the platforms, which must identify, analyze, and mitigate risks; second, the DSA loses sight of the requirement in international human rights law to use the least restrictive means (“mitigation measures”) possible to achieve government objectives (“risk reduction”). Instead, the DSA mandates that measures be adapted to the magnitude of the risks identified.
Another shortcoming that the Guidelines share with the DSA is the vagueness of some key terms. The lack of a legal definition of “systemic risk,” a central concept in the framework of the law, creates difficulties throughout the system and leaves platforms and other stakeholders (auditors, civil society, academia) without concrete parameters to work with or a common language; it also grants the Commission an enormous degree of discretion in applying the rules. Several of the specific risks identified are themselves extremely vague, such as the negative effects on civic discourse and electoral processes.
The Guidelines aimed to shed light on key concepts and set concrete standards for the upcoming electoral period. However, they do not define what should be understood by key terms such as “disinformation,” “extremist speech,” or “integrity of the electoral process.” Open-ended concepts like these lack the specificity needed to guide the actions of the platforms.
Among other things, it is essential to distinguish between legal and illegal content, as the remedies that the Commission may recommend for each are different. In the absence of a clear distinction, some of the measures suggested in the Guidelines create the wrong incentives for companies. For example, the suggestion to demonetize content that has been declared “false” by independent fact-checkers, or that comes from accounts that frequently publish falsehoods, could amount to an indirect form of state censorship. Additionally, the lack of a clear definition of “FIMI” and the extreme vagueness of the definitions adopted so far (despite efforts to define it) are problematic and encourage the silencing of critical voices from abroad.
Finally, the Guidelines include a section dedicated to the exchange of information among platforms and between platforms and states. This aspect was harshly criticized by civil society during the consultation period and was subsequently revised to include recommendations for transparency in communications between states and platforms, as well as for the involvement of Digital Services Coordinators in those communications. Even so, the suggestion that states could inform platforms about “possible risks to the electoral process, which may inform mitigation measures,” could lead to sophisticated forms of pressure (“jawboning”).
V. Looking to the Future
Overall, many of the remedies proposed by the European Commission are sound and proportionate, such as those that advocate for a greater circulation of information, digital literacy, the creation of dedicated teams, and transparency.
As noted above, other aspects deserve closer examination and should be revised. The proximity of the European Parliament elections may have worked against a more thorough drafting of the Guidelines.
However, the publication of the Guidelines only marks the beginning of the process. They recommend that VLOP and VLOSE providers prepare post-election reports analyzing the effectiveness of the risk mitigation measures implemented, including whether internal metrics and other objectives were met, lessons learned, and areas for improvement. The Commission also recommends that a public version of these reports be released.
Fortunately, the Guidelines foresee their own revision by the European Commission, which will take into account “the practical experience gained and the speed of technological, social and regulatory developments in the area.” It will therefore be essential to pay attention to how the platforms act during the electoral period, to the investigations and proceedings the Commission opens regarding the application of these rules, and to the information that emerges from the public versions of the companies’ reports, in order to steer future revisions of the Guidelines in the right direction.