The COVID-19 pandemic and the political crisis that followed it in Brazil sparked legislative efforts to address disinformation and platform regulation. However well-intended and relevant, this effort puts online expression at risk in the country.

The first drafts of the bill on “freedom, liability, and transparency online” attempted to compel platforms to ban “inauthentic accounts”, “non-labeled” bots (“computer programs created to imitate, replace or facilitate human activities in the performance of repetitive tasks on apps” which are “not reported as such to the application provider nor to its users”), disinformation botnets (“a set of bots whose activity is coordinated and articulated by a person or group of people, individual account, government or company in order to artificially impact the distribution of content in order to obtain financial and/or political gains”), and other kinds of disinformation-related behavior and content. The idea of removing platform immunity under certain conditions is somewhat similar to what Trump demanded of the Federal Communications Commission in his May 28 executive order. In addition, the first drafts endangered encryption by requiring platforms to act on content shared in instant messaging apps. They also contained very open-ended definitions – including a definition of “disinformation” – which could lead to legal uncertainty and a risk of over-removals. At some point, even the creation of reputation rankings – which would affect the visibility of content – was discussed.

Several other drafts were proposed by different members of Congress, and a variety of regulatory approaches are now under debate. Some of them introduce transparency obligations for platforms and free speech guarantees in content moderation, which may be beneficial. However, the proposed bills also address sensitive topics that deserve attention:

Incompatibility with the intermediary liability regime set forth in the Brazilian Civil Rights Framework for the Internet (MCI): most drafts of the disinformation bill require platforms to ban anonymous content, “inauthentic accounts”, and “unlabeled bots”, among other types of content associated with disinformation, establishing heavy fines in case of non-compliance. Pursuant to the MCI, as a general rule, intermediaries may be held liable for content posted by their users only if they fail to take content down after being notified of a court order requesting its removal.

Risks for encryption and traceability in instant messaging applications: one of the pressing topics of the moment is content traceability, that is, the ability to identify the origin (the author) of content shared between instant messaging app users. Some of the drafts make it mandatory for intermediaries to store this data for up to one year. This might be technically feasible, but it would result in massive data collection from users, which poses risks to their privacy. This information could be used for political persecution. It would also violate the confidentiality of journalistic sources.

User identification: one of the main issues raising concerns is the identification of users before allowing them to own a profile on social media. This would create great difficulties for the use of pseudonyms online and for people using social names, such as members of the trans community. Several drafts require platforms to collect user IDs, as well as proof of address, before allowing users to create an account. A new version of the bill requires users to disclose their cellphone numbers and demands that platforms suspend accounts linked to numbers disabled by telecom companies.

Criminalizing speech: the most recent drafts of the bill introduce various criminal offenses – including new versions of crimes against honor and conceptual confusions involving defamation, disinformation, and hate speech. The definitions of the new criminal offenses under discussion are too broad and pose risks for users, who could be criminalized for sharing disinformation without malicious intent. Moreover, a new draft creates a new electoral offense, “sharing of manipulated political advertising to degrade or ridicule candidates”, which could subject the benefited candidate to a heavy fine and even to removal or disqualification from the electoral race.

Beyond the technical issues that could endanger users’ rights, the main flaw of the legislative effort is that it keeps relevant stakeholders, such as civil society, academia, and industry, at bay – everything is moving quickly, with very little space for discussion.

To follow this discussion more closely, follow InternetLab on social media (@internetlabbr on Twitter, InternetLab on Facebook, and internetlab on Instagram). A new version of the bill might be voted on in the Senate next Thursday, June 25th, 2020 – again with very little time and space for discussion. If you wish to contribute to the debate and help raise awareness of the issue, bring it to the attention of relevant stakeholders.

Thiago Dias Oliva, head of research on Freedom of Expression at InternetLab and researcher for the Legislative Observatory @CELEUP

Heloisa Massaro, head of research on Information and Politics at InternetLab.

Photo Credit: Nijwam Swargiary