In January 2020, following informal reports of incidents involving social media content related to the protests, three Chilean organizations, the Protected Data Foundation, the University of Chile, and the Observatory of the Right to Communication, carried out a study of censorship on social networks between October 18 and November 22, 2019. The study documented 283 incidents in which protest-related content was removed or blocked. In some cases, users who were active participants in the country's protest movement had their accounts suspended or terminated, with no possibility of a timely appeal. According to the authors and other civil society organizations that followed up on the phenomenon, the main causes identified were automation, lack of context, and lack of clarity in the platforms' rules.

Understanding how content standards are applied in Latin America is a constant challenge.

Although companies claim to have adopted global policies, the criteria applied differ from country to country and from region to region. The last two months have been very revealing of these practices, as illustrated by the differentiated treatment that misleading content received on the main platforms. For example, in March 2020, Twitter removed misleading posts about cures for COVID-19 published by Brazilian President Jair Bolsonaro, but appeared to be more tolerant of similar tweets by the president of the United States, Donald Trump. Likewise, it kept Trump's deceptive tweets about possible election fraud online, but labeled them as containing false information.

In 2017, the Center for Studies on Freedom of Expression and Access to Information (CELE) carried out its own research on Facebook, YouTube, and Twitter's measures to combat fake news and misinformation. Our intention was to track the announcements about misinformation that these companies made on a global scale, particularly after widely covered events such as the Brexit vote, the Cambridge Analytica scandal, and the Colombian referendum, and to compare the measures announced and implemented in light of those events with those that had actually been implemented in Latin America.

We found that new policies were announced, sometimes on a daily basis, and that new tools, policies, and programs frequently overlapped or contradicted each other, making it difficult to assess what was actually being done and where. Disaggregated information on implementation by country was very hard to find, and procedures and policies were not always translated into the local language, which made it impossible for some users to understand how their content was evaluated and what remedies were available to them. Some initiatives were implemented in different countries with varying levels of resources, leading to disparities in their application. As researchers, it is very difficult for us to know with any certainty what impact high-profile global announcements have on users in Latin America. CELE is finalizing a new study updating its 2017 report on the platforms' responses to disinformation: of the 61 most prominent actions identified and analyzed in that document, the researchers were unable to verify the implementation of at least 28 in Latin America.

All of this speaks to a broader issue around transparency, accountability, and access to information regarding the operations of the major internet platforms. While we acknowledge the efforts made to improve transparency reporting over the past two years, it is still difficult to find disaggregated data for our region. It is not even possible to obtain basic information about which policies apply where, or data on the regional and local impact of content moderation and how the local context differs from the global one. Facebook and Twitter, for example, recently started to provide more information about their content moderation practices, but the data are still not geographically disaggregated. This profoundly undermines the ability of state and non-state actors to assess the social implications of private content moderation at the local level.

This lack of understanding of how content is moderated in Latin America, and the lack of attention to local contexts, has led to growing demands for regulation of the main platforms from both governments and civil society. Although the intentions may be good, the region's governments, perceiving cyberspace as largely unregulated, are more focused on developing new restrictions on online speech than on the risk of shrinking the space for freedom of expression. The spread of laws and proposals from Europe and the United States that are hostile to freedom of expression does not help, nor do the campaigns of many of the world's oldest democracies, including those of Europe and the United Kingdom, which pressure platforms to apply their terms and conditions more aggressively against harmful but legal content. Although well intentioned, these initiatives promote diffuse and overly broad restrictions on freedom of expression on digital platforms. Indeed, in a region with a long history of state censorship, these proposals can give political cover to governments seeking to adopt equally aggressive approaches to restricting speech online.

Latin American users, particularly those involved in activism or social movements, find themselves between a rock and a hard place. Within this context, it is more necessary than ever for Latin American civil society and activists to speak out in global debates on content moderation practices and freedom of expression.


By Agustina Del Campo

Image credit: Map of Latin America, Juan Downey


This article was originally published here. It is part of a series of posts from the Wikimedia/Yale Law School Initiative on Intermediaries and Information gathering perspectives on the local impact of content moderation in different countries and regions. You can access all the articles in the series through the Initiative's blog or its Twitter account @YaleISP_WIII.