OSINT and Artificial Intelligence: A Regional Perspective on an Explosive Combination
On Friday, July 26, 2024, through Resolution 710/2024, the Argentine Ministry of Security created the Artificial Intelligence Unit Applied to Security, with the aim of detecting, investigating and prosecuting “crime and its connections” using artificial intelligence. From the Center for Studies on Freedom of Expression (CELE) of the University of Palermo (Argentina), we are closely monitoring these developments because of their potential impact on the privacy and freedom of expression of the population. Our concern is not limited to the Argentine case: the disproportionate increase in state surveillance capabilities without any political or citizen oversight is a trend casting its shadow over the entire region.
Among the functions of the newly established unit are: monitoring “open social networks, applications and websites, as well as the so-called ‘Deep Web’ or ‘Dark Web,’” for “the investigation of crimes and identification of their perpetrators” and “the detection of situations of serious risk to security”; analyzing “social media activities to detect potential threats, identify movements of criminal groups or anticipate unrest”; analyzing security camera footage in real time to “detect suspicious activities or identify wanted individuals using facial recognition”; predicting and preventing future crimes through the analysis of historical crime data with machine learning algorithms; detecting and preventing cyberattacks; processing “large volumes of data from various sources to extract useful information and create profiles of suspects or identify links between different cases”; and sharing information among the different security forces.
In May, Argentina’s Ministry of Security had already reintroduced open-source intelligence (OSINT) tasks for surveillance purposes under the euphemism of “cyber patrolling.” The characteristics that differentiate OSINT from mere “patrolling” are its purpose (not public safety but information gathering and intelligence production), its secrecy (security forces patrol in uniform and are identifiable, while intelligence activities are conducted covertly), and its specificity (patrolling focuses on areas, while intelligence activities target specific intelligence objectives).
The measure is part of a growing tendency among states to establish mass surveillance systems over their citizens. The resolution itself acknowledges this trend by citing the United States of America, China, the United Kingdom, Israel, France, Singapore and India as successful cases of the application of AI to the security field. Governments in the region are not immune to this trend. At CELE, we coordinated a mapping of regulations and practices related to the state use of OSINT for surveillance in five countries in the region (Argentina, Brazil, Colombia, Mexico and Uruguay), a project that concluded with a comparative regional report.
The conclusions of that report remain fully relevant and applicable to the current situation. At that time, we already pointed out that in many states in the region, OSINT is practiced outside the law, in the absence of rules that establish its requirements, limits, or the subsequent handling of the information collected. And where there are rules enabling OSINT practices, they do not comply with the principle of legality required by international human rights standards, which demand that any restrictions on the enjoyment of rights be established through a formal law passed by Congress.
The implications of OSINT practices for the right to privacy are enormous and carry potential impacts on the exercise of the human right to freedom of expression. People tend to modify their behavior if they believe they are being watched, especially if there is a well-founded suspicion that their speech could be subject to state reprisals. Since much of public exchange now takes place on the internet, the possibility that users may self-censor their online interactions for fear of being monitored threatens to undermine the vigor of the debate necessary in a democratic society.
In all the countries analyzed in our report, there is a great deal of secrecy surrounding the acquisition of OSINT technologies and their application against specific individuals. This is problematic in several ways. First, secrecy hinders the public accountability of the officials who order the implementation of these policies. Second, it makes it practically impossible for individuals to challenge specific instances of surveillance in court. Finally, the lack of regulation regarding the use of the information obtained has led to all kinds of state abuses, such as using it to create profiles of academics, journalists, political leaders, protesters, and activists opposed to the government.
To prevent the violation of rights through OSINT practices, all state action must comply with the three-part test developed in the international human rights system, which requires that any surveillance measure imposed on the population be legal, necessary, and proportionate. The use of artificial intelligence tools in combination with surveillance techniques is concerning in light of these standards, particularly in the case of facial recognition technologies, which, in addition to being disproportionately invasive of the right to privacy, produce higher rates of false positives for women and racialized individuals.
For its part, the use of AI for the mass processing of data for the “prediction and prevention of future crimes” based on “statistical data” seems bound to lead to the criminalization and increased surveillance of the most vulnerable sectors of the population. This is due to the reproduction of the class and racial biases rooted in police and criminal justice systems, which are also reflected in the materials used to train the algorithms.
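To make this feedback mechanism concrete, the following minimal Python sketch uses entirely hypothetical numbers (not drawn from any of the systems discussed here): two districts have identical true offense rates, but one has historically received more patrols and therefore generates more recorded incidents. A naive “predictive” rule that allocates patrols in proportion to recorded incidents simply reproduces and entrenches the original policing bias.

```python
# Illustrative simulation with hypothetical figures: equal true offense rates,
# unequal historical patrol presence, and a "predictive" allocation rule
# trained only on recorded incidents.
import random

random.seed(42)

TRUE_OFFENSE_RATE = 0.05                 # identical in both districts (assumption)
patrol_share = {"A": 0.67, "B": 0.33}    # historical policing intensity (assumption)
recorded = {"A": 0, "B": 0}              # the "historical crime data" the model sees

POPULATION = 10_000
ROUNDS = 20

for _ in range(ROUNDS):
    for district in ("A", "B"):
        offenses = sum(random.random() < TRUE_OFFENSE_RATE for _ in range(POPULATION))
        # An offense only enters the dataset if a patrol is present to record it.
        detected = sum(random.random() < patrol_share[district] for _ in range(offenses))
        recorded[district] += detected

    # Naive "predictive policing": allocate tomorrow's patrols in proportion
    # to the incidents recorded so far.
    total = recorded["A"] + recorded["B"]
    patrol_share = {d: recorded[d] / total for d in recorded}

print("Recorded incidents:", recorded)
print("Final patrol allocation:", {d: round(s, 2) for d, s in patrol_share.items()})
# Although the underlying offense rates are identical, district A ends up with
# roughly twice the recorded incidents and keeps roughly twice the patrols:
# the model locks in the bias present in its training data.
```

The sketch is deliberately simplistic, but it shows why "predicting crime" from historical records predicts past policing patterns as much as criminal behavior.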
The regulation of OSINT activities must include the creation, by law, of specific protocols governing the collection, processing, purpose of use, and deletion of information obtained from open sources. These activities should also fall under personal data protection laws, allowing data subjects to exercise their rights to access, rectify, cancel, and object. Political and citizen oversight mechanisms for these practices should be established, such as the publication of budget allocations and state contracts for the acquisition of OSINT surveillance programs and services (which are usually hidden from public scrutiny for “national security” and similar reasons), as well as specific obligations of periodic reporting, including the publication of statistics and transparency reports with qualitative and quantitative indicators. It is also important that these laws include notification of citizens who have been monitored with these technologies, together with the use made of that information; prior judicial oversight; and clear sanctions for those who commit abuses.
Finally, it is necessary to require that, prior to regulating the use of OSINT and artificial intelligence systems, extensive public consultations be conducted with the participation of academia and civil society, and that studies on the impact of these technologies on human rights (mainly privacy and freedom of expression) be carried out and published. Such studies should be carried out by independent entities and repeated before introducing new systems and technologies.
(Originally published in Animal Político on August 17, 2024)