Two Problems with Using Human Rights as Benchmarks in Human Rights Impact Assessments
In recent years, CELE has been looking at how the ICT sector uses Human Rights Impact Assessments (HRIAs) and other due diligence practices to assess its own conduct. In a technical report we built a genealogy of the concept that situated impact assessments squarely within the United Nations’ voluntary turn, at the beginning of the century, in its approach to the relationship between business and human rights. This shift, contingent but understandable after decades of gridlock, partly explains the development of a series of corporate practices that reveal, at least in appearance, some concern about the impact of corporate operations and services on the world in which they intervene.
The ICT sector seems inclined to adopt the HRIA tool, and similar practices, to assess its operations and services. This phenomenon occurs in a context of mounting pressure on powerful Internet intermediaries to voluntarily combat a series of discourses and phenomena that worry Western ruling elites: misinformation, hate speech, undemocratic discourses, and speech that seeks to discredit electoral processes or question their results, among others. This pressure puts companies in a tough spot: addressing these demands could force them to commit human rights violations, since many states ask companies for more than they could legitimately achieve on their own. Both HRIAs and due diligence exercises can help companies gain a stronger footing when facing those demands.
The task presents, however, enormous difficulties. Here I would like to argue, very briefly, that two of them are serious: one legal and one epistemological. The first stems from certain features of international human rights law, such as its vague or ambiguous language and its persistent, widespread disagreements (especially in the sub-field of freedom of expression). It also has to do with the essentially paradoxical nature of that right, whereby many free speech conflicts can be understood as free speech claims pitted against other free speech claims. The second problem is the wide gap between what we know about the technologies we use and the enthusiasm with which we use them. There is simply not enough empirical research to accurately demonstrate the effects of certain services on individuals and society. That absence is a major obstacle to impact studies, because you cannot assess the impact of what is only partially understood. This gap is not the consequence of some entrenched difficulty in gaining knowledge about the technology itself or its social usage. Its causes are far more mundane, and have to do with at least three things: the pace of technological evolution, adoption, and change; the challenges of empirical research at a global scale; and the unequal distribution, between the global North and the global South, of the resources that such research demands.
The question of how the experts who carry out impact studies deal with these difficulties seems relevant to us. Are these problems serious, or can they be solved somehow? What strategies do experts deploy to reduce the effect of these problems on assessment processes to a minimum? How do they cope with the lack of adequate empirical studies?
Beyond these practical difficulties, a promising path of reflection runs through the genealogy of the HRIA and projects itself onto that history. The story is complex, but it can be briefly recounted as follows. HRIAs, like other impact studies, belong to a family of governance technologies developed in recent decades that seek to bring rationality to certain behaviors (in this case, those of corporations). They seek to measure the world, predict impacts, reduce negative consequences, or plan appropriate remedies. The HRIA is a form of governance, a device that seeks to make the world more manageable and predictable. And while it is essentially a corporate practice, it uses international human rights law as a benchmark to guide the analysis. HRIAs thus incorporate a normative element that is absent from other impact studies, or at least not as central to them.
Moving in that direction, HRIAs encounter the above-mentioned problems of legal uncertainty, but without access to the tools the law uses to solve them. Those tools are typical of the law and are reserved for public officials: interpretation and adjudication are actions that acquire a special meaning when they are linked to the coercive power of the state. Moreover, HRIAs advance on the basis of information inputs that resemble those received by representatives in the legislative process (generic, collected from experts and relevant stakeholders, intended to inform general criteria) more than those generated in the judicial process (adversarial, subject to a highly regulated evidence-production process, and so on).
A possible strategy for facing these difficulties would be a greater juridification of impact studies. Following Tom Ginsburg and Albert Chen, I would define juridification as «the spread of legal discourse and procedures into social and political spheres where it was previously excluded». This process is already present in the ICT sector: Facebook’s Oversight Board is nothing more than a legal structure endowed with certain characteristics (independence, a mandate to rely on human rights standards) built to help Meta make better (or more «legitimate») decisions. Similarly, content moderation has for years embraced practices that imitate typical legal forms: grievance mechanisms, appeal systems, and transparency reports, among others. HRIAs could follow a similar path if they somehow incorporated the legal practices of interpretation and adjudication to address the challenge of legal uncertainty, or ex post facto evidence-gathering mechanisms such as, for example, the one developed by BSR on the impact of Meta in Israel and Palestine during the events of May 2021. This path of juridification carries costs: it makes the tool more complex, more rigid, and for that reason less attractive as a mechanism of self-governance. I see, then, reasons why companies would not follow this path. But it may be necessary, since taking it would mean taking seriously the human rights that are invoked, not only as benchmarks but as legally binding normative mandates.