Beyond Ethics: Why a Rights-Based Framework Is Essential for AI Governance

In recent years, AI governance has gained significant attention, with various proposals and initiatives aiming to regulate artificial intelligence and address its ethical implications. As part of that trend, the Office of the UN Secretary-General's Envoy on Technology issued a call for papers on global AI governance, and we wrote a submission in response.

In this blog post, we summarize the key message of CELE's submission to that process, which you can read in full here. We maintain that a broad, generalist approach that treats AI as a standalone ethical issue may be problematic for human rights. To ensure a comprehensive and nuanced approach to AI governance, it is crucial to prioritize a specific, rights-based framework within United Nations (UN) processes.

From General Tech-Centered Approaches to Specific Rights-Based Mechanisms

David Kaye’s approach, as laid out in his 2018 report as Special Rapporteur on the right to freedom of opinion and expression, stands out for its human rights perspective. By linking AI to specific rights, such as freedom of expression, we can better understand the concrete ways in which AI affects people. This perspective underscores the value of narrower, more tailored frameworks that build on ongoing conversations and existing expertise.

A rights-based approach to AI governance calls for a granular and specific legal framework rather than a comprehensive and general one. Instead of treating AI as a single issue, we should recognize that it encompasses a range of technologies with different implications. Distinct problems, such as the impact of large language models on freedom of expression or the humanitarian consequences of autonomous weapons, demand tailored rights-based responses. It is also crucial to avoid creating a fixed hierarchy of risks and to acknowledge the context-specific nature of AI-related challenges. A risk-based approach fosters problem-solving perspectives, but it should not overshadow the potential impact on human rights. Putting rights first is essential for a human- and protection-centered approach to AI governance.

Institutional Design for a Rights-Based Governance Process

To effectively implement a rights-based approach within UN processes, two options can be considered. First, existing fora dedicated to freedom of expression, privacy, and data protection, such as UNESCO and the system of Rapporteurs, could be leveraged. Drawing on their expertise and prior recommendations, these bodies could assess specific AI risks from a human rights perspective and produce nuanced recommendations and guiding principles. Collaboration across different approaches and fora would enrich the process.

Alternatively, a dedicated forum modeled on successful examples such as the G20 or ICANN could be established. This forum would bring together experts selected by the international community, including representatives of specialized human rights offices and Rapporteurs. Organized into tracks focused on specific rights, these experts would lead discussions and ensure that human rights standards are incorporated. Civil society, academia, and the private sector should also be actively involved so that diverse perspectives are considered.

In conclusion, approaching AI governance within UN processes requires a delicate balance between comprehensive regulation and a rights-based approach. By prioritizing human rights, we can ensure that AI technologies are developed and deployed in a manner that respects fundamental rights and freedoms. Whether by drawing on the expertise of existing human rights fora or by establishing a dedicated forum, we can navigate the complexities of AI governance effectively. We hope the Office of the UN Secretary-General's Envoy on Technology will consider our submission and use the opportunity created by the consultation process to foster a more inclusive, responsible, and rights-centered AI ecosystem.