AI & Human Rights Index

The AI & Human Rights Index is a global research project of the AI Ethics Lab at Rutgers University, to be published by All Tech Is Human. It examines how artificial intelligence can violate or advance human rights.

Editors from four continents are working with contributors worldwide to develop a comprehensive legal framework that serves as a guiding structure for evaluating AI’s impact on human rights across societal sectors.

The Index draws on eight decades of human rights law to organize the relationships among rights, the instruments used to measure and enforce those rights, the AI principles that illuminate gaps in existing law, their application across distinct societal sectors, and AI’s negative and positive impacts, especially on vulnerable populations.

The Index emphasizes both harm prevention (nonmaleficence) and measurable positive outcomes (beneficence), holding societal sectors accountable for ensuring that AI benefits humanity and the environment.

Index researchers apply a Solutions Scholarship methodology to interrogate and critique AI while committing to identifying measurable solutions for every problem analyzed, embedding solutions-focused research into each article. Ultimately, the Index fosters technical, cultural, and legal literacy about how AI intersects with human rights through multimedia encyclopedia entries that are accessible to the public.

The Index applies what the United Nations calls a “vulnerability lens.”

In advancing this mission, the Index is organized around the following research questions. 

Human Rights

Which specific human rights does AI put most at risk, and which could it advance? What cultural contexts and regional approaches to human rights shape the interpretation of these risks?

Vulnerable Populations

Which vulnerable populations, already susceptible to human rights abuses, are most affected by AI systems? How can AI violate or protect their fundamental rights?

Principles

What ethical principles guide the creation of responsible AI? How do these principles intersect with laws and professional codes of practice across industries?

Instruments

What international laws and frameworks can be used to assess AI’s impact on human rights at every stage of its lifecycle—from development and deployment to monitoring?

Sectors

Which sectors of society, across all cultures, are responsible for ensuring that AI systems protect and promote human rights?

Glossary

What legal, technical, and ethical terms are essential for engaging in this interdisciplinary, global conversation about responsible AI?

From a Legal Index to a Technical Protocol to a Governance Flywheel

To recap, the Index draws on eight decades of human rights law to organize the relationships among rights, instruments, principles, sectors, and vulnerable populations. Researchers at the AI Ethics Lab will then translate this legal framework into the Protocol, a machine-readable technical taxonomy, with the ultimate objective of building CHARTER, the AI & Human Rights Governance Flywheel.

Protocol

The Protocol is a semantic infrastructure that uses SKOS and OWL to translate the Index into a structured technical system for evaluating how AI systems may both violate and advance human rights.

SKOS enables us to organize concept schemes, label related ideas, identify semantic relationships, and attach documentation notes.

OWL allows us to articulate formal ontologies, define logical constraints, create inference-based class definitions, and specify property restrictions. We are careful not to overuse OWL, because human rights norms are continually contested and reinterpreted.
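To make the SKOS side of the Protocol concrete, the following is a minimal sketch of how a concept scheme might encode Index relationships as RDF triples in Turtle syntax. The concept URIs, labels, and notes here are purely illustrative assumptions, not the Protocol's actual vocabulary; the sketch uses only the Python standard library.

```python
# Illustrative sketch: rendering a tiny SKOS concept scheme as Turtle.
# The ex: namespace and all concepts below are hypothetical examples,
# not the Protocol's real terms.

PREFIXES = """\
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <https://example.org/ahri#> .
"""

def concept(local_id, pref_label, broader=None, note=None):
    """Render one skos:Concept as a Turtle block."""
    lines = [f"ex:{local_id} a skos:Concept ;",
             f'    skos:prefLabel "{pref_label}"@en']
    if broader:  # skos:broader links a narrower concept to its parent
        lines.append(f"    ; skos:broader ex:{broader}")
    if note:     # skos:scopeNote carries the documentation note
        lines.append(f'    ; skos:scopeNote "{note}"@en')
    lines.append("    .")
    return "\n".join(lines)

scheme = "\n\n".join([
    PREFIXES,
    'ex:RightsScheme a skos:ConceptScheme ;\n'
    '    skos:prefLabel "AI & Human Rights Index Concepts"@en .',
    concept("Privacy", "Right to Privacy",
            note="Risk: AI-driven surveillance. Benefit: privacy-preserving ML."),
    concept("DataProtection", "Data Protection", broader="Privacy"),
])

print(scheme)
```

Keeping the heavy lifting in SKOS, as the text notes, leaves the contested interpretive work to human editors; OWL axioms would be layered on only where a logical constraint is genuinely stable.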

CHARTER

We then activate CHARTER: Classifying Human-Rights Advancements for Responsible Technology and Ethical Response, a continuous feedback system that serves as our Governance Flywheel. 

CHARTER begins with (a) Expert AI Trainers from around the world applying structured rubrics through Reinforcement Learning from Human Feedback. We then (b) gather real-world evidence from audits, NGO reports, academics, and cross-cultural feedback. This evidence is used to (c) revise the frameworks, (d) align the models accordingly, and (e) release updated versions of the Index and the Protocol.
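The five stages above form a loop, which can be sketched as a single pass in Python. Every function name and data shape here is an illustrative assumption about the flywheel's structure, not the project's actual implementation.

```python
# Hypothetical sketch of one pass of the CHARTER flywheel, stages (a)-(e).

def apply_rubrics(rubrics):
    # (a) Expert AI Trainers score model behavior against structured rubrics (RLHF).
    return [{"rubric": r, "verdict": "scored"} for r in rubrics]

def gather_evidence():
    # (b) Real-world evidence: audits, NGO reports, academic and cross-cultural feedback.
    return ["audit finding", "NGO report", "cross-cultural feedback"]

def revise_framework(framework, feedback, evidence):
    # (c) Fold trainer feedback and field evidence back into the framework.
    return {**framework,
            "revision": framework["revision"] + 1,
            "inputs": len(feedback) + len(evidence)}

def align_models(framework):
    # (d) Re-align models to the revised framework.
    return f"model@rev{framework['revision']}"

def release(framework, model):
    # (e) Publish updated versions of the Index and the Protocol.
    return {"index_revision": framework["revision"], "aligned_model": model}

def flywheel_pass(framework, rubrics):
    feedback = apply_rubrics(rubrics)                            # (a)
    evidence = gather_evidence()                                 # (b)
    framework = revise_framework(framework, feedback, evidence)  # (c)
    model = align_models(framework)                              # (d)
    return release(framework, model)                             # (e)

result = flywheel_pass({"revision": 0},
                       ["harm-prevention rubric", "beneficence rubric"])
```

The point of the sketch is the circularity: each release feeds the next round of rubric application and evidence gathering, which is what makes the system a flywheel rather than a one-off audit.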

Collaborators

The publication, AI & Human Rights Index, is the result of an international collaboration among individual researchers affiliated with

Recommended Citation: Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, Tanya de Villiers-Botha, and Hisham Zawil, eds. AI & Human Rights Index. New York: All Tech Is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2026. aiethicslab.rutgers.edu/index.