Advancing Responsible Technology
through Solutions-Based Research and Education

The AI Ethics Lab at Rutgers University is an international research initiative based in Camden, New Jersey, examining the ethical and legal implications of artificial intelligence from design and development to deployment, use, and monitoring.

The lab advances a solutions-based scholarship methodology that identifies ethical challenges and develops practical strategies to help leaders build technologies that benefit humanity and the environment.

From Moral Distance to Moral Imagination

“Is ‘smart’ the best we can imagine for this century’s technology? 

Or should we be striving for something more?

Many of the harms caused by artificial intelligence are not due to a lack of intelligence. They are failures of imagination.

They come from the moral distance between those who build and regulate AI systems and the people whose lives they affect. 

Think of the distance between a product developed in one culture and the people living with its impact in another. Think of the psychological distance between those designing AI companions and the children who use them.

I founded the AI Ethics Lab at Rutgers University to help close those gaps.

How? We train leaders across sectors to practice moral imagination: the ability to step into an ethical dilemma and understand multiple perspectives. Through role-plays and case studies, leaders uncover blind spots before those moral gaps become real harms.

When we cultivate empathy, especially for the most vulnerable, we bridge that moral distance. We move from building smart tech to empowering humans to make wise choices.”

Dr. Nathan C. Walker, Founder

AI Ethics Lab, Rutgers University

From Rights to Code to Governance

The AI & Human Rights Index transforms eight decades of human rights law into a technical protocol that powers a continuous governance flywheel to evaluate and improve AI systems.

Step 1.
Build AI & Human Rights Index

Map the relationships among AI, human rights, and legal instruments across societal sectors, with special attention to vulnerable groups.

Step 2.
Design Technical Protocol

Translate the legal index into a structured semantic system for evaluating how AI systems may violate or advance human rights.

Step 3.
Activate Governance Flywheel

Enable a continuous feedback system that democratizes the monitoring and training of AI systems by integrating diverse perspectives to reduce real harms and advance human rights.

Together, these steps reflect the lab’s dual focus on protecting human dignity and strengthening governance across disciplinary and cultural contexts. This work supports the development of scalable, dynamic frameworks for human-centered AI governance and public engagement.

Democratizing AI Ethics through Design

Project Insight addresses challenges in online behavior and technology use by introducing Self-Reflection Technology.

By helping individuals develop personalized ethical frameworks, it empowers users to self-regulate their digital lives.

The goal is to elevate human agency, enhance self-efficacy, and promote a more responsible digital culture.

Timely and Timeless Education

The AI Ethics Lab contributes to interdisciplinary teaching across undergraduate and graduate programs, equipping students to examine and address the ethical and legal implications of artificial intelligence. Courses include:

Honors College

Philosophy

Graduate Seminar

Computer Science

From Principles to Practice

The AI Ethics Lab translates research into practice through applied ethics programs that support leaders across sectors.

Academia

Advancing teaching, learning, and research on AI, ethics, and law through invited talks and collaborations, including engagements at Columbia University and Virginia Tech.

Government

Conducting legal research on local, state, national, and international approaches to artificial intelligence governance.

NGOs

Delivering talks and workshops for nonprofit organizations, including the New Jersey Institute for Social Justice and the International Rescue Committee.

Industry

Facilitating ethics-to-industry workshops that help founders and global technology leaders identify risks, reduce harms, and build trustworthy AI systems.

Training AI Founders

The Moral Imagination Exchange (MIX) is an applied ethics program that trains founders and leaders of AI-driven companies to identify and mitigate ethical and legal risks.

Through case studies and Socratic seminars, participants develop their Decision Infrastructure for the AI Lifecycle (DIAL), a framework they present to stakeholders and investors as evidence of their commitment to building trustworthy AI systems.

These programs reflect the lab’s solutions-based approach, translating ethical principles into practice.

Get Involved

There are meaningful ways to engage with the AI Ethics Lab.

For Students

Join the lab as a Research Assistant or Teaching Assistant, or earn academic credit.

For Experts

Contribute your expertise to research, publications, and applied ethics initiatives.

For Philanthropists

Make a tax-deductible contribution to the Humans First Fund.