Sustained Ethical Ecosystems (SEE)

The responsible tech movement and the field of AI ethics often point to two benchmarks: AI systems that prevent harm (nonmaleficence) and promote good (beneficence).

But ethical AI is not the result of a single decision by a single actor, company, or country. It requires sustained ethical decision-making throughout the AI lifecycle across each sector within each society.

We achieve this by co-creating and co-maintaining Sustained Ethical Ecosystems (SEE) in every society, across all sectors, including arts, business, defense, education, energy, finance, health, law, social services, technology, transportation, and beyond.

This dynamic, collective process is not the result of checking a compliance box or conducting a one-time review. Rather, our ethical commitments are only as good as our ability to consistently cultivate our collective conscience and raise one another to higher ground.

In this spirit, we invite all stakeholders to participate in a sustained ethical decision-making process, recognizing that each of us holds shifting commitments and interests. It is a process that integrates morals and values with safety protocols and legal obligations, alongside humanitarian principles, measured by fair and objective standards.

Sustained Ethical Ecosystems also emphasize the impacts of AI on users and non-users, with a particular duty of care for vulnerable populations and the environment.

Through these lived experiences, we evaluate the efficacy of our sustained ethical decision-making throughout the AI lifecycle, from design and financial investment to development, deployment, use, maintenance, and monitoring.

This dynamic collective process calls us to take shared responsibility to ensure that AI systems not only prevent harm but also benefit all.

Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts & Sciences
Department of Philosophy & Religion

AI Ethics Lab at the Digital Studies Center
Cooper Library in Johnson Park
101 Cooper St, Camden, NJ 08102

Copyright ©2025, Rutgers, The State University of New Jersey