Agency

Human agency is the practical capacity to translate autonomous choices into action with real effect.

In the context of AI, people retain moral agency and should hold ultimate authority over decisions that affect their rights and vital interests.

Practical agency depends on decision rights, usable controls, access to meaningful explanations, and avenues to appeal or override AI outcomes.

Under the General Data Protection Regulation (GDPR), individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, with limited exceptions. Where such systems are used, controllers must provide safeguards, including the right to human intervention, to express a viewpoint, and to contest the decision.

The EU AI Act and other international frameworks also focus on “human agency and oversight,” requiring designs that augment rather than replace human decision‑making.

AI decision-making is becoming more pervasive across sectors such as healthcare, criminal justice, finance, and employment, raising widespread concerns about the potential erosion of human oversight.

Developers help preserve human agency by embedding technical mechanisms for meaningful human control and by informing people about AI’s role in decisions. Technologists emphasize that human-centric design prioritizes AI systems that augment human capabilities rather than replace them. For example, by “automating tasks, not jobs,” AI systems can enhance human labor and decision-making while preserving people’s dignity, autonomy, and livelihoods.
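
As a rough illustration of such a mechanism, the following Python sketch shows a human-in-the-loop pattern in which an AI system only recommends, a human reviewer makes the final call, and the affected person can contest the outcome. All names and data here (Decision, human_review, "applicant-42", and so on) are hypothetical assumptions for illustration, not part of any specific framework, product, or legal requirement.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Decision:
    subject_id: str
    ai_recommendation: str        # e.g., "deny_loan"
    ai_rationale: str             # plain-language explanation shown to the reviewer
    final_outcome: Optional[str] = None
    decided_by_human: bool = False
    contest_notes: List[str] = field(default_factory=list)

def human_review(decision: Decision, reviewer_choice: str) -> Decision:
    # A human reviewer confirms or overrides the AI recommendation;
    # the system never finalizes a significant decision on its own.
    decision.final_outcome = reviewer_choice
    decision.decided_by_human = True
    return decision

def contest(decision: Decision, statement: str) -> None:
    # The affected person records a viewpoint, which reopens human review.
    decision.contest_notes.append(statement)
    decision.decided_by_human = False

# Illustrative use: the reviewer overrides the model, and the applicant contests.
d = Decision("applicant-42", "deny_loan", "Income below modeled threshold.")
d = human_review(d, "approve_loan")
contest(d, "The income figure the model used is out of date.")
print(d.final_outcome, d.decided_by_human, d.contest_notes)

The design choice loosely mirrors the GDPR safeguards described above: the system records whether a human decided, preserves the person’s stated viewpoint, and returns contested outcomes to human review.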

Ultimately, preserving human agency aligns AI systems with societal values and ethical responsibilities. Companies that integrate the corporate ethic of human agency cultivate public trust while enhancing their reputation and the value of their products.


