Guardrails

Guardrails are agreed-upon limits and controls that keep AI systems within safe, lawful, and ethical bounds. They combine policies, technical safeguards, and human oversight to prevent or reduce harm and to align AI with human rights.

Guardrails matter because they set enforceable expectations for accountability, transparency, and privacy, and for compliance with the law. They help ensure that people can understand and challenge automated decisions, that power is not concentrated in hidden tools, and that dignity and equality are protected. In the public interest, strong guardrails are not optional; they are a duty for anyone who designs, buys, or deploys AI.

Effective guardrails are clear, testable, and updated over time. They rely on independent checks, such as risk assessment, ongoing governance, and technical measures that improve fairness, reliability, and safety. When guardrails fail or are missing, institutions must halt or change systems and provide remedies.
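At the technical level, one common form such a safeguard takes is a wrapper that checks both the input to and the output from a model, and halts with a reviewable reason when a check fails. The sketch below is a minimal, hypothetical illustration of that pattern; every name in it (Decision, guarded_call, no_personal_data, echo_model) is invented for this example, and the keyword check stands in for the vetted detection tools and human-review processes a real deployment would require.

```python
# A minimal sketch of a technical guardrail: pre- and post-checks
# wrapped around a model call. All names here are hypothetical and
# for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    allowed: bool
    reason: str

def no_personal_data(text: str) -> Decision:
    # Stand-in for a privacy check; a real deployment would use a
    # vetted PII detector, not a keyword list.
    blocked_terms = ["ssn", "passport number"]
    if any(term in text.lower() for term in blocked_terms):
        return Decision(False, "possible personal data")
    return Decision(True, "ok")

def guarded_call(model: Callable[[str], str],
                 prompt: str,
                 checks: list[Callable[[str], Decision]]) -> str:
    # Check the input against every guardrail before the model runs.
    for check in checks:
        verdict = check(prompt)
        if not verdict.allowed:
            # Halt with a recorded, reviewable reason instead of
            # proceeding, so the refusal can be audited and challenged.
            return f"[halted before model call: {verdict.reason}]"
    output = model(prompt)
    # Apply the same guardrails to the output before release.
    for check in checks:
        verdict = check(output)
        if not verdict.allowed:
            return f"[halted after model call: {verdict.reason}]"
    return output

def echo_model(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"model response to: {prompt}"

if __name__ == "__main__":
    print(guarded_call(echo_model, "Summarize this policy.", [no_personal_data]))
    print(guarded_call(echo_model, "List every SSN in the file.", [no_personal_data]))
```

In practice, the checks themselves are where independence matters: each should be testable on its own, logged when it fires, and backed by a human-review path, so that a blocked decision can be explained and remedied rather than silently refused.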

 


