Humans First Fund
  • About
    • Students
    • People
    • Our Values
  • Research
    • AI & Human Rights Index
      • Purpose
      • Human Rights
      • Principles
      • Instruments
      • Sectors
      • Glossary
      • CHARTER
      • Editors’ Desk
    • Project Insight: Democratizing AI Ethics
    • AI Principles & U.S. Presidents
    • Declarations of Interdependence
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
    • Algorithmic Science
  • Programs
  • Publications
  • News
  • Get Involved
    • For Students
    • For Experts
    • For Philanthropists

Guardrails

Guardrails are agreed limits and controls for AI that keep systems within safe, lawful, and ethical bounds. They combine policies, technical safeguards, and human oversight to prevent or reduce harm and to align AI with human rights.

Guardrails matter because they set enforceable expectations for accountability, transparency, and privacy, as well as compliance with the law. They help ensure that people can understand and challenge automated decisions, that power is not concentrated in hidden tools, and that dignity and equality are protected. In the public interest, strong guardrails are not optional; they are a duty for anyone who designs, buys, or deploys AI.

Effective guardrails are clear, testable, and updated over time. They rely on independent checks, such as risk assessment, ongoing governance, and technical measures that improve fairness, reliability, and safety. When guardrails fail or are missing, institutions must halt or change systems and provide remedies.
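The combination described above (policy rules, technical checks, and escalation to human oversight) can be sketched in code. The example below is purely illustrative: the policy terms, function names, and escalation step are hypothetical, not drawn from any specific framework. It shows a guardrail as a check that runs before an AI system's output is released.

```python
# Illustrative sketch of a technical guardrail: a policy check that
# wraps an AI system's output. All names and rules are hypothetical.

# Example privacy policy: categories of information that must not be released.
BLOCKED_TERMS = {"social security number", "home address"}

def guardrail_check(output: str) -> dict:
    """Return a decision record: allow, or block with reasons for human review."""
    lowered = output.lower()
    violations = [term for term in BLOCKED_TERMS if term in lowered]
    if violations:
        return {"allowed": False, "reasons": violations, "action": "escalate_to_human"}
    return {"allowed": True, "reasons": [], "action": "release"}

def guarded_respond(model, prompt: str) -> str:
    """Run the model, then enforce the guardrail before releasing its output."""
    output = model(prompt)
    decision = guardrail_check(output)
    if not decision["allowed"]:
        # Halt rather than release a potentially harmful output,
        # and route the case to a human reviewer.
        return "[withheld pending human review: " + ", ".join(decision["reasons"]) + "]"
    return output
```

A real deployment would pair a check like this with logging, auditing, and the organizational processes the entry describes; the point of the sketch is only that a guardrail is an enforceable limit that sits between the system and the people it affects.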

 


Disclaimer: Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.


Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts and Sciences
Department of Philosophy & Religion
429 Cooper Street
Camden, NJ 08102

Copyright © 2026, Rutgers, The State University of New Jersey

Rutgers is an equal access/equal opportunity institution. Individuals with disabilities are encouraged to direct suggestions, comments, or complaints concerning any accessibility issues with Rutgers websites to accessibility@rutgers.edu or complete the Report Accessibility Barrier / Provide Feedback Form.