Guardrails are agreed limits and controls for AI that keep systems within safe, lawful, and ethical bounds. They combine policies, technical safeguards, and human oversight to prevent or reduce harm and to align AI with human rights.
Guardrails matter because they set enforceable expectations for accountability, transparency, privacy, and compliance with the law. They help ensure that people can understand and challenge automated decisions, that power is not concentrated in opaque tools, and that dignity and equality are protected. In the public interest, strong guardrails are not optional; they are a duty for anyone who designs, procures, or deploys AI.
Effective guardrails are clear, testable, and updated over time. They rely on independent checks, such as risk assessment, ongoing governance, and technical measures that improve fairness, reliability, and safety. When guardrails fail or are missing, institutions must halt or change systems and provide remedies.