Human oversight is the process of monitoring, evaluating, and adjusting AI systems throughout their development and deployment to ensure alignment with human values, laws, and ethical standards. By actively guiding AI during training (for example, by providing feedback in supervised or reinforcement learning) and intervening in real time once deployed, humans can mitigate unintended outcomes and reinforce acceptable behavior. Techniques such as reinforcement learning from human feedback (RLHF), which uses human ratings as a proxy for preference, are used by systems like ChatGPT to keep humans in the loop. Human oversight, however, is ineffective if the underlying preference data is biased. If AI systems surpass human intelligence, aligning them with human values will require new research breakthroughs. In any case, individuals and organizations must make good-faith efforts to responsibly oversee AI systems so that they operate ethically and in alignment with societal standards.
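The idea that human ratings can serve as a proxy for preference can be sketched with a toy reward model: raters compare two responses, and a small model learns to score the preferred one higher. The sketch below uses a simple Bradley-Terry-style logistic loss; the function name, feature vectors, and data are all illustrative assumptions, not an actual RLHF pipeline.

```python
# Toy sketch of learning a reward model from pairwise human preferences.
# All names and data here are illustrative assumptions.
import numpy as np

def train_reward_model(features_a, features_b, prefs, lr=0.1, steps=500):
    """Fit weights w so that sigmoid(w.a - w.b) matches which response
    the rater preferred (prefs[i] = 1.0 means response A was preferred)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features_a.shape[1])
    for _ in range(steps):
        diff = features_a @ w - features_b @ w          # reward gap per pair
        p = 1.0 / (1.0 + np.exp(-diff))                 # modeled P(A preferred)
        grad = (features_a - features_b).T @ (p - prefs) / len(prefs)
        w -= lr * grad                                  # gradient step on log-loss
    return w

# Hypothetical data: feature 0 is "helpfulness"; raters prefer the more
# helpful reply in each pair.
A = np.array([[0.9, 0.1], [0.8, 0.5], [0.2, 0.9]])
B = np.array([[0.1, 0.2], [0.3, 0.4], [0.7, 0.3]])
prefs = np.array([1.0, 1.0, 0.0])   # A preferred twice, B once

w = train_reward_model(A, B, prefs)
scores_a, scores_b = A @ w, B @ w   # learned rewards for each response
```

After training, the learned scores rank the human-preferred response higher in each pair, which is the signal an RLHF pipeline would then use to fine-tune the underlying model. The example also shows the vulnerability noted above: if the `prefs` labels are biased, the learned reward faithfully reproduces that bias.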