Yielding AI Systems are designed to defer to human judgment or control, especially in critical situations. The concept emphasizes human oversight in AI applications, ensuring that AI does not override human decisions and that accountability for outcomes remains with people.
Recommended Citation: "Yielding AI Systems." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed February 19, 2025. https://aiethicslab.rutgers.edu/glossary/yielding-ai-systems/.