The Ability to Appeal is an ethical and legal principle in artificial intelligence (AI) that ensures individuals have the right to challenge decisions made by AI systems. This principle is crucial when an AI system's decision significantly impacts a person's rights or interests. It emphasizes that those affected by automated decisions should have access to mechanisms—such as human review or judicial processes—to contest and seek redress for those decisions.
This concept is closely linked to the theme of Human Control of Technology and often overlaps with the principle of the "right to human review of an automated decision." Some ethical frameworks combine these principles, highlighting the necessity of human oversight as an additional layer of accountability in AI decision-making processes. For instance, the Access Now report describes the human-in-the-loop approach as a means to enhance accountability.
In detailed analyses, such as the one provided by Access Now, the Ability to Appeal encompasses two key rights:
- The right to challenge the use of an AI system: Individuals can contest the deployment of AI in decision-making processes that affect them.
- The right to appeal decisions made or informed by an AI system: Individuals can seek a review or reversal of specific decisions that have been influenced by AI.
These rights may be exercised through formal channels, such as judicial review, or through other accessible procedures designed to uphold them. Some guidelines limit the Ability to Appeal to "significant automated decisions," recognizing that not all AI-driven outcomes require the same level of scrutiny.
An essential aspect of this principle is ensuring that individuals are informed about the procedures available for exercising their rights. Making these channels accessible and widely known is vital to the effective implementation of the Ability to Appeal. To that end, organizations such as the OECD and G20 recommend that people adversely affected by an AI system be able to challenge its output based on "plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation, or decision." This transparency empowers individuals to understand and effectively challenge AI decisions that affect them.
For Further Reading
Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.” Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.