This glossary entry is in the Edition 3.0 Review stage. Additional work is needed.
Human Agency refers to the capacity of individuals to make autonomous decisions and exercise control over their own actions and life choices. In the context of artificial intelligence (AI), human agency emphasizes the importance of ensuring that humans, rather than AI systems, maintain the ultimate authority in decision-making processes, particularly those that affect human rights, dignity, and well-being.
The concept of human agency is fundamental in AI ethics and law as it underpins the principles of autonomy, accountability, and control. As AI systems become increasingly embedded in decision-making processes across various domains—such as healthcare, criminal justice, finance, and employment—there is a growing concern that AI could undermine human agency if not properly governed. Ensuring that humans retain the ability to oversee, influence, and reverse AI-driven decisions is vital to preserving their autonomy and dignity.
Key ethical considerations around human agency in the context of AI include:
- Control and Oversight: Human users should be able to exercise meaningful control over AI systems. This means that AI technologies should be designed to provide humans with clear options and the ability to make informed choices, especially in situations where AI may impact individual rights or freedoms.
- Informed Consent: Human agency also requires that individuals be fully informed about the role of AI in decisions that affect them, and that they have the ability to give or withhold consent to its use.
- Accountability: Ensuring human agency is maintained requires clear accountability mechanisms. Humans should remain ultimately responsible for decisions made with or by AI systems, ensuring that AI does not function as an unaccountable authority.
- Human-Centric Design: The principle of human agency is also reflected in the emphasis on designing AI systems that augment, rather than replace, human capabilities. This approach is consistent with the ethical stance of “automating tasks, not jobs,” where AI supports human decision-making and labor rather than diminishing the role of humans altogether.
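The Control and Oversight and Accountability considerations above are often operationalized as a "human-in-the-loop" pattern: consequential or low-confidence AI recommendations are escalated to a human reviewer, and every outcome is attributed and logged. The following sketch is purely illustrative; all class names, thresholds, and fields are hypothetical and not drawn from any specific regulation or library.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Decision:
    subject: str          # who the decision affects
    recommendation: str   # the AI system's proposed outcome
    confidence: float     # model confidence, 0.0 to 1.0
    rationale: str        # explanation surfaced to the human reviewer

@dataclass
class HumanInTheLoopGate:
    """Routes consequential AI recommendations through a human reviewer.

    Illustrative sketch only: names and thresholds are hypothetical.
    """
    review_fn: Callable[[Decision], bool]  # human reviewer: approve or reject
    confidence_floor: float = 0.9          # below this, always escalate
    audit_log: List[str] = field(default_factory=list)

    def decide(self, d: Decision, high_stakes: bool) -> str:
        # Oversight: high-stakes or low-confidence cases go to a human.
        if high_stakes or d.confidence < self.confidence_floor:
            approved = self.review_fn(d)
            actor = "human reviewer"
            outcome = d.recommendation if approved else "rejected by reviewer"
        else:
            actor = "automated (human-overridable)"
            outcome = d.recommendation
        # Accountability: every decision is attributed and logged.
        self.audit_log.append(f"{d.subject}: {outcome} [{actor}]")
        return outcome

# Usage: a reviewer who rejects any recommendation lacking a rationale.
gate = HumanInTheLoopGate(review_fn=lambda d: bool(d.rationale))
loan = Decision("applicant-42", "deny", confidence=0.97, rationale="")
print(gate.decide(loan, high_stakes=True))  # escalated despite high confidence
```

Note that a high-confidence model output is still escalated when the stakes are high: meaningful control means the trigger for human review is the impact of the decision, not merely the model's uncertainty.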
Human agency is essential for maintaining trust in AI systems and for ensuring that these technologies are developed and deployed in ways that respect human rights, autonomy, and societal values. Regulatory frameworks, such as the European Union's General Data Protection Regulation (GDPR), stress the importance of human oversight in automated decision-making, notably through Article 22's restrictions on decisions based solely on automated processing, reflecting the centrality of human agency in ethical AI governance. Protecting human agency is a key component of creating AI systems that serve humanity's best interests without compromising individual autonomy or societal norms.