Human agency refers to a person's capacity to make autonomous decisions and to direct their own actions and life choices.
It is grounded in the fundamental human right to self-determination: the ability to make one's own decisions free of external coercion. In the context of artificial intelligence (AI), human beings, not AI systems, must retain moral agency and ultimate authority in decision-making, particularly where peace, privacy, and dignity are at stake. The ethic of human agency is foundational to AI ethics and law, underpinning interdependent principles such as autonomy, accountability, and consent.
AI decision-making is becoming increasingly pervasive across sectors such as healthcare, criminal justice, finance, and employment, raising widespread concern about the potential erosion of human oversight.
In response, the European Union's General Data Protection Regulation (GDPR) grants individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects, and it requires safeguards such as the right to obtain human intervention (Article 22). Developers operationalize this oversight by embedding technical mechanisms for meaningful human control and by informing people when AI plays a role in a decision; one common pattern, sketched below, is a human-in-the-loop gate. Technologists emphasize that human-centric design prioritizes AI systems that augment human capabilities rather than replace them. For example, by "automating tasks, not jobs," AI systems enhance human labor and decision-making while preserving people's dignity, autonomy, and livelihoods.
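The following is a minimal sketch of such a human-in-the-loop gate, assuming a simple confidence-based escalation policy: the AI proposes a decision, and a person issues the binding outcome whenever the stakes are high or the model's confidence is low. Every name here (Decision, decide_with_oversight, the 0.9 confidence floor) is hypothetical, chosen for illustration rather than drawn from any particular framework or statute.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str       # the proposed outcome, e.g. "approve" or "deny"
    confidence: float  # the model's confidence, in [0, 1]
    rationale: str     # human-readable explanation shown to the reviewer

def decide_with_oversight(
    proposal: Decision,
    human_review: Callable[[Decision], str],
    confidence_floor: float = 0.9,
    high_stakes: bool = False,
) -> str:
    """Return a final outcome, deferring to a human when required.

    The AI proposes; a person decides whenever the case is
    high-stakes or the model is insufficiently confident.
    """
    if high_stakes or proposal.confidence < confidence_floor:
        # Escalate: the reviewer sees the proposal and its rationale,
        # and the reviewer's answer is the binding decision.
        return human_review(proposal)
    # Low-stakes, high-confidence path: the automated outcome stands,
    # though in practice it should be logged and remain contestable.
    return proposal.outcome

# Usage with a stub reviewer standing in for a real review queue.
if __name__ == "__main__":
    proposal = Decision(
        outcome="deny",
        confidence=0.72,
        rationale="Income below threshold for requested amount",
    )
    final = decide_with_oversight(
        proposal, human_review=lambda d: "approve", high_stakes=True
    )
    print(final)  # "approve": the human reviewer overrode the model
```

The design choice worth noting is that the gate is asymmetric: automation may proceed only in the low-risk, high-confidence case, while every high-stakes decision is routed to a person by default, keeping ultimate authority with the human reviewer.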
Ultimately, preserving human agency aligns AI systems with societal values and ethical responsibilities. Companies that integrate human agency into their corporate ethics cultivate public trust while enhancing their reputation and the value of their products.