Human agency is the practical capacity to translate autonomous choices into action with real effect.
Even where AI systems are involved, people retain moral agency and should hold ultimate authority over decisions that affect their rights and vital interests.
Practical agency depends on decision rights, usable controls, access to meaningful explanations, and avenues to appeal or override AI outcomes.
Under Article 22 of the General Data Protection Regulation (GDPR), individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, subject to limited exceptions. Where such processing is permitted, controllers must provide safeguards, including the right to obtain human intervention, to express a point of view, and to contest the decision.
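These safeguards are workflow obligations rather than code, but a minimal sketch can make the moving parts concrete. The Python sketch below models a contestable automated decision; every name (`AutomatedDecision`, `request_human_intervention`, and so on) is hypothetical, and the GDPR prescribes no particular implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    AUTOMATED = auto()           # produced solely by automated processing
    UNDER_HUMAN_REVIEW = auto()  # routed to a human with authority to change it
    CONTESTED = auto()           # formally challenged by the data subject


@dataclass
class AutomatedDecision:
    """Hypothetical record of a decision with legal or similarly
    significant effects; field and method names are illustrative,
    not prescribed by the GDPR."""
    subject_id: str
    outcome: str
    rationale: str                   # explanation shown to the data subject
    status: Status = Status.AUTOMATED
    subject_viewpoint: list = field(default_factory=list)

    def request_human_intervention(self) -> None:
        # Safeguard: the right to obtain human intervention.
        self.status = Status.UNDER_HUMAN_REVIEW

    def express_viewpoint(self, statement: str) -> None:
        # Safeguard: the right to express one's point of view,
        # recorded so a human reviewer must consider it.
        self.subject_viewpoint.append(statement)

    def contest(self, grounds: str) -> None:
        # Safeguard: the right to contest the decision.
        self.subject_viewpoint.append(f"Contested: {grounds}")
        self.status = Status.CONTESTED
```

The essential design point is that the decision record carries its own explanation and an escalation path, so human intervention is reachable from the artifact itself rather than bolted on afterward.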
The EU AI Act likewise mandates human oversight for high-risk systems, and international frameworks such as the EU’s Ethics Guidelines for Trustworthy AI name “human agency and oversight” as a core requirement, calling for designs that augment rather than replace human decision‑making.
AI decision-making is becoming more pervasive across sectors such as healthcare, criminal justice, finance, and employment, raising widespread concern about the erosion of human oversight.
Developers address these concerns by embedding technical mechanisms for meaningful control and by informing people about AI’s role in decisions. Technologists emphasize that human-centric design prioritizes AI systems that augment human capabilities rather than replace them: by “automating tasks, not jobs,” AI enhances human labor and decision-making while preserving people’s dignity, autonomy, and livelihoods.
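As one illustration, here is a minimal Python sketch of such a mechanism, a human-in-the-loop gate in which the AI drafts a recommendation and discloses its role while a person retains final authority. All names here (`decide`, `Recommendation`, the stub model and reviewer) are hypothetical and not drawn from any particular framework.

```python
from dataclasses import dataclass
from typing import Callable

DISCLOSURE = ("This recommendation was generated by an AI system; "
              "a human reviewer makes the final decision.")


@dataclass
class Recommendation:
    proposal: str        # the action the model suggests
    confidence: float    # model's self-reported confidence in [0, 1]
    disclosure: str = DISCLOSURE


def decide(case: str,
           recommend: Callable[[str], Recommendation],
           human_review: Callable[[str, Recommendation], str]) -> str:
    """Human-in-the-loop gate: the AI assists, the human decides."""
    rec = recommend(case)
    print(rec.disclosure)            # inform the affected person of AI's role
    return human_review(case, rec)   # the human may accept or override


if __name__ == "__main__":
    # Stand-ins for a real model and a real reviewer interface.
    stub_model = lambda case: Recommendation(proposal="deny", confidence=0.55)
    cautious_reviewer = (lambda case, rec:
                         rec.proposal if rec.confidence >= 0.8 else "escalate")
    print(decide("loan-application-42", stub_model, cautious_reviewer))
```

The design choice worth noting is that the human reviewer is a required parameter of the decision function, so the system cannot produce a final outcome without a human in the loop.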
Ultimately, preserving human agency aligns AI systems with societal values and ethical responsibilities. Companies that embed respect for human agency in their corporate ethics cultivate public trust while strengthening their reputation and the value of their products.