Eugenics and AI refers to the use of artificial intelligence in ways that revive the harmful idea that human beings can be ranked, improved, or optimized according to their supposed biological value. The term also covers the belief that technology can or should determine which human traits are desirable and which people are more worthy of opportunity, safety, or life itself.
AI systems can reflect eugenic thinking when their design assumes that human life should move toward a single “better” or “more advanced” form. In this view, technology becomes a tool for shaping people to fit a narrow standard of intelligence, health, ability, or social worth. Efforts to “optimize” or “enhance” human life through AI can slip into eugenic logic when they treat certain bodies, minds, or identities as flawed and in need of correction or replacement. This mindset denies the inherent dignity and equal worth of all people.
Modern AI can reinforce these harms through data, algorithms, or evaluation systems that encode biased ideas about who has potential and who does not. When models learn from data shaped by racism, ableism, sexism, or other forms of discrimination, they can produce rankings that reproduce the false belief that some groups should be elevated while others are pushed aside. These patterns echo the core eugenic claim that human populations can be engineered for the “greater good,” a claim that has driven devastating human rights abuses.
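This mechanism is concrete enough to sketch. The toy Python example below uses entirely synthetic, hypothetical data: a model is trained on historical decisions that favored one group, with the protected attribute itself withheld from the model's inputs. Because a correlated proxy feature remains, the model recovers and reproduces the original disparity. Every variable name and number here is an illustrative assumption, not a real dataset or system.

```python
# Minimal sketch (hypothetical, synthetic data) of how a model trained on
# biased historical labels can reproduce a discriminatory hierarchy even
# when the protected attribute is excluded from its inputs.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally qualified groups (0 and 1); qualification is independent of group.
group = rng.integers(0, 2, size=n)
qualified = rng.normal(0.0, 1.0, size=n)

# A proxy feature (e.g., a neighborhood code) correlated with group membership.
proxy = group + rng.normal(0.0, 0.5, size=n)

# Biased historical labels: past decision-makers favored group 1, so the
# recorded outcome depends on group membership, not only on qualification.
past_hired = (qualified + 1.5 * group + rng.normal(0.0, 0.5, size=n)) > 1.0

# Train a plain logistic regression on (qualified, proxy); group is dropped.
X = np.column_stack([qualified, proxy, np.ones(n)])
w = np.zeros(3)
for _ in range(2_000):  # simple gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - past_hired)) / n

# The model still scores group 1 far above group 0, because the proxy
# lets it recover the bias baked into the historical labels.
scores = 1.0 / (1.0 + np.exp(-X @ w))
for g in (0, 1):
    print(f"group {g}: mean predicted score = {scores[group == g].mean():.2f}")
```

The point of the sketch is that simply deleting a protected attribute does not remove discriminatory logic from a model: the bias persists in the labels and in correlated features, which is how systems built without eugenic intent can still encode eugenic hierarchies.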
Eugenic uses of AI violate fundamental rights, including equality, nondiscrimination, and human dignity. They are incompatible with responsible governance and must be rejected as unethical and unlawful. These concerns connect closely to algorithmic bias and automated decision-making, concepts that help explain how technology can reproduce old hierarchies in new forms.