Moral Distance in AI refers to the separation, whether physical, psychological, or procedural, between those who make AI-driven decisions and those directly affected by them. This gap often weakens empathy and accountability, because decision-makers may never see the individual needs or real-world consequences behind their choices.
Such distancing poses serious ethical concerns because it obscures the human beings whose dignity is at stake. As moral distance grows, transparency declines: individuals lose a clear sense of how decisions are made and who should be held responsible, which undermines the trust needed for fair, respectful, and accountable AI systems. To counter moral distance, two steps are crucial: integrating human-in-the-loop oversight and fostering dialogue with affected communities. Ethical frameworks such as the ethics of care argue that AI developers and users must actively seek to understand and address people's unique contexts, vulnerabilities, and concerns. Only by prioritizing compassion and genuine engagement can AI achieve its potential without sacrificing the well-being and rights of those it aims to serve.
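As a minimal illustration of what human-in-the-loop oversight can look like in practice, the sketch below shows one common pattern: routing high-impact or low-confidence automated decisions to a human reviewer rather than executing them automatically. Everything here is hypothetical; the `DecisionRequest` fields, the `route_decision` and `escalate_to_human` functions, and the confidence threshold are illustrative assumptions, not part of any specific system.

```python
from dataclasses import dataclass


@dataclass
class DecisionRequest:
    """Hypothetical record for one automated decision about one person."""
    subject_id: str        # the person affected by the decision
    model_score: float     # model confidence in its recommendation (0 to 1)
    impact_level: str      # "low", "medium", or "high" stakes for the subject
    recommendation: str    # the action the model proposes


# Assumed cutoff; real systems would tune this per domain and review it regularly.
CONFIDENCE_THRESHOLD = 0.90


def escalate_to_human(req: DecisionRequest) -> str:
    # Placeholder: a real system would enqueue the case for a trained reviewer
    # with access to the subject's full context, not just the model's score.
    print(f"Case {req.subject_id} escalated for human review "
          f"(score={req.model_score:.2f}, impact={req.impact_level}).")
    return "pending_human_review"


def route_decision(req: DecisionRequest) -> str:
    """Route a decision either to automation or to a human reviewer.

    High-impact or low-confidence cases are escalated so that a person,
    not the model alone, remains accountable for the outcome.
    """
    if req.impact_level == "high" or req.model_score < CONFIDENCE_THRESHOLD:
        return escalate_to_human(req)
    return req.recommendation


if __name__ == "__main__":
    case = DecisionRequest(subject_id="applicant-042", model_score=0.72,
                           impact_level="high", recommendation="deny")
    print(route_decision(case))  # prints "pending_human_review"
```

The design choice this sketch highlights is that escalation criteria are defined up front and in the open: anyone affected can, in principle, be told why their case went to a person rather than a machine, which directly shortens the moral distance the entry describes.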