In this solo talk at Ai4, North America’s largest artificial intelligence industry event, Dr. Nathan C. Walker examined how AI decision-making can create moral distance between the people who build systems and those affected by them. This separation can be physical, psychological, or procedural, he explained.
Physical distance appears in the environmental impacts of technology, which often fall far from where systems are built. When AI is developed in one country but deployed in another, it affects people who have no say in the decisions that concern them.
Psychological separation weakens empathy for those affected by one’s work. When harms are hidden from view, they become easier to ignore, and it becomes harder to see oneself as part of the moral story.
Procedural separation arises from the complexity of the AI lifecycle. With so many people involved in design, development, and deployment, accountability blurs. This “problem of many hands” makes it unclear who is responsible for which decisions.
“How might we close the gaps in this moral distance?” he asked. “We can begin by engaging our moral imagination, a practice I describe in my book Cultivating Empathy.”
Walker explained, “Moral imagination is the ability to place yourself in the middle of an ethical dilemma and see it from every perspective. It is a civic virtue expressed when you extend empathy to someone with whom you disagree. It models seeking understanding while recognizing that understanding does not require agreement. When you apply moral imagination, you help disrupt the scripts that deepen our divisions. Why does this matter for the design and development of AI? This practice equips us to create more empathetic and accountable technologies that serve all of humanity.”