The ELIZA effect describes the human tendency to attribute intelligence, emotion, or even consciousness to artificial intelligence systems that are only following programmed patterns.
The term comes from ELIZA, one of the first chatbot programs, developed by computer scientist Joseph Weizenbaum at MIT between 1964 and 1966. Its simple, script-driven dialogue demonstrated how easily people mistake surface-level conversation for genuine understanding.
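To make concrete how little machinery the effect requires, here is a minimal sketch of ELIZA-style keyword matching in Python. The rules and responses are hypothetical illustrations, not Weizenbaum's original DOCTOR script: each rule is just a regular expression paired with a canned template, with captured text echoed back after simple pronoun "reflection". No model of meaning is involved.

```python
import re

# Hypothetical ELIZA-style rules (illustrative, not the original script).
# First- and second-person words are swapped so the echo reads naturally.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# Each rule pairs a regex with a response template; {0} receives the
# reflected capture. The order determines matching priority.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

DEFAULT = "Please go on."


def reflect(fragment: str) -> str:
    """Swap pronouns word by word; everything else passes through."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())


def respond(utterance: str) -> str:
    """Return the first matching template. There is no understanding
    here, only pattern matching and string substitution."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT


if __name__ == "__main__":
    print(respond("I feel anxious about my job"))
    # -> Why do you feel anxious about your job?
```

A response like "Why do you feel anxious about your job?" can read as attentive and empathetic, yet it is produced by a handful of string operations, which is precisely the gap the ELIZA effect exploits.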
This effect matters deeply in AI ethics and law because it blurs the line between simulation and sentience. When people perceive machines as human-like, they may share personal information, form emotional attachments, or make decisions based on false assumptions about the system's understanding or care. Designers who intentionally exploit this psychological tendency risk manipulating users' emotions and undermining their autonomy, which can violate the principles of informed consent, transparency, and human dignity.
Ethically, AI should never be designed to deceive users into believing it possesses genuine empathy or consciousness. Guardrails that ensure truthful communication about what AI is, and is not, are essential to upholding trust, preserving autonomy, and preventing emotional or psychological harm.
For further study
Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman, 1976).