Eliza Effect

The Eliza Effect describes the tendency for people to attribute human-like intelligence, emotion, or consciousness to artificial intelligence systems, even when those systems are only following programmed patterns.

The term comes from ELIZA, an early chatbot created by computer scientist Joseph Weizenbaum at MIT in the mid-1960s, which demonstrated how easily humans can mistake surface-level, pattern-matched conversation for genuine understanding.
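
Weizenbaum's program worked by matching keywords in the user's typed input against a small script of templates and echoing fragments of that input back as questions. The sketch below is a hypothetical, few-rule Python imitation (not Weizenbaum's original DOCTOR script); it shows how purely mechanical pattern matching can produce replies that feel attentive even though no understanding exists anywhere in the loop.

```python
import re

# Illustrative ELIZA-style responder: canned regex rules plus simple
# pronoun reflection. Everything here is string substitution.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    """Return the first templated reply whose pattern matches; otherwise a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel ignored by my coworkers"))
    # -> "Why do you feel ignored by your coworkers?"
```

The reply "Why do you feel ignored by your coworkers?" is produced entirely by reflecting the user's own words back through a template, which is precisely the surface behavior that people so readily mistake for empathy or comprehension.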

This effect matters deeply in AI ethics and law because it blurs the line between simulation and sentience. When people perceive machines as human-like, they may share personal information, form emotional attachments, or make decisions based on false assumptions about the system’s understanding or care. Designers who intentionally exploit this psychological tendency risk manipulating users’ emotions and autonomy, which can violate the principles of informed consent, transparency, and human dignity.

Ethically, AI should never be designed to deceive users into believing it possesses genuine empathy or consciousness. Guardrails that ensure truthful communication about what AI is, and is not, are essential to upholding trust, preserving autonomy, and preventing emotional or psychological harm.

 


For further study

Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman, 1976).