Eliza Effect

The Eliza Effect describes the tendency for people to attribute human-like intelligence, emotion, or consciousness to artificial intelligence systems, even when those systems are only following programmed patterns.

The term comes from ELIZA, one of the first chatbots, developed in the mid-1960s by computer scientist Joseph Weizenbaum. The program demonstrated how easily humans can mistake surface-level conversation for genuine understanding.
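
In practice, ELIZA’s conversational skill rested on simple pattern matching and canned substitutions. The following is a minimal illustrative sketch of that style of responder, not Weizenbaum’s original script; every rule, pattern, and reply below is an invented example.

```python
import random
import re

# ELIZA-style rules: each pairs a regular expression with canned reply
# templates. The program matches surface patterns in the user's words;
# it has no model of meaning, memory, or intent.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Fallbacks when nothing matches; they simply invite the user to continue,
# which is much of what made ELIZA feel like an attentive listener.
DEFAULTS = ["Please go on.", "I see.", "Can you say more about that?"]


def respond(user_input: str) -> str:
    """Fill the first matching template; no understanding is involved."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    print(respond("I need a vacation"))     # e.g. "Why do you need a vacation?"
    print(respond("I am feeling anxious"))  # e.g. "How long have you been feeling anxious?"
```

The point of the sketch is how little machinery is needed: a handful of regular expressions and reflective templates can sustain the impression of an attentive listener, which is precisely the gap between simulation and understanding that the Eliza Effect exploits.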

This effect matters deeply in AI ethics and law because it blurs the line between simulation and sentience. When people perceive machines as human-like, they may share personal information, form emotional attachments, or make decisions based on false assumptions about the system’s understanding or care. Designers who intentionally exploit this psychological tendency risk manipulating users’ emotions and undermining their autonomy, in violation of the principles of informed consent, transparency, and human dignity.

Ethically, AI should never be designed to deceive users into believing it possesses genuine empathy or consciousness. Guardrails that ensure truthful communication about what AI is and is not are essential to upholding trust, preserving autonomy, and preventing emotional or psychological harm.

For further study

Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman, 1976).