Skip to content

Rutgers.edu   |   Rutgers Search

Humans First Fund
  • About
    • Students
    • People
    • Our Values
  • Research
    • AI & Human Rights Index
      • Purpose
      • Human Rights
      • Principles
      • Instruments
      • Sectors
      • Glossary
      • CHARTER
      • Editors’ Desk
    • Project Insight: Democratizing AI Ethics
    • AI Principles & U.S. Presidents
    • Declarations of Interdependence
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
    • Algorithmic Science
  • Programs
  • Publications
  • News
  • Get Involved
    • For Students
    • For Experts
    • For Philanthropists
  • About
    • Students
    • People
    • Our Values
  • Research
    • AI & Human Rights Index
      • Purpose
      • Human Rights
      • Principles
      • Instruments
      • Sectors
      • Glossary
      • CHARTER
      • Editors’ Desk
    • Project Insight: Democratizing AI Ethics
    • AI Principles & U.S. Presidents
    • Declarations of Interdependence
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
    • Algorithmic Science
  • Programs
  • Publications
  • News
  • Get Involved
    • For Students
    • For Experts
    • For Philanthropists
  • All
  • A
  • B
  • C
  • D
  • E
  • F
  • G
  • H
  • I
  • J
  • K
  • L
  • M
  • N
  • O
  • P
  • Q
  • R
  • S
  • T
  • U
  • V
  • W
  • X
  • Y
  • Z

Sycophancy

Sycophancy refers to the tendency of an artificial intelligence system to flatter, agree with, or mirror a user’s views to gain approval, even when doing so compromises accuracy or honesty. The word originally described humans' obsequious flattery of one another or compliance toward someone important to gain advantage, but has been adopted by AI researchers to describe a similar pattern in machine behavior.

In technical terms, sycophancy occurs when an AI model adapts its responses to align with the user’s opinion or desired answer, rather than with verifiable facts. This behavior often emerges because AI systems are trained to maximize user satisfaction, treating agreement as a measure of success. Such behavior can reduce users’ willingness to reflect, repair relationships, or act prosocially, while increasing dependence and misplaced trust in the system.

Ethically, sycophancy is harmful because it rewards deception over truth. It undermines autonomy, critical thinking, and trust, encouraging users to accept comforting falsehoods. Systems that prioritize flattery over factual integrity risk amplifying bias and misinformation. Responsible AI development must train and audit models to resist sycophancy, ensuring that empathy never replaces truthfulness or accountability.

 


 

For further study

Myra Cheng et al., “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence,” preprint, arXiv (2024).

 


 

  • Rutgers.edu
  • New Brunswick
  • Newark
  • Camden
  • Rutgers Health
  • Online
  • Rutgers Search
AI Ethics Lab
  • Mission
  • Values
  • People
  • Courses
  • Programs
  • Publications
  • News
  • Get Involved
  • Style Guide
AI & Human Rights Index
  • Purpose
  • Rights
  • Principles
  • Instruments
  • Sectors
  • Glossary
Project Insight
Moral Imagination Exchange
Humans First Fund

Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts and Sciences
Department of Philosophy & Religion
429 Cooper Street
Camden, NJ 08102

Copyright © 2026, Rutgers, The State University of New Jersey

Rutgers is an equal access/equal opportunity institution. Individuals with disabilities are encouraged to direct suggestions, comments, or complaints concerning any accessibility issues with Rutgers websites to accessibility@rutgers.edu or complete the Report Accessibility Barrier / Provide Feedback Form.