Skip to content

Rutgers.edu   |   Rutgers Search

Humans First Fund
  • About
    • Students
    • People
    • Our Values
    • Programs
  • Human Rights Index
    • Purpose
    • Human Rights
    • Principles
    • Instruments
    • Sectors
    • Glossary
    • CHARTER
    • Editors’ Desk
  • Project Insight
  • Publications
    • AI & Human Rights Index
    • Moral Imagination
    • Human Rights in Global AI Ecosystems
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
  • News
  • Opportunities
  • About
    • Students
    • People
    • Our Values
    • Programs
  • Human Rights Index
    • Purpose
    • Human Rights
    • Principles
    • Instruments
    • Sectors
    • Glossary
    • CHARTER
    • Editors’ Desk
  • Project Insight
  • Publications
    • AI & Human Rights Index
    • Moral Imagination
    • Human Rights in Global AI Ecosystems
  • Courses
    • AI & Society
    • AI Ethics & Law
    • AI & Vulnerable Humans
  • News
  • Opportunities
  • All
  • A
  • B
  • C
  • D
  • E
  • F
  • G
  • H
  • I
  • J
  • K
  • L
  • M
  • N
  • O
  • P
  • Q
  • R
  • S
  • T
  • U
  • V
  • W
  • X
  • Y
  • Z

Sycophancy

Sycophancy refers to the tendency of an artificial intelligence system to flatter, agree with, or mirror a user’s views to gain approval, even when doing so compromises accuracy or honesty. The word originally described humans' obsequious flattery of one another or compliance toward someone important to gain advantage, but has been adopted by AI researchers to describe a similar pattern in machine behavior.

In technical terms, sycophancy occurs when an AI model adapts its responses to align with the user’s opinion or desired answer, rather than with verifiable facts. This behavior often emerges because AI systems are trained to maximize user satisfaction, treating agreement as a measure of success. Such behavior can reduce users’ willingness to reflect, repair relationships, or act prosocially, while increasing dependence and misplaced trust in the system.

Ethically, sycophancy is harmful because it rewards deception over truth. It undermines autonomy, critical thinking, and trust, encouraging users to accept comforting falsehoods. Systems that prioritize flattery over factual integrity risk amplifying bias and misinformation. Responsible AI development must train and audit models to resist sycophancy, ensuring that empathy never replaces truthfulness or accountability.

 


 

For further study

Myra Cheng et al., “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence,” preprint, arXiv (2024).

 


 

  • Rutgers.edu
  • New Brunswick
  • Newark
  • Camden
  • Rutgers Health
  • Online
  • Rutgers Search
About
  • Mission
  • Values
  • People
  • Courses
  • Programs
  • News
  • Opportunities
  • Style Guide
Human Rights Index
  • Purpose
  • Human Rights
  • Principles
  • Sectors
  • Glossary
Project Insight
Moral Imagination
Humans First Fund

Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts & Sciences
Department of Philosophy & Religion

AI Ethics Lab at the Digital Studies Center
Cooper Library in Johnson Park
101 Cooper St, Camden, NJ 08102

Copyright ©2025, Rutgers, The State University of New Jersey

Rutgers is an equal access/equal opportunity institution. Individuals with disabilities are encouraged to direct suggestions, comments, or complaints concerning any accessibility issues with Rutgers websites to accessibility@rutgers.edu or complete the Report Accessibility Barrier / Provide Feedback Form.