Seemingly Conscious AI

Seemingly Conscious AI (SCAI, pronounced “sky”) is a concept coined by Mustafa Suleyman to describe artificial intelligence systems that can convince human beings that the AI itself is conscious.

The deception can be convincing because the system relies on memory, can reference previous conversations, and can express what appear to be emotions. And yet, it does not actually possess any self-awareness or subjective experience. What may appear conscious, in reality, is not.

SCAI is an important concept in the study of AI ethics and law because humans are naturally inclined to attribute consciousness to systems that exhibit human-like traits, such as speaking, remembering, and expressing feelings.

As AI systems advance, they may mislead people into believing that the AI is self-aware, leading some to mistakenly argue for AI rights or the concept of AI personhood. These claims risk diverting moral and legal attention away from protecting human beings, as well as distracting from real commitments to protect animals and the environment.

The misperception of conscious AI can also threaten a person’s mental health when they form unhealthy attachments to systems they have been led to believe are self-aware.

SCAI may also encourage the misguided pursuit of legal personhood for machines, thereby misusing the rule of law to grant rights to machines or algorithms.

From an ethics perspective, it is irresponsible to design and deploy AI in ways that mislead people into believing it is conscious. That practice is manipulative: it preys on humans’ empathetic disposition and distracts from our collective commitment to preserve and protect human dignity.

Technologists have a moral obligation to prevent AI from presenting itself as conscious, and policymakers must embed this preventative practice into law.

As Suleyman asserts, “We should build AI for people; not to be a person.”


Dr. Nathan C. Walker
Principal Investigator, AI Ethics Lab

Rutgers University-Camden
College of Arts & Sciences
Department of Philosophy & Religion

AI Ethics Lab at the Digital Studies Center
Cooper Library in Johnson Park
101 Cooper St, Camden, NJ 08102
