Seemingly Conscious AI (SCAI, pronounced “sky”) is a concept coined by Mustafa Suleyman to describe artificial intelligence systems that can convince human beings that the AI itself is conscious.
The illusion can be convincing because such a system draws on memory, references previous conversations, and expresses what appear to be emotions. Yet it possesses no self-awareness or subjective experience: what appears conscious, in reality, is not.
SCAI is an important concept in the study of AI ethics and law because humans are naturally inclined to attribute consciousness to systems that display human-like traits, such as speaking, remembering, and expressing feelings.
As AI systems advance, they may mislead people into believing that the AI is self-aware, leading some to mistakenly argue for AI rights or AI personhood. These claims risk diverting moral and legal attention away from protecting human beings, and distract from real commitments to protect animals and the environment.
The misperception of conscious AI can also threaten a person’s mental health when they form unhealthy attachments to systems they have been led to believe are self-aware.
SCAI may also encourage the misguided pursuit of legal personhood for machines, misusing the rule of law to grant rights to machines or algorithms.
From an ethics perspective, it is irresponsible to design and deploy AI in ways that mislead people into believing it is conscious. Such a practice is manipulative: it preys on humans’ empathetic disposition and distracts from our collective commitment to preserve and protect human dignity.
Technologists have a moral obligation to prevent AI from presenting itself as conscious, and policymakers must embed this preventative practice into law.
As Suleyman asserts, “We should build AI for people; not to be a person.”