Epistemic qualia, also known as “feelings of knowing,” are felt experiences that shape how people justify their beliefs or doubts about the information they consume. These subjective, felt ways of knowing, such as feeling certain, confused, or convinced, are distinct from knowledge grounded in verified evidence or justified belief. The term epistemic qualia is used here as a working concept connecting philosophical accounts of subjective experience with how people form and revise beliefs in real-world contexts involving AI. It gives us a lens for studying the mismatch between how reliable information feels and how reliable it actually is. Whether people understand and feel convinced by information, or grow confused and question their own observations, can depend on how they perceive it was produced.
What people decide to trust, and how they act on their beliefs, can depend on what their instincts or emotions suggest about how to interpret the information in front of them. In the context of generative artificial intelligence, people may, on the one hand, interpret some outputs with greater confidence than is warranted, regardless of whether AI systems possess genuine knowledge or awareness, and, on the other hand, reject high-quality information simply because they feel it was AI-generated. These felt experiences can create a trust gap: a divergence between how people feel about the source of information and that information’s objective accuracy or reliability.
From an ethics and human rights perspective, we highlight this trust gap to show that even the most accurate AI-generated information does not automatically confer trustworthiness. Trust is earned, not given. People can develop both over-trust and under-trust in AI, and in the people who use AI to express knowledge. When the perceived origin of knowledge can sway a person’s sense of things without earning their trust, generative AI risks undermining people’s judgment, their ability to make autonomous decisions, and their capacity for meaningful consent. Protecting people’s agency as a fundamental human right, including the ability to form, revise, and act on one’s own judgment and to give informed, meaningful consent about what to believe and how to act, requires recognizing that “feelings of knowing” are not always consistent or reliable and should not be mistaken for truth or accuracy.
Ultimately, epistemic qualia teach us that opaque AI systems lacking accountability can distort, and in some cases exploit, how people come to believe and understand information. At the same time, mistrust of AI-generated content can leave people reluctant to accept information that is otherwise verifiable.
Grounding technology in human rights and ethics can reduce the harms of opaque or manipulative AI systems by making the processes of knowledge production more transparent and accountable. Doing so helps people calibrate their “feelings of knowing” against information that can be verified as trustworthy.
For Further Study:
- “Qualia.” Stanford Encyclopedia of Philosophy. Revised edition. Accessed March 26, 2026.
- Daniel C. Dennett. “Quining Qualia.” In Consciousness in Contemporary Science, edited by A. Marcel and E. Bisiach, 42–77. Oxford: Oxford University Press, 1988.
- Tiffany Zhu, Iain Weissburg, Kexun Zhang, and William Yang Wang. “Human Bias in the Face of AI: Examining Human Judgment Against Text Labeled as AI Generated.” arXiv (2025).