Personally Identifiable Information (PII) is data that can identify or distinguish a specific individual, either on its own or when combined with other information; common examples include names, addresses, identification numbers, and biometric records. Because Artificial Intelligence systems often rely on personal data to make automated decisions, the handling of PII raises serious concerns about data privacy, autonomy, and fairness. When used without proper consent or safeguards, PII can be misused in ways that harm individuals, such as invasive surveillance or biased outcomes. From an ethical standpoint, strict protections and transparent practices are necessary to honor each person’s right to control their own information. Legal implications also arise when PII is mishandled, as misuse can undermine human rights and erode public trust in AI. Upholding the integrity and security of PII is therefore fundamental to advancing accountable and equitable AI systems.
Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
Last Updated: February 28, 2025
Research Assistant: Amisha Rastogi
Contributor: To Be Determined
Reviewer: To Be Determined
Editor: Georgina Curto Rex
Subjects: Human Rights, Ethics
Recommended Citation: "Personally Identifiable Information (PII), Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 22, 2025. https://aiethicslab.rutgers.edu/glossary/personally-identifiable-information/.