Personally Identifiable Information (PII) is any data that can identify a specific individual, either on its own or when combined with other information. Because Artificial Intelligence systems often rely on personal data to make automated decisions, the handling of PII raises serious concerns about privacy, autonomy, and fairness. Used without proper consent or safeguards, PII can be misused in ways that harm individuals, for example through invasive surveillance or biased outcomes. From an ethical standpoint, strict protections and transparent practices are necessary to honor each person’s right to control their own information. Mishandling PII also carries legal consequences, as it can undermine human rights and erode public trust in AI. Upholding the integrity and security of PII is therefore fundamental to advancing accountable and equitable AI systems.