Social sorting is the practice of categorizing people into groups based on personal traits, behaviors, or social status, often using surveillance and data-driven technologies. In the age of artificial intelligence, these categories are created and reinforced through algorithms that process large amounts of data about individuals, frequently without their knowledge or consent.
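To make the mechanics concrete, the sketch below shows in Python how a sorting system might reduce a person to a handful of features and assign a category from a computed score. Everything here (the Profile fields, the weights, and the tier thresholds) is a hypothetical illustration of the general technique, not a description of any real deployed system.

```python
# A minimal, hypothetical sketch of algorithmic social sorting: an individual
# is reduced to a feature vector and assigned to a category by a scoring rule.
# All field names, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    postcode: str
    num_late_payments: int
    employment_years: float

def risk_score(p: Profile) -> float:
    # Hand-set weights for illustration; in deployed systems these would be
    # learned from historical data, inheriting whatever bias that data carries.
    score = 0.0
    score += 2.0 * p.num_late_payments
    score -= 0.5 * p.employment_years
    # A location feature can act as a proxy for race or class, which is one
    # way sorting systems encode discrimination indirectly.
    if p.postcode.startswith("99"):
        score += 3.0
    return score

def sort_into_tier(p: Profile) -> str:
    s = risk_score(p)
    if s >= 5.0:
        return "high scrutiny"
    if s >= 2.0:
        return "standard"
    return "preferred"

applicant = Profile(age=34, postcode="99120", num_late_payments=1, employment_years=2.0)
print(sort_into_tier(applicant))  # the assigned category, not the person, drives the outcome
```

The point of the sketch is that the person never appears in the decision; only the category does, and the category is fully determined by which features and weights the designers chose.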
This process matters because it can shape people’s life chances in profound ways. When AI systems assign individuals to groups, those categories often determine who gets access to opportunities, such as credit, housing, or employment, and who faces additional scrutiny, as in predictive policing or border screening. Such practices can reinforce stereotypes, magnify bias, and subject already vulnerable groups to unfair treatment. By embedding prejudice into digital systems, social sorting undermines equality and fairness, erodes human dignity, and weakens trust in technology.
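One well-documented mechanism behind bias magnification is a measurement feedback loop: the groups a system watches most generate the most records, and those records then justify watching them more. The toy calculation below, with invented rates and a hypothetical flag threshold, sketches how a modest difference in scrutiny can harden into a categorical difference in treatment even when the underlying behavior of the two groups is identical.

```python
# A toy illustration (all numbers invented) of how sorting can magnify an
# initial disparity. Two groups behave identically, but one is watched twice
# as often, so it accumulates twice the recorded incidents. A decision rule
# applied to those records then turns a gap in *measurement* into a
# categorical gap in *treatment*.

TRUE_INCIDENT_RATE = 0.10                     # identical real behavior in both groups
scrutiny = {"group_a": 0.2, "group_b": 0.4}   # group_b starts out more watched

# Recorded rate = true rate x scrutiny: the data reflects who was watched,
# not who behaved differently.
recorded_rate = {g: TRUE_INCIDENT_RATE * w for g, w in scrutiny.items()}

FLAG_THRESHOLD = 0.03  # hypothetical cutoff tuned on the recorded data

for group, rate in recorded_rate.items():
    flagged = rate >= FLAG_THRESHOLD
    print(f"{group}: recorded rate {rate:.3f} -> "
          f"{'extra scrutiny' if flagged else 'standard treatment'}")

# group_a receives standard treatment while group_b is flagged for extra
# scrutiny, although the underlying incident rate is the same. The added
# scrutiny then generates still more records for group_b, closing the loop.
```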
At its core, social sorting is not simply a technical practice but an ethical and human rights issue. It challenges values such as privacy, autonomy, and non-discrimination. Treating people as data points to be ranked or monitored is wrong because it strips away their individuality and places them at risk of systemic harm. For this reason, developers and regulators have an obligation to ensure that AI systems resist discriminatory sorting and instead promote fairness, accountability, and respect for human rights.