Social scoring refers to systems that use artificial intelligence and data analysis to assign people or organizations a numerical score based on their behaviors, characteristics, or interactions. These scores can determine access to services, benefits, or opportunities, shaping how individuals are treated in society.
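To make the mechanics concrete, the following is a deliberately simplified, hypothetical sketch of how such a score might be computed as a weighted sum of behavioral signals. It is not drawn from any real deployed system; every feature name, weight, and threshold below is invented for illustration.

```python
# Hypothetical illustration only: a toy "social score" computed as a
# weighted sum of behavioral signals. All feature names and weights are
# invented; real systems are far more complex and far less transparent.

WEIGHTS = {
    "on_time_payments": 25.0,     # rewarded behavior
    "volunteer_hours": 5.0,       # rewarded behavior
    "traffic_violations": -15.0,  # punished behavior
    "flagged_posts": -20.0,       # punished expression
}

def social_score(profile: dict[str, float], base: float = 500.0) -> float:
    """Return a toy score: a base value adjusted by weighted behavior counts."""
    adjustment = sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)
    return base + adjustment

# A single arbitrary threshold then decides access to a service.
person = {"on_time_payments": 12, "traffic_violations": 1, "flagged_posts": 2}
score = social_score(person)
print(score, "eligible" if score >= 600 else "denied")
```

Even this toy version surfaces the concerns discussed below: the weights encode contested value judgments, and the person affected by the threshold typically has no visibility into either.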
The significance of social scoring lies in its power to affect fundamental rights. By rewarding or punishing behavior, it pressures people to conform to particular norms, threatening autonomy and freedom of expression. It also raises serious privacy concerns, since personal data may be collected and analyzed without meaningful consent. Worse still, social scoring systems can embed and amplify discrimination, as biased algorithms and flawed data often harm marginalized groups disproportionately. And when the logic behind a score is hidden, it becomes nearly impossible to challenge unfair outcomes or hold the institutions that rely on it accountable.
From an ethical standpoint, social scoring is deeply troubling. Sorting people into categories of “worthiness” undermines equality, fairness, and human dignity. While some argue that scoring could encourage responsible behavior, these claimed benefits rarely outweigh the risks to human rights. Systems that reduce people to numbers erode social trust and should be treated with caution, if not rejected outright.