Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI.]
Sectors #
The contributors of the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- FIN: Financial Services
- GOV: Government and Public Sector
- SOC: Social Services and Housing
- WORK: Employment and Labor
AI’s Potential Violations #
[Insert 300- to 500-word analysis of how AI could violate this human right.]
AI’s Potential Benefits #
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments #
U.N. Charter #
1 U.N.T.S. XVI, U.N. Charter (June 26, 1945)
Article 55
Everyone, as a member of society, has the right to social security and is entitled to realization, through national effort and international co-operation and in accordance with the organization and resources of each State, of the economic, social and cultural rights indispensable for his dignity and the free development of his personality.
International Covenant on Economic, Social and Cultural Rights #
G.A. Res. 2200A (XXI), International Covenant on Economic, Social and Cultural Rights, U.N. Doc. A/RES/2200A(XXI) (Dec. 16, 1966)
Article 9
The States Parties to the present Covenant recognize the right of everyone to social security, including social insurance.
Last Updated: April 17, 2025
Research Assistant: Laiba Mehmood
Contributor: To Be Determined
Reviewer: Laiba Mehmood
Editor: Caitlin Corrigan
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "XI.J. Right to Social Security, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 21, 2025. https://aiethicslab.rutgers.edu/Docs/xi-j-social-security/.