Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI.]
Sectors
The contributors of the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- DEF: Defense and Military
- GOV: Government and Public Sector
- INTL: International Organizations and Relations
- LAW: Legal and Law Enforcement
- REG: Regulatory and Oversight Bodies
- WORK: Employment and Labor
AI’s Potential Violations
[Insert 300- to 500-word analysis of how AI could violate the human rights to peaceful assembly and association.]
AI’s Potential Benefits
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments
Universal Declaration of Human Rights (1948)
G.A. Res. 217 (III) A, Universal Declaration of Human Rights, U.N. Doc. A/RES/217(III) (Dec. 10, 1948)
Article 20
Everyone has the right to freedom of peaceful assembly and association.
No one may be compelled to belong to an association.
International Covenant on Civil and Political Rights (1966)
G.A. Res. 2200A (XXI), International Covenant on Civil and Political Rights, U.N. Doc. A/6316 (1966), 999 U.N.T.S. 171 (Dec. 16, 1966)
Article 21
The right of peaceful assembly shall be recognized. No restrictions may be placed on the exercise of this right other than those imposed in conformity with the law and which are necessary in a democratic society in the interests of national security or public safety, public order (ordre public), the protection of public health or morals or the protection of the rights and freedoms of others.
Article 22
1. Everyone shall have the right to freedom of association with others, including the right to form and join trade unions for the protection of his interests.
2. No restrictions may be placed on the exercise of this right other than those which are prescribed by law and which are necessary in a democratic society in the interests of national security or public safety, public order (ordre public), the protection of public health or morals or the protection of the rights and freedoms of others. This article shall not prevent the imposition of lawful restrictions on members of the armed forces and of the police in their exercise of this right.
3. Nothing in this article shall authorize States Parties to the International Labour Organisation Convention of 1948 concerning Freedom of Association and Protection of the Right to Organize to take legislative measures which would prejudice, or to apply the law in such a manner as to prejudice, the guarantees provided for in that Convention.
Last Updated: April 4, 2025
Research Assistants: Aadith Muthukumar, Aarianna Aughtry
Contributor: To Be Determined
Reviewer: To Be Determined
Editor: Dirk Brand
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "IX.A. Freedom of Assembly and Association, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 29, 2025. https://aiethicslab.rutgers.edu/Docs/ix-a-assembly/.