Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI.]
Sectors
The contributors to the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- COM: Media and Communication
The Media and Communication sector encompasses organizations, platforms, and individuals involved in the creation, dissemination, and exchange of information and content. This includes content creators, arts and entertainment entities, news and media organizations, publishing and recording media, publishing industries, social media platforms, and telecommunications companies. The COM sector plays a crucial role in shaping public discourse, informing societies, and fostering connectivity, thereby influencing cultural, social, and political landscapes.
COM-CRT: Content Creators
Content Creators are individuals or groups who produce original content across various mediums, including writing, audio, video, and digital formats. They contribute to the diversity of information and entertainment available to the public. These creators are accountable for using AI ethically in content creation and distribution. This involves ensuring that AI tools do not infringe on intellectual property rights, propagate misinformation, or perpetuate biases and stereotypes. By integrating ethical AI practices, content creators can enhance creativity and reach while maintaining integrity and respecting audience rights. Examples include using AI for editing and enhancing content, such as automated video editing software, while ensuring that the final product is original and respects copyright laws. Creators can also employ AI analytics to understand audience engagement and tailor content without manipulating or exploiting user data.
COM-ENT: Arts and Entertainment
The Arts and Entertainment sector includes organizations and individuals involved in producing and distributing artistic and entertainment content, such as films, music, theater, and performances. This sector significantly influences culture and societal values. These entities are accountable for using AI ethically in content production, distribution, and marketing. They must prevent the misuse of AI to create deepfakes, use individuals' likenesses without authorization, or generate content that spreads harmful stereotypes. Ethical AI use can enhance production efficiency and audience engagement while promoting responsible content. Examples include implementing AI for special effects in films while respecting performers' rights and obtaining necessary consents. These entities can also use AI algorithms for content recommendations that promote diversity and avoid reinforcing biases or creating echo chambers.
COM-NMO: News and Media Organizations
News and Media Organizations are entities that gather, produce, and distribute news and information to the public through various channels, including print, broadcast, and digital media. They play a critical role in informing the public and shaping public opinion. These organizations are accountable for using AI ethically in news gathering, content curation, and dissemination. This includes preventing the spread of misinformation, ensuring fairness and accuracy, and avoiding biases in AI-driven news algorithms. They must also respect privacy rights in data collection and protect journalistic integrity. Examples include using AI to automate fact-checking processes, enhancing the accuracy of reporting. They can also implement AI algorithms for personalized news feeds that provide balanced perspectives and avoid creating filter bubbles.
COM-PRM: Publishing and Recording Media
Publishing and Recording Media entities are involved in producing and distributing written, audio, and visual content, including books, music recordings, podcasts, and other media formats. They support artists and authors in reaching audiences. These entities are accountable for using AI ethically in content production, distribution, and rights management. They must respect intellectual property rights, ensure fair compensation for creators, and prevent unauthorized reproduction or distribution facilitated by AI. Examples include employing AI to convert books into audiobooks using synthetic voices, ensuring that proper licenses and consents are obtained. They can also use AI to detect and prevent piracy or unauthorized sharing of digital content.
COM-PUB: Publishing Industries
The Publishing Industries focus on producing and disseminating literature, academic works, and informational content across various platforms. They contribute to education, culture, and the preservation of knowledge. These industries are accountable for using AI ethically in editing, production, and distribution processes. They must prevent biases in AI tools used for content selection or editing that could marginalize certain voices or perspectives. They should also respect authors' rights and ensure that AI does not infringe on intellectual property. Examples include using AI for manuscript editing and proofreading, enhancing efficiency while ensuring that the author's voice and intent are preserved. They can also implement AI to recommend books to readers, promoting a diverse range of authors and topics.
COM-SMP: Social Media Platforms
Social Media Platforms are online services that enable users to create and share content or participate in social networking. They have a significant impact on communication, information dissemination, and social interaction. These platforms are accountable for using AI ethically in content moderation, recommendation algorithms, and advertising. They must prevent the spread of misinformation, hate speech, and harmful content, protect user data, and avoid algorithmic biases that could lead to echo chambers or discrimination. Examples include using AI to detect and remove harmful content such as harassment or incitement to violence while respecting freedom of expression. They can also implement transparent algorithms that provide diverse content and prevent the reinforcement of biases.
COM-TEL: Telecommunications Companies
Telecommunications Companies provide communication services such as telephone, internet, and data transmission. They build and maintain the infrastructure that enables connectivity and digital communication globally. These companies are accountable for using AI ethically to manage networks, improve services, and protect user data. They must ensure that AI applications do not infringe on privacy rights, enable unlawful surveillance, or discriminate against certain users. Examples include employing AI to optimize network performance, enhancing service quality without accessing or exploiting user communications. They can also use AI-driven security measures to protect networks from cyber threats while respecting legal obligations regarding data privacy.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in media and communication. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance information dissemination, foster connectivity, and enrich cultural experiences while safeguarding individual rights, promoting diversity, and ensuring accurate and fair communication.
- ART: Arts and Culture
The Arts and Culture sector encompasses organizations, institutions, and individuals involved in the creation, preservation, and promotion of artistic and cultural expressions. This includes content creators, the entertainment industry, historical documentation centers, cultural institutions, museums, and arts organizations. The ART sector plays a vital role in enriching societies, fostering creativity, preserving heritage, and promoting cultural diversity and understanding.
ART-CRT: Content Creators
Content Creators are individuals or groups who produce artistic or cultural works, including visual artists, musicians, writers, filmmakers, and digital creators. They contribute to the cultural landscape by expressing ideas, emotions, and narratives through various mediums. These creators are accountable for using AI ethically in their creative processes and in how they distribute and monetize their work. This involves respecting intellectual property rights, avoiding plagiarism facilitated by AI, and ensuring that AI-generated content does not perpetuate stereotypes or infringe on cultural sensitivities. By integrating ethical AI practices, content creators can enhance their creativity while upholding artistic integrity and cultural respect. Examples include using AI tools for music composition or visual art creation as a means of inspiration, while ensuring the final work is original and does not infringe on others' rights. Creators can also employ AI to analyze audience engagement data, tailoring content that resonates with diverse audiences without compromising artistic vision or reinforcing harmful biases.
ART-ENT: Entertainment Industry
The Entertainment Industry comprises companies and professionals involved in the production, distribution, and promotion of entertainment content, such as films, television shows, music, and live performances. This industry significantly influences culture and public opinion. These entities are accountable for using AI ethically in content creation, marketing, and distribution. They must prevent the use of AI in ways that could lead to deepfakes, unauthorized use of likenesses, or manipulation of audiences. Ethical AI use can enhance production efficiency and audience engagement while protecting individual rights and promoting responsible content. Examples include implementing AI for special effects in films while respecting performers' rights and obtaining necessary consents. These entities can also use AI algorithms for content recommendations that promote diversity and avoid creating echo chambers or reinforcing stereotypes.
ART-HDC: Historical Documentation Centers
Historical Documentation Centers are institutions that collect, preserve, and provide access to historical records, archives, and artifacts. They play a crucial role in safeguarding cultural heritage and supporting research. These centers are accountable for using AI ethically to digitize and manage collections while respecting the provenance of artifacts and the rights of communities connected to them. They must ensure that AI does not misrepresent historical information or contribute to cultural appropriation. Examples include employing AI for digitizing and cataloging archives, making them more accessible to the public and researchers while ensuring accurate representation. They can also use AI to restore or reconstruct historical artifacts or documents, respecting the original context and cultural significance.
ART-INS: Cultural Institutions
Cultural Institutions include organizations such as libraries, theaters, cultural centers, and galleries that promote cultural activities and education. They foster community engagement and cultural appreciation. These institutions are accountable for using AI ethically to enhance visitor experiences, manage collections, and promote inclusivity. They must prevent biases in AI applications that could exclude or misrepresent certain cultures or communities. Examples include implementing AI-powered interactive exhibits that engage visitors of all backgrounds. They can also use AI analytics to understand visitor demographics and preferences, informing programming that is inclusive and representative of diverse cultures.
ART-MUS: Museums
Museums are institutions that collect, preserve, and exhibit artifacts of artistic, cultural, historical, or scientific significance. They educate the public and contribute to cultural preservation. Museums are accountable for using AI ethically in curation, exhibition design, and visitor engagement. This includes respecting the cultural heritage of artifacts, obtaining proper consents for use, and ensuring that AI does not distort interpretations. Examples include using AI to create virtual reality experiences that allow visitors to explore exhibits remotely, expanding access while ensuring accurate representation. They can also employ AI in artifact preservation techniques, such as predicting degradation and optimizing conservation efforts.
ART-ORG: Arts Organizations
Arts Organizations are groups that support artists and promote the arts through funding, advocacy, education, and community programs. They play a key role in fostering artistic expression and cultural development. These organizations are accountable for using AI ethically to support artists and audiences equitably. They must ensure that AI tools do not introduce biases in grant allocations, program selections, or audience targeting. Examples include utilizing AI to analyze grant applications objectively, ensuring fair consideration for artists from diverse backgrounds. They can also implement AI-driven marketing strategies that reach wider audiences without infringing on privacy or perpetuating stereotypes.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in arts and culture. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance creativity, preserve cultural heritage, promote diversity, and ensure that artistic expressions respect the rights and dignity of all individuals and communities.
- LAW: Legal and Law Enforcement
The Legal and Law Enforcement sector encompasses institutions and organizations responsible for upholding the law, ensuring justice, and maintaining public safety. This includes correctional facilities, law enforcement agencies, government surveillance programs, immigration and border control, judicial systems, legal tech companies, and private security firms. The LAW sector plays a critical role in protecting citizens' rights, enforcing laws, administering justice, and preserving social order.
LAW-COR: Correctional Facilities
Correctional Facilities include prisons, jails, and rehabilitation centers where individuals convicted of crimes serve their sentences. They aim to protect society, punish wrongdoing, and rehabilitate offenders for reintegration into the community. These facilities are accountable for ensuring that AI is used ethically to improve safety, rehabilitation, and operational efficiency without violating inmates' rights. This involves respecting privacy, preventing discriminatory practices, and promoting humane treatment. Ethical AI use can enhance rehabilitation efforts and support inmates' rights. Examples include using AI to assess inmates' needs and tailor rehabilitation programs accordingly, ensuring fair opportunities for all individuals. They can also implement AI-powered monitoring systems to prevent violence or self-harm while ensuring that surveillance respects privacy and is not overly intrusive.
LAW-ENF: Law Enforcement
Law Enforcement agencies include police departments, federal investigative bodies, and other entities responsible for enforcing laws, preventing crime, and protecting citizens. They maintain public order and safety through various means, including patrols, investigations, and community engagement. The LAW-ENF sector is accountable for using AI ethically in policing activities. This includes preventing biases in AI systems used for predictive policing, facial recognition, or resource allocation. They must protect citizens' rights to privacy, due process, and equal treatment under the law. Examples include employing AI analytics to identify crime patterns and allocate resources effectively without targeting specific communities unfairly. They can also use AI-powered tools to assist in investigations while ensuring that data collection and analysis comply with legal standards and respect individual rights.
LAW-GSP: Government Surveillance Programs
Government Surveillance Programs involve the monitoring and collection of data by government agencies to enhance national security and public safety. They use technologies, including AI, to detect and prevent criminal activities, terrorism, and other threats to society. This sector is accountable for ensuring that AI is used ethically in surveillance programs. They must balance security objectives with the protection of individual freedoms, adhering to legal frameworks and human rights standards to prevent unlawful surveillance and violations of privacy rights. Examples include implementing AI systems that anonymize personal data to prevent profiling and discrimination while identifying potential security threats. They can also establish oversight committees to monitor AI surveillance tools, ensuring compliance with privacy laws and respect for civil liberties.
LAW-IMM: Immigration and Border Control
Immigration and Border Control agencies manage the movement of people across national borders. They enforce immigration laws, process visas and asylum applications, and protect against illegal entry and trafficking. These agencies are accountable for using AI ethically to facilitate lawful immigration and enhance border security while respecting human rights. This includes preventing discriminatory practices, ensuring fair treatment of all individuals, and protecting sensitive personal information. Examples include using AI to streamline visa application processes, making them more efficient and accessible while safeguarding applicants' data. They can also implement AI systems for risk assessment at borders that are free from bias and do not discriminate based on nationality, ethnicity, or religion.
LAW-JUD: Judicial Systems
Judicial Systems comprise courts and related institutions responsible for interpreting laws, adjudicating disputes, and administering justice. They ensure that legal proceedings are fair, impartial, and follow due process. The LAW-JUD sector is accountable for ensuring that AI is used ethically in judicial processes. This involves using AI to enhance efficiency and access to justice while preventing biases in decision-making algorithms. They must maintain transparency and uphold the principles of fairness and equality before the law. Examples include employing AI for case management to reduce backlogs and expedite proceedings without compromising the quality of justice. They can also use AI tools to assist in legal research, providing judges and lawyers with comprehensive information while ensuring that recommendations do not introduce bias into judgments.
LAW-LTC: Legal Tech Companies
Legal Tech Companies develop technology solutions for the legal industry, including software for case management, document automation, legal research, and AI-powered analytics. These companies are accountable for designing AI tools that support the legal profession ethically. They must ensure that their products do not perpetuate biases, compromise client confidentiality, or undermine the integrity of legal processes. Examples include creating AI-driven legal research platforms that provide unbiased and comprehensive results, aiding lawyers in building fair cases. They can also develop AI tools for contract analysis that protect sensitive information and adhere to data privacy regulations.
LAW-SEC: Private Security Firms
Private Security Firms offer security services to individuals, businesses, and organizations. Their services include guarding property, personal protection, surveillance, and risk assessment. The LAW-SEC sector is accountable for using AI ethically to enhance security services without infringing on individuals' rights. This involves respecting privacy, avoiding discriminatory practices, and ensuring transparency in surveillance activities. Examples include implementing AI-powered surveillance systems that detect potential security threats while anonymizing data to protect privacy. They can also use AI in access control systems that verify identities without storing excessive personal information or discriminating against certain groups.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights within legal and law enforcement contexts. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to uphold justice, protect citizens, and ensure that the enforcement of laws does not come at the expense of individual freedoms and dignity.
- REG: Regulatory and Oversight Bodies
The Regulatory and Oversight Bodies sector encompasses organizations responsible for creating, implementing, and enforcing regulations, as well as monitoring compliance across various industries. This includes regulatory agencies, data protection authorities, ethics committees, oversight bodies, and other regulatory entities. The REG sector plays a critical role in ensuring that laws and standards are upheld, protecting public interests, promoting fair practices, and safeguarding human rights in the context of technological advancements like artificial intelligence (AI).
REG-AGY: Regulatory Agencies
Regulatory Agencies are government-appointed bodies tasked with creating and enforcing rules and regulations within specific industries or sectors. They oversee compliance with laws, issue licenses, conduct inspections, and take enforcement actions when necessary. These agencies are accountable for ensuring that AI technologies within their jurisdictions are developed and used ethically and responsibly. This involves setting standards for AI deployment, preventing abuses, and promoting practices that advance human rights. By regulating AI effectively, they help prevent harm and foster public trust in technological innovations. Examples include establishing guidelines for AI transparency and accountability in industries like finance or healthcare, ensuring that AI systems do not discriminate or violate privacy rights. They can also enforce regulations that require companies to conduct human rights impact assessments before deploying AI technologies.
REG-DPA: Data Protection Authorities
Data Protection Authorities are specialized regulatory bodies responsible for overseeing the implementation of data protection laws and safeguarding individuals' personal information. They monitor compliance, handle complaints, and have the power to enforce penalties for violations. These authorities are accountable for ensuring that AI systems handling personal data comply with data protection principles such as lawfulness, fairness, transparency, and data minimization. They play a crucial role in preventing privacy infringements and promoting the ethical use of AI in processing personal information. Examples include reviewing and approving AI data processing activities to ensure they meet legal requirements. They can also investigate breaches involving AI systems and impose sanctions on organizations that misuse personal data or fail to protect it adequately.
REG-ETH: Ethics Committees
Ethics Committees are groups of experts who evaluate the ethical implications of policies, research projects, or technological developments. They provide guidance, assess compliance with ethical standards, and make recommendations to ensure responsible conduct. These committees are accountable for scrutinizing AI initiatives to identify potential ethical issues, such as biases, unfair treatment, or risks to human dignity. By promoting ethical considerations in AI development and deployment, they help prevent human rights abuses and encourage technologies that benefit society. Examples include reviewing AI research proposals to ensure they respect participants' rights and obtain informed consent. They can also provide guidance on ethical AI practices, helping organizations integrate ethical principles into their AI strategies and operations.
REG-OVS: Oversight Bodies
Oversight Bodies are organizations or committees tasked with monitoring and evaluating the activities of institutions, agencies, or specific sectors to ensure accountability and compliance with laws and regulations. They may be independent or part of a governmental framework. These bodies are accountable for overseeing the use of AI across various domains, ensuring that organizations adhere to legal and ethical standards. They help detect and address potential abuses, promoting transparency and fostering public confidence in AI technologies. Examples include auditing government agencies' use of AI to verify compliance with human rights obligations and data protection laws. They can also recommend corrective actions or policy changes when AI applications are found to have negative impacts on individuals or communities.
REG-RBY: Regulatory Bodies
Regulatory Bodies are official organizations that establish and enforce rules within specific professional fields or industries. They set standards, issue certifications, and may discipline members who do not comply with established norms. These bodies are accountable for incorporating AI considerations into their regulatory frameworks, ensuring that professionals using AI adhere to ethical guidelines and best practices. They play a key role in preventing malpractice and promoting the responsible use of AI. Examples include a medical board setting standards for AI-assisted diagnostics, ensuring that healthcare providers use AI tools that are safe, effective, and respectful of patient rights. A bar association can likewise provide guidelines on AI use in legal practice to prevent bias and maintain client confidentiality.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights. Their accountability lies in the responsible development, enforcement, and oversight of regulations and standards governing AI technologies. Through diligent regulation and monitoring, they ensure that AI is used to benefit society while safeguarding individual rights and upholding public trust.
- TECH: Technology and IT
The Technology and IT sector encompasses companies and organizations involved in the development, production, and maintenance of technology products and services. This includes technology companies, cybersecurity firms, digital platforms, educational technology companies, healthcare technology companies, legal tech companies, smart home device manufacturers, social media platforms, and telecommunications companies. The TECH sector plays a pivotal role in driving innovation, connecting people globally, and shaping how societies operate in the digital age.
TECH-COM: Technology Companies
Technology Companies are businesses that develop and sell technology products or services, such as software developers, hardware manufacturers, and IT service providers. They are at the forefront of technological advancements and influence various aspects of modern life. These companies are accountable for ensuring that AI is developed and deployed ethically, promoting transparency, fairness, and accountability. They must prevent biases in AI algorithms, protect user data, and consider the societal impact of their technologies. By integrating ethical AI practices, they can foster trust and contribute positively to society. Examples include developing AI applications that respect user privacy by minimizing data collection and implementing strong security measures. They can also create AI systems that are transparent and explainable, allowing users to understand how decisions are made and to challenge them if necessary.
TECH-CSF: Cybersecurity Firms
Cybersecurity Firms specialize in protecting computer systems, networks, and data from digital attacks, unauthorized access, or damage. They offer services like threat detection, vulnerability assessments, and incident response. These firms are accountable for using AI ethically to enhance cybersecurity while respecting privacy and legal boundaries. They must ensure that AI tools do not infringe on users' rights or engage in unauthorized surveillance. Ethical AI use can strengthen defenses against cyber threats without compromising individual freedoms. Examples include employing AI to detect and respond to cyber threats in real time, protecting organizations and users from harm while ensuring that monitoring activities comply with privacy laws. They can also provide AI-driven security solutions that help organizations safeguard data without accessing or misusing sensitive information.
TECH-DGP: Digital Platforms
Digital Platforms are online businesses that facilitate interactions between users, such as e-commerce sites, content sharing services, and marketplaces. They connect buyers and sellers, content creators and consumers, and enable various online activities. These platforms are accountable for using AI ethically to manage content, personalize user experiences, and ensure safe interactions. This involves preventing algorithmic biases, protecting user data, and avoiding practices that could lead to discrimination or exploitation. Examples include using AI to recommend content or products in a way that promotes diversity and avoids reinforcing harmful stereotypes. They can also implement AI moderation tools to detect and remove inappropriate or illegal content while respecting freedom of expression and avoiding censorship of legitimate speech.
TECH-EDU: Educational Technology Companies
Educational Technology Companies develop tools and platforms that support teaching and learning processes. They create software, applications, and devices used in educational settings, from K-12 schools to higher education and corporate training. These companies are accountable for designing AI-powered educational tools that are accessible, inclusive, and respect students' privacy. They must prevent biases that could disadvantage certain learners and ensure that data collected is used responsibly. Examples include creating AI-driven personalized learning systems that adapt to individual students' needs without compromising their privacy. They can also develop educational platforms that are accessible to students with disabilities, adhering to universal design principles.
TECH-HTC: Healthcare Technology Companies
Healthcare Technology Companies focus on developing technological solutions for the healthcare industry. They innovate in areas like electronic health records, telemedicine, medical imaging, and AI-driven diagnostics. These companies are accountable for ensuring that their AI technologies are safe, effective, and respect patient rights. They must obtain necessary regulatory approvals, protect patient data, and prevent biases in AI models that could lead to misdiagnosis. Examples include developing AI algorithms for medical imaging analysis that are trained on diverse datasets to provide accurate results across different populations. They can also implement telehealth platforms that securely handle patient information and comply with healthcare privacy regulations.
TECH-LTC: Legal Tech Companies
Legal Tech Companies provide technology solutions for legal professionals and organizations. They develop software for case management, document automation, legal research, and AI-powered analytics. These companies are accountable for creating AI tools that enhance the legal profession ethically. They must ensure their products do not perpetuate biases, must maintain client confidentiality, and must support the integrity of legal processes. Examples include offering AI-driven legal research platforms that return unbiased results, helping lawyers build fair cases, and designing contract analysis tools that protect sensitive information and comply with data protection laws.
TECH-SHD: Smart Home Device Manufacturers
Smart Home Device Manufacturers produce internet-connected devices used in homes, such as smart thermostats, security systems, voice assistants, and appliances. These devices often use AI to provide enhanced functionality and user convenience. Manufacturers are accountable for ensuring that their devices respect user privacy, are secure from unauthorized access, and do not collect excessive personal data. They must be transparent about data usage and give users control over their information. Examples include designing smart devices that operate effectively without constantly transmitting data to external servers, minimizing privacy risks, and implementing robust security measures to protect devices from hacking or misuse.
TECH-SMP: Social Media Platforms
Social Media Platforms are online services that enable users to create and share content or participate in social networking. They play a significant role in information dissemination, communication, and shaping public discourse. These platforms are accountable for using AI ethically in content moderation, recommendation algorithms, and advertising. They must prevent the spread of misinformation, protect user data, and avoid algorithmic biases that could lead to echo chambers or discrimination. Examples include using AI to detect and remove harmful content such as hate speech or incitement to violence while respecting freedom of expression, and implementing transparent algorithms that surface diverse perspectives and prevent the reinforcement of biases.
TECH-TEL: Telecommunications Companies
Telecommunications Companies provide communication services such as telephone, internet, and data transmission. They build and maintain the infrastructure that enables connectivity and digital communication globally. These companies are accountable for using AI ethically to manage networks, improve services, and protect user data. They must ensure that AI applications do not infringe on privacy rights or enable unlawful surveillance. Examples include employing AI to optimize network performance, enhancing service quality without accessing or exploiting user communications, and using AI-driven security measures to protect networks from cyber threats while respecting legal obligations regarding data privacy.
Summary #
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in the technology and IT domain. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to drive innovation while safeguarding individual rights, promoting fairness, and building public trust in technological advancements.
AI’s Potential Violations #
[Insert 300- to 500-word analysis of how AI could violate this human right.]
AI’s Potential Benefits #
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments #
Universal Declaration of Human Rights (1948) #
G.A. Res. 217 (III) A, Universal Declaration of Human Rights, U.N. Doc. A/RES/217(III) (Dec. 10, 1948)
Article 18
Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance.
American Declaration of the Rights and Duties of Man (1948) #
[Insert Bluebook Format Citation]
Article 3
Every person has the right freely to profess a religious faith, and to manifest and practice it both in public and in private.
European Convention for the Protection of Human Rights and Fundamental Freedoms (1950) #
[Insert Bluebook Format Citation]
Article 9
- Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief and freedom, either alone or in community with others and in public or private, to manifest his religion or belief, in worship, teaching, practice and observance.
- Freedom to manifest one’s religion or beliefs shall be subject only to such limitations as are prescribed by law and are necessary in a democratic society in the interests of public safety, for the protection of public order, health or morals, or for the protection of the rights and freedoms of others.
International Covenant on Civil and Political Rights (1966) #
G.A. Res. 2200A (XXI), International Covenant on Civil and Political Rights, U.N. Doc. A/6316 (1966), 999 U.N.T.S. 171 (Dec. 16, 1966)
Article 18
- Everyone shall have the right to freedom of thought, conscience and religion. This right shall include freedom to have or to adopt a religion or belief of his choice, and freedom, either individually or in community with others and in public or private, to manifest his religion or belief in worship, observance, practice and teaching.
- No one shall be subject to coercion which would impair his freedom to have or to adopt a religion or belief of his choice.
- Freedom to manifest one’s religion or beliefs may be subject only to such limitations as are prescribed by law and are necessary to protect public safety, order, health, or morals or the fundamental rights and freedoms of others.
- The States Parties to the present Covenant undertake to have respect for the liberty of parents and, when applicable, legal guardians to ensure the religious and moral education of their children in conformity with their own convictions.
American Convention on Human Rights (1969) #
[Insert Bluebook Format Citation]
Article 12
- Everyone has the right to freedom of conscience and of religion. This right includes freedom to maintain or to change one’s religion or beliefs, and freedom to profess or disseminate one’s religion or beliefs, either individually or together with others, in public or in private.
Disclaimer: Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.- No one shall be subject to restrictions that might impair his religion or beliefs
African Charter on Human and Peoples' Rights (1981) #
[Insert Bluebook Format Citation]
Article 8
Freedom of conscience, the profession and free practice of religion shall be guaranteed. No one may, subject to law and order, be submitted to measures restricting the exercise of these freedoms.
Declaration on the Elimination of All Forms of Intolerance and of Discrimination Based on Religion or Belief (1981) #
[Insert Bluebook Format Citation]
Article 1
1. Everyone shall have the right to freedom of thought, conscience and religion. This right shall include freedom to have a religion or whatever belief of his choice, and freedom, either individually or in community with others and in public or private, to manifest his religion or belief in worship, observance, practice and teaching.
2. No one shall be subject to coercion which would impair his freedom to have a religion or belief of his choice.
3. Freedom to manifest one's religion or belief may be subject only to such limitations as are prescribed by law and are necessary to protect public safety, order, health or morals or the fundamental rights and freedoms of others.
Convention on the Rights of the Child #
[Insert Bluebook Format Citation]
Article 14
1. States Parties shall respect the right of the child to freedom of thought, conscience and religion.
2. States Parties shall respect the rights and duties of the parents and, when applicable, legal guardians, to provide direction to the child in the exercise of his or her right in a manner consistent with the evolving capacities of the child.
3. Freedom to manifest one's religion or beliefs may be subject only to such limitations as are prescribed by law and are necessary to protect public safety, order, health or morals, or the fundamental rights and freedoms of others.
Declaration on the Rights of Persons Belonging to National or Ethnic, Religious and Linguistic Minorities #
[Insert Bluebook Format Citation]
Article 2
- Persons belonging to national or ethnic, religious and linguistic minorities (hereinafter referred to as persons belonging to minorities) have the right to enjoy their own culture, to profess and practise their own religion, and to use their own language, in private and in public, freely and without interference or any form of discrimination.
Charter of Fundamental Rights of the European Union #
[Insert Bluebook Format Citation]
Article 10
- Everyone has the right to freedom of thought, conscience and religion. This right includes freedom to change religion or belief and freedom, either alone or in community with others and in public or in private, to manifest religion or belief, in worship, teaching, practice and observance.
- The right to conscientious objection is recognised, in accordance with the national laws governing the exercise of this right.
Last Updated: April 18, 2025
Research Assistant: Aarianna Aughtry
Contributor: Nathan C. Walker
Reviewer: To Be Determined
Editor: Alexander Kriebitz
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "VIII.A. Freedom of Thought, Conscience, and Religion, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 24, 2025. https://aiethicslab.rutgers.edu/Docs/viii-a-thought/.