Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI.]
Sectors #
The contributors to the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- COM: Media and Communication
The Media and Communication sector encompasses organizations, platforms, and individuals involved in the creation, dissemination, and exchange of information and content. This includes content creators, arts and entertainment entities, news and media organizations, publishing and recording media, publishing industries, social media platforms, and telecommunications companies. The COM sector plays a crucial role in shaping public discourse, informing societies, and fostering connectivity, thereby influencing cultural, social, and political landscapes.
COM-CRT: Content Creators
Content Creators are individuals or groups who produce original content across various mediums, including writing, audio, video, and digital formats. They contribute to the diversity of information and entertainment available to the public. These creators are accountable for using AI ethically in content creation and distribution. This involves ensuring that AI tools do not infringe on intellectual property rights, propagate misinformation, or perpetuate biases and stereotypes. By integrating ethical AI practices, content creators can enhance creativity and reach while maintaining integrity and respecting audience rights. Examples include using AI for editing and enhancing content, such as automated video editing software, while ensuring that the final product is original and respects copyright laws, and employing AI analytics to understand audience engagement and tailor content without manipulating or exploiting user data.
COM-ENT: Arts and Entertainment
The Arts and Entertainment sector includes organizations and individuals involved in producing and distributing artistic and entertainment content, such as films, music, theater, and performances. This sector significantly influences culture and societal values. These entities are accountable for using AI ethically in content production, distribution, and marketing. They must prevent the misuse of AI in creating deepfakes, unauthorized use of individuals' likenesses, or generating content that spreads harmful stereotypes. Ethical AI use can enhance production efficiency and audience engagement while promoting responsible content. Examples include implementing AI for special effects in films in ways that respect performers' rights and obtain necessary consents, and using AI algorithms for content recommendations that promote diversity and avoid reinforcing biases or creating echo chambers.
COM-NMO: News and Media Organizations
News and Media Organizations are entities that gather, produce, and distribute news and information to the public through various channels, including print, broadcast, and digital media. They play a critical role in informing the public and shaping public opinion. These organizations are accountable for using AI ethically in news gathering, content curation, and dissemination. This includes preventing the spread of misinformation, ensuring fairness and accuracy, and avoiding biases in AI-driven news algorithms. They must also respect privacy rights in data collection and protect journalistic integrity. Examples include using AI to automate fact-checking processes, enhancing the accuracy of reporting, and implementing AI algorithms for personalized news feeds that provide balanced perspectives and avoid creating filter bubbles.
COM-PRM: Publishing and Recording Media
Publishing and Recording Media entities are involved in producing and distributing written, audio, and visual content, including books, music recordings, podcasts, and other media formats. They support artists and authors in reaching audiences. These entities are accountable for using AI ethically in content production, distribution, and rights management. They must respect intellectual property rights, ensure fair compensation for creators, and prevent unauthorized reproduction or distribution facilitated by AI. Examples include employing AI to convert books into audiobooks using synthetic voices, ensuring that proper licenses and consents are obtained, and using AI to detect and prevent piracy or unauthorized sharing of digital content.
COM-PUB: Publishing Industries
The Publishing Industries focus on producing and disseminating literature, academic works, and informational content across various platforms. They contribute to education, culture, and the preservation of knowledge. These industries are accountable for using AI ethically in editing, production, and distribution processes. They must prevent biases in AI tools used for content selection or editing that could marginalize certain voices or perspectives. They should also respect authors' rights and ensure that AI does not infringe on intellectual property. Examples include using AI for manuscript editing and proofreading, enhancing efficiency while ensuring that the author's voice and intent are preserved, and implementing AI to recommend books to readers, promoting a diverse range of authors and topics.
COM-SMP: Social Media Platforms
Social Media Platforms are online services that enable users to create and share content or participate in social networking. They have a significant impact on communication, information dissemination, and social interaction. These platforms are accountable for using AI ethically in content moderation, recommendation algorithms, and advertising. They must prevent the spread of misinformation, hate speech, and harmful content, protect user data, and avoid algorithmic biases that could lead to echo chambers or discrimination. Examples include using AI to detect and remove harmful content such as harassment or incitement to violence while respecting freedom of expression, and implementing transparent algorithms that provide diverse content and prevent the reinforcement of biases.
COM-TEL: Telecommunications Companies
Telecommunications Companies provide communication services such as telephone, internet, and data transmission. They build and maintain the infrastructure that enables connectivity and digital communication globally. These companies are accountable for using AI ethically to manage networks, improve services, and protect user data. They must ensure that AI applications do not infringe on privacy rights, enable unlawful surveillance, or discriminate against certain users. Examples include employing AI to optimize network performance, enhancing service quality without accessing or exploiting user communications, and using AI-driven security measures to protect networks from cyber threats while respecting legal obligations regarding data privacy.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in media and communication. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance information dissemination, foster connectivity, and enrich cultural experiences while safeguarding individual rights, promoting diversity, and ensuring accurate and fair communication.
- EDU: Education and Research
The Education and Research sector encompasses institutions and organizations dedicated to teaching, learning, and scholarly investigation. This includes schools, universities, research institutes, and think tanks. The EDU sector plays a pivotal role in advancing knowledge, fostering innovation, and shaping the minds of future generations.
EDU-INS: Educational Institutions
Educational Institutions include schools, colleges, and universities that provide formal education to students at various levels. They are responsible for delivering curricula, facilitating learning, and nurturing critical thinking skills. The EDU-INS sector is accountable for ensuring that AI is used ethically within educational settings. This commitment involves promoting equitable access to AI resources, protecting student data privacy, and preventing biases in AI-driven educational tools. By integrating ethical considerations into their use of AI, they can enhance learning outcomes while safeguarding students' rights. Examples include implementing AI-powered personalized learning platforms that adapt to individual student needs without compromising their privacy. Another example is using AI to detect and mitigate biases in educational materials, ensuring fair representation of diverse perspectives.
EDU-RES: Research Organizations
Research Organizations comprise universities, laboratories, and independent institutes engaged in scientific and scholarly research. They contribute to the advancement of knowledge across various fields, including AI and machine learning. These organizations are accountable for conducting AI research responsibly, adhering to ethical guidelines, and considering the societal implications of their work. They must ensure that their research does not contribute to human rights abuses and instead advances human welfare. Examples include conducting interdisciplinary research on AI ethics to inform policy and practice, and developing AI technologies that address social challenges, such as healthcare disparities or environmental sustainability, while ensuring that these technologies are accessible and do not exacerbate inequalities.
EDU-POL: Educational Policy Makers
Educational Policy Makers include government agencies, educational boards, and regulatory bodies that develop policies and standards for the education sector. They shape the educational landscape through legislation, funding, and oversight. They are accountable for creating policies that promote the ethical use of AI in education and research. This includes establishing guidelines for data privacy, equity in access to AI resources, and integration of AI ethics into curricula. Examples include drafting regulations that protect student data collected by AI tools, ensuring it is used appropriately and securely, and mandating the inclusion of AI ethics courses in educational programs to prepare students for responsible AI development and use.
EDU-TEC: Educational Technology Providers
Educational Technology Providers are companies and organizations that develop and supply technological tools and platforms for education. They create software, hardware, and AI applications that support teaching and learning processes. These providers are accountable for designing AI educational tools that are ethical, inclusive, and respect users' rights. They must prevent biases in AI algorithms, protect user data, and ensure their products do not inadvertently harm or disadvantage any group. Examples include developing AI-driven learning apps that are accessible to students with disabilities, adhering to universal design principles, and implementing robust data security measures to protect sensitive information collected through educational platforms.
EDU-FND: Educational Foundations and NGOs
Educational Foundations and NGOs are non-profit organizations focused on improving education systems and outcomes. They often support educational initiatives, fund research, and advocate for policy changes. They are accountable for promoting ethical AI practices in education through funding, advocacy, and program implementation. They can influence the sector by supporting projects that prioritize human rights and ethical considerations in AI. Examples include funding research on the impacts of AI in education to inform best practices, and advocating for policies that ensure equitable access to AI technologies in under-resourced schools, bridging the digital divide.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in education. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance learning while safeguarding the rights and dignity of all learners.
- TECH: Technology and IT
The Technology and IT sector encompasses companies and organizations involved in the development, production, and maintenance of technology products and services. This includes technology companies, cybersecurity firms, digital platforms, educational technology companies, healthcare technology companies, legal tech companies, smart home device manufacturers, social media platforms, and telecommunications companies. The TECH sector plays a pivotal role in driving innovation, connecting people globally, and shaping how societies operate in the digital age.
TECH-COM: Technology Companies
Technology Companies are businesses that develop and sell technology products or services, such as software developers, hardware manufacturers, and IT service providers. They are at the forefront of technological advancements and influence various aspects of modern life. These companies are accountable for ensuring that AI is developed and deployed ethically, promoting transparency, fairness, and accountability. They must prevent biases in AI algorithms, protect user data, and consider the societal impact of their technologies. By integrating ethical AI practices, they can foster trust and contribute positively to society. Examples include developing AI applications that respect user privacy by minimizing data collection and implementing strong security measures, and creating AI systems that are transparent and explainable, allowing users to understand how decisions are made and to challenge them if necessary.
TECH-CSF: Cybersecurity Firms
Cybersecurity Firms specialize in protecting computer systems, networks, and data from digital attacks, unauthorized access, or damage. They offer services like threat detection, vulnerability assessments, and incident response. These firms are accountable for using AI ethically to enhance cybersecurity while respecting privacy and legal boundaries. They must ensure that AI tools do not infringe on users' rights or engage in unauthorized surveillance. Ethical AI use can strengthen defenses against cyber threats without compromising individual freedoms. Examples include employing AI to detect and respond to cyber threats in real time, protecting organizations and users from harm while ensuring that monitoring activities comply with privacy laws, and providing AI-driven security solutions that help organizations safeguard data without accessing or misusing sensitive information.
TECH-DGP: Digital Platforms
Digital Platforms are online businesses that facilitate interactions between users, such as e-commerce sites, content sharing services, and marketplaces. They connect buyers and sellers, content creators and consumers, and enable various online activities. These platforms are accountable for using AI ethically to manage content, personalize user experiences, and ensure safe interactions. This involves preventing algorithmic biases, protecting user data, and avoiding practices that could lead to discrimination or exploitation. Examples include using AI to recommend content or products in a way that promotes diversity and avoids reinforcing harmful stereotypes, and implementing AI moderation tools to detect and remove inappropriate or illegal content while respecting freedom of expression and avoiding censorship of legitimate speech.
TECH-EDU: Educational Technology Companies
Educational Technology Companies develop tools and platforms that support teaching and learning processes. They create software, applications, and devices used in educational settings, from K-12 schools to higher education and corporate training. These companies are accountable for designing AI-powered educational tools that are accessible, inclusive, and respect students' privacy. They must prevent biases that could disadvantage certain learners and ensure that data collected is used responsibly. Examples include creating AI-driven personalized learning systems that adapt to individual students' needs without compromising their privacy, and developing educational platforms that are accessible to students with disabilities, adhering to universal design principles.
TECH-HTC: Healthcare Technology Companies
Healthcare Technology Companies focus on developing technological solutions for the healthcare industry. They innovate in areas like electronic health records, telemedicine, medical imaging, and AI-driven diagnostics. These companies are accountable for ensuring that their AI technologies are safe, effective, and respect patient rights. They must obtain necessary regulatory approvals, protect patient data, and prevent biases in AI models that could lead to misdiagnosis. Examples include developing AI algorithms for medical imaging analysis that are trained on diverse datasets to provide accurate results across different populations, and implementing telehealth platforms that securely handle patient information and comply with healthcare privacy regulations.
TECH-LTC: Legal Tech Companies
Legal Tech Companies provide technology solutions for legal professionals and organizations. They develop software for case management, document automation, legal research, and AI-powered analytics. These companies are accountable for creating AI tools that enhance the legal profession ethically. They must ensure their products do not perpetuate biases, maintain client confidentiality, and support the integrity of legal processes. Examples include offering AI-driven legal research platforms that provide unbiased results, helping lawyers build fair cases, and designing contract analysis tools that protect sensitive information and comply with data protection laws.
TECH-SHD: Smart Home Device Manufacturers
Smart Home Device Manufacturers produce internet-connected devices used in homes, such as smart thermostats, security systems, voice assistants, and appliances. These devices often utilize AI to provide enhanced functionality and user convenience. These manufacturers are accountable for ensuring that their devices respect user privacy, are secure from unauthorized access, and do not collect excessive personal data. They must be transparent about data usage and provide users with control over their information. Examples include designing smart devices that operate effectively without constantly transmitting data to external servers, minimizing privacy risks, and implementing robust security measures to protect devices from hacking or misuse.
TECH-SMP: Social Media Platforms
Social Media Platforms are online services that enable users to create and share content or participate in social networking. They play a significant role in information dissemination, communication, and shaping public discourse. These platforms are accountable for using AI ethically in content moderation, recommendation algorithms, and advertising. They must prevent the spread of misinformation, protect user data, and avoid algorithmic biases that could lead to echo chambers or discrimination. Examples include using AI to detect and remove harmful content such as hate speech or incitement to violence while respecting freedom of expression, and implementing transparent algorithms that provide diverse perspectives and prevent the reinforcement of biases.
TECH-TEL: Telecommunications Companies
Telecommunications Companies provide communication services such as telephone, internet, and data transmission. They build and maintain the infrastructure that enables connectivity and digital communication globally. These companies are accountable for using AI ethically to manage networks, improve services, and protect user data. They must ensure that AI applications do not infringe on privacy rights or enable unlawful surveillance. Examples include employing AI to optimize network performance, enhancing service quality without accessing or exploiting user communications, and using AI-driven security measures to protect networks from cyber threats while respecting legal obligations regarding data privacy.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in the technology and IT domain. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to drive innovation while safeguarding individual rights, promoting fairness, and building public trust in technological advancements.
AI’s Potential Violations #
[Insert 300- to 500-word analysis of how AI could violate this human right.]
AI’s Potential Benefits #
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments #
International Covenant on Civil and Political Rights (1966) #
G.A. Res. 2200A (XXI), International Covenant on Civil and Political Rights, U.N. Doc. A/6316 (1966), 999 U.N.T.S. 171 (Dec. 16, 1966)
Article 19
2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.
United Nations Guiding Principles (2011) #
H.R.C. Res. 17/4, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, U.N. Doc. A/HRC/RES/17/4 (June 16, 2011)
Principle 21
In order to account for how they address their human rights impacts, business enterprises should be prepared to communicate this externally, particularly when concerns are raised by or on behalf of affected stakeholders. Business enterprises whose operations or operating contexts pose risks of severe human rights impacts should report formally on how they address them. In all instances, communications should:
(a) Be of a form and frequency that reflect an enterprise’s human rights impacts and that are accessible to its intended audiences;
(b) Provide information that is sufficient to evaluate the adequacy of an enterprise’s response to the particular human rights impact involved;
(c) In turn not pose risks to affected stakeholders, personnel or to legitimate requirements of commercial confidentiality.
Last Updated: April 17, 2025
Research Assistant: Aarianna Aughtry
Contributor: To Be Determined
Reviewer: To Be Determined
Editor: Alexander Kriebitz
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "XIV.E. Right to Algorithmic Transparency and Accountability, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 24, 2025. https://aiethicslab.rutgers.edu/Docs/xiv-e-transparency/.