Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI, including digital self-determination.]
Sectors #
The contributors to the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- ART: Arts and Culture
The Arts and Culture sector encompasses organizations, institutions, and individuals involved in the creation, preservation, and promotion of artistic and cultural expressions. This includes content creators, the entertainment industry, historical documentation centers, cultural institutions, museums, and arts organizations. The ART sector plays a vital role in enriching societies, fostering creativity, preserving heritage, and promoting cultural diversity and understanding.
ART-CRT: Content Creators
Content Creators are individuals or groups who produce artistic or cultural works, including visual artists, musicians, writers, filmmakers, and digital creators. They contribute to the cultural landscape by expressing ideas, emotions, and narratives through various mediums. These creators are accountable for using AI ethically in their creative processes and in how they distribute and monetize their work. This involves respecting intellectual property rights, avoiding plagiarism facilitated by AI, and ensuring that AI-generated content does not perpetuate stereotypes or infringe on cultural sensitivities. By integrating ethical AI practices, content creators can enhance their creativity while upholding artistic integrity and cultural respect. Examples include using AI tools for music composition or visual art creation as a source of inspiration, while ensuring the final work is original and does not infringe on others' rights. Creators can also employ AI to analyze audience engagement data, tailoring content that resonates with diverse audiences without compromising artistic vision or reinforcing harmful biases.
ART-ENT: Entertainment Industry
The Entertainment Industry comprises companies and professionals involved in the production, distribution, and promotion of entertainment content, such as films, television shows, music, and live performances. This industry significantly influences culture and public opinion. These entities are accountable for using AI ethically in content creation, marketing, and distribution. They must prevent the use of AI in ways that could lead to deepfakes, unauthorized use of likenesses, or manipulation of audiences. Ethical AI use can enhance production efficiency and audience engagement while protecting individual rights and promoting responsible content. Examples include implementing AI-driven special effects in films in ways that respect performers' rights and secure the necessary consents. The industry can also use AI algorithms for content recommendations that promote diversity and avoid creating echo chambers or reinforcing stereotypes.
ART-HDC: Historical Documentation Centers
Historical Documentation Centers are institutions that collect, preserve, and provide access to historical records, archives, and artifacts. They play a crucial role in safeguarding cultural heritage and supporting research. These centers are accountable for using AI ethically to digitize and manage collections while respecting the provenance of artifacts and the rights of communities connected to them. They must ensure that AI does not misrepresent historical information or contribute to cultural appropriation. Examples include employing AI to digitize and catalog archives, making them more accessible to the public and researchers while ensuring accurate representation. These centers can also use AI to restore or reconstruct historical artifacts or documents, respecting the original context and cultural significance.
ART-INS: Cultural Institutions
Cultural Institutions include organizations such as libraries, theaters, cultural centers, and galleries that promote cultural activities and education. They foster community engagement and cultural appreciation. These institutions are accountable for using AI ethically to enhance visitor experiences, manage collections, and promote inclusivity. They must prevent biases in AI applications that could exclude or misrepresent certain cultures or communities. Examples include implementing AI-powered interactive exhibits that engage visitors of all backgrounds. Institutions can also use AI analytics to understand visitor demographics and preferences, informing programming that is inclusive and representative of diverse cultures.
ART-MUS: Museums
Museums are institutions that collect, preserve, and exhibit artifacts of artistic, cultural, historical, or scientific significance. They educate the public and contribute to cultural preservation. Museums are accountable for using AI ethically in curation, exhibition design, and visitor engagement. This includes respecting the cultural heritage of artifacts, obtaining proper consents for use, and ensuring that AI does not distort interpretations. Examples include using AI to create virtual reality experiences that allow visitors to explore exhibits remotely, expanding access while ensuring accurate representation. Museums can also employ AI in artifact preservation, such as predicting degradation and optimizing conservation efforts.
ART-ORG: Arts Organizations
Arts Organizations are groups that support artists and promote the arts through funding, advocacy, education, and community programs. They play a key role in fostering artistic expression and cultural development. These organizations are accountable for using AI ethically to support artists and audiences equitably. They must ensure that AI tools do not introduce biases in grant allocations, program selections, or audience targeting. Examples include utilizing AI to analyze grant applications objectively, ensuring fair consideration for artists from diverse backgrounds. Organizations can also implement AI-driven marketing strategies that reach wider audiences without infringing on privacy or perpetuating stereotypes.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in arts and culture. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance creativity, preserve cultural heritage, promote diversity, and ensure that artistic expressions respect the rights and dignity of all individuals and communities.
- COM: Media and Communication
The Media and Communication sector encompasses organizations, platforms, and individuals involved in the creation, dissemination, and exchange of information and content. This includes content creators, arts and entertainment entities, news and media organizations, publishing and recording media, publishing industries, social media platforms, and telecommunications companies. The COM sector plays a crucial role in shaping public discourse, informing societies, and fostering connectivity, thereby influencing cultural, social, and political landscapes.
COM-CRT: Content Creators
Content Creators are individuals or groups who produce original content across various mediums, including writing, audio, video, and digital formats. They contribute to the diversity of information and entertainment available to the public. These creators are accountable for using AI ethically in content creation and distribution. This involves ensuring that AI tools do not infringe on intellectual property rights, propagate misinformation, or perpetuate biases and stereotypes. By integrating ethical AI practices, content creators can enhance creativity and reach while maintaining integrity and respecting audience rights. Examples include using AI for editing and enhancing content, such as automated video editing software, while ensuring that the final product is original and respects copyright laws. Creators can also employ AI analytics to understand audience engagement and tailor content without manipulating or exploiting user data.
COM-ENT: Arts and Entertainment
The Arts and Entertainment sector includes organizations and individuals involved in producing and distributing artistic and entertainment content, such as films, music, theater, and performances. This sector significantly influences culture and societal values. These entities are accountable for using AI ethically in content production, distribution, and marketing. They must prevent the misuse of AI to create deepfakes, make unauthorized use of individuals' likenesses, or generate content that spreads harmful stereotypes. Ethical AI use can enhance production efficiency and audience engagement while promoting responsible content. Examples include implementing AI-driven special effects in films in ways that respect performers' rights and secure the necessary consents. These entities can also use AI algorithms for content recommendations that promote diversity and avoid reinforcing biases or creating echo chambers.
COM-NMO: News and Media Organizations
News and Media Organizations are entities that gather, produce, and distribute news and information to the public through various channels, including print, broadcast, and digital media. They play a critical role in informing the public and shaping public opinion. These organizations are accountable for using AI ethically in news gathering, content curation, and dissemination. This includes preventing the spread of misinformation, ensuring fairness and accuracy, and avoiding biases in AI-driven news algorithms. They must also respect privacy rights in data collection and protect journalistic integrity. Examples include using AI to automate fact-checking processes, enhancing the accuracy of reporting. These organizations can also implement AI algorithms for personalized news feeds that provide balanced perspectives and avoid creating filter bubbles.
COM-PRM: Publishing and Recording Media
Publishing and Recording Media entities are involved in producing and distributing written, audio, and visual content, including books, music recordings, podcasts, and other media formats. They support artists and authors in reaching audiences. These entities are accountable for using AI ethically in content production, distribution, and rights management. They must respect intellectual property rights, ensure fair compensation for creators, and prevent unauthorized reproduction or distribution facilitated by AI. Examples include employing AI to convert books into audiobooks using synthetic voices, ensuring that proper licenses and consents are obtained. These entities can also use AI to detect and prevent piracy or unauthorized sharing of digital content.
COM-PUB: Publishing Industries
The Publishing Industries focus on producing and disseminating literature, academic works, and informational content across various platforms. They contribute to education, culture, and the preservation of knowledge. These industries are accountable for using AI ethically in editing, production, and distribution processes. They must prevent biases in AI tools used for content selection or editing that could marginalize certain voices or perspectives. They should also respect authors' rights and ensure that AI does not infringe on intellectual property. Examples include using AI for manuscript editing and proofreading, enhancing efficiency while ensuring that the author's voice and intent are preserved. Publishers can also implement AI to recommend books to readers, promoting a diverse range of authors and topics.
COM-SMP: Social Media Platforms
Social Media Platforms are online services that enable users to create and share content or participate in social networking. They have a significant impact on communication, information dissemination, and social interaction. These platforms are accountable for using AI ethically in content moderation, recommendation algorithms, and advertising. They must prevent the spread of misinformation, hate speech, and harmful content; protect user data; and avoid algorithmic biases that could lead to echo chambers or discrimination. Examples include using AI to detect and remove harmful content such as harassment or incitement to violence while respecting freedom of expression. Platforms can also implement transparent algorithms that surface diverse content and prevent the reinforcement of biases.
COM-TEL: Telecommunications Companies
Telecommunications Companies provide communication services such as telephone, internet, and data transmission. They build and maintain the infrastructure that enables connectivity and digital communication globally. These companies are accountable for using AI ethically to manage networks, improve services, and protect user data. They must ensure that AI applications do not infringe on privacy rights, enable unlawful surveillance, or discriminate against certain users. Examples include employing AI to optimize network performance, enhancing service quality without accessing or exploiting user communications. These companies can also use AI-driven security measures to protect networks from cyber threats while respecting legal obligations regarding data privacy.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in media and communication. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance information dissemination, foster connectivity, and enrich cultural experiences while safeguarding individual rights, promoting diversity, and ensuring accurate and fair communication.
- FIN: Financial Services
The Financial Services sector encompasses institutions and organizations involved in managing money, providing financial products, and facilitating economic transactions. This includes banking, insurance, investment firms, mortgage lenders, and financial technology companies. The FIN sector plays a crucial role in the global economy by enabling financial intermediation, promoting economic growth, and supporting individuals and businesses in managing their financial needs.
FIN-BNK: Banking and Financial Services
Banking and Financial Services include institutions that accept deposits, provide loans, and offer a range of financial services to individuals, businesses, and governments. They are central to payment systems, credit allocation, and financial stability. The FIN-BNK sector is accountable for ensuring that AI is used ethically within banking operations. This commitment involves preventing discriminatory practices, protecting customer data, and promoting financial inclusion. Banks must ensure that AI algorithms used in credit scoring, fraud detection, and customer service do not infringe on human rights. Examples include implementing AI-driven credit assessment tools that are transparent and free from biases, ensuring fair access to loans for all customers. Banks can also use AI-powered fraud detection systems to protect customers from financial crimes while respecting their privacy and data protection rights.
FIN-FIN: Financial Technology Companies
Financial Technology (FinTech) Companies use innovative technology to provide financial services more efficiently and effectively. They offer digital payment solutions, peer-to-peer lending, crowdfunding platforms, and other disruptive financial products. These companies are accountable for ensuring that their AI applications do not exploit consumers, compromise data security, or exclude underserved populations. They must adhere to ethical standards, promote transparency, and protect user data to advance human rights in the digital financial landscape. Examples include developing AI-powered financial management apps that offer personalized advice while safeguarding user data and ensuring confidentiality. FinTech companies can also use AI to expand access to financial services in remote or underserved areas, helping to reduce economic inequality.
FIN-INS: Insurance Companies
Insurance Companies provide risk management services by offering policies that protect individuals and businesses from financial losses due to unforeseen events. They assess risks, collect premiums, and process claims. The FIN-INS sector is accountable for using AI ethically in underwriting and claims processing. This includes preventing biases in risk assessment algorithms that could lead to unfair denial of coverage or discriminatory pricing. Insurers must ensure that AI enhances fairness and transparency in their services. Examples include utilizing AI algorithms that evaluate risk factors without discriminating based on race, gender, or socioeconomic status. Insurers can also implement AI-driven claims processing systems that expedite payouts to policyholders while ensuring accurate and fair assessments.
FIN-INV: Investment Firms
Investment Firms manage assets on behalf of clients, investing in stocks, bonds, real estate, and other assets to generate returns. They provide financial advice, portfolio management, and wealth planning services. These firms are accountable for ensuring that AI algorithms used in trading and investment decisions are transparent and ethical and do not manipulate markets. They should also consider the social and environmental impact of their investment strategies, promoting responsible investing. Examples include employing AI for market analysis and portfolio optimization while avoiding practices that could lead to market instability or unfair advantages. Firms can also use AI to identify and invest in companies with strong environmental, social, and governance (ESG) practices, supporting sustainable development.
FIN-MTG: Mortgage Lenders
Mortgage Lenders provide loans to individuals and businesses for the purchase of real estate. They play a vital role in enabling homeownership and supporting the property market. The FIN-MTG sector is accountable for using AI ethically in loan approval processes, ensuring that algorithms do not discriminate against applicants based on unlawful criteria. Lenders must promote fair lending practices and protect applicants' personal information. Examples include implementing AI-driven underwriting systems that assess creditworthiness fairly, giving equal opportunity for homeownership regardless of race, gender, or other protected characteristics. Lenders can also use AI to streamline the mortgage application process, making it more accessible and efficient while maintaining data privacy and security.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in financial services. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to promote financial inclusion, protect consumers, and ensure fairness and transparency in financial activities.
- REG: Regulatory and Oversight Bodies
The Regulatory and Oversight Bodies sector encompasses organizations responsible for creating, implementing, and enforcing regulations, as well as monitoring compliance across various industries. This includes regulatory agencies, data protection authorities, ethics committees, oversight bodies, and other regulatory entities. The REG sector plays a critical role in ensuring that laws and standards are upheld, protecting public interests, promoting fair practices, and safeguarding human rights in the context of technological advancements like artificial intelligence (AI).
REG-AGY: Regulatory Agencies
Regulatory Agencies are government-appointed bodies tasked with creating and enforcing rules and regulations within specific industries or sectors. They oversee compliance with laws, issue licenses, conduct inspections, and take enforcement actions when necessary. These agencies are accountable for ensuring that AI technologies within their jurisdictions are developed and used ethically and responsibly. This involves setting standards for AI deployment, preventing abuses, and promoting practices that advance human rights. By regulating AI effectively, they help prevent harm and foster public trust in technological innovations. Examples include establishing guidelines for AI transparency and accountability in industries like finance or healthcare, ensuring that AI systems do not discriminate or violate privacy rights. Agencies can also enforce regulations that require companies to conduct human rights impact assessments before deploying AI technologies.
REG-DPA: Data Protection Authorities
Data Protection Authorities are specialized regulatory bodies responsible for overseeing the implementation of data protection laws and safeguarding individuals' personal information. They monitor compliance, handle complaints, and have the power to enforce penalties for violations. These authorities are accountable for ensuring that AI systems handling personal data comply with data protection principles such as lawfulness, fairness, transparency, and data minimization. They play a crucial role in preventing privacy infringements and promoting the ethical use of AI in processing personal information. Examples include reviewing and approving AI data processing activities to ensure they meet legal requirements. Authorities can also investigate breaches involving AI systems and impose sanctions on organizations that misuse personal data or fail to protect it adequately.
REG-ETH: Ethics Committees
Ethics Committees are groups of experts who evaluate the ethical implications of policies, research projects, or technological developments. They provide guidance, assess compliance with ethical standards, and make recommendations to ensure responsible conduct. These committees are accountable for scrutinizing AI initiatives to identify potential ethical issues, such as biases, unfair treatment, or risks to human dignity. By promoting ethical considerations in AI development and deployment, they help prevent human rights abuses and encourage technologies that benefit society. Examples include reviewing AI research proposals to ensure they respect participants' rights and obtain informed consent. Committees can also provide guidance on ethical AI practices for organizations, helping them integrate ethical principles into their AI strategies and operations.
REG-OVS: Oversight Bodies
Oversight Bodies are organizations or committees tasked with monitoring and evaluating the activities of institutions, agencies, or specific sectors to ensure accountability and compliance with laws and regulations. They may be independent or part of a governmental framework. These bodies are accountable for overseeing the use of AI across various domains, ensuring that organizations adhere to legal and ethical standards. They help detect and address potential abuses, promoting transparency and fostering public confidence in AI technologies. Examples include auditing government agencies' use of AI to verify compliance with human rights obligations and data protection laws. Oversight bodies can also recommend corrective actions or policy changes when AI applications are found to have negative impacts on individuals or communities.
REG-RBY: Regulatory Bodies
Regulatory Bodies are official organizations that establish and enforce rules within specific professional fields or industries. They set standards, issue certifications, and may discipline members who do not comply with established norms. These bodies are accountable for incorporating AI considerations into their regulatory frameworks, ensuring that professionals using AI adhere to ethical guidelines and best practices. They play a key role in preventing malpractice and promoting the responsible use of AI. Examples include a medical board setting standards for AI-assisted diagnostics, ensuring that healthcare providers use AI tools that are safe, effective, and respectful of patient rights. A bar association might likewise provide guidelines on AI use in legal practice to prevent biases and maintain client confidentiality.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights. Their accountability lies in the responsible development, enforcement, and oversight of regulations and standards governing AI technologies. Through diligent regulation and monitoring, they ensure that AI is used to benefit society while safeguarding individual rights and upholding public trust.
- WORK: Employment and Labor
The Employment and Labor sector encompasses organizations, institutions, and entities involved in the facilitation of employment, protection of workers' rights, development of workforce skills, and management of labor relations. This includes employment agencies, government employment services, gig economy workers' associations, human resources departments, job training and placement services, labor unions, vocational training centers, and workers' rights organizations. The WORK sector plays a crucial role in promoting fair labor practices, enhancing employment opportunities, and ensuring that workers' rights are respected and upheld.
WORK-EMP: Employment Agencies
Employment Agencies are organizations that connect job seekers with employers. They provide services such as job placement, career counseling, and recruitment for temporary or permanent positions across various industries. These agencies are accountable for using AI ethically to match candidates with job opportunities fairly and efficiently. This involves preventing biases in AI algorithms that could discriminate against applicants based on race, gender, age, or other protected characteristics. By integrating ethical AI practices, employment agencies can promote equal employment opportunities and enhance diversity in the workplace. Examples include utilizing AI-powered applicant tracking systems that screen resumes objectively, ensuring that all qualified candidates are considered without bias. Agencies can also implement AI tools to match job seekers with suitable positions based on skills and preferences while protecting personal data and respecting privacy.
WORK-GES: Government Employment Services
Government Employment Services are public agencies that provide assistance to job seekers and employers. They offer services like job listings, unemployment benefits administration, career counseling, and workforce development programs. These services are accountable for using AI ethically to improve service delivery and accessibility while upholding the rights of job seekers. They must ensure that AI applications do not introduce barriers to employment or unfairly disadvantage certain groups. Ethical AI use can enhance the efficiency of employment services and support economic inclusion. Examples include employing AI to analyze labor market trends and identify sectors with job growth, informing policy decisions and training programs. These services can also use AI-driven platforms to connect job seekers with opportunities, ensuring that services are accessible to individuals with disabilities or limited digital literacy.
WORK-GIG: Gig Economy Workers' Associations
Gig Economy Workers' Associations represent the interests of individuals engaged in short-term, freelance, or contract work, often facilitated through digital platforms. They advocate for fair treatment, reasonable pay, and access to benefits for gig workers. These associations are accountable for promoting ethical AI use within gig platforms to protect workers' rights. This includes ensuring that AI algorithms used for task allocation, performance evaluation, or payment do not exploit workers or perpetuate unfair practices. Examples include advocating for transparency in AI algorithms that determine job assignments or ratings, allowing workers to understand and contest decisions that affect their income. Associations can also work with platforms to implement AI systems that ensure fair distribution of work and prevent discrimination.
WORK-HRD: Human Resources Departments
Human Resources Departments within organizations manage employee relations, recruitment, training, benefits, and compliance with labor laws. They play a key role in shaping workplace culture and practices. These departments are accountable for using AI ethically in HR processes, such as recruitment, performance evaluation, and employee engagement. They must prevent biases in AI tools that could lead to discriminatory hiring or unfair treatment of employees. Examples include implementing AI-driven recruitment software that screens candidates based on relevant qualifications without considering irrelevant factors like gender or ethnicity. Departments can also use AI for employee feedback analysis to improve workplace conditions while ensuring confidentiality and data protection.
WORK-JOB: Job Training and Placement Services
Job Training and Placement Services provide education, skills development, and assistance in finding employment. They help individuals enhance their employability and connect with job opportunities. These services are accountable for using AI ethically to tailor training programs to individual needs and match candidates with suitable jobs. They must ensure that AI applications do not exclude or disadvantage certain learners, and they must protect participants' personal information. Examples include using AI to assess skill gaps and recommend personalized training pathways, improving employment outcomes without compromising privacy. These services can also employ AI to match trainees with employers seeking specific skills, promoting efficient job placement while ensuring fairness.
WORK-LBU: Labor Unions
Labor Unions are organizations that represent workers in negotiations with employers over wages, benefits, working conditions, and other employment terms. They advocate for workers' rights and interests. These unions are accountable for leveraging AI ethically to support their advocacy efforts while protecting members' rights. This includes using AI to analyze labor data without violating privacy and ensuring that AI tools do not replace human judgment in critical decisions. Examples include employing AI to identify trends in workplace issues, informing collective bargaining strategies while safeguarding members' personal information. Unions can also use AI-driven communication platforms to engage with members effectively, ensuring inclusivity and accessibility.
WORK-VTC: Vocational Training Centers
Vocational Training Centers provide education and training focused on specific trades or professions. They equip individuals with practical skills required for particular jobs, supporting workforce development. These centers are accountable for using AI ethically to enhance learning experiences and outcomes. They must ensure that AI-powered educational tools are accessible and inclusive and do not perpetuate biases or inequalities. Examples include implementing AI-driven tutoring systems that adapt to learners' needs, supporting diverse learning styles without compromising data privacy. Centers can also use AI analytics to track student progress and inform instructional strategies while respecting confidentiality.
WORK-WRO: Workers' Rights Organizations
Workers' Rights Organizations advocate for the protection and advancement of labor rights. They monitor compliance with labor laws, support workers facing discrimination or exploitation, and promote fair labor practices globally. These organizations are accountable for using AI ethically to strengthen their advocacy efforts and protect workers. This involves ensuring that AI tools respect privacy, prevent bias, and do not inadvertently harm those they aim to support. Examples include using AI to analyze large datasets on labor conditions, identifying patterns of abuse or violations without exposing individual workers to retaliation, and employing AI-powered platforms to disseminate information on workers' rights, making resources accessible to a wider audience while ensuring data security.
Summary #
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in employment and labor. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to promote fair labor practices, enhance employment opportunities, and protect workers' rights. Through ethical AI use, they can foster inclusive workplaces, support workforce development, and ensure that technological advancements benefit all members of society.
AI’s Potential Violations #
[Insert 300- to 500-word analysis of how AI could violate this human right.]
AI’s Potential Benefits #
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments #
United Nations Charter (1945) #
1 U.N.T.S. XVI, U.N. Charter (June 26, 1945)
Preamble
to save succeeding generations from the scourge of war, which twice in our lifetime has brought untold sorrow to mankind, and
to reaffirm faith in fundamental human rights, in the Dignity and worth of the human person, in the equal rights of men and women and of nations large and small, and
to establish conditions under which Justice and respect for the obligations arising from treaties and other sources of international law can be maintained, and
to promote social progress and better standards of life in larger Freedom,
to practice tolerance and live together in peace with one another as good neighbours, and
to unite our strength to maintain international peace and Security, and
to ensure, by the acceptance of principles and the institution of methods, that armed force shall not be used, save in the common interest, and
to employ international machinery for the promotion of the economic and social advancement of all peoples,

Dignity
Human dignity refers to the inherent worth and respect that every individual possesses, irrespective of their status, identity, or achievements. In the context of artificial intelligence (AI), dignity emphasizes the need for AI systems to be designed, developed, and deployed in ways that respect, preserve, and even enhance this intrinsic human value. While many existing AI ethics guidelines reference dignity, they often leave it undefined, highlighting instead its close relationship to human rights and its role in avoiding harm, forced acceptance, automated classification, and unconsented interactions between humans and AI. Fundamentally, dignity serves as a cornerstone of ethical AI practices, requiring systems to prioritize human well-being and autonomy.
The preservation of dignity in AI systems places significant ethical responsibilities on developers, organizations, and policymakers. Developers play a pivotal role in ensuring that AI technologies respect privacy and autonomy by safeguarding personal data and avoiding manipulative practices. Bias mitigation is another critical responsibility, as AI systems must strive to eliminate discriminatory outcomes that could undermine the dignity of individuals based on race, gender, age, or other characteristics. Furthermore, transparency and accountability in AI operations are essential for upholding dignity, as they provide mechanisms to understand and address the impacts of AI systems on individuals and communities.
Governance and legislation are equally important in safeguarding human dignity in the AI landscape. New legal frameworks and regulations can mandate ethical development and deployment practices, with a focus on protecting human rights and dignity. Government-issued technical and methodological guidelines can provide developers with clear standards for ethical AI design. Additionally, international cooperation is essential to establish a unified, global approach to AI ethics, recognizing the cross-border implications of AI technologies. By embedding dignity into AI systems and governance structures, society can ensure that AI technologies respect and enhance human worth, fostering trust, equity, and ethical innovation.
Recommended Reading: Anna Jobin, Marcello Ienca, and Effy Vayena. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence 1 (2019): 389–399.

Freedom
Freedom means having the autonomy to make choices and act without undue interference or coercion from others. In the context of artificial intelligence (AI) and human rights, it highlights each person's right to self-determination and control over their own life and personal data, even as AI systems increasingly influence our daily decisions. Freedom is a cornerstone of human rights and a foundational principle in AI ethics, ensuring that technology upholds individual autonomy rather than undermining it. The rise of AI has a direct impact on fundamental freedoms—from freedom of expression online to the right to privacy—making it crucial that AI is developed and used in ways that respect and protect these rights. Legal frameworks worldwide recognize these freedoms; for example, data protection laws like the General Data Protection Regulation (GDPR) give individuals more control over their personal information, reinforcing their freedom from unwarranted surveillance or data misuse. In practice, this means AI systems should be designed to empower users—allowing people to access information, form opinions, and make choices without being manipulated or unjustly restricted by algorithms.

Security
Security in artificial intelligence (AI) refers to the principle that AI systems must be designed to resist external threats and protect the integrity, confidentiality, and availability of data and functionality. Security ensures AI systems are safeguarded against unauthorized access, manipulation, or exploitation, maintaining trust and reliability in AI technologies. This principle is particularly critical in sensitive domains such as finance, healthcare, and critical infrastructure, where vulnerabilities can have far-reaching consequences.
Effective AI security emphasizes proactive measures, such as testing system resilience, sharing information about cyber threats, and implementing robust data protection strategies. Techniques like anonymization, de-identification, and data aggregation reduce risks to personal and sensitive information. Security by design—embedding security measures at every stage of an AI system's lifecycle—is a cornerstone of this principle. This includes deploying fallback mechanisms, secure software protocols, and continuous monitoring to detect and address potential threats. These measures not only protect AI systems but also foster trust among users and stakeholders by ensuring their safe and ethical operation.
Challenges to achieving AI security include the increasing complexity of AI models, the sophistication of cyber threats, and the need to balance security with transparency and usability. As AI technologies often operate across borders, international cooperation is essential to establish and enforce global security standards. Collaborative efforts among governments, private sector actors, and civil society can create unified frameworks to address cross-border threats and ensure the ethical deployment of secure AI systems.
Ultimately, the principle of security safeguards individual and organizational assets while upholding broader societal trust in AI. By prioritizing security in design, deployment, and governance, developers and policymakers can ensure AI technologies serve humanity responsibly and reliably.
For Further Reading: Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI." Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.
Chapter 1, Article 1
2. To develop friendly relations among nations based on respect for the principle of equal rights and Self-determination of peoples, and to take other appropriate measures to strengthen universal peace;
International Covenant on Civil and Political Rights (1966) #
G.A. Res. 2200A (XXI), International Covenant on Civil and Political Rights, U.N. Doc. A/6316 (1966), 999 U.N.T.S. 171 (Dec. 16, 1966)
Article 1
1. All peoples have the right of Self-determination. By virtue of that right they freely determine their political status and freely pursue their economic, social and cultural development.
2. All peoples may, for their own ends, freely dispose of their natural wealth and resources without prejudice to any obligations arising out of international economic co-operation, based upon the principle of mutual benefit, and international law. In no case may a people be deprived of its own means of subsistence.
3. The States Parties to the present Covenant, including those having responsibility for the administration of Non-Self-Governing and Trust Territories, shall promote the realization of the right of Self-determination, and shall respect that right, in conformity with the provisions of the Charter of the United Nations.

Trust
Trust in artificial intelligence (AI) refers to the confidence users and stakeholders have in the reliability, safety, and ethical integrity of AI systems. It is a foundational principle in AI ethics and governance, essential for public acceptance and the responsible integration of AI technologies into society. Building trust requires AI systems to demonstrate transparency, fairness, and accountability throughout their design, deployment, and operation. A trustworthy AI system must consistently meet user expectations, deliver reliable outcomes, and align with societal values and norms.
Trust extends beyond technical functionality to encompass ethical design principles and governance frameworks. Reliable and safe operation, protection of user privacy, and harm prevention are critical for fostering trust. Transparent and explainable systems enable users to understand AI decision-making processes, while fairness and non-discrimination ensure that AI does not perpetuate biases. Trust-building measures, such as certification processes (e.g., a "Certificate of Fairness"), stakeholder engagement, and multi-stakeholder dialogues, play an important role in addressing diverse concerns and expectations.
However, trust must be balanced with informed skepticism to prevent blind reliance on AI, especially in high-stakes applications like healthcare, law enforcement, and finance. Over-reliance on AI can lead to unintended consequences, including ethical lapses and harm. Maintaining trust requires continuous monitoring, robust accountability mechanisms, and adaptive governance structures to address emerging challenges and evolving technologies.
Trust in AI is not a static attribute but an ongoing process. It necessitates collaboration among developers, users, and regulators to uphold ethical standards, protect societal values, and ensure that AI systems serve humanity responsibly and equitably.
Recommended Reading: Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI." Berkman Klein Center for Internet & Society at Harvard University, Research Publication No. 2020-1, January 15, 2020.
United Nations Declaration on the Rights of Indigenous Peoples (2007) #
G.A. Res. 61/295, United Nations Declaration on the Rights of Indigenous Peoples, U.N. Doc. A/RES/61/295 (Sept. 13, 2007)
Article 3
Indigenous peoples have the right to Self-determination. By virtue of that right they freely determine their political status and freely pursue their economic, social and cultural development.
Last Updated: April 17, 2025
Research Assistant: Aarianna Aughtry
Contributor: To Be Determined
Reviewer: To Be Determined
Editor: Alexander Kriebitz
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "VIII.D. Right to Self-Determination, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 24, 2025. https://aiethicslab.rutgers.edu/Docs/viii-d-determination/.