Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI.]
Sectors #
The contributors to the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- ART: Arts and Culture
The Arts and Culture sector encompasses organizations, institutions, and individuals involved in the creation, preservation, and promotion of artistic and cultural expressions. This includes content creators, the entertainment industry, historical documentation centers, cultural institutions, museums, and arts organizations. The ART sector plays a vital role in enriching societies, fostering creativity, preserving heritage, and promoting cultural diversity and understanding.
ART-CRT: Content Creators
Content Creators are individuals or groups who produce artistic or cultural works, including visual artists, musicians, writers, filmmakers, and digital creators. They contribute to the cultural landscape by expressing ideas, emotions, and narratives through various mediums. These creators are accountable for using AI ethically in their creative processes and in how they distribute and monetize their work. This involves respecting intellectual property rights, avoiding plagiarism facilitated by AI, and ensuring that AI-generated content does not perpetuate stereotypes or infringe on cultural sensitivities. By integrating ethical AI practices, content creators can enhance their creativity while upholding artistic integrity and cultural respect. Examples include using AI tools for music composition or visual art creation as a means of inspiration while ensuring the final work is original and does not infringe on others' rights. Another is employing AI to analyze audience engagement data to tailor content that resonates with diverse audiences without compromising artistic vision or reinforcing harmful biases.
ART-ENT: Entertainment Industry
The Entertainment Industry comprises companies and professionals involved in the production, distribution, and promotion of entertainment content, such as films, television shows, music, and live performances. This industry significantly influences culture and public opinion. These entities are accountable for using AI ethically in content creation, marketing, and distribution. They must prevent the use of AI in ways that could lead to deepfakes, unauthorized use of likenesses, or manipulation of audiences. Ethical AI use can enhance production efficiency and audience engagement while protecting individual rights and promoting responsible content. Examples include implementing AI for special effects in films in ways that respect performers' rights and obtain the necessary consents. Another is using AI algorithms for content recommendations that promote diversity and avoid creating echo chambers or reinforcing stereotypes.
ART-HDC: Historical Documentation Centers
Historical Documentation Centers are institutions that collect, preserve, and provide access to historical records, archives, and artifacts. They play a crucial role in safeguarding cultural heritage and supporting research. These centers are accountable for using AI ethically to digitize and manage collections while respecting the provenance of artifacts and the rights of communities connected to them. They must ensure that AI does not misrepresent historical information or contribute to cultural appropriation. Examples include employing AI for digitizing and cataloging archives, making them more accessible to the public and researchers while ensuring accurate representation. Another is using AI to restore or reconstruct historical artifacts or documents while respecting the original context and cultural significance.
ART-INS: Cultural Institutions
Cultural Institutions include organizations such as libraries, theaters, cultural centers, and galleries that promote cultural activities and education. They foster community engagement and cultural appreciation. These institutions are accountable for using AI ethically to enhance visitor experiences, manage collections, and promote inclusivity. They must prevent biases in AI applications that could exclude or misrepresent certain cultures or communities. Examples include implementing AI-powered interactive exhibits that engage visitors of all backgrounds. Another is using AI analytics to understand visitor demographics and preferences, informing programming that is inclusive and representative of diverse cultures.
ART-MUS: Museums
Museums are institutions that collect, preserve, and exhibit artifacts of artistic, cultural, historical, or scientific significance. They educate the public and contribute to cultural preservation. Museums are accountable for using AI ethically in curation, exhibition design, and visitor engagement. This includes respecting the cultural heritage of artifacts, obtaining proper consents for use, and ensuring that AI does not distort interpretations. Examples include using AI to create virtual reality experiences that allow visitors to explore exhibits remotely, expanding access while ensuring accurate representation. Another is employing AI for artifact preservation techniques, such as predicting degradation and optimizing conservation efforts.
ART-ORG: Arts Organizations
Arts Organizations are groups that support artists and promote the arts through funding, advocacy, education, and community programs. They play a key role in fostering artistic expression and cultural development. These organizations are accountable for using AI ethically to support artists and audiences equitably. They must ensure that AI tools do not introduce biases in grant allocations, program selections, or audience targeting. Examples include utilizing AI to analyze grant applications objectively, ensuring fair consideration for artists from diverse backgrounds. Another is implementing AI-driven marketing strategies that reach wider audiences without infringing on privacy or perpetuating stereotypes.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in arts and culture. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance creativity, preserve cultural heritage, promote diversity, and ensure that artistic expressions respect the rights and dignity of all individuals and communities.
- EDU: Education and Research
The Education and Research sector encompasses institutions and organizations dedicated to teaching, learning, and scholarly investigation. This includes schools, universities, research institutes, and think tanks. The EDU sector plays a pivotal role in advancing knowledge, fostering innovation, and shaping the minds of future generations.
EDU-INS: Educational Institutions
Educational Institutions include schools, colleges, and universities that provide formal education to students at various levels. They are responsible for delivering curricula, facilitating learning, and nurturing critical thinking skills. The EDU-INS sector is accountable for ensuring that AI is used ethically within educational settings. This commitment involves promoting equitable access to AI resources, protecting student data privacy, and preventing biases in AI-driven educational tools. By integrating ethical considerations into their use of AI, they can enhance learning outcomes while safeguarding students' rights. Examples include implementing AI-powered personalized learning platforms that adapt to individual student needs without compromising their privacy. Another example is using AI to detect and mitigate biases in educational materials, ensuring fair representation of diverse perspectives.
EDU-RES: Research Organizations
Research Organizations comprise universities, laboratories, and independent institutes engaged in scientific and scholarly research. They contribute to the advancement of knowledge across various fields, including AI and machine learning. These organizations are accountable for conducting AI research responsibly, adhering to ethical guidelines, and considering the societal implications of their work. They must ensure that their research does not contribute to human rights abuses and instead advances human welfare. Examples include conducting interdisciplinary research on AI ethics to inform policy and practice. Another is developing AI technologies that address social challenges, such as healthcare disparities or environmental sustainability, while ensuring that these technologies are accessible and do not exacerbate inequalities.
EDU-POL: Educational Policy Makers
Educational Policy Makers include government agencies, educational boards, and regulatory bodies that develop policies and standards for the education sector. They shape the educational landscape through legislation, funding, and oversight. They are accountable for creating policies that promote the ethical use of AI in education and research. This includes establishing guidelines for data privacy, equity in access to AI resources, and integration of AI ethics into curricula. Examples include drafting regulations that protect student data collected by AI tools, ensuring it is used appropriately and securely. Another is mandating the inclusion of AI ethics courses in educational programs to prepare students for responsible AI development and use.
EDU-TEC: Educational Technology Providers
Educational Technology Providers are companies and organizations that develop and supply technological tools and platforms for education. They create software, hardware, and AI applications that support teaching and learning processes. These providers are accountable for designing AI educational tools that are ethical, inclusive, and respect users' rights. They must prevent biases in AI algorithms, protect user data, and ensure their products do not inadvertently harm or disadvantage any group. Examples include developing AI-driven learning apps that are accessible to students with disabilities, adhering to universal design principles. Another is implementing robust data security measures to protect sensitive information collected through educational platforms.
EDU-FND: Educational Foundations and NGOs
Educational Foundations and NGOs are non-profit organizations focused on improving education systems and outcomes. They often support educational initiatives, fund research, and advocate for policy changes. They are accountable for promoting ethical AI practices in education through funding, advocacy, and program implementation. They can influence the sector by supporting projects that prioritize human rights and ethical considerations in AI. Examples include funding research on the impacts of AI in education to inform best practices. Another is advocating for policies that ensure equitable access to AI technologies in under-resourced schools, bridging the digital divide.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in education. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance learning while safeguarding the rights and dignity of all learners.
- GOV: Government and Public Sector
The Government and Public Sector encompasses all institutions and organizations that are part of the governmental framework at the local, regional, and national levels. This includes government agencies, civil registration services, economic planning bodies, public officials, public services, regulatory bodies, and government surveillance entities. The GOV sector is responsible for creating and implementing policies, providing public services, and upholding the rule of law. It plays a vital role in shaping society, promoting the welfare of citizens, and ensuring the effective functioning of the state.
GOV-AGY: Government Agencies
Government Agencies are administrative units of the government responsible for specific functions such as health, education, transportation, and environmental protection. They implement laws, deliver public services, and regulate various sectors. The GOV-AGY sector is accountable for ensuring that AI is used ethically in public administration. This includes promoting transparency, protecting citizens' data, and preventing biases in AI systems that could lead to unfair treatment. By integrating ethical AI practices, government agencies can enhance service delivery while upholding human rights. Examples include using AI-powered chatbots to improve citizen access to information and services while ensuring data privacy and security. Another is implementing AI to process applications or claims efficiently without discriminating against any group based on race, gender, or socioeconomic status.
GOV-CRS: Civil Registration Services
Civil Registration Services are responsible for recording vital events such as births, deaths, marriages, and divorces. They maintain official records essential for legal identity and access to services. These services are accountable for using AI ethically to manage and protect personal data. They must ensure that AI systems used in data processing do not compromise the privacy or security of individuals' sensitive information. Ethical AI use can improve accuracy and efficiency in maintaining civil records. Examples include employing AI to detect and correct errors in civil records, ensuring that individuals' legal identities are accurately reflected. Another is using AI to streamline the registration process, making it more accessible while safeguarding personal data against unauthorized access.
GOV-ECN: Economic Planning Bodies
Economic Planning Bodies are government entities that develop strategies for economic growth, resource allocation, and development policies. They analyze economic data to inform decision-making and promote national prosperity. The GOV-ECN sector is accountable for using AI in economic planning ethically. This involves ensuring that AI models do not perpetuate economic disparities or exclude marginalized communities from development benefits. By applying ethical AI, they can promote inclusive and sustainable economic growth. Examples include utilizing AI for economic forecasting to make informed policy decisions that benefit all segments of society. Another is implementing AI to assess the potential impact of economic policies on different demographics, thereby promoting equity and reducing inequality.
GOV-PPM: Public Officials
Public Officials include elected representatives and appointed officers who hold positions of authority within the government. They are responsible for making decisions, enacting laws, and overseeing the implementation of policies. Public officials are accountable for promoting the ethical use of AI in governance. They must ensure that AI technologies are used to enhance democratic processes, increase transparency, and protect citizens' rights. Their leadership is crucial in setting ethical standards and regulations for AI deployment. Examples include advocating for legislation that regulates AI use to prevent abuses such as mass surveillance or algorithmic discrimination. Another is using AI tools to engage with constituents more effectively, such as sentiment analysis of public feedback, while ensuring that such tools respect privacy and free speech rights.
GOV-PUB: Public Services
Public Services encompass various services provided by the government to its citizens, including healthcare, education, transportation, and public safety. These services aim to meet the needs of the public and improve quality of life. The GOV-PUB sector is accountable for integrating AI into public services ethically. This involves ensuring equitable access, preventing biases, and protecting user data. Ethical AI use can enhance service efficiency and effectiveness while respecting human rights. Examples include deploying AI in public healthcare systems to predict disease outbreaks and allocate resources efficiently, without compromising patient confidentiality. Another is using AI in public transportation to optimize routes and schedules, improving accessibility while safeguarding passenger data.
GOV-REG: Regulatory Bodies
Regulatory Bodies are government agencies tasked with overseeing specific industries or activities to ensure compliance with laws and regulations. They protect public interests by enforcing standards and addressing misconduct. These bodies are accountable for regulating the ethical use of AI across various sectors. They must develop guidelines and enforce compliance to prevent AI-related abuses, such as discrimination or privacy violations. Their role is critical in setting the framework for responsible AI deployment. Examples include establishing regulations that require transparency in AI algorithms used by companies, ensuring they do not discriminate against consumers. Another is monitoring and auditing AI systems to verify compliance with data protection laws and ethical standards.
GOV-SUR: Government Surveillance
Government Surveillance entities are responsible for monitoring activities for purposes such as national security, law enforcement, and public safety. They collect and analyze data to detect and prevent criminal activities and threats. The GOV-SUR sector is accountable for ensuring that AI used in surveillance respects human rights, including the rights to privacy and freedom of expression. They must balance security objectives with individual freedoms, adhering to legal frameworks and ethical standards. Examples include implementing AI-driven surveillance systems with strict oversight to prevent misuse and unauthorized access. Another is employing AI for specific, targeted investigations with appropriate warrants and legal processes, avoiding mass surveillance practices that infringe on citizens' rights.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights within government and public services. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance governance, protect citizens, and promote transparency and fairness in public administration.
- SOC: Social Services and Housing
The Social Services and Housing sector encompasses organizations and agencies dedicated to providing support, assistance, and essential services to individuals and communities in need. This includes child welfare organizations, community support services, homeless shelters, housing authorities, non-profit organizations, social services, and welfare agencies. The SOC sector plays a vital role in promoting social welfare, reducing inequalities, and enhancing the quality of life for vulnerable populations.
SOC-CHA: Child Welfare Organizations
Child Welfare Organizations are dedicated to the well-being and protection of children. They work to prevent abuse and neglect, provide foster care and adoption services, and support families to ensure safe and nurturing environments for children. These organizations are accountable for ensuring that AI is used ethically to enhance child protection efforts while safeguarding children's rights and privacy. This includes preventing biases in AI systems that could lead to unfair treatment or discrimination against certain groups of children or families. By integrating ethical AI practices, they can improve the effectiveness of interventions and promote the best interests of the child. Examples include using AI to analyze data and identify risk factors for child abuse or neglect, enabling proactive support while ensuring data confidentiality. Another is implementing AI tools to match children with suitable foster families more efficiently, considering the child's needs and preferences without bias.
SOC-COM: Community Support Services
Community Support Services provide assistance and resources to individuals and families within a community. They address various needs, such as counseling, education, employment support, and access to healthcare. These services are accountable for using AI ethically to enhance service delivery and accessibility while respecting clients' rights and privacy. This involves preventing discrimination, ensuring inclusivity, and protecting sensitive information. Ethical AI can help tailor support to individual needs and improve outcomes. Examples include utilizing AI-driven platforms to connect community members with appropriate services and resources based on their unique circumstances, ensuring equitable access. Another is employing AI to analyze community needs and trends, informing program development and resource allocation without compromising individual privacy.
SOC-HOM: Homeless Shelters
Homeless Shelters provide temporary housing, food, and support services to individuals and families experiencing homelessness. They aim to meet immediate needs and assist clients in transitioning to stable housing. These shelters are accountable for using AI ethically to improve service efficiency and support clients while protecting their dignity and rights. This includes safeguarding personal data, preventing biases in service provision, and ensuring that AI does not create barriers to access. Examples include implementing AI systems to manage shelter capacity and resources effectively, ensuring that services are available when needed without disclosing personal information. Another is using AI to identify patterns that lead to homelessness, informing prevention strategies and policy interventions while respecting clients' privacy.
SOC-HOU: Housing Authorities
Housing Authorities are government agencies or organizations that develop, manage, and provide affordable housing options for low-income individuals and families. They work to ensure access to safe, decent, and affordable housing. These authorities are accountable for using AI ethically to allocate housing resources fairly and efficiently. This involves preventing discriminatory practices in housing assignments, protecting applicants' data, and promoting transparency in decision-making processes. Examples include employing AI algorithms to assess housing applications objectively, ensuring equal opportunity regardless of race, gender, or socioeconomic status. Another is using AI to predict maintenance needs in housing units, improving living conditions without infringing on residents' rights.
SOC-NPO: Non-Profit Organizations
Non-Profit Organizations in the social services sector work to address various social issues, such as poverty, hunger, education, and healthcare. They operate based on charitable missions rather than profit motives. These organizations are accountable for using AI ethically to enhance their programs and services while upholding beneficiaries' rights. This includes ensuring inclusivity, protecting data privacy, and avoiding biases that could disadvantage certain groups. Examples include utilizing AI to optimize fundraising efforts, targeting campaigns effectively without exploiting donor data. Another is implementing AI-driven tools to evaluate program effectiveness, informing improvements while respecting the privacy of those served.
SOC-SVC: Social Services
Social Services encompass a range of government-provided services aimed at supporting individuals and families in need. This includes financial assistance, disability services, elderly care, and employment support. These services are accountable for using AI ethically to deliver support efficiently while ensuring fairness and protecting clients' rights. They must prevent biases in eligibility assessments, safeguard personal information, and ensure that AI enhances rather than hinders access to services. Examples include using AI to process applications for assistance more quickly, reducing wait times while ensuring that eligibility criteria are applied consistently and fairly. Another is employing AI chatbots to provide information and guidance to applicants, improving accessibility while maintaining confidentiality.
SOC-WEL: Welfare Agencies
Welfare Agencies are government bodies that administer public assistance programs to support the economically disadvantaged. They provide services such as income support, food assistance, and healthcare subsidies. These agencies are accountable for using AI ethically to manage welfare programs effectively while upholding the rights and dignity of beneficiaries. This involves preventing errors or biases that could lead to wrongful denial of benefits, protecting sensitive data, and ensuring transparency. Examples include implementing AI systems to detect and prevent fraud in welfare programs without unjustly targeting or penalizing legitimate beneficiaries. Another is using AI analytics to identify trends and needs within the population served, informing policy decisions while safeguarding individual privacy.
Summary
By embracing ethical AI practices, each of these sectors can significantly contribute to the prevention of human rights abuses and the advancement of human rights in social services and housing. Their accountability lies in the responsible development, deployment, and oversight of AI technologies to enhance support for vulnerable populations, promote fairness and inclusivity, and ensure that the use of AI respects the rights, dignity, and privacy of all individuals.
AI’s Potential Violations #
[Insert 300- to 500-word analysis of how AI could violate this human right.]
AI’s Potential Benefits #
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments #
International Covenant on Civil and Political Rights (1966) #
G.A. Res. 2200A (XXI), International Covenant on Civil and Political Rights, U.N. Doc. A/6316 (1966), 999 U.N.T.S. 171 (Dec. 16, 1966)
Article 27
In those States in which ethnic, religious or linguistic minorities exist, persons belonging to such minorities shall not be denied the right, in community with the other members of their group, to enjoy their own culture, to profess and practise their own religion, or to use their own language.
United Nations Declaration on the Rights of Indigenous Peoples (2007) #
G.A. Res. 61/295, United Nations Declaration on the Rights of Indigenous Peoples, U.N. Doc. A/RES/61/295 (Sept. 13, 2007)
Last Updated: March 7, 2025
Research Assistant: Aarianna Aughtry
Contributor: To Be Determined
Reviewer: To Be Determined
Editor: Alexander Kriebitz
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "II.D. Rights of Indigenous Peoples, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 22, 2025. https://aiethicslab.rutgers.edu/Docs/ii-d-indigenous/.