Our global network of contributors to the AI & Human Rights Index is currently writing these articles and glossary entries. This particular page is currently in the recruitment and research stage. Please return later to see where this page is in the editorial workflow. Thank you! We look forward to learning with and from you.
[Insert statement of urgency and significance for why this right relates to AI.]
Sectors
The contributors of the AI & Human Rights Index have identified the following sectors as responsible for using AI to both protect and advance this human right.
- BUS: Business Sectors
- GOV: Government and Public Sector
- LAW: Law and Legal Enforcement
- INTL: International Organizations and Relations
- REG: Regulatory and Oversight Bodies
- WORK: Employment and Labor
AI’s Potential Violations
[Insert 300- to 500-word analysis of how AI could violate this human right.]
AI’s Potential Benefits
[Insert 300- to 500-word analysis of how AI could advance this human right.]
Human Rights Instruments
Guiding Principles on Business and Human Rights (2011)
H.R.C. Res. 17/4, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, U.N. Doc. A/HRC/RES/17/4 (June 16, 2011)
HUMAN RIGHTS DUE DILIGENCE
17. In order to identify, prevent, mitigate and account for how they address their adverse human rights impacts, business enterprises should carry out human rights due diligence. The process should include assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed. Human rights due diligence:
(a) Should cover adverse human rights impacts that the business enterprise may cause or contribute to through its own activities, or which may be directly linked to its operations, products or services by its business relationships;
(b) Will vary in complexity with the size of the business enterprise, the risk of severe human rights impacts, and the nature and context of its operations;
(c) Should be ongoing, recognizing that the human rights risks may change over time as the business enterprise’s operations and operating context evolve.
Commentary
This Principle defines the parameters for human rights due diligence, while Principles 18 through 21 elaborate its essential components.
Human rights risks are understood to be the business enterprise’s potential adverse human rights impacts.
Potential impacts should be addressed through prevention or mitigation, while actual impacts – those that have already occurred – should be a subject for remediation (Principle 22).
Human rights due diligence can be included within broader enterprise risk management systems, provided that it goes beyond simply identifying and managing material risks to the company itself, to include risks to rights-holders.
Human rights due diligence should be initiated as early as possible in the development of a new activity or relationship, given that human rights risks can be increased or mitigated already at the stage of structuring contracts or other agreements, and may be inherited through mergers or acquisitions.
Where business enterprises have large numbers of entities in their value chains it may be unreasonably difficult to conduct due diligence for adverse human rights impacts across them all. If so, business enterprises should identify general areas where the risk of adverse human rights impacts is most significant, whether due to certain suppliers’ or clients’ operating context, the particular operations, products or services involved, or other relevant considerations, and prioritize these for human rights due diligence.
Questions of complicity may arise when a business enterprise contributes to, or is seen as contributing to, adverse human rights impacts caused by other parties. Complicity has both non-legal and legal meanings. As a non-legal matter, business enterprises may be perceived as being “complicit” in the acts of another party where, for example, they are seen to benefit from an abuse committed by that party.
As a legal matter, most national jurisdictions prohibit complicity in the commission of a crime, and a number allow for criminal liability of business enterprises in such cases. Typically, civil actions can also be based on an enterprise’s alleged contribution to a harm, although these may not be framed in human rights terms. The weight of international criminal law jurisprudence indicates that the relevant standard for aiding and abetting is knowingly providing practical assistance or encouragement that has a substantial effect on the commission of a crime.
Conducting appropriate human rights due diligence should help business enterprises address the risk of legal claims against them by showing that they took every reasonable step to avoid involvement with an alleged human rights abuse. However, business enterprises conducting such due diligence should not assume that, by itself, this will automatically and fully absolve them from liability for causing or contributing to human rights abuses.
18. In order to gauge human rights risks, business enterprises should identify and assess any actual or potential adverse human rights impacts with which they may be involved either through their own activities or as a result of their business relationships. This process should:
(a) Draw on internal and/or independent external human rights expertise;
(b) Involve meaningful consultation with potentially affected groups and other relevant stakeholders, as appropriate to the size of the business enterprise and the nature and context of the operation.
Commentary
The initial step in conducting human rights due diligence is to identify and assess the nature of the actual and potential adverse human rights impacts with which a business enterprise may be involved. The purpose is to understand the specific impacts on specific people, given a specific context of operations. Typically this includes assessing the human rights context prior to a proposed business activity, where possible; identifying who may be affected; cataloguing the relevant human rights standards and issues; and projecting how the proposed activity and associated business relationships could have adverse human rights impacts on those identified.
In this process, business enterprises should pay special attention to any particular human rights impacts on individuals from groups or populations that may be at heightened risk of vulnerability or marginalization, and bear in mind the different risks that may be faced by women and men.
While processes for assessing human rights impacts can be incorporated within other processes such as risk assessments or environmental and social impact assessments, they should include all internationally recognized human rights as a reference point, since enterprises may potentially impact virtually any of these rights.
Because human rights situations are dynamic, assessments of human rights impacts should be undertaken at regular intervals: prior to a new activity or relationship; prior to major decisions or changes in the operation (e.g. market entry, product launch, policy change, or wider changes to the business); in response to or anticipation of changes in the operating environment (e.g. rising social tensions); and periodically throughout the life of an activity or relationship.
To enable business enterprises to assess their human rights impacts accurately, they should seek to understand the concerns of potentially affected stakeholders by consulting them directly in a manner that takes into account language and other potential barriers to effective engagement.
In situations where such consultation is not possible, business enterprises should consider reasonable alternatives such as consulting credible, independent expert resources, including human rights defenders and others from civil society.
The assessment of human rights impacts informs subsequent steps in the human rights due diligence process.
19. In order to prevent and mitigate adverse human rights impacts, business enterprises should integrate the findings from their impact assessments across relevant internal functions and processes, and take appropriate action.
(a) Effective integration requires that:
(i) Responsibility for addressing such impacts is assigned to the appropriate level and function within the business enterprise;
(ii) Internal decision-making, budget allocations and oversight processes enable effective responses to such impacts.
(b) Appropriate action will vary according to:
(i) Whether the business enterprise causes or contributes to an adverse impact, or whether it is involved solely because the impact is directly linked to its operations, products or services by a business relationship;
(ii) The extent of its leverage in addressing the adverse impact.
Commentary
The horizontal integration across the business enterprise of specific findings from assessing human rights impacts can only be effective if its human rights policy commitment has been embedded into all relevant business functions. This is required to ensure that the assessment findings are properly understood, given due weight, and acted upon.
In assessing human rights impacts, business enterprises will have looked for both actual and potential adverse impacts. Potential impacts should be prevented or mitigated through the horizontal integration of findings across the business enterprise, while actual impacts – those that have already occurred – should be a subject for remediation (Principle 22).
Where a business enterprise causes or may cause an adverse human rights impact, it should take the necessary steps to cease or prevent the impact.
Where a business enterprise contributes or may contribute to an adverse human rights impact, it should take the necessary steps to cease or prevent its contribution and use its leverage to mitigate any remaining impact to the greatest extent possible. Leverage is considered to exist where the enterprise has the ability to effect change in the wrongful practices of an entity that causes a harm.
Where a business enterprise has not contributed to an adverse human rights impact, but that impact is nevertheless directly linked to its operations, products or services by its business relationship with another entity, the situation is more complex. Among the factors that will enter into the determination of the appropriate action in such situations are the enterprise’s leverage over the entity concerned, how crucial the relationship is to the enterprise, the severity of the abuse, and whether terminating the relationship with the entity itself would have adverse human rights consequences.
The more complex the situation and its implications for human rights, the stronger is the case for the enterprise to draw on independent expert advice in deciding how to respond.
If the business enterprise has leverage to prevent or mitigate the adverse impact, it should exercise it. And if it lacks leverage there may be ways for the enterprise to increase it. Leverage may be increased by, for example, offering capacity-building or other incentives to the related entity, or collaborating with other actors.
There are situations in which the enterprise lacks the leverage to prevent or mitigate adverse impacts and is unable to increase its leverage. Here, the enterprise should consider ending the relationship, taking into account credible assessments of potential adverse human rights impacts of doing so.
Where the relationship is “crucial” to the enterprise, ending it raises further challenges. A relationship could be deemed as crucial if it provides a product or service that is essential to the enterprise’s business, and for which no reasonable alternative source exists. Here the severity of the adverse human rights impact must also be considered: the more severe the abuse, the more quickly the enterprise will need to see change before it takes a decision on whether it should end the relationship. In any case, for as long as the abuse continues and the enterprise remains in the relationship, it should be able to demonstrate its own ongoing efforts to mitigate the impact and be prepared to accept any consequences – reputational, financial or legal – of the continuing connection.
20. In order to verify whether adverse human rights impacts are being addressed, business enterprises should track the effectiveness of their response. Tracking should:
(a) Be based on appropriate qualitative and quantitative indicators;
(b) Draw on feedback from both internal and external sources, including affected stakeholders.
Commentary
Tracking is necessary in order for a business enterprise to know if its human rights policies are being implemented optimally, whether it has responded effectively to the identified human rights impacts, and to drive continuous improvement.
Business enterprises should make particular efforts to track the effectiveness of their responses to impacts on individuals from groups or populations that may be at heightened risk of vulnerability or marginalization.
Tracking should be integrated into relevant internal reporting processes.
Business enterprises might employ tools they already use in relation to other issues. This could include performance contracts and reviews as well as surveys and audits, using gender-disaggregated data where relevant.
Operational-level grievance mechanisms can also provide important feedback on the effectiveness of the business enterprise’s human rights due diligence from those directly affected (see Principle 29).
21. In order to account for how they address their human rights impacts, business enterprises should be prepared to communicate this externally, particularly when concerns are raised by or on behalf of affected stakeholders. Business enterprises whose operations or operating contexts pose risks of severe human rights impacts should report formally on how they address them. In all instances, communications should:
(a) Be of a form and frequency that reflect an enterprise’s human rights impacts and that are accessible to its intended audiences;
(b) Provide information that is sufficient to evaluate the adequacy of an enterprise’s response to the particular human rights impact involved;
(c) In turn not pose risks to affected stakeholders, personnel or to legitimate requirements of commercial confidentiality.
Commentary
The responsibility to respect human rights requires that business enterprises have in place policies and processes through which they can both know and show that they respect human rights in practice. Showing involves communication, providing a measure of transparency and accountability to individuals or groups who may be impacted and to other relevant stakeholders, including investors.
Communication can take a variety of forms, including in-person meetings, online dialogues, consultation with affected stakeholders, and formal public reports. Formal reporting is itself evolving, from traditional annual reports and corporate responsibility/sustainability reports, to include online updates and integrated financial and non-financial reports.
Formal reporting by enterprises is expected where risks of severe human rights impacts exist, whether this is due to the nature of the business operations or operating contexts. The reporting should cover topics and indicators concerning how enterprises identify and address adverse impacts on human rights. Independent verification of human rights reporting can strengthen its content and credibility. Sector-specific indicators can provide helpful additional detail.
Last Updated: April 17, 2025
Research Assistant: Laiba Mehmood
Contributor: To Be Determined
Reviewer: Laiba Mehmood
Editor: Caitlin Corrigan
Subject: Human Right
Edition: Edition 1.0 Research
Recommended Citation: "XI.M. Human Rights Due Diligence, Edition 1.0 Research." In AI & Human Rights Index, edited by Nathan C. Walker, Dirk Brand, Caitlin Corrigan, Georgina Curto Rex, Alexander Kriebitz, John Maldonado, Kanshukan Rajaratnam, and Tanya de Villiers-Botha. New York: All Tech is Human; Camden, NJ: AI Ethics Lab at Rutgers University, 2025. Accessed April 22, 2025. https://aiethicslab.rutgers.edu/Docs/xi-m-due-diligence/.