OECD AI Principles refers to a set of ethical guidelines developed by the Organisation for Economic Co-operation and Development (OECD) that promote the responsible development and use of artificial intelligence.
Adopted in May 2019 and amended in 2024, these principles outline a framework built on five values-based pillars: inclusive growth, sustainable development, and well-being; respect for the rule of law, human rights, and democratic values, including fairness and privacy; transparency and explainability; robustness, security, and safety; and accountability. The guidelines call for AI systems to be designed to augment human capabilities while protecting privacy and reducing bias, and for consequential decisions to remain under meaningful human oversight.
By encouraging clear disclosure of data sources, decision-making processes, and risk management practices, the OECD AI Principles aim to build public trust and to establish global standards that hold developers, organizations, and policymakers accountable. These principles serve as a benchmark for creating AI systems that contribute positively to society and safeguard human rights.
Quotation from OECD AI Principles
1.1. Inclusive growth, sustainable development and well-being
Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, well-being, sustainable development and environmental sustainability.
1.2. Respect for the rule of law, human rights and democratic values, including fairness and privacy
a) AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.
b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.
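The safeguards in 1.2 b) can be made concrete with a small illustration. The Python sketch below is an editorial example, not drawn from the OECD text: it shows a human-in-the-loop gate that queues an AI system's higher-risk recommendations for human approval instead of executing them automatically. Every name and the risk threshold are hypothetical assumptions.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """An AI system's proposed action with a model-estimated risk score."""
    action: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk)

@dataclass
class ReviewQueue:
    """Hypothetical human-oversight gate: risky recommendations wait
    for human sign-off rather than taking effect automatically."""
    risk_threshold: float = 0.3  # assumed policy threshold, not an OECD value
    pending: list = field(default_factory=list)

    def submit(self, rec: Recommendation) -> str:
        if rec.risk_score >= self.risk_threshold:
            self.pending.append(rec)   # hold for a human decision
            return "queued for human review"
        return "auto-approved"         # low-risk actions may proceed

    def human_decision(self, rec: Recommendation, approve: bool) -> str:
        self.pending.remove(rec)
        return "executed" if approve else "rejected by human reviewer"

queue = ReviewQueue()
denial = Recommendation(action="deny loan application", risk_score=0.8)
print(queue.submit(denial))                 # queued for human review
print(queue.human_decision(denial, False))  # rejected by human reviewer

Whether a threshold-based queue is the right safeguard depends on the context; the OECD text deliberately leaves the mechanism open.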
1.3. Transparency and explainability
AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of the art:
i. to foster a general understanding of AI systems, including their capabilities and limitations,
ii. to make stakeholders aware of their interactions with AI systems, including in the workplace,
iii. where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output, and,
iv. to provide information that enables those adversely affected by an AI system to challenge its output.
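Items i through iv are often operationalised as a structured disclosure record published alongside a system, in the spirit of a model card. The Python sketch below is an editorial illustration under that assumption; the schema and field names are hypothetical, not prescribed by the OECD.

from dataclasses import dataclass, asdict
import json

@dataclass
class DisclosureRecord:
    """Hypothetical transparency record mapping to items i-iv of 1.3;
    the fields are illustrative assumptions, not an OECD schema."""
    system_name: str
    capabilities: list       # i. what the system can do
    limitations: list        # i. known failure modes
    user_notice: str         # ii. how people learn they face an AI system
    data_sources: list       # iii. sources of data/input behind outputs
    decision_factors: list   # iii. factors/logic, in plain language
    challenge_channel: str   # iv. how adversely affected people can contest

record = DisclosureRecord(
    system_name="LoanScreen (hypothetical)",
    capabilities=["ranks loan applications by estimated default risk"],
    limitations=["not validated for applicants with thin credit files"],
    user_notice="Applicants are told a statistical model informs decisions.",
    data_sources=["credit bureau report", "application form"],
    decision_factors=["payment history", "debt-to-income ratio"],
    challenge_channel="appeals@example.org (placeholder address)",
)
print(json.dumps(asdict(record), indent=2))  # publishable disclosure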
1.4. Robustness, security and safety
a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety and/or security risks.
b) Mechanisms should be in place, as appropriate, to ensure that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely as needed.
c) Mechanisms should also, where technically feasible, be in place to bolster information integrity while ensuring respect for freedom of expression.
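Principle 1.4 b) can be pictured as a runtime override: a wrapper that lets operators halt an AI component and route requests to a safe fallback when it exhibits undesired behaviour. The Python sketch below is a minimal editorial illustration; the wrapper, guard, and fallback are assumptions, not an OECD-specified design.

class OverridableSystem:
    """Wraps a predictor so operators can disable it at runtime and
    fall back to safe default behaviour (illustrating 1.4 b)."""

    def __init__(self, predict, fallback):
        self._predict = predict    # the AI component
        self._fallback = fallback  # safe default, e.g. manual handling
        self._enabled = True

    def decommission(self):
        """Operator override: stop using the AI component entirely."""
        self._enabled = False

    def __call__(self, request):
        if not self._enabled:
            return self._fallback(request)
        return self._predict(request)

system = OverridableSystem(
    predict=lambda r: f"model decision for {r}",
    fallback=lambda r: f"manual review queued for {r}",
)
print(system("case-42"))   # model decision for case-42
system.decommission()      # undesired behaviour observed; halt the model
print(system("case-43"))   # manual review queued for case-43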
1.5. Accountability
a) AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.
b) To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry, appropriate to the context and consistent with the state of the art.
c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis and adopt responsible business conduct to address risks related to AI systems, including, as appropriate, via co-operation between different AI actors, suppliers of AI knowledge and AI resources, AI system users, and other stakeholders. Risks include those related to harmful bias, human rights including safety, security, and privacy, as well as labour and intellectual property rights.
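Traceability in the sense of 1.5 b) is commonly implemented as an append-only audit trail that ties each output to the model version, dataset version, and inputs that produced it. The Python sketch below shows one hypothetical shape such a trail might take; the fields, hashing choice, and query helper are editorial assumptions, not OECD requirements.

import hashlib, json, time

class AuditTrail:
    """Append-only log linking each AI output to its provenance,
    illustrating the traceability called for in 1.5 b)."""

    def __init__(self):
        self._entries = []

    def record(self, model_version, dataset_version, inputs, output):
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "dataset_version": dataset_version,
            # Hash inputs rather than storing them raw, out of respect
            # for the privacy commitments in 1.2.
            "input_digest": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        }
        self._entries.append(entry)
        return entry

    def entries_for(self, model_version):
        """Support inquiry: all recorded decisions by one model version."""
        return [e for e in self._entries if e["model_version"] == model_version]

trail = AuditTrail()
trail.record("model-1.3.0", "data-2024-05", {"income": 52000}, "approve")
print(len(trail.entries_for("model-1.3.0")))  # 1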
For Further Study
OECD. “OECD AI Principles.” OECD, May 2019, amended 2024.