The 2024 update to the OECD AI Principles

An analysis of the main changes and their impact on the OECD Principles' national implementation.




Tommaso Giardini, Johannes Fritz

Date Published: 03 May 2024

The OECD AI Principles, a testament to international alignment on AI regulation, were updated in May 2024 to reflect technological advances. Below, we explain the changes and how they impact the findings of our comparative analysis of AI rules across the world. 

Understanding the OECD AI Principles

Established in 2019, the OECD AI Principles serve as a common blueprint for policymakers and negotiators addressing regulatory issues concerning AI, from transparency to safety to accountability. All 36 OECD member countries and eight non-member countries have endorsed the Principles, which are non-binding so that governments can tailor their implementation in national regulation.

The OECD Principles outline five value-based principles and five recommendations for policymakers. Since the recommendations[1] are less tangible, we focus on the value-based principles, namely:

  • Inclusive growth, sustainable development and well-being
  • Respect for the rule of law, human rights and democratic values
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability

What changed in May 2024?

In May 2024, the OECD announced an update to the Principles to address emerging challenges brought by rapid technological advances, especially generative AI. The OECD emphasised safety, privacy, intellectual property, and information integrity as focus areas. Our comparative tool visualises how the update introduced new provisions, amended existing ones, and relocated others.

The update introduced three new provisions. In principle 1.2 (Respect for the rule of law, human rights and democratic values), it expanded the list of human-centred values to include “addressing misinformation and disinformation amplified by AI” while respecting freedom of expression. In principle 1.4 (Robustness, security and safety), it added a related provision calling for mechanisms to bolster information integrity while respecting freedom of expression. Finally, it expanded principle 1.4 with mechanisms to ensure that AI systems can be overridden, repaired, and/or decommissioned if they risk causing undue harm or exhibit undesired behaviour.

Amendments to existing provisions were substantial in principle 1.2 and minor elsewhere. The update renamed principle 1.2 from “Human-centred values and fairness” to “Respect for the rule of law, human rights and democratic values.” It also moved “non-discrimination and equality” to the front of the list of values, replaced “human determination” with “human agency and oversight,” and specified that risks must also be addressed for uses outside the intended purpose and for intentional or unintentional misuse. Beyond principle 1.2, the update added “environmental sustainability” to principle 1.1 and “security risks” to principle 1.4, and shifted the terminology from AI “output” to AI “outcome.”

Finally, three provisions were relocated between and within principles. The main shift was the relocation of the provisions on traceability and risk management from principle 1.4 (Robustness, security and safety) to principle 1.5 (Accountability). The risk management provision was also expanded to emphasise cooperation among AI stakeholders and to name specific risks, namely those to labour and intellectual property rights. Furthermore, in principle 1.3 (Transparency and explainability), the call for plain and easy-to-understand information on the factors and logic underlying an AI prediction, recommendation or decision now relates to challenging, rather than understanding, AI outcomes.

How do the changes affect the principles’ operationalisation?

To reflect the update, we have adapted the matching of the DPA’s 74 regulatory requirements to the OECD Principles. We now match data security and incident notification requirements to principle 1.4 (Robustness, security and safety). In turn, we moved risk-related requirements, namely risk assessment, disclosure, management, monitoring, and notification requirements, to principle 1.5 (Accountability).

This adapted matching leads to one major shift in our findings: the share of regulatory requirements that relate to principle 1.5 (Accountability) is substantially higher, rising by 8 percentage points to a total of 32 percent. The increase was strongest in Canada and the United States, where the share of regulatory requirements relating to accountability rose by 16 percentage points. This increase was offset by a decrease in requirements matching other principles, especially principle 1.4 (Robustness, security and safety), which fell by 7 percentage points; the decrease was again strongest in Canada (19 percentage points). These shifts reflect the disproportionate share of risk-related requirements, which were relocated from principle 1.4 to principle 1.5.


[1] The OECD recommends that policymakers: 1) invest in AI research and development; 2) foster an inclusive AI-enabling ecosystem; 3) shape an enabling interoperable AI policy environment; 4) build human capacity and prepare for the labour market transition; and 5) pursue international cooperation for trustworthy AI.