Governments use different policy areas for each OECD AI Principle

The second finding of our AI series shows how governments draw from ten different policy areas to establish AI rules.


Johannes Fritz, Tommaso Giardini

Published: 03 May 2024

Note: Updated on 25 June 2024 to reflect the update to the OECD AI Principles.

Governments draw from ten different policy areas to establish AI rules and impose different requirements to operationalise each OECD AI principle. Today, governments have a unique opportunity to learn from diverse regulatory approaches and avoid fragmentation risk.

AI rules draw from diverse policy areas

Our comparative analysis of eleven AI rulebooks reveals that AI regulation is not a single, delineated policy area but draws from almost a dozen existing ones. Over half of the 744 requirement instances we identified concern either regulatory compliance and transparency or design and testing standards. Less frequently used policy areas include consumer protection, data governance, and content moderation. This diversity is consistent with the over 600 AI regulation and enforcement developments documented by the Digital Policy Alert since January 2021.

AI rules are diverse because AI is a multifaceted technology. Data governance rules regulate the data on which AI is trained and protect each AI user's privacy. Content moderation rules set guardrails for AI-generated output. Transparency rules address the opacity of AI systems. While these policy areas all pursue legitimate objectives, their interplay complicates international alignment.

Multiple policy areas intersect within each OECD AI Principle

When governments operationalise the OECD AI Principles, they combine regulatory requirements from different policy areas. To implement the principles of human rights and fairness (1.2) and safety (1.4), governments draw from six policy areas. The principle of transparency and explainability (1.3) is implemented through rules on regulatory compliance and transparency, consumer protection, and content moderation. The rules implementing the other principles span at least four policy areas each.

In turn, several policy areas implement multiple OECD AI Principles. For instance, regulatory compliance and transparency is relevant to all five principles. Design and testing standards and consumer protection are pertinent to the implementation of four principles, while content moderation and data governance are pertinent to two. The remaining policy areas, namely competition, intellectual property, and labour law, each implement only one principle.

AI rules create a risk of multidimensional divergence

The diversity of AI rules creates a risk of divergence in the implementation of the OECD AI Principles at three levels.

  • Governments prioritise the OECD AI Principles differently, as demonstrated in our first piece.
  • When implementing the same principle, governments focus on different policy areas.
  • When using the same policy area to implement the same principle, governments impose different requirements.

For example, multidimensional divergence is visible in how governments implement the principle of respect for the rule of law, human rights and democratic values (1.2).

  • China and the United States emphasise this OECD AI Principle more than other governments.
  • Some governments establish rules regarding data governance, such as data protection requirements. Other governments demand consumer protection, for example through non-discrimination obligations.
  • Even within these policy areas, a patchwork of divergent requirements emerges. Within data protection, some governments establish data subject rights while others focus on data security requirements. Within non-discrimination, some governments establish rights to contest discriminatory AI output, while others impose prohibitions on discriminatory AI systems.

Multidimensional divergence, across the OECD AI Principles, is evidenced by how rarely a single regulatory requirement is used across borders. Our comparative analysis found 74 different regulatory requirements, applied a total of 744 times across the seven studied jurisdictions. Only three requirements – regarding data protection, non-discrimination, and the disclosure of technical documentation about the AI system – are featured in all the jurisdictions we studied. In contrast, over a third of all regulatory requirements are foreseen in only one or two jurisdictions.

The opportunity to coordinate AI rules resembles a multidimensional chess game

Governments working towards international alignment on AI rules face a unique opportunity. The diversity of AI rules enables them to learn both from previous experience and from each other. They can draw on their experience in other policy areas, including the expertise accumulated by national regulators. In addition, governments are currently experimenting to find effective AI rules. Studying and comparing different approaches to operationalising the OECD AI Principles is an opportunity for rapid learning.

The urgency of international alignment on AI rules is underestimated: multidimensional divergence can amplify the risk of digital fragmentation. The global digital economy is already struggling with divergent national rules on data transfers. For AI, such differences multiply because they can occur within each pertinent policy area. It is imperative that governments study different regulatory approaches to start formulating best practices and to avoid fragmentation risks from AI rules.


When governments pursue the coordination of AI rules, they should approach it like a game of multidimensional chess:

  • Understand how the pieces move, by knowing the relevant policy areas in AI rules and their singularities.

  • Be aware of all the dimensions, by differentiating between the high-level OECD principles and the granular requirements that implement them in national AI rules.

  • Know the other players, by studying and learning from other governments' regulatory approaches.


To prepare governments for this complex chess game, the Digital Policy Alert provides:

  • An analytical series synthesising our findings on two further levels: 

    • OECD Principle level: Which requirements are used to implement each principle?

    • Requirement level: What are the differences within the requirements that implement the same principle?

  • CLaiRK: A suite of public tools to analyse global AI rules, enabling users to:

    • Navigate each AI rulebook with our tagging of requirements and OECD principles;

    • Compare different rulebooks with chromatic highlighting; and

    • Explore the state of AI regulation using our high-accuracy chat.