The second finding of our AI series shows how governments draw from ten different policy areas to establish AI rules.
Note: Updated on 25 June 2024 to reflect the update to the OECD AI Principles.
Governments draw from ten different policy areas to establish AI rules and impose different requirements to operationalise each OECD AI principle. Today, governments have a unique opportunity to learn from diverse regulatory approaches and avoid fragmentation risk.
Our comparative analysis of eleven AI rulebooks reveals that AI rules are not a single, delineated policy area, but rather draw from almost a dozen existing ones. Over half of the 744 requirement applications we identified concern either regulatory compliance and transparency, or design and testing standards. Less frequently used policy areas include consumer protection, data governance, and content moderation. The diversity of AI rules is consistent with the over 600 AI regulation and enforcement developments documented by the Digital Policy Alert since January 2021. AI rules are diverse because AI is a multifaceted technology. Data governance rules regulate the data on which AI is trained and protect each AI user's privacy. Content moderation rules set guardrails for AI-generated output. Transparency rules address the opacity of AI systems. While these policy areas all pursue legitimate objectives, their interplay complicates international alignment.
When governments operationalise the OECD AI Principles, they combine regulatory requirements from different policy areas. To implement the principles of human rights and fairness (1.2) and of safety (1.4), governments draw from six policy areas. The principle of transparency and explainability (1.3) is implemented through rules on regulatory compliance and transparency, consumer protection, and content moderation. The rules implementing the other principles span at least four policy areas each.
In turn, several policy areas implement multiple OECD AI Principles. For instance, regulatory compliance and transparency is relevant to all five principles. Design and testing standards, as well as consumer protection, are pertinent to the implementation of four principles. Content moderation and data governance are pertinent to the implementation of two principles. The remaining policy areas, namely competition, intellectual property, and labour law, each implement only one principle.
The diversity of AI rules creates a risk of divergence in the implementation of the OECD AI Principles at three levels.
Multidimensional divergence, across the OECD AI Principles, is evidenced by how rarely a single regulatory requirement is used across borders. Our comparative analysis found 74 distinct regulatory requirements, applied a total of 744 times across the seven studied jurisdictions. Only three requirements, concerning data protection, non-discrimination, and the disclosure of technical documentation about the AI system, feature in all the jurisdictions we studied. In contrast, over a third of all regulatory requirements are foreseen in only one or two jurisdictions. Such multidimensional divergence is visible, for example, in how governments implement the principle of respect for the rule of law, human rights and democratic values (1.2).
Governments working towards international alignment on AI rules face a unique opportunity: the diversity of AI rules enables them to learn both from previous experience and from each other. Governments can draw on their experience in other policy areas, including the expertise accumulated by national regulators. In addition, governments are currently experimenting to find effective AI rules, so studying and comparing different approaches to operationalising the OECD AI Principles allows for rapid learning.
The urgency of international alignment on AI rules is underestimated: multidimensional divergence on AI rules can amplify the risk of digital fragmentation. The global digital economy is already struggling with divergent national rules on data transfers. For AI, such differences multiply because they can occur within each pertinent policy area. It is imperative that governments study different regulatory approaches to start formulating best practices and to avoid the fragmentation risks of AI rules.
When governments pursue the coordination of AI rules, they should approach it like a game of multidimensional chess:
- Understand how the pieces move, by knowing the relevant policy areas in AI rules and their singularities.
- Be aware of all the dimensions, by differentiating between the high-level OECD principles and the granular requirements that implement them in national AI rules.
- Know their counterparts, by studying and learning from national regulatory approaches.
To help governments prepare for this complex chess game, the DPA provides:
- An analytical series synthesising our findings on two further levels:
  - OECD Principle level: Which requirements are used to implement each principle?
  - Requirement level: What are the differences within the requirements that implement the same principle?
- CLaiRK: a suite of public tools for analysing global AI rules, which lets users:
  - Navigate each AI rulebook with our tagging of requirements and OECD principles;
  - Compare different rulebooks with chromatic highlighting; and
  - Explore the state of AI regulation using our high-accuracy chat.