Lessons from a systematic analysis of 11 international rulebooks
The first finding of our AI mapping series shows how countries prioritise different OECD AI Principles.
Note: Updated on 25 June 2024 to reflect the update to the OECD AI Principles.
The global flurry of AI regulation presents both an opportunity and a challenge. The Digital Policy Alert (DPA) has tracked over 600 AI-related policy developments since 2021. On the one hand, the diversity of regulatory approaches could spur governments to learn from each other in a new regulatory field, leading to more effective AI regulation. On the other hand, there is a considerable risk of creating a fragmented regulatory landscape, reminiscent of current data transfer rules. Fortunately, this dichotomy has catalysed a notable willingness among governments to coordinate on AI rules. The problem governments face, though, is what exactly to coordinate on.
International alignment on AI rules demands abstraction, as evidenced by the widely recognised OECD AI Principles’ lack of prescriptive detail. The principles are high-level by design and advocate for AI technology that (1) promotes inclusive growth, (2) respects human rights and fairness, (3) ensures transparency and explainability, (4) maintains robustness and safety, and (5) enforces accountability. To effectively draw lessons from regulation abroad and promote interoperable AI regulation, governments need a high-resolution view of the regulatory landscape.
The DPA can now provide clarity on the intricacies of emerging AI rules, building on an unprecedented comparative analysis. Our team meticulously analysed 11 comprehensive AI rulebooks from Argentina, Brazil, Canada, China, the European Union, South Korea, and the United States. Paragraph by paragraph, we tagged every provision with our novel taxonomy of over 70 regulatory requirements. This rigorous, text-based analysis offers a comprehensive and detailed snapshot of the current state of emerging AI regulation, revealing both commonalities and disparities across borders. Moreover, we mapped each regulatory requirement to an OECD principle to investigate the high-level priorities of different governments.
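To make this tagging-and-mapping step concrete, the sketch below shows, in simplified Python, how tagged provisions could be rolled up into principle shares per jurisdiction. It is an illustration only: the requirement labels, principle names, and example provisions are invented for this sketch and are not the DPA's actual taxonomy or data.

```python
from collections import Counter, defaultdict

# Hypothetical mapping from taxonomy requirements to OECD AI Principles.
# All labels are invented for illustration.
REQUIREMENT_TO_PRINCIPLE = {
    "watermarking_of_ai_content": "transparency",
    "impact_assessment": "accountability",
    "non_discrimination_testing": "fairness",
    "incident_reporting": "safety",
    "sme_support_measures": "inclusive_growth",
}

# Tagged provisions: (jurisdiction, requirement) pairs, one per tagged paragraph.
tagged_provisions = [
    ("EU", "impact_assessment"),
    ("EU", "watermarking_of_ai_content"),
    ("China", "non_discrimination_testing"),
    ("United States", "incident_reporting"),
    ("United States", "sme_support_measures"),
]

def principle_shares(provisions):
    """Share of each jurisdiction's requirements devoted to each OECD principle."""
    counts = defaultdict(Counter)
    for jurisdiction, requirement in provisions:
        principle = REQUIREMENT_TO_PRINCIPLE[requirement]
        counts[jurisdiction][principle] += 1
    return {
        jurisdiction: {p: n / sum(c.values()) for p, n in c.items()}
        for jurisdiction, c in counts.items()
    }

print(principle_shares(tagged_provisions))
```

Shares computed this way are what the comparison below refers to: the percentage of a jurisdiction's tagged requirements that fall under each principle.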
The high-level comparison reveals how countries prioritise different OECD AI Principles. Accountability is a universally shared priority, commanding a significant share of AI rules across all jurisdictions. The EU AI Act devotes over 40 percent of its requirements to this principle, while in the United States over 30 percent of requirements pursue accountability. Fairness and safety are also global priorities, albeit less salient than accountability. China dedicates over 30 percent of its requirements to fairness, surpassing all other jurisdictions. Safety is a common priority, to which approximately 20 percent of requirements are devoted across jurisdictions. Transparency is emphasised most strongly outside the three big economic powers, covering over 25 percent of requirements. Finally, inclusive growth is currently the least salient OECD principle, featured most prominently in the United States (slightly over 10 percent of requirements).
National differences in the prioritisation of the OECD principles are only the tip of the iceberg. Even in the pursuit of the same principle, governments employ different regulatory requirements. For example, to enhance transparency, some governments grant information rights, others demand public disclosure, and still others impose watermarking for AI-generated content. Going further, even when governments establish the same regulatory requirement, granular differences persist. For instance, different types of content must be watermarked in different jurisdictions.
Since divergence – at all levels of granularity – is rising, it is imperative to learn from alternative approaches and to counter unintended fragmentation through international coordination. To this end, the DPA provides: