Emerging Contours of AI Governance and the Three Layers of Regulatory Heterogeneity

A working paper to identify learning opportunities and risks of digital fragmentation

The rapid adoption of artificial intelligence (AI) across jurisdictions and sectors has triggered work on robust governance frameworks globally. The broad, simultaneous advance of AI rules presents both an opportunity and a risk. With many regulatory approaches emerging, governments have an opportunity to learn rapidly about effective tools to regulate AI. At the same time, a hardening patchwork of AI rules risks fragmenting the global AI market into isolated regional blocs. This study leverages a novel dataset of eleven AI rulebooks to: 1) Map the scope and scale of current AI governance and link it to the established OECD AI Principles. 2) Develop a framework to assess where and how diverging regulations might lead to digital fragmentation, offering insights into the coordination required to mitigate such risks.

Authors

Johannes Fritz, Tommaso Giardini

Date Published

15 May 2024

The rapid growth and transformative potential of artificial intelligence (AI) applications have led to their widespread adoption across various sectors, from healthcare and finance to transportation and entertainment. As AI technologies continue to advance and permeate our daily lives, governments worldwide have recognised the need to establish governance frameworks to ensure their safe, ethical, and responsible development and deployment. However, the rapid and simultaneous activity of rule-makers worldwide has raised concerns about the potential for splitting the global AI market into regional blocs (“digital fragmentation”).

This paper makes two primary contributions based on a novel, text-based dataset: First, we characterise the scope of emerging AI governance. Using evidence from eleven comprehensive AI rulebooks, we suggest that AI governance encompasses at least ten existing policy areas and more than 75 distinct regulatory requirements. Linking this evidence to the OECD Principles on Artificial Intelligence (“OECD AI Principles”) allows us to analyse the regulatory approaches in emerging sub-disciplines of AI governance, such as AI safety or AI accountability rules. This comprehensive mapping of AI governance provides a valuable overview for policymakers, researchers, and industry stakeholders seeking to understand the complex landscape of AI regulation.
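
To make this mapping concrete, the sketch below shows one way such a requirement-level dataset could be structured, with each record linking a single regulatory requirement to a policy area and an OECD AI Principle. This is a minimal sketch under our own assumptions: the class name RequirementRecord, all field names, and all example values are illustrative placeholders, not the paper's actual coding scheme.

```python
# Hypothetical schema for a requirement-level AI governance dataset.
# All names and values below are illustrative placeholders.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class RequirementRecord:
    jurisdiction: str    # e.g. "Jurisdiction A"
    rulebook: str        # one of the eleven rulebooks studied
    policy_area: str     # one of the ten policy areas, e.g. "data governance"
    requirement: str     # one of the 75+ requirements, e.g. "human oversight"
    oecd_principle: str  # e.g. "robustness, security and safety"

records = [
    RequirementRecord("Jurisdiction A", "AI Framework Act", "AI safety",
                      "risk assessment", "robustness, security and safety"),
    RequirementRecord("Jurisdiction B", "AI Governance Rules", "AI safety",
                      "human oversight", "robustness, security and safety"),
]

# Group requirements by OECD principle to compare regulatory approaches
# across jurisdictions for the same principle.
by_principle = defaultdict(list)
for record in records:
    by_principle[record.oecd_principle].append(
        (record.jurisdiction, record.requirement))

for principle, approaches in by_principle.items():
    print(principle, "->", approaches)
```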

The second contribution of this paper is a three-layered framework for locating digital fragmentation risk in emerging AI governance rules. Utilising a unique dataset built on the full text of eleven emerging AI rulebooks from seven jurisdictions, we shed light on the complexity of the AI governance space. We delineate three layers for analysing potential digital fragmentation risks from regulatory heterogeneity (a stylised sketch follows the list below):

  • On the most aggregate layer, the “priority layer”, we show that regulatory heterogeneity occurs as governments prioritise different objectives with their AI regulation. For instance, some governments first focus on AI safety while others prioritise accountability. Different priorities imply different sets of regulatory tools that international firms need to comply with.
  • On the “requirement choice layer”, we document that governments are using substantively different regulatory requirements to achieve the same priority. For instance, some governments seek to achieve AI safety through human oversight requirements while others prioritise extensive risk assessment exercises before market entry.
  • On the most granular layer, the “requirement scope layer”, our analysis reveals how governments choose the same regulatory requirement for a shared priority but with varying scope and formulations, producing discrepancies despite the apparent alignment.
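
As a stylised illustration of the three layers, the following sketch assumes that each rulebook can be coded as a nested mapping from priorities to requirements to scope formulations. The rulebook contents are invented placeholders, and the function simply reports the most aggregate layer at which two rulebooks first diverge.

```python
# Stylised three-layer comparison of two coded rulebooks.
# Each rulebook is assumed to be coded as {priority: {requirement: scope}};
# all contents below are invented placeholders, not actual coded data.

rulebook_a = {"AI safety": {"human oversight": "high-risk systems only"}}
rulebook_b = {"AI accountability": {"audit obligations": "all deployed systems"}}
rulebook_c = {"AI safety": {"risk assessment": "pre-market, all systems"}}
rulebook_d = {"AI safety": {"human oversight": "all systems"}}

def fragmentation_layer(rb1: dict, rb2: dict) -> str:
    """Return the most aggregate layer at which two rulebooks diverge."""
    shared_priorities = rb1.keys() & rb2.keys()
    if not shared_priorities:
        return "priority layer: different regulatory objectives"
    for priority in shared_priorities:
        shared_requirements = rb1[priority].keys() & rb2[priority].keys()
        if not shared_requirements:
            return "requirement choice layer: same priority, different tools"
        for requirement in shared_requirements:
            if rb1[priority][requirement] != rb2[priority][requirement]:
                return "requirement scope layer: same tool, different scope"
    return "aligned (within this coding)"

print(fragmentation_layer(rulebook_a, rulebook_b))  # priority layer
print(fragmentation_layer(rulebook_a, rulebook_c))  # requirement choice layer
print(fragmentation_layer(rulebook_a, rulebook_d))  # requirement scope layer
```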

Our analysis highlights the need for AI rule-makers to coordinate domestically and internationally for two reasons. First, the flurry of international activity on new AI rules is a rare opportunity to learn from each other about the most effective regulatory tools for achieving a given public policy priority. Second, the vast complexity of AI governance requires immediate efforts to ensure international interoperability and build towards best practices, so as to avoid unintended digital fragmentation. The current state of cross-border data transfer conditions is a salient example of the fragmentation risks associated with limited interoperability of digital economy rules. Yet data flow regulations are only one of the ten policy areas included in AI governance. The three-layered framework presented in this paper can serve as a stepping stone to organise international learning and coordination.