
Regulatory activity around AI picks up worldwide

When ChatGPT took the world by storm, global regulators were quick to respond to the potentially transformative technology. With a surge in regulatory activity, international coordination will be essential to ensure interoperability and avoid the retrofitting of regulatory environments.

Authors

Johannes Fritz, Danielle Koh

Published

29 Aug 2023


When ChatGPT took the world by storm, global regulators were quick to respond to the potentially transformative technology. According to evidence from the Digital Policy Alert (DPA) database, regulatory activity in the form of binding laws and regulations picked up almost immediately after ChatGPT’s public release. Today, the DPA team tracks more than 140 active regulatory developments, including laws and regulations, guidance, and enforcement actions, across the G20, Europe and Switzerland. With this surge in regulatory activity, international coordination will be essential to ensure interoperability and avoid the retrofitting of regulatory environments, a trend we currently observe in data protection and cross-border transfer regulations.

Jurisdictions worldwide are advancing at different speeds

The Digital Policy Alert tracks active developments in AI regulation for more than 40 countries. To date, only Brazil, Canada, the European Union (EU), and South Korea have comprehensive AI regulatory frameworks advancing through their legislatures. Peru recently adopted a law promoting the use and development of AI systems in line with an enumerated set of principles. China has arguably advanced furthest through three vertical regulations, and a single framework law may be drafted later this year. After China regulated recommendation algorithms in March 2022, its regulations on deep synthesis algorithms came into force in January 2023. More recently, its interim measures on generative AI came into force on 15 August 2023 after four months of deliberation, and a draft regulation on facial recognition systems is currently undergoing public consultation. The United States, by contrast, has adopted non-binding AI frameworks, including the White House AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework, rather than comprehensive AI legislation.

In enforcement, following Italy’s lead, six further jurisdictions have opened investigations against OpenAI for possible violations of data protection regulations since April 2023. Furthermore, the Ibero-American Network for the Protection of Personal Data (RIPD) adopted coordinated measures to investigate OpenAI across several Latin American jurisdictions. In the area of facial recognition, seven jurisdictions have opened investigations into Clearview AI over its allegedly non-compliant data collection and processing practices. France’s data protection authority fined Clearview AI EUR 20 million, the highest penalty under the EU General Data Protection Regulation (GDPR). Italy also investigated Luka over privacy concerns and banned its chatbot Replika from processing the personal data of Italian residents. In the first AI-related privacy case, South Korea fined Scatterlab in 2021 for data protection violations committed by its chatbot Iruda.

Generative AI, user rights and misinformation in early spotlight

In terms of laws and regulations, most activity is cross-cutting, i.e. agnostic to the type of AI system in use. Among system-specific laws and regulations, generative AI is the most commonly regulated (36 developments). Here, the focus is on generative AI systems’ use of personal information in training data (input) and in system responses (output). To protect user privacy, regulatory proposals align with existing data protection and privacy rights, including the rights to deletion, rectification, and redress. Various user consent and notification requirements are common features of this emerging AI regulation. Furthermore, regulators globally are grappling with the implications of generative AI for intellectual property law, calling into question whether copyrighted works may be used in training data and whether an AI system can be listed as a creator (see our first briefing in this series).

At the heart of regulating biometric identification and candidate scoring AI systems is the objective of protecting data subject rights under the principle of non-discrimination. This includes designing and deploying identification systems in a manner that avoids bias or unfair treatment based on characteristics such as race, gender, age, or disability. Regulatory proposals include ensuring diversity and representation in training data through various reporting and audit assessment tools, as well as requirements to enhance the transparency and explainability of the scoring or identification process through public disclosure, user notification, and the right to human intervention in AI decision-making. Six regulatory developments specifically address the protection of vulnerable groups, for example minors or persons with disabilities. The majority of activities addressing non-discrimination in candidate scoring are embedded in US state and local laws, namely in New York, Illinois, and the District of Columbia. The UK is the only jurisdiction with a dedicated draft Bill regulating the use of AI technologies in the workplace, which is currently moving through its legislature. France’s data protection authority CNIL issued an opinion highlighting the risks that augmented video surveillance in public spaces poses to personal privacy, a concern also included in its Action Plan on AI adopted in May 2023.

The challenges of regulating deep synthesis overlap with those of generative AI: deceptive generated content, such as deep fakes or fabricated information, can be misused to manipulate public opinion. The eleven tracked policy and regulatory activities aim to prevent disinformation and misinformation through labelling requirements or digital watermarks, legal liability and redress mechanisms, as well as data subject rights in the use of personal information in training data.

Regulatory alignment on principles at least?

While there is no commonly agreed approach to regulating AI, a set of general principles has emerged that could harmonise regulatory objectives and facilitate global cooperation on AI. The most commonly used are the OECD’s AI principles and its framework for classifying AI systems[1], namely (i) inclusive growth, sustainable development and well-being; (ii) human-centred values and fairness; (iii) transparency and explainability; (iv) robustness, security and safety; and (v) accountability.

Most of the tracked regulations acknowledge some, if not all, of the OECD principles in formulating policy, and each principle has been taken up by a similar number of jurisdictions. The most commonly invoked principle is human-centred fairness, which includes values such as the protection of data subject rights and privacy. The creation of robust, safe, and secure systems also ranks highly, addressing misuse and systemic risks with regulatory tools such as impact assessments, technical guidance, or various cybersecurity requirements. The principle of transparency and explainability is invoked through regulatory instruments such as public disclosure and user notification requirements.

The need for a coordinated approach

Regulating AI and its associated risks, especially at this nascent stage of development, will set the standards and direction for the technology’s future. A coordinated approach to AI regulation is necessary to establish global standards and ensure the responsible, ethical design and deployment of AI systems.

Efforts towards international cooperation have begun, with initiatives such as the G7 Action Plan focused on promoting global interoperability for trustworthy AI. Various bilateral agreements also address different aspects of AI cooperation. For example, the UK-US Atlantic Declaration frames AI cooperation as maintaining global leadership in a “critical and emerging” technology, while Singapore and the US have agreed to collaborate on the interoperability of AI regulatory frameworks. In addition, initiatives such as the Global Partnership on AI and the OECD Network of AI Experts help to drive R&D and build Track II networks for information sharing. Together, these efforts aim to harmonise standards and facilitate knowledge exchange in order to govern the fast-evolving AI landscape in a manner that ensures accountability, mitigates risk, and builds public trust.

[1] General principles on AI are also present in UNESCO’s Recommendation on the Ethics of AI, the Council of Europe’s Convention on AI and the G20’s non-binding AI Principles.