United States of America: Announced Voluntary Commitments from Artificial Intelligence Companies to Manage AI Risks

Description

On 21 July 2023, the White House announced that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI had adopted voluntary commitments to ensure the safe, secure, and transparent development of artificial intelligence (AI) technology. In particular, the companies committed to ensuring their products are safe before introducing them to the public by testing their AI systems to assess their potential biological, cybersecurity, and societal risks. In addition, the companies committed to building AI systems that put security first by safeguarding their models against cyber and insider threats. Finally, the companies committed to building the public's trust by making it easy for users to tell whether audio and visual content is in its original form or has been altered or generated by AI. In this context, the Government also announced that it is developing an executive order, pursuing bipartisan legislation for safe and secure AI development, and seeking to establish an international framework to govern the development and use of AI.

Scope

Policy Area: Other operating conditions
Policy Instrument: Testing requirement
Regulated Economic Activity: ML and AI development
Implementation Level: National
Government Branch: Executive
Government Body: Central government

Complete timeline of this policy change

2023-07-21
adopted

On 21 July 2023, the White House announced that Amazon, Anthropic, Google, Inflection, Meta, Micros…

2023-09-12
adopted

On 12 September 2023, the White House announced that Adobe, Cohere, IBM, Nvidia, Palantir, Salesfor…