United States of America: Announced Second Round of Voluntary Commitments from Companies developing Artificial Intelligence Systems to Manage Risks

Description

On 12 September 2023, the White House announced that Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability had adopted voluntary commitments to ensure the safe, secure, and transparent development of artificial intelligence (AI) technology. In particular, this second round of companies committed to ensuring their products are safe before introducing them to the public by testing their AI systems for potential biological, cybersecurity, and societal risks. In addition, the companies committed to building AI systems that put security first by safeguarding their models against cyber and insider threats. Finally, the companies committed to earning the public's trust by making it easy for users to determine whether audio and visual content is in its original form or has been altered or generated by AI. In this context, the Government also announced that it is developing an executive order and pursuing bipartisan legislation on safe and secure AI development, and that it had consulted with various other countries in developing the commitments.

Scope

Policy Area
Other operating conditions
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
central government

Complete timeline of this policy change

2023-07-21
adopted

On 21 July 2023, the White House announced that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI had adopted voluntary commitments to ensure the safe, secure, and transparent development of AI technology.

2023-09-12
adopted

On 12 September 2023, the White House announced that Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability had adopted voluntary commitments to ensure the safe, secure, and transparent development of AI technology.