On 12 September 2023, the White House announced that Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI had adopted voluntary commitments to ensure the safe, secure, and transparent development of artificial intelligence (AI) technology. In particular, this second round of companies committed to ensuring their products are safe before introducing them to the public by testing their AI systems to assess potential biosecurity, cybersecurity, and broader societal risks. In addition, the companies committed to building AI systems that put security first by safeguarding their models against cyber and insider threats. Finally, the companies committed to earning the public's trust by making it easy for users to determine whether audio and visual content is in its original form or has been altered or generated by AI. In this context, the Government also announced that it is developing an executive order and pursuing bipartisan legislation on safe and secure AI development, and that it had consulted with a range of other countries in developing the commitments.