On 22 October 2024, the US Government Accountability Office (GAO) released a report to Congressional requesters focusing on generative artificial intelligence (AI) training, development, and deployment considerations. The report highlights the rapid advancement of commercial generative AI, which is capable of producing diverse content, including text, images, audio, and video, requiring substantial datasets for effective training. It notes that one developer has achieved over 200 million weekly active users for its generative AI model. However, the report raises concerns regarding trust, safety, and privacy issues associated with training data and the potential for harmful outputs. The report outlines common practices that developers employ to ensure responsible AI deployment, such as conducting benchmark tests, establishing trust and safety policies, and forming multi-disciplinary teams to provide comprehensive oversight. Additionally, the report addresses security threats related to generative AI, including prompt injection attacks and data poisoning. These threats can undermine safety measures and manipulate model behaviour. Developers assert they are implementing various countermeasures, including red teaming and ongoing monitoring, to mitigate these risks. Techniques such as reinforcement learning and user education are also noted as effective strategies to combat data poisoning attacks and enhance the robustness of generative AI models.
Original source