On 26 July 2024, the US National Institute of Standards and Technology (NIST) published NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (the "Guideline"). The Guideline supplements the Secure Software Development Framework (SSDF) version 1.1 with practices specific to generative Artificial Intelligence (AI) and dual-use foundation models. It is applicable to AI model producers, AI system producers, and AI system acquirers, and addresses the entire AI model development lifecycle, including data sourcing, design, training, fine-tuning, evaluation, and integration into software systems. Its recommendations and considerations cover, among other things, securing code storage, managing model versioning and lineage, and clarifying shared responsibilities among organisations. The Guideline emphasises a risk-based approach to secure software development, addressing cybersecurity risks while also acknowledging other risks such as data privacy and bias. It is intended for immediate voluntary adoption, and organisations are encouraged to tailor its recommendations to their specific needs.