On 15 April 2024, the US National Security Agency (NSA) published a Cybersecurity Information Sheet, Deploying AI Systems Securely, which sets out best practices for improving the confidentiality, integrity, and availability of AI systems. These include collaborating with IT departments, clarifying roles and responsibilities, and establishing security boundaries. The guide recommends requesting threat models from AI system developers and addressing security requirements in contracts. It further advises strengthening deployment environment configurations, protecting deployment networks, continuously safeguarding AI systems, validating AI systems, securing APIs, monitoring model behaviour, and protecting model weights. In addition, it calls for enforcing strict access controls, conducting audits and penetration testing, implementing robust logging and monitoring, applying updates and patches regularly, and planning for high availability and disaster recovery.

The guidance is addressed to organisations deploying AI systems, particularly those operating in high-threat, high-value environments. It was jointly authored by the NSA's Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate's Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), New Zealand's National Cyber Security Centre (NCSC-NZ), and the United Kingdom's National Cyber Security Centre (NCSC-UK).
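To make two of these recommendations concrete, the sketch below illustrates strict access control and audit logging around a model-serving function. It is a minimal, hypothetical Python example, not part of the NSA guidance: the token store, logger configuration, and `run_inference` function are placeholders, and a real deployment would rely on a secrets manager, centralised identity and access management, and transport-layer protections.

```python
import hmac
import logging
from functools import wraps

# Hypothetical audit logger; the guidance recommends robust logging and monitoring.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("model_api_audit")

# Placeholder credential store; in practice tokens would come from a secrets manager.
AUTHORIZED_TOKENS = {"analyst": "example-token-please-replace"}

def require_token(func):
    """Reject calls lacking a valid API token and audit every access attempt."""
    @wraps(func)
    def wrapper(token: str, *args, **kwargs):
        for user, expected in AUTHORIZED_TOKENS.items():
            # Constant-time comparison to avoid leaking token contents via timing.
            if hmac.compare_digest(token, expected):
                audit_log.info("access granted user=%s call=%s", user, func.__name__)
                return func(*args, **kwargs)
        audit_log.warning("access denied call=%s", func.__name__)
        raise PermissionError("invalid API token")
    return wrapper

@require_token
def run_inference(prompt: str) -> str:
    """Stand-in for a protected model inference endpoint."""
    return f"model output for: {prompt!r}"

if __name__ == "__main__":
    # Authorised call is logged and served; an invalid token would raise PermissionError.
    print(run_inference("example-token-please-replace", "status report"))
```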