On 31 January 2025, the Department for Science, Innovation and Technology (DSIT) issued the AI Cyber Security Code of Practice. The Code comprises 13 principles for identifying and mitigating potential vulnerabilities in AI systems.

Among its requirements, the Code calls for security testing, including vulnerability scanning, penetration testing, and adversarial testing, to surface weaknesses in AI models and to check their resilience to adversarial attacks (a sketch of one such test follows below). Threat modelling must be conducted to assess the risks arising from the design, data handling, and operation of AI systems, and to identify potential attack vectors. Regular risk assessments are required to evaluate emerging threats and to verify compliance with security and privacy regulations. Continuous monitoring and auditing are essential to detect vulnerabilities and maintain integrity, ensuring that no unauthorised changes are made to a system after deployment.
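To make the adversarial-testing requirement concrete, the following is a minimal sketch of one widely used technique, the Fast Gradient Sign Method (FGSM), written in PyTorch. The Code does not prescribe any particular method or tooling; the model, data loader, and perturbation budget (epsilon) here are illustrative assumptions, not part of the Code itself.

```python
# A minimal sketch of adversarial testing via FGSM. The model, loader and
# epsilon are illustrative assumptions; the Code does not mandate a method.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Perturb inputs in the direction that increases the loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon along the sign of its gradient.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()

def robustness_rate(model, loader, epsilon=0.03):
    """Fraction of correctly classified inputs that survive the attack."""
    model.eval()
    survived = total = 0
    for images, labels in loader:
        correct = model(images).argmax(dim=1) == labels
        adv = fgsm_attack(model, images, labels, epsilon)
        adv_correct = model(adv).argmax(dim=1) == labels
        survived += (correct & adv_correct).sum().item()
        total += correct.sum().item()
    return survived / max(total, 1)
```

A robustness rate far below the model's clean accuracy is one signal, among others, that the system may not withstand the kind of adversarial pressure the Code expects organisations to test for.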
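The Code's monitoring and integrity expectations can likewise be met in many ways; one simple building block is to record a cryptographic digest of each deployed model artefact and re-verify it on a schedule, so unauthorised post-deployment changes are detected. This is a minimal sketch assuming a file-based artefact; the paths, digest storage, and scheduling are illustrative and left to the implementer.

```python
# A minimal sketch of one integrity-monitoring building block: hash a deployed
# model artefact at release time and re-check it later. Paths are illustrative.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 to avoid loading it into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artefact(path: str, release_digest: str) -> bool:
    """True if the artefact still matches the digest recorded at deployment."""
    return sha256_of(path) == release_digest
```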