On 31 January 2025, the Department for Science, Innovation and Technology (DSIT) issued the AI Cyber Security Code of Practice. The Code comprises 13 principles, including design requirements, spanning the AI system lifecycle. These principles emphasise the need to design AI systems with robust security features, to secure supply chains and third-party components, and to embed security into system development. Organisations are also encouraged to implement strong access controls and to conduct ethical risk assessments. In practical terms, systems must be designed to withstand adversarial attacks, unexpected inputs, and model failures by incorporating secure coding practices, input validation, and data sanitisation. Systems must also have secure deployment processes, with automated testing and monitoring, to identify vulnerabilities and mitigate risks throughout the AI lifecycle.
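The input-validation and sanitisation practices mentioned above can be illustrated with a minimal sketch. The Code prescribes principles, not implementations, so the function name, feature dimensionality, and value range below are hypothetical assumptions chosen for illustration:

```python
import math

def validate_model_input(features, expected_dim=4, value_range=(-1e6, 1e6)):
    """Reject malformed or adversarial-looking inputs before inference.

    Illustrative checks only: correct dimensionality, numeric types,
    finite values, and clamping to an expected range.
    """
    if not isinstance(features, (list, tuple)) or len(features) != expected_dim:
        raise ValueError(f"expected {expected_dim} features, got {features!r}")
    lo, hi = value_range
    sanitised = []
    for x in features:
        # Exclude bools explicitly: bool is a subclass of int in Python.
        if isinstance(x, bool) or not isinstance(x, (int, float)):
            raise TypeError(f"non-numeric feature: {x!r}")
        if not math.isfinite(x):
            raise ValueError(f"non-finite feature: {x!r}")
        # Clamp out-of-range values rather than passing them to the model.
        sanitised.append(min(max(float(x), lo), hi))
    return sanitised
```

Validation of this kind sits at the system boundary, so that adversarial or simply malformed inputs fail loudly before reaching the model rather than producing silent misbehaviour.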