On 31 January 2025, the Department for Science, Innovation and Technology (DSIT) issued the AI Cyber Security Code of Practice, which comprises 13 principles. The Code recommends continuous monitoring of AI systems to identify emerging security threats and vulnerabilities, and advises organisations to use automated anomaly detection to spot unusual activity such as adversarial attacks or data manipulation. Regular security audits help maintain the integrity of AI systems and demonstrate ongoing compliance with security standards, while log monitoring tracks system activity, detects unauthorised access, and supports accountability. The Code also encourages organisations to update their systems in response to new threat intelligence, so that they remain resilient against evolving risks.
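To make the anomaly-detection recommendation concrete, the sketch below shows one very simple way an organisation might flag unusual activity in AI inference logs. It is illustrative only and not part of the Code: the event fields, thresholds, and the z-score heuristic are assumptions, and a production system would combine many more signals (rate limits, drift detection, known attack signatures) and feed findings into audit and incident-response processes.

```python
import statistics
from dataclasses import dataclass


@dataclass
class InferenceEvent:
    """One logged model request; field names are illustrative only."""
    user_id: str
    prompt_length: int
    response_time_ms: float


def flag_anomalies(events, z_threshold=3.0):
    """Flag events whose prompt length deviates strongly from the recent baseline.

    A minimal z-score check standing in for the automated anomaly
    detection the Code recommends; real deployments would use richer
    features and established monitoring tooling.
    """
    lengths = [e.prompt_length for e in events]
    if len(lengths) < 2:
        return []
    mean = statistics.fmean(lengths)
    stdev = statistics.stdev(lengths)
    if stdev == 0:
        return []
    return [
        e for e in events
        if abs(e.prompt_length - mean) / stdev > z_threshold
    ]


# Example: a burst of unusually long prompts may indicate prompt-injection
# probing or data exfiltration and should be surfaced for human review.
recent = [InferenceEvent("u1", 120, 350.0)] * 50 + [InferenceEvent("u2", 9000, 2200.0)]
for event in flag_anomalies(recent):
    print(f"anomalous request from {event.user_id}: {event.prompt_length} chars")
```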