On 24 March 2025, the National Institute of Standards and Technology (NIST) published an updated edition of Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2 E2025), superseding the initial 2023 version (NIST.AI.100-2 E2023). The document introduces a taxonomy that categorises attacks on predictive and generative AI by the stage of the machine learning lifecycle at which they occur and by the attacker's objectives, capabilities, and knowledge. It addresses common threats such as evasion, poisoning, and privacy breaches, and outlines corresponding mitigation strategies. The report also includes a glossary intended to promote a shared understanding of adversarial machine learning concepts and to inform future standards, risk assessments, and best practices across the AI security landscape.

The 2025 edition expands on its predecessor by incorporating new attack types and refinements across both predictive and generative AI systems, particularly emerging threats such as prompt injection, information leakage from user interactions, and training data compromise. It also provides a more granular taxonomy, introduces updated mitigation strategies, and aligns more closely with enterprise deployment pipelines and real-world use cases, reflecting the evolving adversarial machine learning landscape.
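To make one of the taxonomy's threat classes concrete, the sketch below illustrates an evasion attack using the Fast Gradient Sign Method (FGSM), a standard technique that perturbs an input in the direction of the loss gradient to flip a classifier's prediction. This is a generic illustration, not an example drawn from the NIST report; the model, input tensors, and epsilon value are hypothetical.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an evasion example via the Fast Gradient Sign Method.

    Perturbs input x by epsilon in the direction that maximally
    increases the classification loss, then clamps the result to
    the valid input range [0, 1]. Illustrative sketch only; the
    model and epsilon value here are hypothetical.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient: a small per-pixel
    # change that is often enough to alter the model's prediction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Mitigations of the kind the report surveys, such as adversarial training, counter this class of attack by folding perturbed examples like the one above back into the training data.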