On 4 January 2024, the National Institute of Standards and Technology (NIST) published Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2 E2023). Drafted in collaboration among government, academia, and industry, the document aims to support the development of trustworthy artificial intelligence (AI) by providing an overview of attack techniques and methodologies applicable to all types of AI systems. It also surveys the mitigation strategies reported in the literature, while acknowledging that the available defences currently lack robust assurances that they fully mitigate the risks. Among the measures it outlines are protections against data breaches and against attacks that manipulate training data. The publication forms part of NIST's broader effort to put its AI Risk Management Framework into practice and is intended to help AI developers and users understand the kinds of attacks they can expect and how to address them.
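One of the attack classes the NIST taxonomy covers is data poisoning, in which an attacker tampers with training data to degrade a model's behaviour. The sketch below is an invented toy illustration of the idea, not an example from the NIST document: a single mislabeled point injected into the training set of a simple 1-nearest-neighbour classifier is enough to flip a prediction.

```python
# Toy illustration of data poisoning (an attack class catalogued in
# NIST AI 100-2). The dataset and the 1-nearest-neighbour "model" are
# invented for demonstration only.

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def predict(train, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(train, key=lambda p: dist2(p[0], x))[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

# Two well-separated classes: 0 near the origin, 1 near (5, 5).
clean_train = [((0.0, 0.1), 0), ((0.2, 0.0), 0),
               ((5.0, 5.1), 1), ((5.2, 4.9), 1)]
test = [((0.1, 0.1), 0), ((5.1, 5.0), 1)]

# The attacker injects one point that sits inside the class-1 region
# but carries the wrong label.
poisoned_train = clean_train + [((5.1, 5.0), 0)]

clean_acc = accuracy(clean_train, test)        # 1.0 on this toy data
poisoned_acc = accuracy(poisoned_train, test)  # drops to 0.5
print(clean_acc, poisoned_acc)
```

The example shows why the report pairs poisoning attacks with data-provenance and sanitization mitigations: the poisoned point is indistinguishable from legitimate data unless its label is checked against its neighbourhood.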