As organizations increasingly integrate artificial intelligence into their cybersecurity operations, NIST has launched extensive initiatives to address the emerging challenges of AI security risk management. The organization’s thorough approach was highlighted during their Cybersecurity and AI Profile Workshop at the National Cybersecurity Center of Excellence on April 3, 2025, where stakeholders gathered to develop integrated profiles for both the NIST Cybersecurity Framework and AI Risk Management Framework.
NIST’s March 2025 publication, “AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” represents a significant step toward standardizing the language and understanding of AI security threats. The taxonomy gives organizations a structured way to identify and categorize attacks along three axes — the type of AI system targeted, the phase of the machine learning lifecycle in which the attack occurs, and the characteristics of the attacker — enabling more effective risk assessment and response planning.
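To make the three classification axes concrete, the sketch below models them as a simple record type. This is an illustration only, not NIST’s official schema: the enum names and example values (predictive vs. generative systems, training vs. deployment phases, white-box vs. black-box attacker knowledge) are assumptions chosen to mirror the general categories the taxonomy describes, not its exhaustive vocabulary.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative axes only -- these names are assumptions for this sketch,
# not the report's authoritative terminology.
class SystemType(Enum):
    PREDICTIVE = "predictive"
    GENERATIVE = "generative"

class LifecyclePhase(Enum):
    TRAINING = "training"
    DEPLOYMENT = "deployment"

class AttackerKnowledge(Enum):
    WHITE_BOX = "white-box"   # attacker knows model internals
    BLACK_BOX = "black-box"   # attacker only queries the system

@dataclass(frozen=True)
class AttackRecord:
    """One classified threat: which system, at what stage, by whom."""
    name: str
    system: SystemType
    phase: LifecyclePhase
    knowledge: AttackerKnowledge

def group_by_phase(records):
    """Bucket attacks by lifecycle phase to guide stage-specific defenses."""
    buckets = {}
    for r in records:
        buckets.setdefault(r.phase, []).append(r.name)
    return buckets

# Hypothetical catalog entries: data poisoning targets training,
# prompt injection targets a deployed generative system.
catalog = [
    AttackRecord("data poisoning", SystemType.PREDICTIVE,
                 LifecyclePhase.TRAINING, AttackerKnowledge.BLACK_BOX),
    AttackRecord("prompt injection", SystemType.GENERATIVE,
                 LifecyclePhase.DEPLOYMENT, AttackerKnowledge.BLACK_BOX),
]
print(group_by_phase(catalog))
```

Keying records on lifecycle phase reflects the planning benefit the taxonomy is meant to deliver: defenses for the training pipeline (data provenance, poisoning checks) differ from defenses at deployment (input filtering, monitoring), so a phase-indexed catalog maps threats to the right controls.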
The vulnerabilities inherent in both predictive and generative AI systems have become increasingly apparent, with threats targeting every stage of the machine learning lifecycle. These concerns prompted NIST to release a Joint Cybersecurity Information Sheet on AI Data Security on May 22, 2025, offering specific guidance on protecting AI-related data throughout its lifecycle.
The document addresses the unique security challenges posed by AI systems, particularly when deployed in environments with sensitive data access.
NIST’s framework development reflects the understanding that traditional cybersecurity approaches must evolve to address AI-specific threats. Its guidance on mitigation strategies acknowledges the limits of current defenses while laying groundwork for future standards development.
This approach highlights the need for context-specific security measures that consider both technical and operational factors in AI deployment.
The integration of AI security into broader risk management frameworks demonstrates NIST’s recognition that AI security cannot be treated as a domain separate from general cybersecurity practice. Through these initiatives, NIST aims to establish a unified approach to managing AI-related security risks, ensuring that organizations can maintain strong security postures as they adopt AI technologies.