With businesses going gaga for artificial intelligence, securing AI systems has become a key priority and a top challenge for cybersecurity teams, as they scramble to master this emerging and evolving field. In this special edition of the Cybersecurity Snapshot, we highlight some of the best practices and insights that experts have provided so far in 2025 for AI security.
In case you missed it, here’s fresh guidance for defending AI systems against cyber attacks.
If your organization is looking for recommendations on how to protect the sensitive data powering your AI systems, check out new best practices released in May by cyber agencies from Australia, New Zealand, the U.K. and the U.S.
“This guidance is intended primarily for organizations using AI systems in their operations, with a focus on protecting sensitive, proprietary or mission-critical data,” reads the document titled “AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems.”
“The principles outlined in this information sheet provide a robust foundation for securing AI data and ensuring the reliability and accuracy of AI-driven outcomes,” it adds.
By drafting this guidance, the authoring agencies seek to accomplish three goals:
Here’s a sneak peek at some of the key best practices found in the comprehensive 22-page document:
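The full list of practices is best read in the document itself, but to make one of them concrete: protecting sensitive training data at rest typically comes down to straightforward controls like encryption. Here’s a minimal, hedged sketch in Python using the cryptography library — the file names and ad hoc key handling are illustrative assumptions, not prescriptions from the guidance:

```python
# Minimal sketch: encrypting a sensitive training-data file at rest.
# File paths and key handling are illustrative assumptions only.
from cryptography.fernet import Fernet

# In a real pipeline the key would come from a managed secret store or KMS,
# not be generated ad hoc next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("training_data.csv", "rb") as f:
    plaintext = f.read()

ciphertext = fernet.encrypt(plaintext)

with open("training_data.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorized pipeline step decrypts the data just before use.
restored = fernet.decrypt(ciphertext)
assert restored == plaintext
```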
For more information about AI data security, check out these Tenable resources:
The U.S. National Institute of Standards and Technology (NIST) is also stepping up to help organizations get a handle on the cyber risks threatening AI systems. In March, NIST updated its “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2)” report, first published last year.
The massive 127-page publication includes:
“Despite the significant progress of AI and machine learning in different application domains, these technologies remain vulnerable to attacks,” reads a NIST statement. “The consequences of attacks become more dire when systems depend on high-stakes domains and are subjected to adversarial attacks.”
For example, to counter supply chain attacks against generative AI systems, NIST recommendations include:
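NIST’s specific recommendations are spelled out in the report, but one control that commonly falls into this category is integrity verification of third-party model artifacts before they’re loaded. Here’s a minimal, hedged sketch — the file name and the expected digest are illustrative assumptions, not values taken from the NIST report:

```python
# Minimal sketch: verifying a downloaded model artifact against a
# publisher-provided SHA-256 digest before loading it.
# The artifact name and expected digest below are illustrative only.
import hashlib

EXPECTED_SHA256 = "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26"

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = "model-weights.safetensors"
actual = sha256_of(artifact)

if actual != EXPECTED_SHA256:
    raise RuntimeError(
        f"Integrity check failed for {artifact}: expected {EXPECTED_SHA256}, got {actual}"
    )
print(f"{artifact} passed the integrity check; safe to hand off for loading.")
```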
Taxonomy of Attacks on GenAI Systems
(Source: “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” report from NIST, March 2025)
The report is primarily aimed at those in charge of designing, developing, deploying, evaluating and governing AI systems.
For more information about protecting AI systems against cyber attacks:
Seeking to bring clarity to the proper way to secure AI models and systems, the European Telecommunications Standards Institute (ETSI) in April published a global standard for AI security designed to cover the full lifecycle of an AI system.
Aimed at developers, vendors, operators, integrators, buyers and other AI stakeholders, ETSI’s “Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems” technical specification outlines a set of foundational security principles for an AI system’s entire lifecycle.
Developed hand-in-hand with the U.K. National Cyber Security Centre (NCSC) and the U.K. Department for Science, Innovation and Technology (DSIT), this document breaks down AI system security into five key stages and 13 core security principles:
Each one of the 13 security principles is further expanded with multiple provisions that detail more granular requirements.
For example, in the secure maintenance stage, ETSI calls for developers to test and evaluate major AI system updates as they would a new version of an AI model. Also in this stage, system operators need to analyze system and user logs to detect security issues such as anomalies and breaches.
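The spec leaves the “how” to implementers. As a minimal, hedged illustration of what analyzing system and user logs for anomalies can look like in practice, here’s a Python sketch that flags users with an unusual number of failed requests — the log format and threshold are assumptions for illustration, not part of the ETSI specification:

```python
# Minimal sketch: flagging anomalous activity in AI system access logs.
# Assumed log format: one "timestamp,user,endpoint,status" line per request.
import csv
from collections import Counter

FAILED_AUTH_THRESHOLD = 20  # failed requests per user before we flag (illustrative)

failures = Counter()
with open("ai_system_access.log", newline="") as f:
    for row in csv.reader(f):
        timestamp, user, endpoint, status = row
        if status == "401":
            failures[user] += 1

for user, count in failures.items():
    if count >= FAILED_AUTH_THRESHOLD:
        print(f"Possible abuse or credential stuffing: {user} had {count} failed requests")
```

A real deployment would feed these logs into a SIEM or anomaly-detection pipeline rather than a standalone script, but the principle — operators actively mining logs for signs of abuse — is the same.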
And if you’re hungry for more technical nitty-gritty, the 73-page companion report “Securing Artificial Intelligence (SAI): Guide to Cyber Security for AI Models and Systems” offers a treasure trove of detail for each provision.
Together the technical specification and the technical report “provide stakeholders in the AI supply chain with a robust set of baseline security requirements that help protect AI systems from evolving cyber threats,” reads an NCSC blog.
For more information about AI security best practices, check out these Tenable blogs:
As organizations increasingly adopt open-source AI technologies, they also worry that these tools pose higher risks than proprietary AI products.
That’s according to the report “Open source technology in the age of AI” from McKinsey & Company, the Patrick J. McGovern Foundation and Mozilla, based on a global survey of 700-plus technology leaders and senior developers.
Specifically, while respondents cite benefits like lower costs and ease of use, they consider open source AI tools to be riskier in areas like cybersecurity, compliance and intellectual property.
If your organization is looking at or already adopting open source AI products, here are risk mitigation recommendations from the report, published in April:
For more information about open-source AI security:
Using AI tools in cloud environments? Make sure your organization is aware of and prepared for the complex cybersecurity risks that emerge when you mix AI and the cloud.
That’s a key message from the “Tenable Cloud AI Risk Report 2025,” released in March and based on a telemetry analysis of public cloud and enterprise workloads scanned through Tenable products.
“Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation,” Liat Hayun, Tenable’s VP of Research and Product Management for Cloud Security, said in a statement.
Key findings from the report include:
These are some of the report's risk mitigation recommendations:
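The report’s recommendations are detailed in the document itself, but to illustrate the kind of basic cloud hygiene they sit alongside: one common check is making sure storage buckets holding AI training data aren’t publicly exposed. Here’s a minimal, hedged sketch using boto3 — the bucket name is hypothetical and the check isn’t lifted verbatim from the report:

```python
# Minimal sketch: confirming that a bucket holding AI training data
# blocks public access. The bucket name is a hypothetical example.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-ai-training-data"  # illustrative bucket name

try:
    config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    if all(config.values()):
        print(f"{bucket}: public access is fully blocked")
    else:
        print(f"{bucket}: some public-access protections are off, review settings: {config}")
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print(f"{bucket}: no public access block configured at all -- high risk")
    else:
        raise
```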
To get more information, check out:
SANS Institute has also jumped into action to help cyber defenders develop AI security skills and strategies, publishing draft guidelines for AI system security in March.
The “SANS Draft Critical AI Security Guidelines v1.1” document outlines six key security control categories for mitigating the cyber risks of AI systems.
“By prioritizing security and compliance, organizations can ensure their AI-driven innovations remain effective and safe in this complex, ever-evolving landscape,” the document reads.
In addition to the six critical security controls, SANS also offers advice for deploying AI models, recommending that organizations do it gradually and incrementally, starting with non-critical systems; that they establish a central AI governance board; and that they draft an AI incident response plan.
For more information about AI security controls:
Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.