Check out NIST best practices for adopting a zero trust architecture. Plus, learn how OpenAI disrupted various attempts to abuse ChatGPT. In addition, find out what Tenable webinar attendees said about their exposure management experiences. And get the latest on cyber crime trends, a new cybersecurity executive order and more!
Dive into six things that are top of mind for the week ending June 13.
The popularity of zero trust architectures (ZTAs) has grown as traditional on-premises security perimeters dissolve with the adoption of cloud services, mobile devices, remote work, IoT devices and more. But ZTA implementations aren’t “one-size-fits-all” affairs; they must be customized to individual environments.
To help organizations plan and deploy ZTAs, the U.S. National Institute of Standards and Technology (NIST) this week published a guide titled “Implementing a Zero Trust Architecture: Full Document (SP 1800-35).”
By offering 19 concrete ZTA implementation examples, the new guide is meant to complement NIST’s “Zero Trust Architecture (SP 800-207),” which was published in mid-2020 and unpacks what a ZTA is, as well as its components, benefits and risks.
“This guidance gives you examples of how to deploy ZTAs and emphasizes the different technologies you need to implement them,” Alper Kerman, a NIST computer scientist and one of the guide’s authors, said in a statement. “It can be a foundational starting point for any organization constructing its own ZTA.”
To craft the new ZTA guide, NIST enlisted the help of 24 technology partners, including Tenable. “Our role? Help ensure that every device, user, and system is verified, monitored, and protected. This is what public-private partnership looks like at its best,” Tenable Senior VP of Global Government Affairs James Hayes wrote in a LinkedIn post.
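To make the “never trust, always verify” principle concrete, here’s a minimal, hypothetical sketch of a zero trust policy decision point in Python. Every signal, name and risk threshold in it is an illustrative assumption, not an implementation from the NIST guide:

```python
from dataclasses import dataclass

# Hypothetical, simplified zero trust policy decision point.
# In a real ZTA these signals would come from identity providers,
# device management and monitoring tools; here they are stubs.

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g., identity verified via MFA
    device_compliant: bool     # e.g., patched, managed endpoint
    resource_sensitivity: str  # "low", "medium" or "high"
    user_risk_score: float     # 0.0 (low risk) to 1.0 (high risk)

def authorize(request: AccessRequest) -> bool:
    """Evaluate every request explicitly -- never trust, always verify."""
    # Identity and device posture are checked on every request,
    # regardless of network location.
    if not request.user_authenticated or not request.device_compliant:
        return False
    # Risk-based, least-privilege decision: more sensitive resources
    # tolerate less user risk. Thresholds are illustrative only.
    max_risk = {"low": 0.8, "medium": 0.5, "high": 0.2}
    return request.user_risk_score <= max_risk[request.resource_sensitivity]

# A compliant, authenticated user with moderate risk can reach a
# medium-sensitivity resource but is denied a high-sensitivity one.
print(authorize(AccessRequest(True, True, "medium", 0.4)))  # True
print(authorize(AccessRequest(True, True, "high", 0.4)))    # False
```

The point of the sketch is that the access decision is made per request from current signals, rather than inherited from network location.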
In addition to the 19 examples, the guide includes a description of these core steps applicable to all ZTA implementations:
To get more details, read:
For more information about zero trust, check out these Tenable resources:
Cyber espionage. Social engineering. Fraudulent employment schemes. Covert operations. Scams.
Those are some of the malicious uses of OpenAI’s artificial intelligence tools that the company has detected and halted in recent months.
“Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses,” the company wrote in the report “Disrupting malicious uses of AI: June 2025,” published this week.
Specifically, OpenAI details 10 incidents, explaining how it flagged and defused each one in the hope that the lessons it learned can benefit other AI defenders.
Here’s a quick glance at three of the malicious use cases the maker of ChatGPT discusses in the report:
“We’ll continue to share our findings to enable stronger defenses across the internet,” the report reads.
For more information about AI security, check out these Tenable resources:
During our recent webinar “Security Without Silos: How to Gain Real Risk Insights with Unified Exposure Management,” we polled attendees about their exposure management knowledge, challenges and concerns. Check out what they said.
(44 webinar attendees polled by Tenable. Respondents could choose more than one answer.)
(85 webinar attendees polled by Tenable)
(89 webinar attendees polled by Tenable)
Want to learn more about how unified exposure management works in the real world? Watch this webinar on-demand!
As researchers have widely documented and users have often experienced, AI systems frequently make mistakes, a major challenge for AI developers. What can be done?
A critical piece of this puzzle is building AI systems that recognize when they’re presented with a task they haven’t been trained for and can say they don’t know how to proceed.
That’s according to the article “Out of Distribution Detection: Knowing When AI Doesn’t Know” published this week by two experts from Carnegie Mellon University’s Software Engineering Institute (SEI).
In the piece, Eric Heim, a senior machine learning research scientist, and Cole Frank, an AI workforce development engineer, explore out-of-distribution (OoD) detection, which flags when an AI system faces situations it wasn’t trained to tackle, with a focus on military AI applications.
“By understanding when AI systems are operating outside their knowledge boundaries, we can build more trustworthy and effective AI capabilities for defense applications — knowing not just what our systems know, but also what they don't know,” they wrote.
The authors offer three broad categories of OoD detection:
The authors caution that each of the three OoD detection categories has its pros and cons, and that OoD detection methods aren’t foolproof. As such, they should be considered “a last line of defense in a layered approach to assessing the reliability of ML models during deployment.”
“Developers of AI-enabled systems should also perform rigorous test and evaluation, build monitors for known failure modes into their systems, and perform comprehensive analysis of the conditions under which a model is designed to perform versus conditions in which its reliability is unknown,” they wrote.
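To illustrate the general idea, here’s a hedged sketch of one classic OoD detection baseline: thresholding a classifier’s maximum softmax probability so the system abstains instead of guessing. This is a textbook technique, not the SEI authors’ specific method, and the threshold value is an illustrative assumption:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model scores (logits) into probabilities."""
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def predict_or_abstain(logits: np.ndarray, threshold: float = 0.9):
    """Return the predicted class, or None to signal "I don't know."

    If the model's highest class probability falls below the threshold,
    the input is treated as potentially out-of-distribution and the
    system abstains rather than guessing. The 0.9 threshold is an
    illustrative assumption; in practice it would be calibrated on
    held-out data.
    """
    probs = softmax(logits)
    if probs.max() < threshold:
        return None  # abstain: likely outside the training distribution
    return int(probs.argmax())

# A confident prediction on a familiar-looking input...
print(predict_or_abstain(np.array([8.0, 1.0, 0.5])))  # 0
# ...and an abstention when no class dominates (possible OoD input).
print(predict_or_abstain(np.array([1.1, 1.0, 0.9])))  # None
```

The abstention path is what lets downstream systems, or human operators, treat low-confidence outputs differently instead of acting on an unreliable prediction.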
For more information about OoD detection and AI model accuracy in general:
The Trump administration has put the spotlight on boosting the U.S. federal government’s cybersecurity posture with the recently issued Executive Order (EO) 14306.
EO 14306 aims “to strengthen the nation’s cybersecurity by focusing on critical protections against foreign cyber threats and enhancing secure technology practices,” reads an accompanying White House fact sheet.
The EO addresses topics including AI system vulnerabilities, IoT security, quantum computing risk, patch management, secure software development and critical infrastructure defense.
“This EO reinforces the importance of shifting from reactive to proactive cybersecurity,” Tenable Senior VP of Global Government Affairs James Hayes wrote in a blog.
“By addressing emerging risks — such as AI exploitation, post-quantum threats and software supply chain weaknesses — the administration is signaling the need for adaptability and continuous improvement,” he added.
To learn more about EO 14306 and about how Tenable can help federal agencies comply with the EO’s requirements, check out the blog “New Cybersecurity Executive Order: What You Need To Know.”
Leveraging AI in increasingly powerful ways, cyber criminals have ramped up data theft, which they’re using as the foundation for myriad cyber attacks, including online fraud, ransomware, child exploitation and extortion.
That’s a key takeaway from Europol’s “Internet Organised Crime Threat Assessment 2025” report, published this week. The report aims to highlight major trends in cyber crime in order to help law enforcement agencies, policy makers and the tech industry respond.
“From phishing to phone scams, and from malware to AI-generated deepfakes, cybercriminals use a constantly evolving toolkit to compromise systems and steal personal information,” reads a Europol statement.
Initial access brokers (IABs) then sell, resell and repackage stolen credentials and data in dark web forums and criminal marketplaces. Cyber criminals have also upped their use of communication apps that offer end-to-end encryption to negotiate deals and sell compromised data.
With regard to AI, cyber criminals continue to abuse it, especially generative AI tools, to launch ever more sophisticated social engineering attacks. “Criminals now tailor scam messages to victims’ cultural context and personal details with alarming precision,” the statement reads.
For more information about data security, check out these Tenable resources:
Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.