Cybersecurity Snapshot: F5 Breach Prompts Urgent U.S. Gov’t Warning, as OpenAI Details Disrupted ChatGPT Abuses
F5网络安全公司遭国家级攻击引发CISA紧急指令要求联邦机构修复漏洞;OpenAI报告称攻击者利用ChatGPT改进传统网络攻击;企业董事会显著增加对AI和网络安全风险的监督与披露。 2025-10-17 13:0:0 Author: www.tenable.com(查看原文) 阅读量:20 收藏

F5’s breach triggers a CISA emergency directive, as Tenable calls it “a five-alarm fire” that requires urgent action. Meanwhile, OpenAI details how attackers try to misuse ChatGPT. Plus, boards are increasing AI and cyber disclosures. And much more!

Key takeaways

  1. A critical breach at cybersecurity firm F5, attributed to a nation-state, has triggered an urgent CISA directive for federal agencies to patch vulnerable systems immediately.
     
  2. A new report from OpenAI reveals that threat actors try to use ChatGPT to refine conventional attacks.
     
  3. As cyber attacks skyrocket, corporate boards are significantly increasing their oversight and disclosures related to both cybersecurity and AI risks.

Here are five things you need to know for the week ending October 17.

1 - Code red: CISA directs fed agencies to patch F5 vulnerabilities

When a cybersecurity company gets hacked, it's bad. When a nation-state steals some of its most sensitive data, it’s a catastrophe. 

That’s what happened this week, when F5 disclosed 40-plus vulnerabilities and announced that a nation-state attacker stole proprietary, confidential information about its technology and its security research, triggering an urgent U.S. government alert.

In response to F5’s announcement, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued Emergency Directive ED 26-01, ordering federal agencies to inventory F5 BIG-IP products, determine if they’re exposed to the public internet, and patch them.

Specifically, CISA is directing agencies to patch vulnerable F5 virtual and physical devices and downloaded software, including F5OS, BIG-IP TMOS, BIG-IQ, and BNK / CNF, by October 22, and follow the instructions in F5’s “Quarterly Security Notification.”

All organizations, not just federal civilian agencies, should prioritize mitigating the risk from F5’s breach and vulnerabilities, which can be exploited with “alarming ease” and can lead to catastrophic compromises, CISA Acting Director Madhu Gottumukkala said in a statement.

“We emphatically urge all entities to implement the actions outlined in this Emergency Directive without delay,” he said.
 

Cover image of CISA Emergency Directive ED 26-01


The F5 breach is “a five-alarm fire for national security,” Tenable CSO and Head of Research Robert Huber wrote in a blog, adding that F5’s technology is foundational “to secure everything,” including government agencies and critical infrastructure.

“In the hands of a hostile actor, this stolen data is a master key that could be used to launch devastating attacks, similar to the campaigns waged by Salt Typhoon and Volt Typhoon,” Huber wrote in the post “F5 BIG-IP Breach: 44 CVEs That Need Your Attention Now.”

“We haven’t seen a software supply chain compromise of this scale since SolarWinds,” he added.

To get all the details about the F5 breach and about how Tenable can help, read Huber’s blog, as well as the Tenable Research blog “Frequently Asked Questions About The August 2025 F5 Security Incident.”

For more information about the unfolding F5 situation:

2 - OpenAI: Attackers abuse ChatGPT to sharpen old tricks

Creating and refining malware. Setting up malicious command-and-control hubs. Generating multi-language phishing content. Carrying out cyber scams.

Those are some of the ways in which cyber attackers and fraudsters tried to abuse ChatGPT recently, according to OpenAI’s report “Disrupting malicious uses of AI: an update.”

Yet, OpenAI, which detailed seven incidents it detected and disrupted, noticed an overarching trend: Attackers aren’t trying to use ChatGPT to cook up sci-fi-level super-attacks. They’re mostly trying to put their classic scams on steroids.

“We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models,” OpenAI wrote in the report.
 

OpenAI logo


The report identifies several key trends among threat actors:

  • Using multiple AI models
  • Adapting their techniques to hide AI usage
  • Operating in a "gray zone" with requests that are not overtly malicious

Incidents detailed in the report include the malicious use of ChatGPT by: 

  • Cyber criminals from Russian-speaking, Korean-language, and Chinese-language groups to refine malware, create phishing content, and debug tools.
  • Authoritarian regimes, specifically individuals linked to the People's Republic of China (PRC), to design proposals for large-scale social media monitoring and profiling, including a system to track Uyghurs.
  • Organized scam networks, likely based in Cambodia, Myanmar, and Nigeria, to scale fraud by translating messages and creating fake personas.
  • State-backed influence operations from Russia and China to generate propaganda, including video scripts and social media posts.

“Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users,” OpenAI wrote in the statement “Disrupting malicious uses of AI: October 2025.”

For more information about AI security, check out these Tenable resources:

3 - Anthropic: Poisoning giant LLMs is shockingly easy

Bigger isn’t better when it comes to the security of large language models (LLMs).

Conventional wisdom held that bigger models were harder to poison. Well, that ain’t so, according to a study by Anthropic.

The study, titled “Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples,” found that an attacker doesn't need to control a huge percentage of an LLM’s training data. A small, fixed amount is enough to create a backdoor.

“Creating 250 malicious documents is trivial compared to creating millions, making this vulnerability far more accessible to potential attackers,” reads the Anthropic article “A small number of samples can poison LLMs of any size.” 

If anything, data-poisoning attacks against LLMs become easier as models scale up and their datasets grow. “The attack surface for injecting malicious content expands proportionally, while the adversary’s requirements remain nearly constant,” the study reads.

The study, conducted jointly with the U.K. AI Security Institute and the Alan Turing Institute, focused on a likely innocuous type of backdoor attack: Tampering with the LLM so that it generates gibberish text when a user inputs a specific phrase.

The researchers inserted the trigger phrase <SUDO> into a small number of training documents. They found that while 100 poisoned documents were insufficient, 250 or more were enough to reliably create a backdoor across all tested model sizes, which ranged from 600 million to 13 billion parameters.
 

Example of poisoned training document used in Anthropic research of LLM security

(Source: Anthropic article “A small number of samples can poison LLMs of any size,” October 2025.)

It is unclear if the study’s findings will hold for more complex behaviors, such as bypassing safety guardrails or generating malicious code.

“Nevertheless, we’re sharing these findings to show that data-poisoning attacks might be more practical than believed, and to encourage further research on data poisoning and potential defenses against it,” the article reads.

While the study didn’t focus on ways to defend LLMs against this type of attack, it does offer some mitigation recommendations, pointing to “clean training” practices as a way to remove backdoors in some settings.

“Defences can be designed at different stages of the training pipeline such as data filtering before training and backdoor detection and elicitation once the model has been trained to detect undesired behaviours,” the study reads. 

For more information about protecting AI systems against cyber attacks:

4 - Report: AI and cyber risks hit the boardroom

In a sign of the growing impact of AI and cybersecurity for enterprises, Fortune 100 boards of directors have boosted the number and the substance of their AI and cybersecurity oversight disclosures.

That’s the conclusion EY arrived at after analyzing proxy statements and 10-K filings submitted to the U.S. Securities and Exchange Commission (SEC) by 80 of the Fortune 100 companies in recent years.

“Companies are putting the spotlight on their technology governance, signaling an increasing emphasis on cyber and AI oversight to stakeholders,” reads the EY report “Cyber and AI oversight disclosures: what companies shared in 2025.”
 

Pie chart about AI oversight by board committees from Ernst & Young report


What’s driving this trend? Cyber threats are becoming more sophisticated by the minute, while the use of generative AI — both by security teams and by attackers — is growing exponentially.

Key findings from the report about AI oversight include:

  • Nearly half (48%) of the analyzed companies now specifically mention AI risk in their board's enterprise risk oversight, a threefold increase from 2024.
  • AI expertise on boards has also grown, with 44% of companies now mentioning AI in their description of director qualifications, up from 26% last year.
  • 40% of companies have formally assigned AI oversight to a board committee, most commonly the audit committee.
  • The number of companies disclosing AI as a standalone risk factor has more than doubled, from 14% to 36%.

Meanwhile, cybersecurity oversight practices have also matured:

  • The vast majority of companies (78%) continue to assign this responsibility to the audit committee.
  • There is a clear trend towards aligning with external cybersecurity frameworks, with 73% of companies now doing so, up from 57% in 2024 and from a mere 4% in 2019.
  • Cybersecurity preparedness is also being taken more seriously, with 58% of companies now conducting simulations and tabletop exercises, compared to only 3% in 2019.
  • The demand for directors with cybersecurity expertise remains high, with 86% of companies highlighting it as a skill that either a director has or that the board seeks.

“Board oversight of these areas is critical to identifying and mitigating risks that may pose a significant threat to the company,” the report reads.

For more information about cybersecurity and AI in the boardroom and the C-suite:

5 - NCSC: Severe cyber attacks hitting U.K. at “alarming pace”

Cyber attacks with national reverberations have shot up to four per week in the U.K., a stat that’s a wake-up call not only for all British cyber defenders but also for all business leaders.

That’s according to the U.K. National Cyber Security Centre’s (NCSC) 2025 annual review, titled “It’s time to act: Open your eyes to the imminent risk to your economic security” and covering the 12-month period ending in September 2025.

“Cyber risk is no longer just an IT issue — it’s a boardroom priority,” reads the report.

These “nationally significant” cyber incidents more than doubled, climbing to 204 from 89 in the previous 12 months.
 

Cover page of UK NCSC annual cyber report 2025


The severity of the attacks is also on the upswing. The report reveals a nearly 50% increase in "highly significant" incidents, which are those with the potential to severely impact the central government, essential services, many people, or the economy.

Cyber security is now a matter of business survival and national resilience,” Richard Horne, Chief Executive of the NCSC, said in a statement.

“Our collective exposure to serious impacts is growing at an alarming pace,” he added.

The NCSC attributes many of these attacks to sophisticated advanced persistent threat (APT) actors, including nation-states and highly capable criminal organizations. It identifies the primary state-level threats as China, Russia, Iran, and North Korea. 

In response to this escalating threat, the NCSC is urging British businesses to prioritize their cybersecurity measures, saying that cybersecurity “is now critical to business longevity and success.”

To aid in this effort, the NCSC has launched a new "Cyber Action Toolkit" aimed at helping small organizations implement foundational security controls. 

It is also promoting the "Cyber Essentials" certification, which indicates an organization has security in place against most common cyber threats and opens up the opportunity to obtain free cyber insurance.

Juan Perez

Juan Perez

Senior Content Marketing Manager

Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.


文章来源: https://www.tenable.com/blog/cybersecurity-snapshot-f5-breach-chagpt-abuse-llm-attacks-ai-governance-10-17-2025
如有侵权请联系:admin#unsafe.sh