Formerly “AI shy” cyber pros have done a 180 and become AI power users, as AI forces data security changes, the CSA says. Plus, PwC predicts orgs will get serious about responsible AI usage in 2026, while the NCSC states that, no, prompt injection isn’t the new SQL injection. And much more!
Here are five things you need to know for the week ending December 19.
Who woulda thunk it?
Once seen as artificial intelligence (AI) laggards, cybersecurity teams have become their organizations’ most enthusiastic AI users.
That’s one of the key findings from “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, published this week.
“AI in security has reached an inflection point. After years of being cautious followers, security teams are now among the earliest adopters of AI, demonstrating both curiosity and confidence,” the report reads.
Specifically, more than 90% of respondents are assessing how AI can enhance detection, investigation, or response processes by either already testing AI security capabilities (48%), or planning to do so within the next year (44%).
“This proactive posture not only improves defensive capabilities but also reshapes the role of security — from a function that reacts to new technologies, to one that helps lead and shape how they are safely deployed,” the report adds.

(Source: “The State of AI Security and Governance Survey Report” from the Cloud Security Alliance (CSA) and Google Cloud, December 2025)
Here are more findings from the report, which is based on a global survey of 300 IT and security professionals:
“This year’s survey confirms that organizations are shifting from experimentation to meaningful operational use. What’s most notable throughout this process is the heightened awareness that now accompanies the pace of [AI] deployment,” Hillary Baron, the CSA’s Senior Technical Research Director, said in a statement.
Recommendations from the report include:
“Strong governance is how you create stability in the face of rapid change. It’s how you ensure AI accelerates the business rather than putting it at risk,” reads a CSA blog.
For more information about using AI for cybersecurity:
Do the classic pillars of data security – confidentiality, integrity and availability – still hold up in the age of generative AI? According to a new white paper from the Cloud Security Alliance (CSA), they remain essential, but they require a significant overhaul to survive the unique pressures of modern AI.
The paper, titled “Data Security within AI Environments,” maps existing security controls to the AI data lifecycle and identifies critical gaps where current safeguards fall short. It argues that the rise of agentic AI and multi-modal systems creates attack vectors that traditional perimeter security simply cannot address.

Here are a few key takeaways and recommendations from the report:
"The foundational principles of data security—confidentiality, integrity, and availability—remain essential, but they must be applied differently in modern AI systems," reads the report.
For more information about securing data in AI systems:
Is your organization still treating responsible AI usage as a compliance checkbox, or are you leveraging it to drive growth?
A new prediction from PwC suggests that 2026 will be the year companies finally stop just talking about responsible AI and start making it work for their bottom line.
In its “2026 AI Business Predictions,” PwC forecasts that responsible AI is moving "from talk to traction." This shift is being driven not just by regulatory pressure, but by the realization that governance delivers tangible business value. In fact, almost 60% of executives in PwC's “2025 Responsible AI Survey” reported that their investments in this area are already boosting return on investment (ROI).

To capitalize on this trend, PwC advises organizations to stop treating AI governance as a siloed function, and to instead take steps including:
“2026 could be the year when companies overcome this challenge and roll out repeatable, rigorous responsible AI practices,” the report states.
For more information about secure and responsible AI use, check out these Tenable resources:
If you thought ransomware activity felt explosive in recent years, the U.S. Treasury Department has the receipts to prove you right.
Ransomware skyrocketed between 2022 and 2024, a three-year period in which incidents and ransom payments grew exponentially compared with the previous nine years.
The finding comes from the U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024.”
Between January 2022 and December 2024, FinCEN received almost 7,400 reports tied to almost 4,200 ransomware incidents totaling more than $2.1 billion in ransomware payments.
By contrast, during the previous nine-year period – 2013 through 2021 – FinCEN received 3,075 reports totaling approximately $2.4 billion in ransomware payments.
The report is based on Bank Secrecy Act (BSA) data submitted by financial institutions to FinCEN, which is part of the U.S. Treasury Department.

(Source: U.S. Financial Crimes Enforcement Network (FinCEN) report titled “Ransomware Trends in Bank Secrecy Act Data Between 2022 and 2024,” December 2025)
Here are a few key findings from the report:
How can organizations better align their financial compliance and cybersecurity operations to combat ransomware? The report emphasizes the importance of integrating financial intelligence with technical defense mechanisms.
FinCEN recommends the following actions for organizations:
For more information about current ransomware trends:
SQL injection and prompt injection aren’t interchangeable terms, the U.K.’s cybersecurity agency wants you to know.
In the blog post “Prompt injection is not SQL injection (it may be worse),” the National Cyber Security Centre unpacks the key differences between these two types of cyber attacks, saying that knowing the differences is critical.
“On the face of it, prompt injection can initially feel similar to that well known class of application vulnerability, SQL injection. However, there are crucial differences that if not considered can severely undermine mitigations,” the blog reads.
While both issues involve an attacker mixing malicious "data" with system "instructions," the fundamental architecture of large language models (LLMs) makes prompt injection significantly harder to fix.

The reason is that SQL databases operate on rigid logic where data and commands can be clearly separated via, for example, parameterization. Meanwhile, LLMs operate probabilistically, predicting the "next token" without inherently understanding the difference between a user's input and a developer's instruction.
“Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt,” the blog reads.
So how can you mitigate the prompt injection risk? Here are some of the NCSC’s recommendations:
For more information about AI prompt injection attacks:
Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.