The British National Cyber Security Centre (NCSC) says it expects Artificial Intelligence (AI) to heighten the global ransomware threat.
In a report, the NCSC assesses that AI will almost certainly increase the volume and heighten the impact of cyberattacks over the next two years. We’re already seeing cybercriminals of all stripes using AI in the initial stages of attacks to increase their effectiveness. The NCSC confirms as much, saying:
“All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees.”
The NCSC expects the volume and the impact of cyberattacks to grow over the next two years.
The volume is expected to grow because AI lowers the barrier of entry, allowing novice cybercriminals to carry out effective access and information-gathering operations.
The impact is expected to grow for several reasons:
Generative AI (GenAI) can already be used to create and sustain convincing interactions with victims, including the creation of lure documents, without the translation, spelling, and grammatical errors that used to give phishing away.
The NCSC expects that by 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their cybersecurity posture, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing, or other social engineering attempts.
Currently, only state-sponsored groups, professional spyware vendors, and large criminal operations have access to, and know how to use, advanced AI tools to increase the effectiveness of their attacks. But that availability will undoubtedly grow.
To what extent new moves toward a United Nations Cybercrime Treaty will have a short-term effect on the behavior of state-sponsored groups is very hard to predict. I’m inclined to say that international legislation has never stopped hacktivists before; it has just made them more careful about revealing their location and their principals.
Professional spyware vendors have deep enough pockets to invest in new tools, training, and development. We can expect them to use AI to find new zero-day vulnerabilities and new exploits for largely unpatched vulnerabilities.
As we at Malwarebytes Labs have verified in our own testing, ChatGPT can be used to write ransomware. While this may draw new players to the field, they are not expected to have an immediate impact on the threat level. But the NCSC does expect AI to play a larger role in the development of malware and exploits in the near future.
Since ransomware is currently the most profitable form of malware, and is expected to stay that way, it is likely to see the largest increase in volume. That means the visible part of the cybercrime landscape is unlikely to change dramatically: the numbers may rise and the sophistication of attacks is likely to grow, but the types of malware will probably remain the same.
Summing up the report’s conclusion, NCSC chief executive Lindy Cameron said:
“The emergent use of AI in cyberattacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.”
Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.