The Golden Age of Impersonation: The Dual Role of AI in Cyber Attacks & Cyber Defense
2024-08-22 | securityboulevard.com

 A hacker automating the creation of compelling, realistic content that mimics or impersonates legitimate brands, and even people such as company executives, is an overwhelming scenario for security teams. The ability to launch these attacks at scale adds even more complexity. Impersonation scams often begin with phishing attempts to steal sensitive data, and today's attacks can be executed through a myriad of communication channels, including email, social media and mobile applications.


In 2023, our researchers scanned more than a billion URLs and reported a steep year-over-year increase in phishing scams (approximately 94%), with impersonation scams and credential theft top of mind for attackers. These trends continued into the first half of 2024, with data showing that attacks are happening faster, across a broader range of channels, and are no longer limited to traditional vectors such as email.

Hackers have become adept at creating fake social media accounts, sending text messages with dangerous links, and setting up malicious domains, all at scale with the use of AI.  

The impact is being felt by all. In 2023, the Federal Trade Commission (FTC) received more than 330,000 reports of business impersonation scams and nearly 160,000 reports of government impersonation scams. Combined, reported losses to these impersonation scams topped $1.1 billion for the year. 

We have arrived at a “hockey stick” moment: a sharp spike in attacks aided by new artificial intelligence (AI) tools. Here’s the reality for brands and businesses: more than 38,000 new scam pages were created daily during the first half of 2024, driven primarily by threat actors’ desire to fool the masses with ever more sophisticated and complex lures in pursuit of financial gain.


AI in the Hands of Cybercriminals  

Just as every enterprise wrestles with how best to leverage these modern technologies to service customers, attackers are doing the same. It’s AI against AI.  

Cyberattackers increasingly leverage large language models (LLMs), AI systems that use deep learning techniques and large data sets to understand, summarize, generate and predict new content.

Cybercriminals use LLMs to carry out impersonation fraud. LLMs give attackers the tools they need to create convincing, contextually appropriate content that can trick targets. For example, LLMs can generate phishing emails that mimic the writing style and language patterns of a known contact or organization; taking it a step further, attackers can add personal information gathered from data breaches. By automating the creation of these scam messages, attackers can quickly scale their operations to target more individuals or organizations simultaneously. This efficiency and scalability allows attackers to conduct widespread, persistent campaigns, posing a significant challenge for security teams trying to stay in step with a rapidly evolving threat landscape.

Traditional security systems often struggle to keep up with the rapidly changing strategies of cybercriminals, who are using advanced technology, including AI and LLMs, to carry out attacks. According to Gartner Research, through 2025 generative AI will increase the cybersecurity resources required to secure it, causing more than 15% incremental spending on application and data security.

The prevalent types of impersonation scams are brand impersonation and executive spoofing. Brand impersonation is big business and a significant threat: cybercriminals are adept at creating fake websites, social media accounts and email addresses that closely resemble a brand’s official channels. Victimized companies experience monetary loss, reputational damage and loss of customer trust. New data from the FTC shows that Best Buy (and its tech support brand, Geek Squad) and Amazon topped the list of most impersonated companies, with PayPal third.
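To make this concrete, here is a minimal sketch of lookalike-domain scoring, one common building block in brand-impersonation detection; the brand list, threshold and candidate domains are illustrative assumptions, not details from any particular vendor.

```python
# A toy lookalike-domain check using only the standard library.
from difflib import SequenceMatcher

# Example brands drawn from the FTC's most-impersonated list above.
PROTECTED_BRANDS = ["bestbuy", "geeksquad", "amazon", "paypal"]

def lookalike_score(domain: str) -> tuple[str, float]:
    """Return the protected brand most similar to the domain's first label."""
    label = domain.lower().split(".")[0]   # e.g. "paypa1" from "paypa1.com"
    best = max(PROTECTED_BRANDS,
               key=lambda b: SequenceMatcher(None, label, b).ratio())
    return best, SequenceMatcher(None, label, best).ratio()

for candidate in ["paypa1.com", "amaz0n.net", "example.org"]:
    brand, score = lookalike_score(candidate)
    if score > 0.7:  # threshold is a tunable assumption
        print(f"{candidate}: possible impersonation of '{brand}' (score {score:.2f})")
```

Real systems layer this kind of string similarity with homoglyph detection, domain-age data and visual comparison of page content, but the core idea is the same: score new registrations against protected names at scale.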

Executive impersonation is an email deception tactic in which cybercriminals impersonate organizational executives to trick employees into revealing sensitive information or carrying out fraudulent transactions. According to a 2024 Trustmi report based on a survey of finance professionals, 22% of respondents have already been targeted by AI-driven deepfake and executive impersonation attacks. With the rise of generative AI, attackers are deploying increasingly sophisticated cyberattacks that pose new challenges for finance and cybersecurity teams.

AI for Good: Defending Organizations Against Impersonation Scams  

As noted, AI adoption has been widespread among those with malicious intent. Flip the coin, and the same technology is a positive force for cybersecurity defense.

LLMs used for good can rapidly adapt to and learn from the new patterns and techniques seen in multi-channel attacks, enabling security teams to get ahead of cyber threats. Machine learning algorithms can detect malicious patterns in real time, enabling businesses to spot, and often prevent, attacks across the many channels mentioned, whether email, social media, messaging apps or websites. LLMs can also scan the internet at speed to accurately identify the intent of a given page or activity.
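As a toy illustration of the machine-learning side of this, the sketch below trains a tiny text classifier to label page or message text by intent; the training samples and labels are placeholder assumptions, and a real system would learn from large labeled corpora and far richer signals.

```python
# A minimal intent classifier over message/page text using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "verify your account now or it will be suspended",   # phishing-style
    "urgent: confirm your payment details immediately",  # phishing-style
    "our quarterly newsletter is now available",         # benign
    "thanks for your order, track your shipment here",   # benign
]
labels = ["phishing", "phishing", "benign", "benign"]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["please verify your account details immediately"]))
```

In practice, a classifier like this would sit alongside URL reputation, sender history and visual-similarity signals rather than operating alone.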

Let’s face it: phishing scams succeed as a route to impersonation attacks because we are all human. According to the 2024 Verizon Data Breach Investigations Report, the human element was a component of 68% of breaches in this year’s dataset. Employee education is still critical, and employees should be taught to stay alert for suspicious emails that may be a threat. It seems to have worked for the luxury sports car brand Ferrari: in a recent attempt to defraud the company, scammers impersonating CEO Benedetto Vigna were reportedly thwarted by a vigilant executive.

Also critical is a security team’s ability to leverage AI to identify threats across email, social media and the dark web. How do LLMs help? Trained LLMs can continuously learn from content, including intent signals and past decisions, enabling the technology to adapt and make intelligent judgments about threat trends. The result: security teams have the best chance of preventing attacks that exploit human error from impacting their business.
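A hedged sketch of this LLM-as-classifier idea follows, using the OpenAI Python client purely as an example; the model name, prompt and labels are assumptions, and any LLM provider could fill this role.

```python
# Using an LLM to label a suspicious message's intent (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_intent(message: str) -> str:
    """Ask the model whether a message looks like impersonation or phishing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Label the user's message as PHISHING, IMPERSONATION, "
                        "or BENIGN. Reply with the label only."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_intent("Hi, it's your CEO. Wire $20,000 to this account today."))
```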

AI can also reduce the time it takes to remove phishing threats. In a traditional scenario, when a fraudulent site is discovered, taking it down is slow and manual: proving the site is bogus and then working with a hosting provider to resolve it can drag on.

Through AI, businesses can automate much of this takedown process and remove digital threats far faster.
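As one illustration, the sketch below automates the first step of a takedown: finding a domain’s abuse contact via RDAP, a public registration-data protocol. RDAP response structures vary by registry, so treat this as a best-effort example rather than a production pipeline.

```python
# Look up a domain's abuse contact via the rdap.org redirector.
import requests

def find_abuse_email(domain: str) -> str | None:
    data = requests.get(f"https://rdap.org/domain/{domain}", timeout=10).json()

    def walk(entities):
        # Entities can be nested (registrar -> abuse contact), so recurse.
        for entity in entities or []:
            if "abuse" in entity.get("roles", []):
                # vCard entries look like ["email", {...}, "text", "abuse@..."]
                for entry in entity.get("vcardArray", [None, []])[1]:
                    if entry[0] == "email":
                        return entry[3]
            found = walk(entity.get("entities"))
            if found:
                return found
        return None

    return walk(data.get("entities"))

# With the contact in hand, a takedown system would file evidence and an
# abuse report automatically instead of waiting on manual back-and-forth.
print(find_abuse_email("example.com"))
```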

Security teams are under increased pressure to keep their organizations and employees safe, and pre-emptive steps have never been more important. While cyberattackers are using AI with ill intent, it can also be a powerful tool when integrated with other security measures as part of a proactive security approach.


Source: https://securityboulevard.com/2024/08/the-golden-age-of-impersonation-the-dual-role-of-ai-in-cyber-attacks-cyber-defense/