Has generative AI contributed to or detracted from cybersecurity? That depends on who you ask–or perhaps on which side of the fence you’re sitting. While the technology has facilitated the creation of a whole new class of advanced threat detection and defense solutions, it has also enhanced the war chests of cyberattackers–adding hyper-realistic deepfakes for phishing campaigns and a wealth of other malicious toys.
So, which way does the scale tip overall? Let’s take a deep dive into both sides of the equation.
Generative AI is significantly enhancing organizational cybersecurity in numerous ways, including:
Augmenting security professionals’ reach – Security assistants like CrowdStrike’s Charlotte or Trend Micro’s Companion help magnify the reach and efficacy of analysts and other security professionals. Leveraging generative AI’s natural language capabilities, these assistants offer professionals of all skill levels readily available insights and powerful threat-hunting support.
Automated patch management – Patching carries tremendous operational and human-resources overhead, yet unpatched vulnerabilities remain the root cause of over 60% of cyberattacks. Generative AI-based automated patch management can more effectively prioritize vulnerabilities by patch type, system and endpoint–enabling more efficient and more timely patching of the most pressing vulnerabilities.
Hardening endpoints – Generative AI can analyze endpoint-generated data at scale, identifying contextual insights that enable cybersecurity platforms to better learn and respond to attack patterns. Leveraging these insights, endpoints can actually self-heal–turning themselves on or off, rechecking OS and application versions and autonomously resetting to an optimized, more secure configuration.
Advanced threat detection – Generative AI can analyze the massive amounts of data that enterprises produce–from network traffic to system logs and user behavior. It can identify patterns and anomalies that may be associated with cyberthreats. Once a threat is identified, solutions can recognize and classify malware and other malicious code, including previously unseen variants. By proactively identifying and mitigating threats, generative AI helps reduce the likelihood of a successful attack.
Automated vulnerability assessment – Generative AI can automate the previously complex and manual process of identifying vulnerabilities in software, networks and systems. Algorithms can generate test cases and simulate attack scenarios, then pinpoint weaknesses and provide recommendations for remediation.
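The detection idea described above–flagging anomalies in network traffic, system logs or user behavior–can be illustrated with a simple statistical baseline. The sketch below is a hypothetical, deliberately minimal example (real AI-driven platforms use far richer models than a z-score): it flags event counts that deviate sharply from a historical norm.

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate strongly from a historical baseline.

    baseline: past counts of some event (e.g. daily failed logins per user).
    observed: new counts to score against that baseline.
    Returns the observations whose z-score exceeds the threshold.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against zero variance
    return [x for x in observed if abs(x - mean) / stdev > threshold]

# A typical day sees 1-3 failed logins; a sudden burst of 40 stands out.
history = [1, 2, 2, 3, 1, 2, 3, 2, 1, 2]
print(flag_anomalies(history, [2, 3, 40]))  # -> [40]
```

Where generative AI adds value is everything around this core: learning what "normal" looks like across thousands of correlated signals, and explaining a flagged anomaly to an analyst in natural language.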
Generative AI presents numerous challenges to security professionals and can make life far easier for cyberattackers. Notably, it raises the risk of:
Data poisoning – Forrester predicts that generative AI will be increasingly used by threat actors to poison data and purposefully cause concept drift to impact the efficacy of data-driven solutions like cybersecurity platforms or e-commerce recommendation engines. These attacks are subtle, difficult to detect, and can dramatically impact security and revenues.
Data privacy and confidentiality violations – According to a recent report by cybersecurity VC Team8, the use of generative AI tools in the enterprise can frequently result in the exposure of sensitive information, intellectual property, source code, trade secrets and more. When employees input such information into tools like ChatGPT, it is at risk of exposure.
Deepfake attacks – Readily available online generative AI tools can create extremely realistic deepfake content–manipulated images, fake video and audio and, perhaps most worryingly, simulated human interactions. Generative AI could conduct highly realistic conversations via WhatsApp, SMS or email to build rapport with targets and ultimately obtain sensitive information or carry out social engineering attacks like phishing.
Identifying and exploiting vulnerabilities – Generative AI is highly effective at analyzing and writing code. Threat actors can use these tools to identify and exploit previously unknown vulnerabilities in commonly used enterprise software, create dangerous new malware, or launch distributed denial-of-service (DDoS) and similar attacks.
Threat amplification – Generative AI can be leveraged to amplify threats once a breach has occurred. Attackers can modify code at scale, for example, gaining greater control after they achieve an initial foothold. What’s more, it can be used to conduct reconnaissance at scale once attackers breach a network–scanning massive amounts of corporate data to identify employees, relationships, and assets that can be exploited for profit or data acquisition purposes.
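The data-poisoning risk above hinges on concept drift: poisoned samples gradually pull a model's input distribution away from its training baseline. A minimal sketch of one defensive countermeasure–monitoring incoming data for distribution shift before it reaches retraining–might look like the following. This is an illustrative, hypothetical check, not a production drift detector:

```python
import statistics

def drift_score(reference, incoming):
    """Rough drift signal: how far the incoming window's mean has shifted
    from the reference window, measured in reference standard deviations."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference) or 1.0  # guard against zero variance
    return abs(statistics.mean(incoming) - ref_mean) / ref_std

# Feature values drawn from the normal operating range...
reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 1.1, 0.9, 1.0]
# ...versus a window where poisoned samples drag the distribution upward.
poisoned = [1.0, 1.8, 2.1, 1.9, 2.0]

score = drift_score(reference, poisoned)
print(f"drift: {score:.1f} reference stdevs")
if score > 3.0:  # hypothetical alerting threshold
    print("possible poisoning or concept drift - quarantine before retraining")
```

As the article notes, real poisoning attacks are subtle; in practice defenders layer checks like this across many features and windows, precisely because a single statistic is easy for a patient attacker to stay under.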
Security professionals rightfully have a love/hate relationship with generative AI. On the one hand, it has contributed positively to cybersecurity by augmenting the limited reach of security professionals, automating patch management, hardening endpoints, enhancing threat detection, automating vulnerability assessments and more. On the other hand, it is aiding cyberattackers by raising the risk of data poisoning, privacy violations, deepfake attacks, vulnerability exploitation and threat amplification.
Generative AI empowers security professionals while also fueling the arsenal of cybercriminals. While the technology offers significant benefits, security professionals must remain vigilant–understanding and appreciating both the benefits and risks of this world-changing technology.