It would be hard to miss the hype surrounding ChatGPT. From the moment it became generally available, attackers found it much easier and faster to create credible fake identities and plausible phishing sites. Suddenly, crafting convincing phishing emails became a breeze for anyone leveraging the large language model (LLM), free of the typos and odd phrasing that most users rely on to spot phishing. As artificial intelligence (AI) has become more widely available and more capable, malicious actors have already used it to develop more advanced attack methods. The strongest counterstrategy moving into 2024 is for defenders to leverage AI as well.
In part to get ahead of some of the risks inherent in AI, President Biden issued an executive order (EO) on safe, secure and trustworthy artificial intelligence. While the EO applies more broadly than the role of AI in cybersecurity, it does set a few key initiatives in motion, including:
● Require developers of powerful AI systems to share safety test results and other critical information with the U.S. government, particularly if a model poses “a serious risk to national security, national economic security, or national public health and safety.” Developers must notify the federal government when training such a model and share the results of all red-team safety tests.
● Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology (NIST) will set standards for red-team testing to ensure safety prior to public release. Of particular note, the Department of Homeland Security (DHS) will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. DHS and the Department of Energy (DOE) will also address the threats AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.
● Establish standards and best practices for detecting AI-generated content and authenticating official government content to protect Americans from AI-enabled fraud and deception. The Department of Commerce (DOC) will develop guidance for watermarking to clearly label AI-generated content, as well as guidance for content authentication.
● Establish a cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
● Ensure the United States military and intelligence community use AI safely, ethically and effectively in their missions.
The question is, in part, how quickly the government will take these actions and how widely the private sector will embrace them. Given the speed at which AI has already developed, it is imperative to move quickly to protect citizens in the United States and globally.
AI makes it easy to extract and leverage personal data. That puts everyone’s privacy at risk, and no amount of fines can make up for it. Companies clearly must use data to train AI systems, but the United States still lacks broad data privacy legislation. As AI becomes more powerful, we must protect the personal data of all Americans, particularly children. AI may itself play a role in the creation and use of privacy-preserving techniques, for example by training AI models in a manner that preserves the privacy of the training data. Perhaps the U.S. government will move on this rapidly and well, but attackers may still leverage ill-gotten data from breaches to train their own AI models. And while some AI models ship with safeguards, researchers have already shown that those safeguards can be overridden fairly easily.
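As a rough illustration of what privacy-preserving training can look like, the sketch below applies per-example gradient clipping and Gaussian noise in the spirit of differentially private training (DP-SGD). The toy dataset, clip norm and noise scale are assumptions chosen for readability, not a vetted privacy configuration.

```python
# A minimal sketch of privacy-preserving training: per-example gradient
# clipping plus Gaussian noise, in the spirit of DP-SGD. Dataset, clip norm,
# and noise scale are illustrative assumptions, not a tuned privacy budget.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 examples, 5 features, binary labels (assumed).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(5)
clip_norm = 1.0    # maximum L2 norm allowed for any single example's gradient
noise_sigma = 0.8  # noise multiplier; real deployments derive this from a privacy budget
lr = 0.1

for _ in range(100):
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X          # one gradient row per example
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noise = rng.normal(scale=noise_sigma * clip_norm, size=w.shape)
    w -= lr * (clipped.sum(axis=0) + noise) / len(X)      # noisy, clipped gradient step
```

The clipping bounds how much any single person’s record can influence the model, and the added noise masks what remains, which is the core idea behind training on sensitive data without memorizing it.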
So far, research hasn’t shown that AI is particularly good at detecting AI-generated content; OpenAI, the developer of ChatGPT, shut down its AI text-detection tool over the summer because it simply wasn’t accurate enough. It now plans to introduce a cryptographic watermarking feature to make it easier to catch AI-generated content. That won’t be foolproof either, but it doesn’t mean you shouldn’t put AI to use in your organization.
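To make watermarking concrete, here is a highly simplified sketch of one published approach to statistical text watermarking: a secret key splits the vocabulary into a “green” list that the generator is biased toward, and a detector checks whether a text contains more green tokens than chance would allow. The key, tokenization and threshold below are illustrative assumptions, not OpenAI’s actual design.

```python
# A simplified sketch of keyed "green list" text watermark detection.
# The secret key, whitespace tokenization, and z-score threshold are
# assumptions for illustration only.
import hashlib
import hmac
import math

SECRET_KEY = b"example-watermark-key"   # assumption: known only to generator and detector

def is_green(token: str) -> bool:
    """Assign roughly half the vocabulary to the green list via a keyed hash."""
    digest = hmac.new(SECRET_KEY, token.lower().encode(), hashlib.sha256).digest()
    return digest[0] % 2 == 0

def green_fraction_zscore(text: str) -> float:
    """z-score of the observed green-token count vs. the 50% expected by chance."""
    tokens = text.split()
    if not tokens:
        return 0.0
    greens = sum(is_green(t) for t in tokens)
    n = len(tokens)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# A high z-score (e.g., above ~4) suggests the text came from a generator that
# favored green tokens; ordinary human-written text should hover near zero.
print(green_fraction_zscore("sample text to check for a watermark signal"))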
AI can enhance several aspects of cybersecurity, enabling defenders to stay ahead in the AI game. Here are a few key ways AI can help:
● Threat Detection and Analysis: AI algorithms are adept at processing large volumes of data quickly. They can analyze patterns and detect anomalies that may indicate a cybersecurity threat, such as unusual network traffic or suspicious user behavior. This capability is particularly useful for identifying new and emerging threats that haven’t been cataloged yet (see the anomaly-detection sketch after this list).
● Research and Development: Cybersecurity researchers can leverage AI to process and synthesize large datasets, making it easier to identify patterns and insights.
● Phishing Detection: AI algorithms can analyze email content, headers, and sender details to identify potential phishing attempts, preventing end users from ever seeing (and potentially being fooled by) convincing AI-generated phishing emails (see the classifier sketch after this list).
● Improved Authentication: AI can use biometric logins and behavioral analytics to make unauthorized access to your organization’s assets more challenging.
● Vulnerability Management: AI can continuously scan and analyze your network and systems to identify vulnerabilities in your software and infrastructure.
● Automated Threat Response: Once a threat is identified, AI can automate the response by isolating affected systems, blocking suspicious IP addresses, or patching vulnerabilities (see the response sketch after this list).
● Predictive Analytics: Using historical data, AI may be able to predict future attack patterns.
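For the threat detection item above, here is a minimal sketch of anomaly detection over network-flow features using scikit-learn’s IsolationForest. The feature set (bytes, packets, duration) and contamination rate are illustrative assumptions, not a tuned production model.

```python
# A minimal sketch of anomaly detection on simulated network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes_sent, packets, duration_seconds] (assumed features)
normal_flows = rng.normal(loc=[5_000, 40, 2.0], scale=[1_500, 10, 0.5], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_flows)

# Score new flows: an exfiltration-like transfer should stand out from the baseline.
new_flows = np.array([
    [5_200, 38, 1.9],          # looks like normal traffic
    [900_000, 4_000, 120.0],   # unusually large transfer
])
print(model.predict(new_flows))  # 1 = normal, -1 = flagged as anomalous
```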
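For the phishing detection item, a minimal sketch of a content-based classifier built from TF-IDF features and logistic regression; the tiny labeled dataset is purely illustrative, and a real system would train on large, current corpora and also inspect headers and sender reputation, as the bullet notes.

```python
# A minimal sketch of content-based phishing detection with a toy dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is suspended, verify your password at this link immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "Team meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate (illustrative labels)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

print(classifier.predict(["Please verify your password now to keep your account active"]))
```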
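And for automated threat response, a minimal sketch of one concrete action: blocking a flagged IP address at a Linux host firewall. The iptables rule is a standard command, but the detector hand-off and the choice to block automatically rather than queue for human review are assumptions.

```python
# A minimal sketch of an automated response step: block a flagged IP address.
import ipaddress
import subprocess

def block_ip(ip: str) -> None:
    """Drop all inbound traffic from the flagged address (requires root)."""
    ipaddress.ip_address(ip)  # raises ValueError on malformed input
    subprocess.run(
        ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
        check=True,
    )

# Example: an upstream detection system hands us a flagged address.
flagged = "203.0.113.45"  # documentation-range IP used for illustration
block_ip(flagged)
```

In practice, most teams gate an action like this behind severity thresholds or an analyst approval step, since a false positive here blocks legitimate traffic.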
Security vendors are integrating, and will continue to integrate, AI into existing security tools to enhance their effectiveness, resulting in more accurate and efficient threat detection and management. As 2024 unfolds and AI continues to lead cybersecurity conversations, the key is to choose solutions (and vendors) that align with the EO guidelines, are committed to preserving privacy, and meet the ethical and safety AI standards your organization defines.