StrongestLayer today added an artificial intelligence (AI) tool to its email protection platform that evaluates the provenance of messages in real time to validate their legitimacy.
Company CEO Alan LeFort said AI Advisor takes advantage of a reasoning engine based on large language models (LLMs) that is able to triangulate the source of a message and other indicators, such as social media profiles, to assess whether an email is likely a phishing attack.
Designed to run as a plug-in for Microsoft Outlook and Google's Gmail, AI Advisor will, for example, assess how long the source of a message has existed and then assign a risk score to any message received for the first time from an unknown sender, said LeFort.
That risk score is produced by three LLMs: one has been trained to make a case for the legitimacy of an email, a second builds a case against it, and a third then acts as a judge, weighing the arguments made by the first two.
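The advocate/adversary/judge pattern described above can be sketched in a few lines of Python. This is an illustrative toy only: the `call_llm` function is a stand-in for a real LLM API call, and the scoring heuristics are invented for the example; StrongestLayer's actual prompts, models, and scoring logic are not public.

```python
def call_llm(role: str, email: dict) -> str:
    """Stand-in for a role-specific LLM call; returns a short argument string.
    A real implementation would send a role-specific prompt to a hosted model."""
    if role == "advocate":
        known = "established sender" if email["sender_age_days"] > 365 else "new sender"
        return f"Argue legitimate: {known}, domain {email['domain']}."
    if role == "adversary":
        flags = []
        if email["sender_age_days"] < 30:
            flags.append("recently registered domain")
        if email["urgent"]:
            flags.append("urgency cues")
        return "Argue phishing: " + (", ".join(flags) or "no strong indicators")
    raise ValueError(role)

def judge(email: dict) -> float:
    """Third model acts as judge: weighs both arguments into a 0-1 risk score.
    Here the 'judgment' is a simple keyword heuristic, purely for illustration."""
    pro = call_llm("advocate", email)   # case for legitimacy
    con = call_llm("adversary", email)  # case against
    score = 0.0
    if "recently registered domain" in con:
        score += 0.5
    if "urgency cues" in con:
        score += 0.3
    return min(score, 1.0)

suspicious = {"sender_age_days": 3, "domain": "examp1e.com", "urgent": True}
benign = {"sender_age_days": 2000, "domain": "example.com", "urgent": False}
print(judge(suspicious))  # high risk
print(judge(benign))      # low risk
```

The key design idea is that the judge never sees the raw email in isolation; it rules on two competing arguments, which is what distinguishes this reasoning approach from single-pass classification.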
That reasoning capability goes far beyond the comparatively simple AI pattern-matching techniques that are currently being relied on to identify potential email threats, said LeFort.
That capability is now crucial because, as phishing attacks have become more sophisticated in the age of AI, it has become too difficult for humans to identify fraudulent messages, he noted.
In fact, most of the emails that humans suspect of being phishing attacks turn out to be legitimate, said LeFort. Cybersecurity teams, unfortunately, are wasting a lot of time investigating emails that AI Advisor could easily validate in real time, he added.
According to research conducted by StrongestLayer, security teams, on average, waste more than 160 analyst hours a quarter investigating legitimate emails, with false positive rates reaching 60-70% because employees are not able to distinguish between a phishing lure and a legitimate email. AI Advisor reduces those false positive rates to less than 1%, said LeFort.
Once a legitimate email is validated, StrongestLayer also enables cybersecurity teams to share security tips that are much more effective than a 15 to 30-minute training video, noted LeFort.
It’s not clear to what degree cybercriminals are leveraging AI to craft better phishing attacks, but as advances continue to be made, the social engineering tactics and techniques are evolving. Cybercriminals are becoming much more adept at targeting specific individuals using personal information they can now much more easily aggregate from across the web. The days when cybercriminals would launch the same email, complete with misspellings, to thousands of potential victims are coming to a close as the cost of launching more targeted campaigns continues to drop.
Each cybersecurity team will need to determine how best to apply AI to combat these threats, but it's clear that an arms race is underway. In fact, cybercriminals are currently enjoying what might be considered a first-mover AI advantage. Longer term, however, defenders are likely to benefit far more from AI, as many tedious manual tasks that have taken far too long to complete are increasingly automated.