Nametag today extended its identity verification platform with the ability to detect and block deepfake attacks created using generative artificial intelligence (AI) technologies.
Announced at Oktane, a conference hosted by Okta, Deepfake Defense will be integrated with multiple identity management platforms, starting with Okta's.
Deepfake Defense uses cryptographic attestation provided by either Apple or Google to first verify that the data being shared comes from a device running authenticated software. It then goes a step further, using an adaptive document verification capability to detect whether, for example, a PDF file has been forged or digitally manipulated.
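Apple's App Attest and Google's Play Integrity services each define their own token formats and verification flows, but the core idea can be sketched with a simplified, hypothetical JWT-style token: the server refuses any submission whose attestation does not verify against the platform's signing key. The claim names, key handling and token format below are illustrative assumptions, not either vendor's actual API.

```python
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# Simulate the platform vendor's attestation signing key. In practice the
# verifier pins the vendor's published public key rather than generating one.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# A device would obtain a token like this from the OS attestation service.
token = jwt.encode(
    {"device_integrity": "MEETS_DEVICE_INTEGRITY", "app_id": "com.example.idapp"},
    private_key,
    algorithm="ES256",
)

def verify_attestation(token: str) -> dict:
    """Refuse to accept identity data unless the attestation verifies."""
    claims = jwt.decode(token, public_key, algorithms=["ES256"])
    if claims.get("device_integrity") != "MEETS_DEVICE_INTEGRITY":
        raise ValueError("device failed the integrity check")
    return claims

print(verify_attestation(token))  # accepted only with a valid signature
```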
Finally, it confirms human likeness, liveness and presence using Spatial Selfie technology, which applies biometrics and sensor data to build a three-dimensional image of a person that is then mapped to a two-dimensional identification such as a driver’s license. That capability prevents cybercriminals from, for example, donning a mask to bypass a verification system.
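Nametag has not published the internals of Spatial Selfie, but the final matching step of any selfie-to-document check can be sketched roughly as a liveness gate (driven by depth and sensor data) followed by a comparison of face embeddings. The function names, threshold and 128-dimensional embeddings below are illustrative assumptions, not Nametag's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(selfie_embedding: np.ndarray,
                id_photo_embedding: np.ndarray,
                liveness_passed: bool,
                threshold: float = 0.8) -> bool:
    # A static photo or a mask should already fail the liveness gate,
    # which relies on depth/sensor data rather than a flat image.
    if not liveness_passed:
        return False
    return cosine_similarity(selfie_embedding, id_photo_embedding) >= threshold

# Embeddings would come from a face-recognition model; random here for demo.
rng = np.random.default_rng(0)
selfie = rng.normal(size=128)
id_photo = selfie + rng.normal(scale=0.05, size=128)  # same face, slight noise
print(faces_match(selfie, id_photo, liveness_passed=True))
```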
Nametag CEO Aaron Painter said that, collectively, those capabilities will thwart the two major types of deepfake attacks: those that rely on fraudulent documents and those that falsely represent specific individuals. Nametag previously developed its identity verification engine to verify, for example, the identity of an employee requesting help desk assistance.
That capability is now being extended to thwart cyberattacks involving documents, selfie photos and videos created by AI tools capable of bypassing existing Know Your Customer (KYC) verification checks.
It’s now only a matter of time before cybercriminals use these tools to perpetrate even more fraudulent activity than they already do. That activity already costs the global economy billions of dollars, and a recent report from Juniper Research predicts that, with the rise of generative AI, those costs will climb to $107 billion by 2029 as more workflows are compromised using deepfakes.
There are other approaches to combating deepfakes, including relying on watermarks to verify documents and using neural processing units (NPUs) to detect anomalies in audio files. However, it may not be possible to apply watermarks pervasively to every document. As for NPUs, it might be years before the next-generation AI PCs equipped with the NPUs needed to run deepfake detection software are widely deployed. In the meantime, the modern digital economy might soon find itself under siege from deepfakes that become simpler to create with each passing day.
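As a rough illustration of the watermark/provenance idea, the sketch below signs a document's bytes at creation and verifies the tag later, so any subsequent manipulation is detectable. It uses a symmetric HMAC for brevity; real provenance schemes such as C2PA content credentials rely on asymmetric signatures and certificate chains, and nothing here reflects a specific product.

```python
import hashlib, hmac, os

SIGNING_KEY = os.urandom(32)  # stand-in for an issuer's signing key

def sign_document(doc: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, doc, hashlib.sha256).digest()

def verify_document(doc: bytes, tag: bytes) -> bool:
    # Any change to the bytes after signing invalidates the tag.
    return hmac.compare_digest(tag, sign_document(doc))

original = b"%PDF-1.7 ... identity document data ..."
tag = sign_document(original)
print(verify_document(original, tag))                 # True
print(verify_document(original + b"tampered", tag))   # False
```

The catch noted above still applies: verification of this kind only helps for documents that carried a watermark or signature in the first place.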
There is little doubt that AI is changing the cybersecurity game in a way many organizations have yet to fully appreciate. Basic practices such as using strong passwords, enabling multifactor authentication (MFA) and providing security training can still go a long way toward thwarting cyberattacks, but as those attacks grow more sophisticated in the age of AI, additional techniques and technologies will clearly be needed to ensure digital processes and transactions are legitimate. The issue is that cybercriminals are mastering the technologies needed to launch these attacks far faster than cybersecurity teams relying on legacy technologies can thwart them.