Biometrics: A Flash Point in AI Regulation

According to proprietary verification data from Onfido (now part of Entrust), deepfakes rose 3,100% from 2022 to 2023. And with the increasing availability of deepfake software and continuing improvements in AI, the scale and sophistication of these attacks are expected to intensify further. As it becomes harder to distinguish legitimate identities from deepfakes, AI-enabled biometrics can offer consumers, citizens, and organizations much-needed protection from bad actors, while also improving overall convenience and experience. Indeed, AI-enabled biometrics has ushered in a new era of verification and authentication. So, with such promise, why is biometrics such a flash point in AI regulatory discussions?

Like the proverb that warns “the road to Hell is paved with good intentions,” the unchecked development and use of AI-enabled biometrics may have unintended – even Orwellian – consequences. The Federal Trade Commission (FTC) has warned that the use of AI-enabled biometrics comes with significant privacy and data concerns, along with the potential for increased bias and discrimination. The unchecked use of biometric data by law enforcement and other government agencies could also infringe on civil rights. In some countries, AI and biometrics are already being used for mass surveillance and predictive policing, which should alarm any citizen.

The very existence of mass databases of biometric data is sure to attract all types of malicious actors, including nation-state attackers. In a critical election year, with close to half the world’s population headed to the polls, biometric data is already being used to create deepfake video and audio recordings of political candidates, swaying voters and threatening the democratic process. To help address these and other concerns, the pending EU Artificial Intelligence Act bans certain AI applications, including biometric categorization and identification systems based on sensitive characteristics and the untargeted scraping of facial images from the web or CCTV footage.

The onus is on us … all of us

Legal obligations aside, biometric solution vendors and users have a duty of care to humanity to help promote the responsible development and use of AI. Maintaining transparency and obtaining consent in the collection and use of biometric data at all times is crucial. Training AI models on diverse data and auditing them regularly to mitigate the risk of unconscious bias are also vital safeguards. Still another is adopting a Zero Trust strategy for the collection, storage, use, and transmission of biometric data. After all, you can’t replace your palm print or facial ID the way you can a compromised credit card. The onus is on biometric vendors and users to establish clear policies for the collection, use, and storage of biometric data and to give employees regular training on how to use such solutions and how to recognize potential security threats.
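
To make the bias-audit safeguard concrete, here is a minimal sketch of what a recurring audit might compute: false match and false non-match rates broken out by demographic group, with groups flagged when their error rates drift disproportionately above the best-performing group. This is an illustrative example only, not Entrust's methodology; the field names, data layout, and disparity threshold are assumptions.

```python
# Minimal sketch of a per-group bias audit for a biometric verification system.
# Assumes you have logged, for each verification attempt, a demographic group label,
# whether the pair was genuine (same person) or an impostor, and the accept/reject
# decision. Field names and the disparity threshold are illustrative assumptions.

from collections import defaultdict


def audit_by_group(attempts, max_ratio=1.5):
    """attempts: iterable of dicts with keys 'group', 'genuine' (bool), 'accepted' (bool).
    Returns per-group false match rate (FMR) and false non-match rate (FNMR),
    flagging groups whose rate exceeds max_ratio times the best group's rate."""
    stats = defaultdict(lambda: {"impostor": 0, "false_match": 0,
                                 "genuine": 0, "false_non_match": 0})
    for a in attempts:
        s = stats[a["group"]]
        if a["genuine"]:
            s["genuine"] += 1
            if not a["accepted"]:
                s["false_non_match"] += 1  # genuine user wrongly rejected
        else:
            s["impostor"] += 1
            if a["accepted"]:
                s["false_match"] += 1      # impostor wrongly accepted

    report = {}
    for group, s in stats.items():
        fmr = s["false_match"] / s["impostor"] if s["impostor"] else 0.0
        fnmr = s["false_non_match"] / s["genuine"] if s["genuine"] else 0.0
        report[group] = {"FMR": fmr, "FNMR": fnmr}

    if not report:
        return report

    # Flag groups whose error rate is disproportionately high relative to the best group.
    for metric in ("FMR", "FNMR"):
        best = min(r[metric] for r in report.values())
        for r in report.values():
            r[f"{metric}_flagged"] = best > 0 and r[metric] > max_ratio * best
    return report
```

Run on a regular cadence (for example, after each model update or quarterly), a report like this gives auditors a simple, repeatable signal that one demographic group is being rejected or spoofed more often than others, which can then trigger retraining on more diverse data or a threshold review.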


It’s a brave new world. AI-generated deepfakes and AI-enabled biometrics are here to stay. Listen to our podcast episode on this topic (link TBD) for more information on how to best navigate the flash points in AI and biometrics.


*** This is a Security Bloggers Network syndicated blog from Entrust Blog authored by Jenn Markey, Aled Lloyd Owen. Read the original post at: https://entrustblog.wpengine.com/2024/04/biometrics-a-flash-point-in-ai-regulation/

