
Organizations rely on biometrics to counter deepfakes

The risk from deepfakes is growing: according to iProov, 47% of organizations have encountered a deepfake, and 70% believe that deepfake attacks created with generative AI tools will have a significant impact on their organization.

Threat from deepfakes

The perception of AI is cautiously hopeful: while 68% of organizations believe it creates new cybersecurity threats, 84% believe it also helps protect against those threats, according to a new global survey of technology decision makers by iProov, which also found that 75% of the solutions being implemented to combat the deepfake threat are biometric.

While companies are realizing the efficiency gains that AI can bring, threat technology developers and malicious actors are reaping the same benefits. 73% of companies are implementing solutions to address the deepfake threat, but confidence is low: 62% worry that their organization is not taking the threat seriously enough.

Deepfakes are a real and present threat

The survey shows that organizations recognize the threat of deepfakes as real and present. Deepfakes can be used against people in numerous harmful ways, including defamation and reputational damage, but perhaps the most quantifiable risk is financial fraud. They can be used to commit large-scale identity fraud by impersonating individuals to gain unauthorized access to systems or data, initiate financial transactions, or deceive others into sending money, on the scale of the recent deepfake scam in Hong Kong.

The stark reality is that deepfakes pose a threat in any situation where a person needs to remotely confirm their identity, but respondents fear that organizations are not taking this threat seriously enough.

“We’ve been observing deepfakes for years, but what’s changed in the last six to 12 months is the quality and ease with which they can be created, and the potential to cause great harm to organizations and individuals alike,” said Andrew Bud, CEO of iProov. “Perhaps the most overlooked use of deepfakes is the creation of synthetic identities that, because they are not real and have no owner to report their theft, go largely undetected while they wreak havoc and defraud organizations and governments of millions of dollars.”

“And contrary to what some may believe, it is now impossible to detect high-quality deepfakes with the naked eye. Although our research found that half of the organizations surveyed have encountered deepfakes, the real number is likely much higher, as most organizations are not properly equipped to detect them. With the threat landscape evolving rapidly, organizations cannot afford to ignore the resulting attack vectors, especially as facial biometrics have proven to be the most resilient solution for remote identity verification,” adds Bud.

Generative AI is the basis for all types of deepfakes

The study also reveals some fairly nuanced perceptions of deepfakes on a global level. Organizations from Asia-Pacific (51%), Europe (53%), and Latin America (53%) are significantly more likely to report having encountered a deepfake than organizations from North America (34%). Organizations from Asia-Pacific (81%), Europe (72%), and North America (71%) are significantly more likely than organizations from Latin America (54%) to believe that deepfake attacks will impact their organization.

Given the ever-changing threat landscape, the tactics used to penetrate organizations are often similar to those used in identity fraud. Not surprisingly, deepfakes now tie for third on the list of most frequently cited concerns among survey respondents: password breaches (64%), ransomware (63%), and phishing/social engineering attacks and deepfakes (both 61%).

There are many different types of deepfakes, but they all have one thing in common: they are created with generative AI tools. Organizations increasingly perceive generative AI as innovative, secure, and reliable, and as a help in solving problems. They view it as ethical rather than unethical and believe it will have a positive impact on the future. And they are acting on that belief: only 17% have not increased their budgets for programs that address AI risk, and most have implemented policies governing the use of new AI tools.

Organizations prefer facial and fingerprint biometrics for most tasks

Biometrics have emerged as a preferred solution among organizations to address the threat of deepfakes. Organizations reported that they are most likely to use facial and fingerprint biometrics, but the type of biometrics may vary depending on the task.

For example, the study found that companies consider facial recognition to be the most suitable additional authentication method to protect against deepfakes during account access/login, changes to personal account information, and typical transactions.

The study clearly shows that companies view biometric security as a matter of expertise: 94% agree that a biometric security partner should be more than just a software product.

The organizations surveyed said they are looking for a solution provider that evolves with the threat landscape. Continuous monitoring (80%), multimodal biometrics (79%), and liveness detection (77%) top their requirements for adequately protecting biometric solutions against deepfakes.
