
Is that voice real or AI? This startup says it can tell the difference

(Bloomberg) — The latest wave of artificial intelligence can mimic the voice of almost anyone — the president, a relative or a bank customer.

That’s both the problem and the opportunity that 10-year-old audio technology startup Pindrop Security Inc. is addressing. The company has long provided voice authentication services for banks and insurance companies. Last week, it launched a new product that it says can detect AI-generated speech in both phone conversations and digital media. It’s marketing the feature to media organizations, government agencies and social networks.

Pindrop is one of a growing number of companies seeking to combat the threat of AI counterfeiting and fraud, among them Protect AI Inc. and Sam Altman’s Tools For Humanity Corp., the company behind Worldcoin, which identifies people using eye scans.

Pindrop, which specializes in audio, made headlines in January when it traced the source of a deepfake robocall that imitated President Joe Biden and urged people not to vote in the New Hampshire primary. The scale of such attacks is growing: The company said it has seen a more than fivefold increase in attempted attacks targeting its customers since last year.

“It’s pretty easy to combine a voice clone and spoofing software to effectively sound like someone else on the phone,” said Rachel Tobac, CEO of SocialProof Security.

Pindrop has raised money from a group of notable investors, including Andreessen Horowitz and GV. This year, the company raised $100 million in debt from Hercules Capital Inc. The company’s current valuation is $925 million.

Co-founder Vijay Balasubramaniyan started thinking about the problem of audio fraud after trying to buy a suit on a trip to India during his PhD studies. His US bank called him at around 3 a.m. to confirm the transaction and asked for his Social Security number. Unable to verify who the caller was, and with the bank offering little to prove its identity, he hung up.

“This is crazy,” thought Balasubramaniyan on his flight back to the United States. “Telephones have been around since Alexander Graham Bell, and we still have no way of knowing who is on the other end of the line.” (He never got the suit.)

Pindrop’s technology analyzes audio data to determine whether a voice is genuinely human or machine-generated. Humans speak by making certain sounds that form words, Balasubramaniyan said. But machines don’t produce sound the way humans do, and they occasionally generate variations that exceed the physical limits of the human vocal tract. Because each second of a voice recording contains 8,000 samples, there are thousands of places where an AI can make a mistake.

“The more audio data you get, the more you notice these anomalies,” said Balasubramaniyan, adding that the detection software is language-independent because all people produce sounds in the same way.
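The intuition above can be sketched in code. The example below is purely illustrative and is not Pindrop’s actual method: it treats one second of telephone-quality audio as 8,000 samples and flags sample-to-sample jumps too abrupt for a smoothly moving human vocal tract, a crude stand-in for the kind of physical-limit anomalies Balasubramaniyan describes. The threshold and the glitch injection are invented for demonstration.

```python
import math

SAMPLE_RATE = 8000  # telephone-quality audio: 8,000 samples per second


def count_anomalies(samples, max_jump=0.5):
    """Count sample-to-sample jumps larger than max_jump.

    A toy proxy for transitions that a human mouth could not
    physically produce; real detectors use far richer features.
    """
    return sum(1 for a, b in zip(samples, samples[1:]) if abs(b - a) > max_jump)


# One second of a smooth 200 Hz tone: consecutive samples change
# by at most about 0.16, well under the threshold.
tone = [math.sin(2 * math.pi * 200 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]

# The same tone with a few abrupt, machine-like discontinuities
# injected near waveform peaks (indices chosen for illustration).
glitched = list(tone)
for i in (1010, 4010, 7010):
    glitched[i] = -glitched[i]

print(len(tone))                    # → 8000 samples in one second
print(count_anomalies(tone))        # → 0
print(count_anomalies(glitched))    # → 6 (two large jumps per glitch)
```

Even this toy check shows why more audio helps: each additional second contributes thousands more samples where a synthetic voice can betray itself.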

The company claims its new tool can identify AI-generated audio with 99% accuracy. Still, the limits of AI detection are debated across the industry. For teachers, researchers and social media users, spotting AI-generated text and images has become a vexing problem as the technology advances. When OpenAI released a tool in March that can mimic people’s voices, the company suggested in a blog post that businesses phase out voice-based authentication for access to bank accounts and other sensitive information.

John Chambers, former chief executive of Cisco Systems Inc., is a board member at Pindrop and touts voice identification as an unusually secure form of online authentication. Chambers invested in the startup through his firm JC2 Ventures. “Voice is going to be the primary method of identification in cybersecurity in the future,” he said. When voice is coupled with biometrics and data about the device being used, “it’s going to be almost impossible for someone to completely crack that,” he said.

Some in the industry have raised concerns about the proliferation of AI companies built to combat AI problems. Unless laws are passed to reduce the amount of personal data available online, the industry could find itself in a perpetual battle between good and bad AI, said James E. Lee of the Identity Theft Resource Center.

As security technology evolves, so will the threats. It’s possible that malicious actors could train an algorithm to bypass the controls that companies like Pindrop use to identify deepfakes, said Andrew Grotto, a cybersecurity policy expert at Stanford University. “It ends up being this arms race, this cat-and-mouse game between the defenders and the threat actors,” Grotto said.

©2024 Bloomberg L.P.
