Biometrics

Threat from deepfakes in perspective

by Mark Rowe

Voice biometrics can tackle the rise of deepfakes, writes Brett Beranek, pictured, Vice President & General Manager, Security & Biometrics Line of Business at the product firm Nuance Communications.

First it was imitations of Tom Cruise. Then it was imitations of Ukrainian President Volodymyr Zelenskiy. Deepfake videos very quickly went from a bit of online fun to something potentially quite scary.

As we hear more and more about this technology and see with our own eyes how convincing it can be, concerns are naturally rising about how it could be used to fool biometric security measures and commit fraud. After all, if a fraudster can recreate a customer’s face and voice as accurately as we’ve seen online, couldn’t they trick their way into biometrically secured accounts?

The short answer is no: thankfully it’s not that simple, particularly when it comes to voice biometrics. That’s because of how the technology has evolved over the last ten years, and how it differs from other biometric technologies, making it one of the most secure ways to protect your customers from fraud.

The difference in voice

While other characteristics used in biometrics are pretty static, in that they don’t change throughout the day, our voices are very different. The way we sound when we wake up is quite different to how we sound by lunchtime, and that’s before you consider how we can change our voice depending on who we are talking to — either consciously or subconsciously.

With such variations in how we sound, you need to analyse a lot more data points to be sure you can confidently identify the human talking. However, having the power to perform that analysis quickly and at scale is no mean feat, and hasn’t always been available.

Voice biometrics have their origins in forensic science, when law enforcement would tap the phones of criminals to gather evidence. Conversations needed to be sufficiently long to ensure they had enough material to work with, and then they needed the time to perform the analysis. It was a lengthy process that’s worlds apart from the secure, seamless customer authentication factor it is today.

That’s thanks to the creation of deep neural networks, which means we don’t need huge scripts of text or hours of analysis in order to identify the person speaking. This technology can get all the data points it needs from as little as half a second of natural speech, without even requiring a specific passphrase.
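As a rough illustration of the general idea (a toy sketch, not Nuance’s actual system), modern speaker verification typically has a neural network map a short speech sample to a fixed-length embedding, or “voiceprint”, then compares it to the one captured at enrolment with a similarity score. The vectors and the 0.8 threshold below are invented for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, sample, threshold=0.8):
    """Accept the caller if their voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled, sample) >= threshold

# Toy three-dimensional embeddings; real systems derive hundreds of
# dimensions from the raw audio.
enrolled = [0.9, 0.1, 0.4]
same_speaker = [0.85, 0.15, 0.38]
impostor = [0.1, 0.9, 0.2]

print(verify(enrolled, same_speaker))  # similar voiceprint -> True
print(verify(enrolled, impostor))      # dissimilar voiceprint -> False
```

Because the comparison happens between compact embeddings rather than raw recordings, it can be done in a fraction of a second, which is what makes passive, passphrase-free authentication practical.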

Accessible to all

While the complexity of this technology once meant it was reserved for only the biggest organisations, more recently, the production of accessible software built around these algorithms has opened it up for use by much smaller businesses too, including regional and community banks, and credit unions. Now any business can make this a part of their authentication process, and with confidence.

Of course, criminals have always tried their hand, and always will, looking for cracks in the algorithms that would let them exploit voice-based security systems. What they often don’t realise is that the companies producing this technology are working hard to stay one step ahead.

One of the earliest concerns was that a criminal might try to use a recording of someone’s voice to trick the technology. So of course, the technology was made so that voice biometrics could pick up the difference between a live human voice and an audio file.

However, with the deepfake phenomenon and the ability to synthesise voices becoming more accessible and powerful, the same deep neural networks that unlocked the potential of voice biometrics have been tapped into again to keep fraudsters at bay.

Keeping it in perspective

That’s because when someone attempts to synthesise a voice, there are always very subtle blips and anomalies in the synthetic speech that can mark it out as a deepfake. Voice biometric technology is built to detect these fraudulent attempts.
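In practice (again a purely illustrative sketch, not the vendor’s real pipeline), anti-spoofing is usually a second model that scores how “synthetic” the audio sounds, and a call only passes if it clears both the speaker match and the liveness check. The scores and thresholds below are made up for the example:

```python
def authenticate(speaker_score, spoof_score,
                 speaker_threshold=0.8, spoof_threshold=0.5):
    """Accept a caller only if the voice matches AND the audio looks live.

    speaker_score: similarity to the enrolled voiceprint (higher = better match)
    spoof_score:   estimated likelihood the audio is synthetic or replayed
                   (higher = riskier)
    """
    if spoof_score >= spoof_threshold:
        # Deepfake or replay suspected: reject regardless of how well
        # the voiceprint matches.
        return False
    return speaker_score >= speaker_threshold

print(authenticate(0.95, 0.1))  # genuine live caller -> True
print(authenticate(0.95, 0.9))  # perfect match but synthetic artefacts -> False
print(authenticate(0.30, 0.1))  # live audio but wrong speaker -> False
```

The key design point is that the two checks are independent: a convincing deepfake may fool the speaker-matching stage, but the artefacts it leaves behind still trip the liveness stage.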

Even so, it’s also important that we keep the threat from deepfakes in perspective. Deepfake technology, when used at its most sophisticated, is hugely resource intensive, so much so that fraudsters rarely use it. The majority of fraud still comes from more “run-of-the-mill” tactics, like identity theft, synthetic identities and policy abuse, all of which voice biometrics can also help to prevent, making it a real consideration for businesses wanting to protect their customers from fraud across the board.

The next ten years hold even more exciting opportunities for the technology, as biometrics opens the door to a new world of remote customer interactions. By combining the power of voice with other authentication factors and AI, we are entering an era where anything, even high-risk interactions, can be delivered remotely with incredible simplicity and high levels of confidence. Nothing fake about it.



© 2024 Professional Security Magazine. All rights reserved.
