For Cybersecurity Awareness Month this year, we're taking a look at the past and future of cybersecurity. This week: deepfake technology and its potential for misuse in the wrong hands.
You may already be familiar with deepfakes as those fun videos that put celebrities in media they definitely don't belong in. But this AI-powered deception could soon become the new face (and voice) of cybercrime.
The road to deepfakes
The deepfake videos we see today would not be possible without several advancements made through academic research projects in the fields of computer vision, machine learning, and artificial intelligence.
Notable examples include Video Rewrite (1997), which could generate new facial animations from an audio source; Face2Face (2016), which made it possible to animate the facial expressions of a target video by filming a source actor; and Synthesizing Obama (2017), which significantly improved upon what Video Rewrite had pioneered 20 years earlier.
2017 was also the year that the term "deepfake" was coined, when a Reddit user by the name of "deepfakes" (deep learning + fake) began posting explicit AI-generated videos that placed the faces of female celebrities onto the bodies of pornographic actresses. An article published by Motherboard that same year was the first to bring the technology, and the term, to mainstream media attention.
In that article, deepfakes explained his method, which, it turned out, was simple enough for anyone to replicate. "This is no longer rocket science," artificial intelligence researcher Alex Champandard was quoted as saying at the time. Indeed, it has become so easy that there are now plenty of online tutorials, and even mobile apps, that let you do a version of this yourself.
It's not just videos that AI is allowing anyone to fabricate: voices are now fair game as well. We've seen recent developments in speech synthesis that have made it possible to make practically anyone say anything.
In 2016, Adobe demonstrated a project called VoCo live at Adobe Max, rearranging the words in a voice recording and even generating new ones (though the technology was never officially released). In 2017, Canadian startup Lyrebird revealed a voice imitation algorithm that could mimic the speech of a real person using as little as a one-minute recording of their voice.
In 2019, Lyrebird was acquired by Descript, which offers software tools and services for podcast creators. Lyrebird's AI-powered speech synthesis technology was repackaged into a tool called Overdub, which podcasters can use to clone their own voice and have it read out written text.
These days, beyond podcasting, the technology is also being used to create ridiculous mashups on YouTube. But that doesn't mean others aren't eyeing it for more nefarious purposes.
The perfect crime?
In August 2019, the Wall Street Journal reported the first documented instance of an audio deepfake being used to commit financial fraud.
The CEO of a UK-based energy firm received a call from his boss, the chief executive of the firm’s German parent company. He was told he needed to transfer €220,000 (approx. $243,000) to the bank account of a Hungarian supplier, urgently. What he didn't know was that the voice he heard on the other end of the line was not his boss. In fact, it wasn't even human.
As recently as June of this year, an employee at a tech company received a deepfake voicemail from someone claiming to be their CEO, requesting "immediate assistance to finalize an urgent business deal".
What this means is that criminals have begun experimenting with deepfake technology, and it's worked at least once. Today it might be a scammer going after a business, tomorrow it might be your average hacker going after your credit card details. They may even combine audio and video deepfakes to really sell the illusion.
Before you get too paranoid, keep in mind that a deepfake is essentially a fancier phishing email. The same principles for spotting one still apply: stay aware, use common sense, and double-check with the person you think you're talking to if something feels off. The threats may evolve, but your data will always need protection.