Imagine that your country is in the midst of an election campaign. Videos on social media and television are one of your main sources of information about the candidates and parties you are considering voting for. They let you hear the candidates’ policies and see their charisma (or lack of it). Or maybe imagine that you’ve been accused of shoplifting, and one of the main pieces of evidence is CCTV footage of the crime. Video evidence is cold, hard proof, right? And yet you didn’t commit any crime. You weren’t even in the shop that day. And that politician never said those outrageous things about his opponent. Both situations were the result of deepfakes.
Unfortunately, deepfakes are not just a terrifying concept from the next dystopian Sci-Fi bestseller. A deepfake is a highly realistic fake video created using artificial intelligence. These videos can make people appear to do or say anything the creator wants, and it’s very hard to tell that they’re fake.
Deepfakes are made using GANs (Generative Adversarial Networks). A GAN pits two neural networks against each other: the generative network creates the media, and the discriminative network judges whether it is real or fake. When creating a deepfake video, the generative network starts out knowing very little about what a human face looks like, so the discriminative network can easily tell its fakes apart from real images. As the cycle continues, however, the generative network learns from the discriminative network’s verdicts: every time a fake is caught, it adjusts its output. Eventually, the generative network works out which aspects of its creations look most realistic and uses that knowledge to make images and videos that are almost indistinguishable from the real thing.
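The adversarial loop described above can be sketched in miniature. The toy below is purely illustrative, not real deepfake code: instead of faces, the “real” data is just numbers drawn from a bell curve, the generator is a two-parameter map of random noise, and the discriminator is simple logistic regression. All shapes, learning rates, and step counts here are arbitrary choices for the sketch. The point is the alternation: the discriminator is nudged to score real samples up and fakes down, then the generator is nudged to make the discriminator score its fakes up.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data: numbers from a Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 0.5
def real_sample():
    return random.gauss(REAL_MEAN, REAL_STD)

# Generator: g(z) = a*z + b, fed with standard-normal noise z.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), its guess that x is real.
w, c = 0.1, 0.0

LR, BATCH, STEPS = 0.01, 64, 5000
for _ in range(STEPS):
    # --- Discriminator step: raise D(real), lower D(fake) ---
    gw = gc = 0.0
    for _ in range(BATCH):
        xr = real_sample()
        sr = sigmoid(w * xr + c)
        z = random.gauss(0, 1)
        xf = a * z + b
        sf = sigmoid(w * xf + c)
        gw += (1 - sr) * xr - sf * xf   # d/dw of log D(xr) + log(1 - D(xf))
        gc += (1 - sr) - sf             # d/dc of the same objective
    w += LR * gw / BATCH
    c += LR * gc / BATCH

    # --- Generator step: raise D(fake) (non-saturating loss) ---
    ga = gb = 0.0
    for _ in range(BATCH):
        z = random.gauss(0, 1)
        xf = a * z + b
        sf = sigmoid(w * xf + c)
        ga += (1 - sf) * w * z          # d/da of log D(g(z))
        gb += (1 - sf) * w              # d/db of log D(g(z))
    a += LR * ga / BATCH
    b += LR * gb / BATCH

fake_mean = b  # E[g(z)] = b, since the noise z has mean 0
print(f"generator mean {fake_mean:.2f} (target {REAL_MEAN})")
```

Starting from a generator that produces numbers around 0, the tug-of-war gradually drags its output toward the real distribution around 4, exactly the dynamic that, scaled up to deep networks and images of faces, produces a convincing deepfake.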
GANs are not used only for malicious purposes. They can be used for anything from fashion campaigns (saving the cost of hiring a model) to physics and astronomy simulations. The main problem arises when they are used to manipulate real people’s faces into actions that never happened; one of the biggest issues at the moment is the use of deepfakes to superimpose famous actresses’ faces onto pornographic films. Recent research by the cyber-security company ‘Deeptrace’ reported that 96% of deepfake videos are pornographic. It also reported that there are currently over 14,000 deepfakes on the internet – nearly twice as many as this time last year.
Some companies are taking steps to tackle this, with both PornHub and Twitter banning deepfakes from their sites. Some perpetrators have also been jailed for sexual harassment, but so far there is no specific law in the UK for prosecuting creators of malicious deepfakes. However, California has passed two laws that aim to limit what can be done with these videos: one prohibits the use of deepfakes in election campaigns, and the other makes it possible for victims to sue if their face is used in a deepfake pornography film without their consent.
This dangerously intelligent technology is developing far faster than previously thought, so it will be tough for governments and companies to put the right safeguards in place in time. It also raises the question of whether the arrival of other Sci-Fi concepts might be closer than we think. So, next time you’re watching Star Trek, maybe take a closer look. It may not be Science Fiction for much longer.