Tom Cruise swings his golf club and hits the ball offscreen before turning to the camera and delivering an enticing promise: “If you like what you’re seeing, just wait ’til what’s coming next”.
In fact, nothing about the video, which went viral after being posted on TikTok in February, is what it seems. It is just one example of a computer-generated fake, a so-called deepfake. In the case of the fake Tom Cruise video, the voice is provided by an impressionist, with Cruise’s face superimposed using artificial intelligence and some slick post-production technology.
But while the video may not be real, the threat this technology represents is, and it is about time governments woke up to that fact.
At present deepfake videos like this one take considerable human refinement to be truly deceptive – in this case it took a visual effects specialist weeks to iron out glitches for ten seconds of video. But as computing power improves, these barriers to entry will get lower and lower. Soon deepfakes will be easily produced on a smartphone app, and there will be few limits to their capabilities as weapons of misinformation.
A preview of the horrors to come can be found in the trend of face replacement pornography, where photographs of victims – overwhelmingly women – are edited into sexual films. Imagine that it is a student whose image is shared around her school or a popular politician on the eve of an election, and you can see the potential. By the time it is proved to be fake, millions could have seen it and at least some will continue to believe it is real.
Too often when it comes to technology, policymakers have been playing catch-up, producing legislation long after the real damage has been done; consider the Communications Act 2003, written in an age when fax machines were still a business staple, yet still used today to police offensive speech online.
Now, for once, politicians have the chance to be on the front foot, to take action while this technology remains embryonic.
After years of waiting, the online safety bill (announced in the Queen’s speech) represents the first real chance for the government in the UK to get a grip on pervasive online harms by granting Ofcom powers to regulate tech companies so that their services are safe by design.
Back in 2016, I worked with Anna Turley, then MP for Redcar, on her bill which first proposed that Ofcom should regulate online harms. The idea is substantially the one being taken forward by the Department for Digital, Culture, Media and Sport, and at its heart lies an acknowledgment that companies themselves are best placed to understand and police their own services, but that they need government leadership. This is no different with deepfakes.
The Centre for Data Ethics and Innovation produced a report on deepfakes in 2019 which soberingly details the societal threats lurking within the technology. This remains the only serious endeavour to comprehend deepfakes published by government, but it has been lost in the noise of the wider policy space.
Now Ofcom and technology companies – working together – must pick up the baton and ensure that the codes of practice issued under the new regulatory regime provide detailed guidance on deepfakes, so that usage, detection and removal policies are embedded into services’ terms and conditions. This is the first step towards taming their rise.
Beyond this, the government must produce a proper deepfake strategy that thinks holistically about where we are heading and captures all of the disturbing harms waiting to be unleashed by the technology, such as supercharged disinformation and doctored evidence in the criminal justice system. Ultimately, there is a strong case to be made for a new criminal offence that encompasses these new possibilities to mislead.
If there is a temptation in Westminster to view technology companies with natural suspicion, it must be challenged. This is not to let platforms off the hook, but because even the best cadre of regulators in the world will struggle to control such a sophisticated, rapidly evolving landscape. Deepfakes have the capacity to deal a double whammy to the government’s online safety plans: not only generating homegrown harms, but potentially undermining technology designed to prevent them (consider that biometric age assurance would be ineffective in the face of manipulated videos). This requires serious technical thinking, of the sort only available within the tech industry.
Above all, it requires global political leadership to draw these thoughts together and work out how these threats can be combated. Deepfakes will not be a marginal issue in a decade; they will be front and centre in the power dynamic of the internet. Left unregulated, they do nothing less than threaten our fundamental capacity to tell the truth, the consequences of which are not yet fully apparent. If you don’t like what you’re seeing, just wait ’til what’s coming next.