A video of Facebook founder Mark Zuckerberg circulates online. ‘Spectre showed me how to manipulate you into sharing intimate data about yourself and all those you love, for free,’ the multi-billionaire seems to say. Yet, something isn’t right: his face appears to be slightly stiff; his gesticulations are somewhat out of sync with the modulation of his voice, which, in itself, sounds a little skewed.
The video is a deepfake: a fabrication in which existing images are superimposed onto video footage using AI. Posted on Instagram by a group of artists (@Bill_Posters_UK), it is a stellar production, despite looking slightly off-key. Doctored videos of this nature have become so sophisticated that they are increasingly difficult to debunk, prompting governments to call for them to be outlawed. Posted on the same Instagram account is a video of the artist Marina Abramović. ‘I am obsessed with dying, and Spectre is there to deal with the future,’ she says. I know it isn’t her, yet I replay the video, just in case.
A story that could end with an internet consumed by undetectable fake news started with fake pornography. In 2017, an adult film, in which the actress Daisy Ridley’s face was superimposed onto the body of a porn star, circulated on the internet by way of Reddit. In the following months, several more doctored adult films featuring the faces of celebrities (mostly women) were disseminated, forcing many of them to publicly discredit the counterfeit films. In 2018, actress Scarlett Johansson called the internet ‘a vast wormhole of darkness that eats itself’ but admitted that doing anything about the deepfake videos was a useless legal pursuit. Given that there are developers who, reportedly, will now make a fake sex tape for just US$20, it is becoming ever harder to stop them from materializing.
Last year, messaging app Snapchat’s Face Swap function granted users the ability to exchange their own faces with those of celebrities, friends or pets. It may be hyperbole to claim that the normalization of this software via social media is helping to breed a culture of falsification, but it is certainly pushing us closer to a future of disinformation. This month, scientists from Stanford University announced the development of an algorithm that lets users edit what a person in a video is saying merely by typing an alternative script. While not all uses of video manipulation are seedy or abusive – Florida’s Salvador Dalí Museum has used AI and actors to create a hologram of the artist, for instance – it’s undoubtedly a slippery slope when our senses can no longer distinguish real life from the ‘reality’ that advanced algorithms can generate. While the desire to fabricate has always existed, the growing overlap between the real and the realistic is rendering the preservation of authenticity on the internet a vain pursuit.