BY Christy Lange in Opinion | 03 DEC 20

Can You Spot a Deepfake?

From Stephanie Lepp to Francesca Panetta and Halsey Burgund, artists are using AI to reveal the fragility of our trust in basic information

Barack Obama never said: ‘President Trump is a total and complete dipshit.’ But, in 2018, a deepfake of him did. Deepfakes are synthetic videos (or speech, or text) created by a machine-learning algorithm that allows you to make faces, voices and even bodies look like they are saying, doing or writing something they didn’t. They rely on a type of deep learning called generative adversarial networks, in which two neural networks compete to generate a convincing replica: one creates fake images (here are some faces that look like Obama talking), while the other tries to judge if they are fake (this doesn’t look like Obama – do better), so the result is constantly refined until you have something that looks very like the former US president. Deepfakes are made using real training data, so the more media that’s available of the person you’re trying to imitate, the more realistic the deepfake will be. The technology is improving so quickly that, as I write this, a pretty convincing deepfake of you could be generated from just your Facebook profile or Instagram stories, or even from a single photograph.
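
For readers curious about the mechanics, here is a minimal sketch of that adversarial loop. It is an illustration, not any particular deepfake tool: it assumes PyTorch, trains on toy one-dimensional numbers rather than images of faces, and every name in it is made up for the example. The division of labour, though, is the one described above: a generator fakes samples, a discriminator judges them, and each improves against the other.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic
# a target distribution while a discriminator learns to spot its fakes.
# Toy 1-D numbers stand in for the pixels of a face.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a candidate "fake" sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real_label = torch.ones(64, 1)
fake_label = torch.zeros(64, 1)

for step in range(3000):
    # "Real training data": samples drawn from the target distribution,
    # the stand-in here for genuine footage of the person being imitated.
    real = 4.0 + 1.25 * torch.randn(64, 1)

    # 1. Train the discriminator to separate real from fake.
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), real_label) + bce(D(fake), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator
    #    ("this doesn't look like Obama -- do better").
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), real_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples approximate the real distribution.
samples = G(torch.randn(1000, 8))
print(f'generated mean {samples.mean():.2f}, std {samples.std():.2f}')
```

A real face-swapping system plays the same game at the scale of millions of pixels and many hours of footage, which is why the quantity of available training media matters so much.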

It’s easy to see how this technology might be dangerous – not just for our collective notion of reality, but also for our individual safety. Today, freely available open-source tools allow anyone with time on their hands, basic computer skills and enough processing power to cheaply create a deepfake. While computer scientists at Facebook and Google originally hoped the technology would help make holographic avatars of your dead relatives, deepfakes have bred in a different swamp. Today, roughly three years since the technology came online (when someone with the username Deepfakes posted his creations in a Reddit forum), an estimated 96 percent of deepfakes on the internet are non-consensual porn, in which celebrities’ faces are mapped onto porn actors’ bodies.

In Event of Moon Disaster, 2020, Richard Nixon’s face merged with the movements of an actor reading a speech the former president never actually delivered. Courtesy: © Francesca Panetta and Halsey Burgund, MIT Center for Advanced Virtuality

Of course, there are safe-for-work versions, too. You can find plenty of homemade deepfakes on YouTube, in case you’ve ever wondered what Jennifer Lawrence would look like with Steve Buscemi’s face. The technology is also a boon in Hollywood, where it’s been used to replicate actors who died during filming, so that movies can be completed. A number of start-ups have put these Hollywood special effects into the hands of smartphone users, allowing them to swap themselves into film scenes in exchange for a small fee, access to their data and the intellectual property rights to whatever they create. Unsurprisingly, consumer deepfake apps are also being used to harass and exploit women. In October 2020, researchers discovered a bot that automatically generates deepfake nudes from uploaded photographs of women and girls without their knowledge – more than 100,000 of these images were already circulating on the private-chat app Telegram (for US$8, users can access a premium version featuring additional, non-watermarked nudes).

While many experts speculated that deepfakes could influence the 2020 US presidential election, the greater danger is that they will sow chaos and mistrust in countries that have low digital literacy and unreliable news sources. In Gabon, for instance, many speculated that a 2018 address by President Ali Bongo Ondimba, released after he had been missing for months and was presumed dead, was a deepfake. Perhaps more alarmingly, politicians are using the existence of deepfakes to plausibly deny their actions: in 2019, a Malaysian federal minister claimed a leaked sex tape of him was a deepfake, and it’s nearly impossible for experts to definitively prove otherwise. For deepfakes to be an existential threat to democracy, you need bad-faith actors who are willing to weaponize the technology; but there are plenty of people currently doing much more damage with much less. Lo-tech ‘cheap fakes’ and ‘shallow fakes’ have already succeeded in wreaking havoc on the information landscape.

Artists have been experimenting with deepfakes primarily to draw attention to their potential consequences. In 2019, Bill Posters’s deepfake video of Mark Zuckerberg telling viewers ‘whoever controls the data, controls the future’ went viral, prompting the media to address deepfakes and forcing Facebook to confront their policies on misinformation. The video is part of Spectre (2019–ongoing), an artist project by Posters and Daniel Howe, which includes a series of deepfake videos released on Instagram that use automated dialogue replacement to show Kim Kardashian, Morgan Freeman and other public figures apparently endorsing Spectre and warning about the dangers of the digital-influence industry.

Courtesy: © Francesca Panetta and Halsey Burgund, MIT Center for Advanced Virtuality

Most artistic deepfakes seem to rely heavily on the cognitive dissonance between the speaker and what they’re saying – an effect that wears off fairly quickly once you know the video is fake. Stephanie Lepp’s series of deepfakes, Deep Reckonings (2019), manifests the improbable: Zuckerberg, US Supreme Court Justice Brett Kavanaugh and far-right radio host Alex Jones publicly reckoning with their past actions. In one of the videos, a convincing voice-replacement deepfake shows Jones earnestly telling podcaster Joe Rogan: ‘I want to change, but I don’t know how.’ Each video begins with the disclaimer: ‘The following scene did not happen. Yet.’ Though Lepp imagines these videos can serve as a type of catharsis, I wonder about their long-term effect: knowing to expect a fake almost challenges viewers to find the flaws in the technology – the blurred edges where a swapped face meets the neck, or where skin tones mismatch.

In Event of Moon Disaster (2020) may be the most fully realized and technologically accomplished deepfake artwork to date. The seven-minute video by Francesca Panetta and Halsey Burgund reimagines an event already linked to disputed footage: what would have happened if the Apollo 11 astronauts had died in their attempt to land on the moon. In the video, President Richard Nixon sits in the Oval Office and delivers (via voice actor) the actual speech that was pre-written in case of a fatal outcome. The work also announces itself as ‘not real’ and ends by explaining how it was made, urging viewers to: ‘Check your sources.’

AI mapping. Courtesy: © Francesca Panetta and Halsey Burgund, MIT Center for Advanced Virtuality

Admirably, all of these artists have spoken publicly about, or published statements addressing, the ethical implications of deepfakes: who it is okay to fake, for instance, and in what situations you should obtain consent. The fact that these works warn us – either implicitly or explicitly – of the dangers of deepfakes makes them feel like cautious early experiments or awareness-raising exercises. There is still clearly potential for artists to co-opt or subvert our expectations of the technology and to make deepfakes that are more than commentaries on the novelty of the technology itself. Meanwhile, we should prepare for the moment when much of what we see on social media will be synthetically generated, from full-body deepfake avatars on our phones to algorithmically written text that is indistinguishable from text written by humans. As we’ve seen, these technologies are most likely to harm people who don’t have the power or resources to debunk them. Of course, it doesn’t take high-tech tools to draw attention to the corrosion of our trust in information. The authentic, unsynthesized videos we see every day are already hard enough to process.

This article first appeared in frieze issue 216 with the headline ‘Fake Out’.

Main Image: In Event of Moon Disaster, 2020, Richard Nixon’s face merged with the movements of an actor reading a speech the former president never actually delivered. Courtesy: © Francesca Panetta and Halsey Burgund, MIT Center for Advanced Virtuality

Christy Lange is programme director of Tactical Tech and a contributing editor of frieze. She lives in Berlin, Germany. 
