Surveillance, Bias and Control in the Age of Facial Recognition Software

What is the future of our photos in an age when images – and the machine-readable data they contain – no longer belong to us? 

BY Christy Lange in Features, Thematic Essays | 04 JUN 18

Facebook’s DeepFace algorithm can identify a face with more than 97 percent accuracy. This is thanks, in part, to the billions of photographs we have uploaded to the platform.1 Your selfie isn’t just shared with your family, friends and followers. It also holds biometric data that can be circulated and repurposed in unknown or unintended ways, and can be catalogued and analysed by facial recognition technology. What is the future of our photos in an age when such images – and the machine-readable data they contain – no longer belong to us?

Most facial recognition software first uses an algorithm to differentiate faces from other things in a photograph. (The fact it’s a ‘face’ is incidental; since it is only rendered through the computer’s ‘eyes’ as numerical data, it could just as easily be any other feature or object.) Once the software detects a face, it compares it to a set of others to find the best statistical ‘match’: a calculation that is always relative and, therefore, can never be 100 percent accurate. How do these machines learn to see and recognize us? They are fed a diet of faces, which are collected in various datasets. These training libraries, many of which are circulating online and can be freely downloaded, are harvested from an ever-growing trove of visual material: celebrity photos, mugshots, academic studies, YouTube tutorials, protest photos and selfies scraped from social media – with or without our knowledge or consent.
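To make those two steps concrete, the sketch below – written with the open-source face_recognition library, and not drawn from any particular company's system – first detects the faces in a photograph and then ranks them against a handful of known faces. The file names are placeholders, and the 0.6 distance threshold is a common convention rather than a guarantee of identity, which is precisely why the result is always a relative score and never a certainty.

```python
# A minimal sketch of the detect-then-match pipeline described above,
# using the open-source `face_recognition` library (a dlib wrapper).
# File names are placeholders; the 0.6 threshold is a common default,
# not a guarantee of identity.
import face_recognition

# Step 1: detection – find the regions of the photo that register as 'faces'.
photo = face_recognition.load_image_file("uploaded_selfie.jpg")
locations = face_recognition.face_locations(photo)             # bounding boxes
encodings = face_recognition.face_encodings(photo, locations)  # 128-number vectors

# Step 2: matching – compare each detected face against a set of known faces
# and report the closest one. The output is a relative distance, never a
# certainty, which is why no match can be '100 percent accurate'.
known = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["person_a.jpg", "person_b.jpg"]
]
for face in encodings:
    distances = face_recognition.face_distance(known, face)
    best = distances.argmin()
    verdict = "likely" if distances[best] < 0.6 else "unlikely"
    print(f"closest match: index {best}, distance {distances[best]:.2f} "
          f"({verdict} the same person)")
```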

Paolo Cirio, Obscurity, Mugshots.com No. 2, 2016, archival inkjet on 350 gsm paper, 84 × 104 cm. Courtesy: the artist and NOME Gallery, Berlin

Facial recognition technology far exceeds the limits of human perception. Some software can identify you in the dark; some can pinpoint you in a 100 × 100 pixel image (one hundredth the size of an Instagram post). But an algorithm is only as accurate as the dataset it is trained on, and it is shaped by the humans who program and design it. This has spawned dubious scientific applications (last year, Stanford University researchers claimed that AI could ‘detect sexual orientation’) as well as inevitable bias, which continues to unmask itself. In 2015, Google Photos deactivated its search criteria for ‘chimpanzee’ and ‘gorilla’ following repeated incidents of African-Americans being wrongly tagged as such.

Many of us had our first tangible experience with the features and flaws of facial recognition in January, on the Google Arts and Culture app, which encouraged users to upload a selfie to help an algorithm find their closest lookalike in a database of historical artworks. Enthusiasm was followed by widespread reports that the app matched people of African descent with renderings of slaves and Asian users with portraits of geishas. The app exposed what happens when a facial recognition training set is limited to socially constructed frameworks – in this case, museum collections. But these same political, historical and ideological biases are also built into the algorithms designed to profile potential criminals or target suspected terrorists.

Although facial recognition technology can unlock our iPhones, automatically tag our friends and create a talking puppy Animoji, it is chiefly at work beyond our vision – in tandem with CCTV cameras in public and private spaces. Dubai International Airport plans to install a ‘virtual aquarium’ tunnel, which encourages travellers to look around and marvel at animated fish while 80 hidden cameras capture their faces from as many angles as possible. A 2016 study found that nearly half of US adults, regardless of whether or not they have a criminal record, are part of police and FBI facial recognition databases.2 In some Chinese cities, law enforcement officers are equipped with sunglasses with inbuilt facial recognition technology, in order to identify suspects (or non-suspects) in real time. The technology is, of course, employed militarily in the ‘war on terror’ and countless other conflicts,3 while private companies are keen to market their software to militaries worldwide. The Israeli start-up Faception, for example, claims it can use facial recognition to determine how ‘likely’ someone is to be a ‘terrorist’.4 When you factor in the known biases and the fact that facial recognition can never be 100 percent accurate, the hazards of relying upon it in war and policing become evident.

In many ways, these techniques are nothing new; they are just more powerful. Photography has been closely linked with policing and control since its invention. ‘Every proper portrait has its lurking, objectifying inverse in the files of the police,’ wrote Allan Sekula in his 1986 article ‘The Body and the Archive’. Sekula examined how, in the 19th century, photographic-portrait archives were used to derive and catalogue a ‘criminal type’. ‘On the one hand, the photographic portrait extends, accelerates, popularizes and degrades a traditional function [...] that of providing for the ceremonial presentation of the bourgeois self,’ Sekula wrote of the earliest photographs. ‘At the same time [...] photography came to establish and delimit a terrain of the other, to define both the generalized look – the typology – and the contingent instance of deviance and social pathology.’ Sekula’s analysis eerily presages contemporary applications of facial recognition.

Dries Depoorter, Surveillance Paparazzi, 2018, installation view, Frankfurter Kunstverein. Courtesy: the artist; photograph: © Frankfurter Kunstverein/ Norbert Miguletz

Instead of thinking about our faces being read by algorithms and entered into training databases, it is helpful to consider them as being archived. Then, the question becomes: who controls that archive? Facial recognition technology requires visual libraries in order to learn and become more efficient. Our family photos, our selfies, our profile pics: everything helps build and expand a collection that does not belong to us but to law enforcement agencies, governments, tech companies and anyone who has the ability to access it – legally or illegally. It doesn’t live in a filing cabinet but is virtual, flexible, accessible to almost everybody and, theoretically, eternal.

Increasingly, artists such as Zach Blas, Paolo Cirio and Heather Dewey-Hagborg are making clear what’s at stake for the future of photographic portraiture in the age of facial recognition technology, when biometric analysis can be applied to any portrait that is uploaded and shared. They have taken up the task of making facial recognition technology more visible and legible, reverse-engineering it, revealing its biases, re-appropriating it and suggesting ways in which it might be subverted.

Berlin-based artist and researcher Adam Harvey is heavily involved in exposing and short-circuiting such technologies. His interactive work, MegaPixels (2017–ongoing), scans your face and near-instantaneously matches you with your doppelgänger. But, instead of mining a database of painted portraits in museum collections, it compares you to a dataset called MegaFace (V2), the largest publicly available algorithmic training set (4.2 million photographs; 672,000 unique identities). Where do these MegaFaces come from? They were scraped from images on Flickr, unbeknownst to the users who posted them. Like the Google Arts and Culture app, MegaPixels may match you with someone you think looks nothing like you. But, unlike it, there is the chance you will match with your own photo. As I watched the algorithm combing the archive for my match, I couldn’t help considering the dystopian end point of such a process: 100 percent accuracy for every face, which, in turn, would mean an archive containing every face. This may sound far-fetched, but the Chinese government’s Sharp Eyes programme is currently working towards that goal.
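A stripped-down sketch of the principle behind such a kiosk – emphatically not Harvey’s own code – would embed the visitor’s face as a vector and run a nearest-neighbour search over a gallery of precomputed vectors; the random numbers below merely stand in for a scraped collection like MegaFace.

```python
# Not Adam Harvey's implementation – just an illustration of the principle
# behind a lookalike kiosk: embed the visitor's face, then run a
# nearest-neighbour search over a precomputed gallery of face embeddings
# (random placeholders here, standing in for a scraped dataset).
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(4200, 128))               # stand-in for millions of face vectors
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

visitor = rng.normal(size=128)                       # stand-in for the kiosk's live scan
visitor /= np.linalg.norm(visitor)

similarity = gallery @ visitor                       # cosine similarity to every stored face
best = int(similarity.argmax())
print(f"doppelgänger: gallery entry {best}, similarity {similarity[best]:.2f}")
```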

Belgian artist Dries Depoorter’s Surveillance Paparazzi (2018) similarly attempts to expose how facial recognition technology might be employed in the wild. Two screens scroll through real-time footage from hacked commercial CCTV cameras while Depoorter’s algorithm, using Microsoft Azure’s Computer Vision API for celebrity recognition, homes in on faces to identify them. As the technology is trained on a vocabulary of VIPs, it constantly ‘recognizes’ celebrities in the faces of customers in electronics stores or laundromats. With 72 percent certainty, it declares it has spotted American footballer Kevin Garrett and popstar Charli XCX in an appliance shop. In illustrating the arbitrary nature of facial identification (not to mention how vulnerable the data is to security breaches), Surveillance Paparazzi provides a glimpse into how this combination of surveillance and identification might work in law enforcement, where the FBI’s matching algorithm fails in nearly 15 percent of cases.5
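The pipeline behind the piece is conceptually simple: grab frames from open camera streams and post them to a celebrity-recognition service. A rough sketch follows; the Azure endpoint path, headers and response fields are assumptions based on the Computer Vision documentation of that period (the celebrity model has since been retired), and the key and region shown are placeholders.

```python
# Not Depoorter's code – a rough sketch of pushing CCTV frames through a
# celebrity-recognition web service such as Azure's (since retired) model.
# The endpoint path, headers and response fields are assumptions drawn from
# documentation of that era and may differ; key and region are placeholders.
import requests

AZURE_ENDPOINT = "https://westeurope.api.cognitive.microsoft.com"  # placeholder region
AZURE_KEY = "YOUR-SUBSCRIPTION-KEY"                                # placeholder key

def recognise_celebrities(frame_bytes):
    """Send one video frame (JPEG bytes) and return any 'celebrities' reported."""
    response = requests.post(
        f"{AZURE_ENDPOINT}/vision/v2.0/models/celebrities/analyze",
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=frame_bytes,
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("result", {}).get("celebrities", [])

# For each frame grabbed from an open camera stream, the installation would
# simply report whatever the service claims to have 'recognized':
# for hit in recognise_celebrities(frame_bytes):
#     print(f"'spotted' {hit['name']} at {hit['confidence']:.0%} certainty")
```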

A police officer wearing smartglasses with a facial recognition system, Zhengzhou, China, 2018. Courtesy: AFP/ Getty Images

‘A Study of Invisible Images’, Trevor Paglen’s 2017 show at Metro Pictures in New York, reflected his prolonged pursuit of understanding how machines see. It Began as a Military Experiment (2017), Paglen’s exploration of the origins of facial recognition, features a series of ten deadpan portraits against a neutral background. The images are culled from the FERET database, which was commissioned by the Department of Defense in the 1990s to help develop the US military’s earliest facial recognition algorithm. Without access to an existing database, the developers photographed military employees to train the system, ten of whom are pictured in Paglen’s work. It’s unlikely that the subjects imagined their portraits would become prototypes for future training libraries, or that they would be made publicly available and turned into art – a fact that underscores the perpetual and flexible nature of such archives.

Paglen’s video installation Behold These Glorious Times! (2017) is a frenetic sequence of various categories of imagery, from scientific screen-tests to home videos of babies. The images flash before us at a speed our eyes can barely process, but computers can read with ease. Paglen mixes the footage with black and white grids that represent what the computer ‘sees’ when it processes that information. It’s a slice of the vast and exponentially expanding image archive that we are helping populate, both willingly and unwillingly, with our workout routines, our embraces, our memories and behaviours. Seeing them grouped into categories gives us a better sense of how machines perceive: in patterns, not uniqueness.

Adam Harvey, MegaPixels, 2017–ongoing, facial recognition kiosk linked to database. Courtesy: the artist; photograph: David Mirzoeff

In a 2016 interview with the artists Adam Broomberg & Oliver Chanarin about their series ‘Spirit Is a Bone’ (2015), Eyal Weizman explains: ‘The archive is a tool, and the minute you create a tool it could be used in many ways: it’s out of control of its makers.’ For ‘Spirit Is a Bone’, Broomberg & Chanarin appropriated 3D-facial-mapping software first designed for Russian surveillance forces to covertly photograph and catalogue citizens, thereby creating ‘non-collaborative portraits’. Working in deliberate co-operation with their subjects – including political dissidents such as the poet Lev Rubinstein and Yekaterina Samutsevich, a member of Pussy Riot – the artists used the software to produce their own archive of Russian citizens that mirrors the ‘categories’ in August Sander’s famous collection, ‘Citizens of the Twentieth Century’ (1892–1952). By using the technology to make portraits of these Russian dissidents, they suggest an alternative archive that can work to counter its original purposes. As Weizman puts it: ‘Once a photograph has been used in a particular way and returned to the archive, it has the potential to be read again; its potential will always be in excess of the particular history that produced it.’

In the age of biometric authentication, every portrait is a mirror that can be turned back around: to better surveil us, to more tightly police us, to recognize our characters in order to sell to us and to optimize artificial intelligence systems. Through the eyes of algorithms, these portraits no longer ‘resemble’ us and, because they are no longer physical, can circulate and change hands in perpetuity. As Broomberg & Chanarin admitted: ‘There’s a loaded sense of responsibility in the use and creation of archives such as this.’ Today’s artists are acknowledging the risk and responsibility involved in these collections of data in ways that the state and, if Cambridge Analytica is any indication, tech companies have yet to fully take on board.

1 Tom Simonite, ‘Facebook creates software that matches faces almost as well as you do’, MIT Technology Review, 17 March 2014

2 Clare Garvie, Alvaro Bedoya and Jonathan Frankle, ‘The Perpetual Line-Up: Unregulated Police Face Recognition in America’, Georgetown Law Center on Privacy & Technology, 18 October 2016, perpetuallineup.org

3 The US military is now collaborating with major private tech companies like Amazon Web Services, which contracts to the Department of Defense, to use facial recognition for targeting in warfare. ‘Amazon Rekognition Demo for Defense’, 7 August 2017, aws.amazon.com/blogs/publicsector/amazon-rekognition-demo-for-defense

4 Gus Lubin, ‘“Facial profiling” could be dangerously inaccurate and biased, experts warn’, Business Insider, 12 October 2016

5 Olivia Solon, ‘Facial recognition database used by FBI is out of control, House committee hears’, Guardian, 27 March 2017

This article appears in the print edition of frieze, June/July/August 2018, issue 196, with the title 'Face to Face'.

Main image: Trevor Paglen, Behold These Glorious Times!, 2017, video projection still. Courtesy: the artist and Metro Pictures, New York

Christy Lange is programme director of Tactical Tech and a contributing editor of frieze. She lives in Berlin, Germany. 
