This Is What Machines See When They Look At Us

By Elena Boaghi

Images used to be made by people, for people. Today, there’s an entirely new kind of image: pictures taken by machines, for other machines to use. This new genre–created by cameras mounted on traffic lights, in shopping malls, on advertisements, and on computers and smartphones–is teaching computers how to see.

“You have a moment where for the first time in history most of the images in the world are made by machines for other machines, and humans aren’t even in the loop,” says the Berlin-based artist Trevor Paglen. “I think the automation of vision is a much bigger deal than the invention of perspective.”

We rarely see these images, let alone understand how computers “see” them (with a few notable exceptions, like Facebook suggesting you tag specific friends in your photos). But Paglen wants to change that. In a new exhibition at New York gallery Metro Pictures, he’s exposing these machine-made images to people.

Inside the gallery, Paglen is showing works he completed while he was an artist-in-residence at Stanford University. With the help of developers and engineers, he used algorithms to make art that interrogates the ways computers see our world. Many of the works are abstract and surreal; that’s because computers interpret images entirely differently than we do. The exhibition, called A Study of Invisible Images, attempts to pull back the curtain on this hidden form of image-making.

The image above is from Paglen’s training library called “The Humans”–it’s what the algorithm thinks a man looks like. [Image: courtesy of the artist and Metro Pictures, New York]

What Neural Networks Hallucinate

At Metro Pictures, one room is devoted to what Paglen calls “Hallucinations.” These are based on training libraries–the groups of images used to train an algorithm to “see” a particular object. A typical training library contains thousands of images of a single type of object: cats, bags, or cutlery, for instance.

But Paglen makes alternative training libraries, ones with purposely ambiguous themes like “Omens and Portents” or “Interpretations of Dreams.” Then he trains a neural network to recognize the objects in each library. In doing so, the network forms an internal, generic picture of what each class looks like–a template it can hold any other image up against. It is these internal template images that Paglen blows up and puts on the walls of the gallery. Each image comes from a different training set; each reveals how a computer sees objects in our world based on the training library a human created for it.

The image above is from his training library called “The Humans”–it’s what the algorithm thinks a man looks like, based on the arbitrary group of images Paglen chose to teach it.
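As a rough illustration of how a template like this can be pulled out of a trained network, here is a minimal sketch using activation maximization in PyTorch (not Paglen’s actual code; the model, class index, and hyperparameters are assumptions for illustration). Starting from noise, the input pixels are nudged by gradient ascent until the network is as confident as possible that it is looking at the chosen class:

```python
# Minimal sketch: synthesize a network's internal "template" for one class
# by gradient ascent on the input pixels (activation maximization).
# The model here is an untrained ResNet-18 stand-in; in practice you would
# first train it on a library like "The Humans" or "American Predators."
import torch
from torchvision import models

def class_template(model, class_idx, steps=200, lr=0.05, size=224):
    """Optimize a noise image so the model's score for class_idx rises."""
    model.eval()
    image = torch.randn(1, 3, size, size, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(image)
        # Maximize the target class logit; a small L2 penalty keeps pixels tame.
        loss = -logits[0, class_idx] + 1e-4 * image.pow(2).sum()
        loss.backward()
        optimizer.step()
    return image.detach().squeeze(0)

# Example: render the "template" for class 0 of a four-class model.
template = class_template(models.resnet18(num_classes=4), class_idx=0)
```

Images produced this way tend to look smeared and dreamlike for the same reason Paglen’s do: they reflect the statistical regularities of the whole training library rather than any single photograph in it.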

An image of what a neural network thinks a Venus flytrap looks like, based on a training library of “American Predators.” [Photo: courtesy the author]

One particularly fascinating image in this series is a computer-generated image of a Venus flytrap. To create it, Paglen built an entire corpus of training images themed around the concept “American Predators.” Within the corpus, there are groups of images. One group is of predatory animals native to North America, like wolves and mountain lions. Another is of carnivorous plants, like the Venus flytrap. A third is of military drones. And a fourth is of Facebook CEO Mark Zuckerberg–a pointed critique of the Silicon Valley titan’s dominance over our lives. Paglen then trained a neural network on the entire corpus. That means that when you show the neural net any image, the only things it can see are predatory animals, carnivorous plants, drones, and Mark Zuckerberg.
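For readers curious what that setup might look like in code, here is a hedged sketch of training a classifier on a folder-per-class corpus (the directory names, model choice, and hyperparameters are illustrative assumptions, not Paglen’s actual pipeline). Once trained this way, the network can only ever sort an image into one of the four categories:

```python
# Sketch: train a small classifier whose entire world is four categories.
# Assumes a directory "american_predators/" with one subfolder per class,
# e.g. predatory_animals/, carnivorous_plants/, drones/, zuckerberg/.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("american_predators", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(dataset.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Whatever image you show it now, the model's answer is one of these four.
print(dataset.classes)
```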

The final image of the Venus flytrap only vaguely resembles a photograph of the carnivorous plant: it is darker, more pixelated, and far more abstract. For the computer, the plant doesn’t carry any meaning; it is simply a bunch of pixels arranged in a particular pattern.

The same goes for the other synthesized images Paglen coaxed from the depths of neural networks. One is a terrifying representation of a vampire–that’s from the training corpus called “Monsters of Capitalism,” which only includes monsters that have been used as a metaphor for the economic system (Marx loved to talk about the vampire of capitalism). “I think of AI itself as a monster of capitalism,” Paglen adds.

The work “Machine Readable Hito” shows what happens when you make a dataset of one person’s facial expressions, and then ask a machine to interpret each one. [Photo: courtesy the author]

Why We Need Art About AI

Paglen makes the point that the images included in training libraries are chosen somewhat arbitrarily. But who gets to decide what images go into a training library for real-world algorithms, which now play a vital role in everyday life?

The training library’s hidden power is evident in another of Paglen’s works, called Machine Readable Hito. It uses hundreds of images Paglen took of the artist Hito Steyerl making different expressions. Paglen then ran these images through facial analysis algorithms designed to determine emotion, facial hair, whether you wear glasses, your age, and even your gender. In the gallery, hundreds of the images are placed side by side; below each picture of Steyerl is one algorithm’s interpretation of it. Under one picture of her frowning, her hair up in a messy bun, the algorithm output reads “facialHair”: {“beard”: 0.2}. One image of her with a blank face reads “gender”: {“female”: 84.48, “male”: 15.52}. But the next one over, in which her eyes are closed and she is frowning slightly, raises the percentage of “male” to 37.92. What images of men and women was this algorithm trained on, such that one expression of hers reads as more male than another?
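Those captions are raw attribute scores printed verbatim. As a hedged illustration of where numbers in that form come from (the scores and attribute names below are made-up inputs for illustration; Paglen used existing face-analysis algorithms, not this code), a classifier’s raw outputs can be turned into percentages and printed in the same JSON-style format:

```python
# Sketch: turn a classifier's raw outputs into gallery-caption-style JSON.
# The logits and beard score here are illustrative, not real model outputs.
import json
import torch
import torch.nn.functional as F

def format_attributes(gender_logits, beard_score):
    """Convert raw scores into the JSON-style labels shown under each portrait."""
    probs = F.softmax(gender_logits, dim=-1)  # two logits -> probabilities
    return json.dumps({
        "gender": {"female": round(probs[0].item() * 100, 2),
                   "male": round(probs[1].item() * 100, 2)},
        "facialHair": {"beard": round(beard_score, 2)},
    })

# Example: logits leaning "female", with low confidence in a beard.
print(format_attributes(torch.tensor([1.7, 0.0]), beard_score=0.2))
```

Whatever the real pipeline behind the gallery labels, the percentages are only as meaningful as the images the classifier was trained on, which is exactly the question the work raises.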

Machine Readable Hito (detail). [Photo: courtesy the author]

For Paglen, asking these kinds of questions is the role of art in a field dominated by engineers and researchers. A lack of questioning has already led to problems with algorithms and bias, like Google’s photo recognition system classifying black people as gorillas, or scientists claiming that they can determine criminality based simply on people’s faces. As AI and machine vision become even more entwined in our lives, we won’t just need to interrogate them and hold them accountable–we will also need non-technical ways of understanding them so we can make informed decisions about what they should and shouldn’t control.

“I think art needs to be a part of those conversations,” Paglen says. “Artists have historically understood images better than anyone else. This is what we do. For me it feels really urgent that we don’t just leave something as important to human society as this to guys in Silicon Valley.”
