Trippy Stickers Trick Computers Into Thinking A Banana Is A Toaster

By Elena Boaghi

The algorithms that computers use to determine what objects are (a cat, a dog, or a toaster, for instance) have a vulnerability called an adversarial example: an image or object that looks one way to the human eye but entirely different to the algorithm.
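
To make the idea concrete, here is a minimal sketch of how a classic adversarial example can be crafted with the fast gradient sign method, a standard technique from the research literature and not necessarily the one used in the work described below. It assumes PyTorch and torchvision; the model and the size of the perturbation are illustrative choices.

import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained ImageNet classifier stands in for "the algorithm."
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_example(image, true_label, epsilon=0.03):
    # image: a 1 x 3 x 224 x 224 tensor with values in [0, 1]
    # true_label: a 1-element long tensor (normalization omitted for brevity)
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that raises the loss;
    # the change is invisible to a person but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()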

[Image: Google]

For years researchers treated adversarial examples as a largely theoretical concern, but in recent months they have demonstrated that the attack works in real-world situations. One group of researchers 3D-printed a turtle that an image-classifying algorithm identifies as a rifle, and other scholars have used stickers to alter the appearance of road signs so that computers misread a stop sign as a 45 mph speed-limit sign. Now, a new paper by Google researchers pushes the field further still by creating a sticker that can convince a computer that almost any object is a toaster.

The researchers created a bizarre, psychedelic image they call a “patch” that, when placed next to any object in any lighting, is far more salient to the classification algorithm than whatever sits beside it. That means the computer focuses its attention on the sticker rather than on the objects around it. Place a patch optimized to register as a toaster next to a banana and the algorithm will see a toaster. Put it next to a dog and the algorithm will see a toaster. You get the idea.
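
At test time, using such a patch would be as simple as pasting it into a photo and running the classifier. Below is a rough sketch of that step, assuming an off-the-shelf pretrained model; the file names, patch size, and placement are hypothetical.

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resizing and normalization the model expects

def classify_with_patch(photo_path, patch_path, corner=(10, 10), size=150):
    photo = Image.open(photo_path).convert("RGB")
    patch = Image.open(patch_path).convert("RGB").resize((size, size))
    photo.paste(patch, corner)  # drop the sticker into the scene
    with torch.no_grad():
        logits = model(preprocess(photo).unsqueeze(0))
    return weights.meta["categories"][logits.argmax(dim=1).item()]

# If the patch works as described, the answer is "toaster" whatever the photo shows:
# classify_with_patch("banana.jpg", "toaster_patch.png")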

[Image: Google]

Usually, an attacker needs to know what they are targeting (the specific image or object they want the computer to misread) to create an adversarial example. But in this case, a potential hacker could use the sticker against any object, including ones it has never been tested on. That means it could have dangerous applications in the real world, because it would be so easy to deploy in new situations. Right now, it’s not necessarily easy to generate a patch; it took a team of five Google researchers to make the toaster sticker. But now that the paper is online and details how they did it, hackers could make their own. “After generating an adversarial patch, the patch could be widely distributed across the internet for other attackers to print out and use,” the researchers write.
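
The key move in the paper is that the patch is optimized not against one photo but across many photos and random placements, which is why it transfers to objects it has never been paired with. Here is a heavily condensed sketch of that kind of training loop; the patch size, learning rate, placement strategy, and loss are simplified assumptions, not the authors’ exact recipe.

import torch
import torch.nn.functional as F

TOASTER = 859  # "toaster" in the standard 1,000-class ImageNet label list

def train_patch(model, data_loader, steps=1000, size=64, lr=0.05):
    patch = torch.rand(3, size, size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _, (images, _) in zip(range(steps), data_loader):
        patched = images.clone()
        # Paste the patch at a random location in every image of the batch.
        x = torch.randint(0, images.shape[-1] - size, (1,)).item()
        y = torch.randint(0, images.shape[-2] - size, (1,)).item()
        patched[:, :, y:y + size, x:x + size] = patch.clamp(0, 1)
        # Minimizing cross-entropy against the "toaster" label pushes the
        # classifier toward that answer regardless of the image content.
        target = torch.full((images.shape[0],), TOASTER, dtype=torch.long)
        loss = F.cross_entropy(model(patched), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.clamp(0, 1).detach()

Because the only free variable is the sticker itself, the same trained patch can then be printed and reused anywhere, which is exactly what makes this attack image-agnostic.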

[Image: Google]

These patches are pretty weird-looking, which means humans would be able to spot them even if computers couldn’t. That’s different from most adversarial examples, which are indistinguishable from ordinary images to the human eye but still fool a computer. But the researchers showed that they could also create adversarial stickers disguised as tie-dye patterns, something a person wouldn’t look twice at. “Even if humans are able to notice these patches, they may not understand the intent of the patch and instead view it as a form of art,” the researchers write.

Our world doesn’t yet rely entirely on machine learning algorithms, but as computer vision systems become integral to vehicles like self-driving cars and are increasingly used to flag security threats (in airport baggage checks, say, or in surveillance footage), research into adversarial examples becomes ever more vital. So far, AI security researchers say, no one knows how to reliably defend against them.
