
One Pixel Attack Defeats Neural Networks | Two Minute Papers #240

Two Minute Papers - 2018-03-31

The paper "One pixel attack for fooling deep neural networks" is available here:
https://arxiv.org/abs/1710.08864

This seems like an unofficial implementation:
https://github.com/Hyperparticle/one-pixel-attack-keras

Differential evolution animation credit: https://pablormier.github.io/2017/09/05/a-tutorial-on-differential-evolution-with-python/

Our Patreon page: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Evan Breznyik, Malek Cellier, Frank Goertzen, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Two Minute Papers Merch:
US: http://twominutepapers.com/
EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/

Thumbnail background image credit: https://pixabay.com/photo-3010129/
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

Diego Antonio Rosario Palomino - 2018-03-31

So basically like in cartoons when a villain's disguise consists solely of a facial mole

Damian Reloaded - 2018-03-31

AIs would never be able to find out who Superman is. ^_^

Moments - 2018-04-18

They did a study on that: Superman's disguise is just wearing glasses.

Rachel Slur - 2019-02-08

👍

Ionut Ciuta - 2018-04-11

This explains why every time I had a tiny speck of dust on my glasses, all I could see were frogs and airplanes!

Vergy Exploding - 2018-05-18

lolol

Socks With Sandals - 2019-11-08

I have one dead pixel on my TV and that's all I see too.
Why are frogs and aeroplanes discussing recipes with celebrities?

London England - 2018-04-01

In Minority Report - Tom Cruise could just hide from the cameras by drawing a dot on his face, and they would think he's a frog.

Paul Newton - 2018-04-02

Or wear ridiculous makeup like in most sci-fi movies that were ahead of their time without knowing it. https://cvdazzle.com/

Serenity Departed - 2018-04-04

If you distort the image of your face it may become unrecognizable. Moreover, since chairs are considered difficult objects to classify, if you take a pose unlikely for a human, the AI may think you are a chair. Also change some of your pixels, to be more sure that Skynet's servants can't find you.

Damian Reloaded - 2018-04-18

^^ Or go to the alternate dimension where humans are chairs' chairs. XD

Jim Steinbrecher - 2018-05-07

tom cruise is actually too small to be detected by most security cameras.

Sophia Cristina - 2019-07-05

@Paul Newton Amazing, thanks for the info...

AaronGrooves - 2018-04-01

This is by far the funniest video you've ever produced! Or...maybe it's not a video at all. Maybe it's an airplane!

Felix Kütt - 2018-04-01

Add enough forward momentum and everything is an airplane. Even a brick can and will fly. So the AI is not wrong.

ruanpingshan - 2018-04-01

Is this an April Fools joke?

Nick Ellis - 2018-04-25

Felix Kütt this comment was dumb

Guilherme Sartorato - 2018-04-26

Nick Ellis please...

Justin Hutchison - 2018-05-22

hilarious

DonCDXX - 2018-03-31

I think an exact pixel count for disrupting the neural network is an imprecise way of measuring it. Shouldn't it be % of incorrect pixels required to cause a disruption?

Pedro Blanc - 2018-03-31

DonCDXX I agree. Further, I wonder how this technique scales with an increase in resolution.

Up to what point is one pixel enough to noticeably change the confidence of the net? Also, with higher resolution and therefore more detailed features in each class, does changing a square area of the same proportion as presented in this paper still work?

Hyperparticle - 2018-03-31

These are great questions, and you could try out these experiments yourself! I made an (unofficial) Keras implementation where you can experiment with the attack. I've recently added ImageNet support, but have yet to replicate results on larger images as seen in the paper. I'd love to see some contributions. See the description of this video for a link (or search on GitHub).

Able Reason - 2018-04-02

Oh, just left the same comment, gonna delete it.

Smith Smith - 2018-04-11

In addition to this, the background of the image doesn't really count. If you're using a 32*32 pixel image, and only about 1/4 of those pixels are the horse (which I probably wouldn't have guessed either without the caption, tbh), 1 pixel is very significant.

Hyperparticle - 2018-03-31

I published an (unofficial) Keras implementation where you can experiment with the attack on GitHub. See the description of this video for a link (or search one-pixel-attack-keras). I'd love to see some contributions for further exploration.
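For anyone curious what that implementation does under the hood before cloning it, here is a minimal, simplified sketch of the paper's idea (my own illustration, not the repo's code): differential evolution searches over a single pixel's coordinates and color so as to minimize the classifier's confidence in the true class. `model` is assumed to be any Keras classifier that outputs softmax probabilities over classes.

```python
# Minimal sketch of a one-pixel attack via differential evolution.
# Assumes `model` is a Keras classifier over images scaled to [0, 1].
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_label, iters=30, popsize=20):
    h, w, _ = image.shape

    def perturb(p):
        x, y, r, g, b = p
        adv = image.copy()
        adv[int(y), int(x)] = [r, g, b]          # overwrite a single pixel
        return adv

    def objective(p):
        probs = model.predict(perturb(p)[None, ...], verbose=0)[0]
        return probs[true_label]                  # minimize true-class confidence

    bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)]  # x, y, r, g, b
    result = differential_evolution(objective, bounds,
                                    maxiter=iters, popsize=popsize, tol=0)
    return perturb(result.x)                      # candidate adversarial image
```

The real implementation and the paper add details such as targeted attacks and stopping early once the predicted class flips.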

patrink12 - 2018-03-31

I think this is a bit misleading as most of the images in the paper are 32x32 pixels, so 1 pixel difference is pretty large. It is 1/1024 of the image being noise (around 0.1%).

For comparison, a 1080p image would have 2025 pixels of noise, and a 4K image would have 8100 pixels. While this is small, the paper should not have been called "one pixel attack".
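For what it's worth, those numbers follow from keeping the perturbed fraction of the image constant (1/1024) while scaling up the resolution; a quick check that just reproduces the arithmetic:

```python
# One perturbed pixel out of 32x32 = 1/1024 of the image; the same fraction
# at higher resolutions corresponds to the pixel counts quoted above.
for name, (w, h) in {"CIFAR-10": (32, 32),
                     "1080p": (1920, 1080),
                     "4K": (3840, 2160)}.items():
    print(f"{name}: {w * h / 1024:.0f} pixels (~{100 / 1024:.2f}% of the image)")
```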

Z3U5 - 2018-03-31

Neural networks are still not intelligent, accept it.

jon ho - 2018-04-01

Check out their paper; they tested it on several ImageNet samples, which I believe are generally 256x256 pixels.

splitframe - 2018-04-30

patrink12: My thoughts exactly. It's a little sensational and could just as easily have been called the 1-permille attack.

David Carlson - 2018-06-06

ZEUS MYLIFE_2.0 Of course they aren't. They are simply high-dimensional curves.

Logan Revesby - 2018-06-13

ZEUS MYLIFE_2.0 They still don't learn, either.

ammarkov - 2018-03-31

It is very low resolution, so a single pixel has a much bigger impact.

nickt - 2018-04-01

Those pictures are so low-res even humans will have difficulty identifying them. Furthermore, since there are so few pixels in the picture, the one special pixel can be easily spotted.

Moby Motion - 2018-03-30

Can't wait for the video on adversarial examples in human vision! In artificial neural nets, adversarial examples tend to work reasonably well on any architecture trained with the same data. I wonder if the human examples you're talking about also work on everyone - like optical illusions

Mi 28 - 2018-04-07

Well, it's a little different. A computer neural network is basically a huge mathematical matrix. You turn the input into vector form, multiply it by the matrix, and you get some output vector (possibly 1-dimensional, a single number). Network training is simply adjusting the matrix values until it produces the right output vector for a given input vector. What exactly is going on during that multiplication is pretty obscure; all of the matrix values are arbitrary and opaque, they have no meaning. So there could be (and in practice there is) a collection of these rogue matrix values that usually don't do anything but which you can leverage to produce an entirely different output for marginally different input.

The visual pathway is more complicated than that. First of all, it has no pixels, and the eyes are moving about constantly too. Then there's a whole bunch of feature-detecting neurons, which feed into data-compressing neurons. Finally, in the brain there's data decompression and interpretation in an enormous grid of neurons. The system is exceedingly robust, in fact. Your ability to tell what color something is regardless of its illumination (and therefore absolute color) is a testament to that. Ditto with the ability to tell object size and distance despite perspective distortion. Most optical illusions aren't fooling your eyes; in fact they do the opposite and highlight that your eyes cannot be fooled easily. So you can't just put a black dot on a banana to fool people into thinking it's a bicyclist.
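The "input vector times matrix" picture in the comment above corresponds to a single fully connected layer; a toy sketch, with sizes chosen arbitrarily for illustration:

```python
# Toy illustration of one layer as "vector in, matrix multiply, vector out".
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(3072)                          # a flattened 32x32x3 image
W = rng.standard_normal((10, 3072)) * 0.01    # "learned" weights (random here)
b = np.zeros(10)

logits = W @ x + b                            # the matrix multiplication
probs = np.exp(logits) / np.exp(logits).sum() # softmax -> class confidences
print(probs.argmax(), probs.max())
```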

statorworks 345 - 2018-04-09

Mi 28 Exactly

KuraIthys - 2018-04-10

Actually some of the current state of the art in artificial neural networks for image recognition (I forget what name is given to this kind of network) are capable of reconstructing what the network 'sees' at each layer and for each part of the net.

What you tend to see is that in each layer you get a bunch of classifiers for certain smaller scale features, and then in the next layer you get things which are compositions of those smaller scale features into larger features, until eventually you get object classifications.

I used to think you couldn't possibly know what a neural net was processing, but then I saw one of these in action and (for this type of network at least) it turns out it's actually quite possible to construct some output that gives a good and fairly easy to understand sense of what any given network is actually using to make 'decisions'.

It really was something I never expected to see.
I'd always assumed neural nets, no matter the type would remain fully opaque.
Turns out that's not necessarily true.
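The per-layer "what does the network see" inspection described above can be approximated in a few lines of Keras by reading out intermediate activations; a hedged sketch, assuming `cnn` is any trained Keras CNN with convolutional layers:

```python
# Sketch: expose the intermediate convolutional activations of a trained CNN.
import tensorflow as tf

def layer_activations(cnn, image):
    conv_outputs = [layer.output for layer in cnn.layers if "conv" in layer.name]
    feature_model = tf.keras.Model(inputs=cnn.input, outputs=conv_outputs)
    return feature_model(image[None, ...])    # list of (1, h, w, channels) maps
```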

Michael Lohmann - 2018-05-07

Oh, and one additional point: human sensors are kind of crap and always have noise. So you basically train your network with a lot of noise, which makes it more robust against such attacks.

Rachel Slur - 2019-02-08

Google could test it on humans by using an "I'm not a robot" thingie.

starrychloe - 2018-04-01

Can't wait to defeat the singularity with my bad grammar!

theshuman100 - 2018-04-01

Who will win?
1000+ hours of computational learning vs one stray boi

Tanmay Garg - 2018-04-03

That's very easy to fix. Another network performs these attacks and trains the AI again to give the same confidence level.

Jim Steinbrecher - 2018-05-07

The point of the paper is that most image recognition systems don't do that, and are therefore vulnerable to this type of attack. They are so eager to present a result that they produce a lot of false positives and are easy to trick.
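That "attack it, then retrain on the attacks" idea is essentially adversarial training; a hedged sketch, where `generate_adversarial` is a hypothetical helper (for instance the one-pixel search sketched earlier) and `model` is any Keras classifier:

```python
# Sketch of one round of adversarial training: generate attacks against the
# current model, then retrain on clean plus adversarial examples.
import numpy as np

def adversarial_training_round(model, x_train, y_train, generate_adversarial,
                               n_attacks=1000, epochs=1):
    idx = np.random.choice(len(x_train), n_attacks, replace=False)
    x_adv = np.stack([generate_adversarial(model, x_train[i], y_train[i])
                      for i in idx])
    y_adv = y_train[idx]                       # keep the original, correct labels
    x_mix = np.concatenate([x_train, x_adv])
    y_mix = np.concatenate([y_train, y_adv])
    model.fit(x_mix, y_mix, epochs=epochs)     # retrain on the mixed data
    return model
```

Whether this closes the gap is exactly what later comments debate; the adversary can usually find new holes faster than the defender can patch them.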

Matthew Ames - 2018-03-31

That was surprising! I didn't know that image classifiers were so unstable. Looking forward to the next one!

WeAreSoPredictable - 2018-04-09

Our brain is one pattern-recognition machine, and it's extremely unreliable. It only takes some minor visual cues to throw our perception right off, so I guess it should hardly be surprising that a rudimentary version like a neural net would be highly susceptible to being tricked into mis-recognising things.

Lord of the Pies - 2018-03-31

this highlights the lack of understanding these nets have

Peter Petrov - 2018-05-05

I think I see Jesus! Oh wait, it might be just the coffee.

Lordious - 2018-03-31

Humans get fooled by optical illusions way more often than machines.
They were designed to fool humans after all.

Joe Ch - 2018-03-31

Lordious If you are fooled by only one pixel into believing that a dog is an airplane, I am sorry to tell you, but your parents might have lied to you: you are an AI.

gifrancis - 2018-04-01

"Parents" ;)

El Jona - 2018-04-04

Yes, but I don't know of any optical illusion that has made a cop kill a person he thought was a robber. #AIcops

Jim Steinbrecher - 2018-05-07

if you define "optical illusion" as "something that confuses the visual system of most humans", then by definition it will confuse the visual system of most humans. but something that confuses machine vision into thinking it is seeing something else is also an optical illusion. and machine vision is much, much easier to fool than human vision (or the vision of most animals).

so, for any definition of "optical illusion" applicable to both humans and machines, your statement is false.

Gustav Kuriga - 2018-05-12

Joe Ch one pixel in a 32x32 image. I mean some of them were such low resolution that I wouldn't have known what was in them at all until he said it.

Andrew Kerr - 2018-06-01

Could you backprop to determine how to change a pixel, instead of with a random-based search?
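One possible answer to that question, assuming white-box access to the model: take the gradient of the loss with respect to the input and perturb the pixel where it is strongest. Note the paper's attack is deliberately black-box (it only needs the output probabilities), so this is an alternative, not the paper's method; a hedged TensorFlow sketch:

```python
# Gradient-guided pixel selection (white-box): pick the pixel whose input
# gradient has the largest magnitude, instead of searching randomly.
import tensorflow as tf

def gradient_guided_pixel(model, image, true_label):
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    y = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)[0]                 # shape (H, W, 3)
    saliency = tf.reduce_sum(tf.abs(grad), axis=-1)  # per-pixel importance
    flat = int(tf.argmax(tf.reshape(saliency, [-1])))
    w = saliency.shape[1]
    return flat // w, flat % w                       # (row, col) of best pixel
```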

Cambesa - 2018-03-31

I still miss the old "See you next time"

William Britton - 2019-11-03

Your catchphrase "hold on to your papers" makes me smile every time.

Jonathan Crowder - 2018-04-15

Just discovered you, got a sub!

Astrocookie - 2019-11-04

Imagine using an AI to determine the best way to mislead other AIs

Blubber Kumpel - 2018-04-08

I imagine a realtime augmented reality feature which makes my room look clean and tidy.

timmahtown - 2019-03-04

1:32 Amusingly almost anything can be seen as an aeroplane

Manfred Ferrucci - 2018-06-07

Wow this is like the AI equivalent to Bruce Lee's one inch punch

OrangeC7 - 2019-04-24

Woah woah woah wait a hot second that date is suspicious

Jacob Lee - 2019-02-24

Great stuff as always, Károly :)

Quenz - 2018-04-12

Wow, too loud man!

WIM42GNU - 2018-03-31

Wow only one pixel. Maybe an additional network which tries to fool the original network is needed.

WIM42GNU - 2018-03-31

Exactly, that is what I had in mind.

animowany111 - 2018-03-31

Modern networks do exactly that, sometimes on a deeper level also.
It's common to apply random noise to the input and also between neural net layers. It makes networks more robust to certain types of attacks (but definitely not all).
It's always possible to rotate, scale, skew or otherwise distort an image in a way the network has never seen.

The adversary always has it easier than the classification network, and that's even if the researchers bother to make the network robust; they usually want to maximize accuracy scores instead.

WIM42GNU - 2018-03-31

Even if you train with random noise, the networks will still not be robust against these pixel attacks. If you inject a pixel that triggers a pattern the network has learned, the network will be confused. The network needs to learn the semantically correct patterns. As you noted, random noise will help, but only to some degree. Imo that can only be achieved when the neural network unlearns the wrong patterns, so the network must fail and relearn. The space of wrong predictions can be searched by a neural network that learns to do just that, like a GAN setup would.

I would guess that with that, the number of pixels you need to manipulate to fool a neural network will go up. Overfitting might be a problem though. It's a challenging problem to work on.

As for maximizing the accuracy, it is not more important than the generalizability.

One could also build a neural network that tries to distinguish between manipulated and original training data, to filter out these pictures. The more fake it seems, the less suited the network is for the actual prediction.
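A hedged sketch of that last suggestion, a small detector network trained to tell clean images from perturbed ones; layer sizes and the training data (`x_mixed`, `y_is_manipulated`) are hypothetical placeholders, not anything from the paper:

```python
# Sketch: a small binary classifier meant to flag manipulated inputs before
# they reach the main classifier.
import tensorflow as tf

def build_perturbation_detector(input_shape=(32, 32, 3)):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = looks manipulated
    ])

# detector = build_perturbation_detector()
# detector.compile(optimizer="adam", loss="binary_crossentropy")
# detector.fit(x_mixed, y_is_manipulated, epochs=5)       # hypothetical data
```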

UserHuge - 2018-04-03

WIM42GNU What do you mean by maximizing "the generalizability"?

WIM42GNU - 2018-04-03

That the network learns general features instead of memorizing local patterns.
Therefore, that the network learns the general idea and does not just fit to the training data.
(You can get a high accuracy just by memorizing the data: overfitting.)

Steven Ashton Baker - 2019-01-16

to be fair the clue was in the title :)

Rickey Bowers - 2018-04-02

I quickly began questioning the robustness of my own wetware.

SniperNinja-115 - 2018-04-22

Very nice/cool stuff, much love, keep it up when/if you want, everyone, and take care :))

Socks With Sandals - 2019-11-08

Ironic that the one experience we are compelled to observe every waking second, our perception of the world through our senses, is the one phenomenon we cannot describe to computers.

Rishi Raj Jain - 2019-11-14

Awesome!

=NolePtr - 2018-04-30

Google thinks my dog is a cat. It's hilarious.

Jake White - 2018-04-03

neural networks would have to generate 3d representations in order to defeat these types of attacks.

user255 - 2018-05-07

One pixel from a hundred(?)-pixel image... not that amazing.

bred - 2018-04-21

Can’t blame it, at that resolution I can’t even tell what some of those images are

Paul Amblard - 2019-11-19

Can we say that it is a type of overfitting?

William Dye - 2018-03-31

Could a one-pixel image glitch cause a self-driving car to kill a pedestrian? The research in this video should be very helpful in answering that question. While the examples were generated in an adversarial fashion using "white box" knowledge of the image recognizer, the one-pixel changes could happen in the field by random chance. By accumulating more examples of images that will fool a given recognizer, we can get a much better estimate of how often that recognizer would fail in the real world due to non-adversarial random chance.

Able Reason - 2018-04-02

The thing is, we trained a system to fool another system; who knows how many attempts (different positions and color changes) it took in order to fool the system.

Also, this won't work well with moving images, and there's also the resolution.

So, assuming a reasonable number of attempts is necessary for this, the probability is tiny!

(Also, I suggest using at least 2 cameras together for every AI system, which makes this exponentially more difficult, or

SirCutRy - 2018-04-05

Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms
(IRL)
https://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

Can't these modification attacks be protected against by adding noise to training and/or recognition? The model should be able to do this anyway with proper generalization. You get more labeled data just by modifying existing data.
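The noise-augmentation defense suggested above is easy to try; a hedged sketch that just appends noisy copies of the training set (whether it actually stops a targeted one-pixel search is, as discussed elsewhere in this thread, doubtful):

```python
# Sketch of noise augmentation: append randomly perturbed copies of the
# training images, keeping the original labels.
import numpy as np

def noisy_copies(x_train, y_train, copies=2, noise_std=0.02, seed=0):
    rng = np.random.default_rng(seed)
    xs, ys = [x_train], [y_train]
    for _ in range(copies):
        noise = rng.normal(0.0, noise_std, size=x_train.shape).astype(x_train.dtype)
        xs.append(np.clip(x_train + noise, 0.0, 1.0))   # keep pixels in [0, 1]
        ys.append(y_train)
    return np.concatenate(xs), np.concatenate(ys)
```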

Okay - 2018-04-12

William Dye the majority of self-driving cars (or even all of them) are not based on neural networks, so no.

SirCutRy - 2018-04-12

WofWca
Can you elaborate on the technology they use?

Michael Harrington - 2018-10-21

It's unlikely that the attack would hold over different perspectives; temporal continuity would likely thwart these. But that's a hypothesis, not research.

ProCactus - 2018-04-02

Excellent, more needs to be done on this. We need to know Skynet's weaknesses before we put it online.

Simra Afreen - 2019-03-17

Where can I see the implementation of the approach? The link to the paper gives a Github link which is an empty repository...

Sebastian Rodriguez Colina - 2018-03-31

How were the networks trained? Were they trained using data augmentation to cover that type of attack?

Abhishek S - 2018-04-03

Do these adversarial attacks apply only to images?

CGMan - 2020-01-29

1:40 the bottom pictures are good nightmare fuel

Shobhit Sundriyal - 2018-09-09

Blew me away.

Hdye Hdhde - 2020-01-13

The power of knowing something: I've been studying neural networks for just the last 2 weeks, and some of these things make much more sense now compared to when I started.

voidpointer1 - 2019-10-26

I finally understand why Chris Roberts wanted that one pixel to be blue instead of green.

Goku - 2018-03-31

Now we need a neural network trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for 
fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks trained for fighting against neural networks...

WIM42GNU - 2018-03-31

I slightly doubt that. :)

id523a - 2018-04-03

This is how generative adversarial networks work.

Andreas V - 2018-04-04

id523a exactly!

Jose Viana - 2018-05-04

Can it be done to facial recognition?