This Fools Your Vision | Two Minute Papers #241

Two Minute Papers - 2018-04-05

The paper "Adversarial Examples that Fool both Human and Computer Vision" is available here:
https://arxiv.org/abs/1802.08195

Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Thumbnail background image credit: https://pixabay.com/photo-2479948/
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

Adam B - 2018-04-11

You say "the nose is longer and thicker", but if you actually measure, the noses are the same length and width in both images. I agree, though: somehow the nose appears longer and thicker even though it isn't. Mind=blown!

Adam B - 2018-04-11

It would be interesting to see an animated cross-fade between the two images...

Jannik Heidemann - 2018-04-17

@Adam B The fooling AI faked light and shadow to suggest that thickness. It also drew subtle spiral noise onto the cat's short fur to make it look like the big curls that only long hair can form. Long curled hair, in turn, is very uncommon in cats, making us recognize this as a dog too.

Mohamed Qasem - 2018-04-05

I was surprised that you showed one example only. Were there other examples that fool human vision or was this the only one?

gix10000 - 2018-04-05

Yeah, that's disappointing, especially since I initially thought the left photo was an Alsatian. It looks horizontally squished at the outset.

Corlin Palmer - 2018-04-06

The paper has 2 more examples, but they're not very good. This cat/dog one is the only thing that tricked me. Considering it's just one image, I'm not convinced it isn't an unusually good outlier.

Neoshaman Fulgurant - 2018-04-06

The dress anyone?

imlatinoguy - 2018-05-28

There is also the cat and dog one.

kendokaaa - 2019-05-08

@Neoshaman Fulgurant The dress was simply people not understanding how (bad) lighting and a bad camera can affect an image.

SiaarZH - 2018-04-05

With 2x speed it's 1-Minute Papers.

Moby Motion - 2018-04-02

This is fascinating stuff, as usual. But it raises a question. This sort of attack seems to modify a greater number of pixels than previous methods - certainly more pixels than the one-pixel attacks in the last video! But surely, if an image is modified enough, you could argue that what you are looking at is no longer truly a cat?

Once something is sufficiently different from the first picture, you can say that you're looking at an entirely new image, and who is to say what the ground truth "class" of this new image is? For example, GANs that generate new celebrity faces do so (essentially) by applying transformations to noise. No one would argue that those images truly are noise, modified enough to fool us into thinking they are faces.

You could argue that after a certain point, the sort of networks described in these videos are generating a new image, of what it would have looked like if it had been a dog in the picture.

manaquri - 2018-04-16

One metric we could use is the sum of the absolute differences between the original and modified pixels, so a picture with a lot of pixels changed only slightly would register the same as a picture with a few pixels that changed a lot.
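
A minimal sketch of that metric, assuming the two images are same-sized numpy arrays (all the names here are made up for illustration):

```python
import numpy as np

def total_change(original, modified):
    """Sum of absolute per-pixel differences (the L1 distance).

    Many slightly changed pixels and a few heavily changed pixels
    can produce exactly the same score, as described above.
    """
    a = original.astype(np.float64)
    b = modified.astype(np.float64)
    return np.abs(a - b).sum()

# 100 pixels changed by 1 scores the same as 1 pixel changed by 100:
img = np.zeros((10, 10))
many_small = img.copy(); many_small.flat[:100] += 1.0
one_big = img.copy(); one_big[0, 0] += 100.0
assert total_change(img, many_small) == total_change(img, one_big)
```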

Moby Motion - 2018-04-16

@manaquri Interesting. How would you decide on the cutoff, i.e. between modification and new image?

manaquri - 2018-04-16

Use Amazon Mechanical Turk or a similar service to ask people that question, maybe?

Also, maybe the surrounding pixels should be taken into account, because a single pure white pixel in the middle of a dark area looks odd. So maybe add to the score a fraction of the difference between each pixel's change and its neighbors' changes.
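
A hedged sketch of that neighborhood idea: on top of the plain L1 score, penalize changes that stand out from their surroundings (the 0.5 weight is an arbitrary choice, not from the discussion):

```python
import numpy as np

def neighbor_aware_score(original, modified, weight=0.5):
    """L1 distance plus a penalty for isolated, high-contrast changes."""
    diff = np.abs(original.astype(float) - modified.astype(float))
    # Local contrast of the difference map: how much each pixel's
    # change differs from its right and lower neighbors. A lone
    # bright pixel in a dark region scores high here.
    contrast = (np.abs(np.diff(diff, axis=0)).sum()
                + np.abs(np.diff(diff, axis=1)).sum())
    return diff.sum() + weight * contrast
```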

manaquri - 2018-04-17

A small delta indeed doesn't prove that the image doesn't objectively look like something else, but minimizing it should make the AI change the picture as little as possible while still fooling people/NNs.

Ronin - 2018-05-23

Great initial point/question and subsequent replies, except for Max Loh, the grammar pedant.

Robert Szasz - 2018-04-06

Any image + carefully crafted "noise" = any other image. Isn't that a truism?

Oliver - 2018-04-30

The idea is to find the minimal modification that can make you see the image differently
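
"Minimal modification" is usually formalized as a norm-bounded perturbation. Below is a sketch of the classic fast gradient sign method (Goodfellow et al.), which bounds how much any single pixel may change. This is a standard illustration, not the exact procedure from this paper (which transfers attacks from an ensemble of models to humans), and the inputs are assumed:

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.01):
    """Fast gradient sign method: move every pixel by at most
    epsilon in the direction that increases the model's loss.

    Assumes `model` is a classifier, `image` a batched float
    tensor in [0, 1], and `label` a tensor of class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is bounded: no pixel moves more than epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```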

Peter Smythe - 2018-05-05

The idea is using minimal noise that "shouldn't" work.

Peter Smythe - 2018-05-05

I.e. the best example is one pixel attacks.
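
For reference, a one-pixel attack can be sketched as a black-box search over a single pixel's position and color, for example with SciPy's differential evolution (`model` here is a placeholder that returns class probabilities):

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_class):
    """Find one (x, y, r, g, b) tuple that minimizes the model's
    confidence in the true class. Black-box: uses only the model's
    output probabilities, no gradients."""
    h, w, _ = image.shape

    def confidence(params):
        x, y, r, g, b = params
        candidate = image.copy()
        candidate[int(y), int(x)] = (r, g, b)
        return model(candidate)[true_class]

    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(confidence, bounds, maxiter=75)
    return result.x  # best single-pixel change found
```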

Namidu Indunel - 2018-04-05

This doesn't look like anything to me.

Adrian - 2018-04-05

That's enough Namidu.

Diego Antonio Rosario Palomino - 2018-04-05

we will have to dissect you to know why

Peter Smythe - 2018-05-05

You need more training data.

LawrencelotNL - 2018-04-05

For some reason, I found this part of the paper very funny:
"To reduce subjects using strategies based on overall color or brightness distinctions between classes, we pre-filtered the dataset to remove images that showed an obvious effect of this nature. Most significantly, in the pets group we excluded images that included large green lawns or fields, since in almost all cases these were photographs of dogs."

Maj Smerkol - 2018-06-06

Choosing the right data is very important. I heard of a neural network trained to classify tanks as US or Russian. Because of the difference in climate, most of the images had different backgrounds, and it ended up classifying snowy images as Russian and green ones as US.

depi zixuri - 2018-04-05

I suspect that there really are ostriches there. But we humans aren't smart enough to see them.

Robert Ralph - 2018-04-06

Or rather, the algorithm we have simply cannot perceive them as such. But your joke is good too :D

Rabbit Piet - 2018-04-05

I remember when this was a joke in another video's comment section; I can't actually believe this is happening.

theshuman100 - 2018-04-06

@Rabbit Piet The ultimate disguise: a single pixel.

drdca - 2018-04-05

IMO, that cat (in the unedited photo) looks a little dog-ish. Still clearly a cat, but looks more like a dog than a typical cat does.
Or maybe it is because I have looked at the two images together for too long, and have come to associate the first one with dog-ish-ness as a result?
I suspect the noise could be trimmed away in some parts of the image to make it look less weird while retaining the "looks like a dog" property? For example, the colors on the wall.
One thing I notice is that it looks a bit like the whiskers are blending in with the patterns that are added to the body. Perhaps this results in making it look more like a dog? If so, suppressing the noise on the body might result in the illusion being less effective...

Troy Bradley - 2018-06-06

I agree. This is not nearly as significant as the previous video with the "One Pixel" attack. And there's no way this attack could make a bus look like an ostrich.

CGMan - 2020-01-29

I thought the first image was already the result. At first I saw it as a strange dog that used to be a cat.

Diego Antonio Rosario Palomino - 2018-04-05

I want a refund on my vision processing neural network.

George Gach - 2018-04-05

Unfortunately, Mother Nature Inc. has a monopoly in that area; you can't really get anything else. Maybe someday we will end her totalitarian control using an integrated silicon visual cortex, but that's a long way in the future.

London England - 2018-04-05

In the future, the evil AI robots will mask themselves as cute anime cats using this technique. The Resistance will not stand a chance.

Kobriks1 - 2018-04-05

How is this impressive, exactly? The images are very heavily modified. Calling it 'noise' doesn't change a thing.

Ron Netgrazer - 2018-04-05

You're right, in the sense that it is no more or less noise than the activation patterns used to fool the artificial vision systems. In fact, true noise has a stabilizing effect in many situations.

harsh nigam - 2018-04-05

I don't understand the novelty here. It has basically changed the nose shape to make it look like a dog, and the change is substantial enough to call it an entirely new image; it's like style transfer between cat and dog. It is not in any way fooling our vision, just changing the image enough to make it look like something else.

DevinDTV - 2018-04-06

Yeah, there's really no trickery here; they just photo-edited a cat to look like a dog using selective noise. Nothing special, really.

Nader S. - 2018-04-06

To me it looks like the same shape. I think it just has more details.

leftaroundabout - 2018-04-22

“it has basically changed the nose shape to make it look like a dog” – you'd think so. So did I.

But that's actually the scary thing, because a look at the diff between the two images shows that it has *not* changed the nose shape. The noise by itself looks almost completely random; the only things discernibly imprinted in it are the ears. It hasn't changed the shape of the nose, but it has tricked our visual perception into thinking the nose is bigger than it actually is.
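
That diff is easy to inspect yourself; here is a sketch using numpy and Pillow, with the file names as placeholders:

```python
import numpy as np
from PIL import Image

# Placeholder file names for the clean and adversarial images.
original = np.asarray(Image.open("cat.png"), dtype=np.int16)
adversarial = np.asarray(Image.open("cat_as_dog.png"), dtype=np.int16)

# Center the difference around mid-gray and amplify it so any
# structure in the perturbation becomes visible.
diff = adversarial - original
amplified = np.clip(128 + 8 * diff, 0, 255).astype(np.uint8)
Image.fromarray(amplified).show()
```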

leftaroundabout - 2018-04-23

I'm quite sure several artificial neural networks would not have been tricked by this. But that's not to say those AIs would be better at classifying dogs vs. cats than humans in general – they would probably be vulnerable to other adversarial attacks. It will always be possible to trick any inexact classifier (and the world is inexact) into preferring a wrong interpretation; that's not in itself surprising at all.

What I mean by scary is that this attack has managed to tweak the image in a way which doesn't ring alarm bells in a human: the tweak itself (the added noise) looks completely innocuous.

Peter Smythe - 2018-05-05

Sounds like the adversarial example worked but ok.

quadstuffed - 2018-04-05

I just wanted to say that's the most dog-looking cat I've seen in a while.

TacoDude314 - 2018-04-05

It's cool, but I'm disappointed that the paper only gave one convincing example.

Elorram Basdo - 2018-04-10

They took a picture, squashed it to half its width, then smudged the nose. This is more cheating than exploitation.

animowany111 - 2018-04-13

Well, actually, it didn't only change the nose; the nose is a really minor change that by itself wouldn't do anything.
The important part is changing the shape of the head. If you look between the ears in both images, you'll notice they recolored part of the cat's head to look like carpet, which makes the ears look longer and the head more dog-like.

Domain of Science - 2018-04-06

The cat image was compressed horizontally to make it thinner and more dog-like before the noise was applied.

MyLife - 2018-05-31

@Domain of Science Exactly; his reaction to the image on the right is a bit of an exaggeration. There are many, many optical and perceptual illusions that truly show you the flaws in your visual system and that truly surprise people. This isn't one of those.

sidneo14 - 2018-04-05

Adversarial attacks have been carried out on human minds for centuries by artists, nature, illusionists, street magicians, and war generals.

Tissuebox - 2018-04-14

Ya, but not by AI.

sidneo14 - 2018-04-14

I would love to see an "AI" not trained for the task adopt it as a strategy; for now this falls back into the above category of "fancy Photoshop". Impressive work nonetheless.

Kaida Tong - 2018-09-01

IIRC in Worm (web serial), Contessa was attacked by Mantellum and made some debris look like her corpse.

Simone - 2018-04-05

For those of us who know how human vision works, it's fairly easy to understand what's going on. The interesting part here is how an AI is slowly figuring out what works on us.

Sanford Keith - 2018-09-01

"The interesting part here is how an AI is slowly figuring out what works on us."
AI not working alone.

Rozae Pareza - 2018-04-06

Just wanted to draw attention to this mildly terrifying part of the paper:

"The development of machine learning models which can
generate fake images, audio, and video which appears real-
istic is already a source of acute concern (Farell & Perlstein,
2018). Adversarial examples provide one more way in
which machine learning might plausibly be used to subtly
manipulate humans. For instance, an ensemble of deep
models might be trained on human ratings of face trustwor-
thiness. It might then be possible to generate adversarial
perturbations which enhance or reduce human impressions
of trustworthiness, and those perturbed images might be
used in news reports or political advertising."

Marc B. Poblet - 2018-04-07

It's quite ironic that it's an image of a cat, an animal which has evolved to fool our neural network into feeding and pampering it as if it were a human child... maybe this research will be the key to finally freeing ourselves from our feline overlords!
Edit: Tried drawing noise on my cat with a marker so I wouldn't recognise it as a cat; didn't work, got scratched, will need more samples, some disinfectant, and a research grant.
And more cats.

Peter Smythe - 2018-05-05

@Marc B. Poblet Cats also evolved to kill pests for us, so we feed them.

Illuminated - 2018-04-06

In Soviet Russia, adversarial neural network trains YOU!

Colopty - 2018-04-05

Now we just need to make a building look like an ostrich.

nickt - 2018-04-06

But does this work at higher resolutions?

Seth Nuzum - 2018-04-05

Wow, fantastic channel my man! Love your highly-descriptive content

Sanjit Daniel - 2018-04-06

I am holding on to my papers. Great stuff, Károly, keep it coming!

Flo Wmo - 2018-05-15

Haha, dude, this is getting so out of hand. It's so crazy, oh my God. My brain just got hacked by an AI. Lol

jrcowboy1099 - 2018-04-05

You spoiled this for me in the comments of the last video, and I went and read the paper... no less awesome, though. As soon as I saw the catdog photo, I was hooked.

Tori Ko - 2018-04-05

Really amazing

Adrian - 2018-04-05

1:04 Megabat!

Felix Kütt - 2018-04-05

I see it. Huh.

Matthew Taylor - 2018-04-06

I'm a bit suspicious about this result. I could be wrong, but from what I can tell from the paper, their "noise model" is of the same dimension as the image, so it's not really noise. Rather, it is a vector which can move the original image in an arbitrary direction in high dimensional image space. No surprise then, that the adversarial "noise" moves the suspiciously squashed cat image in question towards the closest image in another class. I bet both of these images are support vectors, at the boundaries of their respective classes, so the intermediate image will inevitably be a nonlinear morph between them.
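
His point can be made concrete: a "noise" vector with the same dimensionality as the image can encode an arbitrary direction in image space, including straight toward a specific image of another class. A sketch of that degenerate case, with placeholder arrays:

```python
import numpy as np

def move_toward(source, target, step=0.2):
    """A full-dimensional 'perturbation' can simply be a scaled copy
    of (target - source): adding it morphs the source linearly toward
    the target image, with no adversarial cleverness required."""
    direction = target.astype(float) - source.astype(float)
    return np.clip(source + step * direction, 0, 255).astype(np.uint8)

# cat and dog would be same-shaped uint8 image arrays;
# move_toward(cat, dog, 0.2) is 20% of the way to the dog picture.
```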

Kim Schmider - 2018-05-21

No f-ing way! I was speculating whether this was possible, and I'm now just super happy that it is!

Danilo Margarido - 2018-04-06

But then again, it's a low-resolution image, which I always consider fishy.

Cody Battery - 2018-04-22

Could this technique be used to reverse engineer the human brain?

Jannik Heidemann - 2018-04-19

Step 1: Make this an Instagram filter.

Step 2: Name it "Doggy Style".

Emrah E - 2018-04-05

Yok artık Lebron James...

Unbalanced Binary Tree - 2018-04-05

?

Emrah E - 2018-04-05

This phrase was said many years ago during a basketball game, meaning: "You can't be serious, LeBron James?!" (probably after an amazing move by LeBron James).
It became a very popular slogan in Turkey, used when people feel quite surprised. There are also similar rhyming sayings, such as:
"Yok ebesinin amı Ali Sami" (a cruder rhyme on the name Ali Sami)
"Messi, bu adam neyin nessi?" (roughly: "Messi, what is this man made of?")

You can check the commercial where you can hear the same sentence (in Turkish, of course):
https://www.youtube.com/watch?v=6pLrfq8CHQI

Hope this makes things clear.

Yndostrui - 2018-04-13

I actually thought the picture on the left was a dog at first, so I'm not very surprised you can get fooled by simply modifying it a bit more.
The picture is very dog-like to begin with.

Invalid571 - 2018-06-03

Next level of camouflage, with AI.

floyd nelson - 2018-04-13

Cats already look similar to dogs. I'd like to see a more profound change in classification.

Luciano Dato - 2018-04-05

Wow just wow!

Felix Kütt - 2018-04-05

Wowee!

smuecke - 2018-04-06

I can imagine this has the potential to produce some creepy images, like a picture of a face modified just to the verge of being human = Uncanny Valley.

Erik S - 2018-04-05

what in the hell?!

Mr.sunflower - 2018-04-06

XD

DunnickFayuro - 2019-02-06

Make me mistake a truck for a cat and I will be impressed. Using a stretched picture of a cat that kind of already looks like a dog to fool me is too easy.

Ahri OTP - 2018-04-16

To be fair, the cat already looked a bit like a dog in the before picture.

Jan Kleine Deters - 2018-04-13

Feature skewing for human vision, a.k.a. makeup.

Anastasia Dunbar - 2018-04-06

Drug simulation.

Hagen von Eitzen - 2018-09-05

8O

Iaroslav Shcherbatyi - 2018-04-08

3 Minute 51 Seconds On Average Papers

SirCutRy - 2018-04-06

I don't see any practical application for this. Maybe when using AR you could do something, but that isn't something you would need.