“First, as we saw above, it’s easy to attain high confidence in the incorrect classification of an adversarial example: recall that in the first ‘panda’ example we looked at, the network is less confident that the actual image is a panda (57.7%) than that our adversarial example on the right is a gibbon (99.3%). Another intriguing point is how little noise we needed to add to fool the system; after all, the added noise is clearly not enough to fool us humans.”
From "Breaking neural networks with adversarial attacks - Towards Data Science".
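The panda/gibbon figure referenced in that passage is the classic fast gradient sign method (FGSM) illustration: take the sign of the loss gradient with respect to the input pixels and nudge every pixel by a tiny epsilon in that direction. Below is a minimal sketch of the idea, not the article's code; the tiny untrained classifier, the random "image", and the epsilon value are placeholder assumptions chosen only to keep the snippet self-contained and runnable.

```python
# Minimal FGSM sketch (assumed PyTorch setup, not the article's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.007):
    """Perturb x by epsilon * sign(gradient of the loss w.r.t. the pixels)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move every pixel a tiny step in the direction that increases the loss
    # for the true label, then clip back to the valid image range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder model and input; with a pretrained ImageNet network and a real
# panda photo, this kind of step produces the panda-to-gibbon flip quoted above.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))
x = torch.rand(1, 3, 224, 224)   # stand-in for the panda image
label = torch.tensor([388])      # 388 = "giant panda" in the standard ImageNet labels

x_adv = fgsm_attack(model, x, label)
print("max pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

Even with an epsilon this small, the perturbed image is typically indistinguishable from the original to a human, which is exactly the point the quoted passage is making.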