Breaking neural networks with adversarial attacks - Towards Data Science


"First, as we saw above, it’s easy to attain high confidence in the incorrect classification of an adversarial example — recall that in the first ‘panda’ example we looked at, the network is less sure of an actual image looking like a panda (57.7%) than our adversarial example on the right looking like a gibbon (99.3%). Another intriguing point is how imperceptibly little noise we needed to add to fool the system — after all, clearly, the added noise is not enough to fool us, the humans."

From "Breaking neural networks with adversarial attacks - Towards Data Science".

