
DARCCC: Detecting Adversaries by Reconstruction from Class Conditional Capsules

By Nicholas Frosst, Sara Sabour and Geoffrey Hinton

Abstract

We present a simple technique that allows capsule models to detect adversarial images. In addition to being trained to classify images, the capsule model is trained to reconstruct the images from the pose parameters and identity of the correct top-level capsule. Adversarial images do not look like a typical member of the predicted class and they have much larger reconstruction errors when the reconstruction is produced from the top-level capsule for that class. We show that setting a threshold on the $\ell_2$ distance between the input image and its reconstruction from the winning capsule is very effective at detecting adversarial images for three different datasets. The same technique works quite well for CNNs that have been trained to reconstruct the image from all or part of the last hidden layer before the softmax. We then explore a stronger, white-box attack that takes the reconstruction error into account. This attack is able to fool our detection technique, but in order to make the model change its prediction to another class, the attack must typically make the "adversarial" image resemble images of the other class.

Comment: To be presented at the NIPS 2018 Workshop on Security in Machine Learning
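The detection rule described in the abstract (threshold the $\ell_2$ distance between the input and its reconstruction from the winning capsule) can be sketched as follows. This is a minimal illustration, not the authors' released code: `capsule_model` and `decoder` are hypothetical interfaces assumed to return per-class capsule poses/activations and to reconstruct an image from a single capsule's pose.

```python
# Minimal sketch of the DARCCC-style detection rule, assuming a hypothetical
# capsule model that returns (poses, activations) and a decoder that
# reconstructs an image from the winning capsule's pose and class identity.
import torch

def reconstruction_error(x, capsule_model, decoder):
    """L2 distance between each input and its reconstruction from the
    highest-activation (winning) class capsule."""
    poses, activations = capsule_model(x)               # poses: [B, C, pose_dim], activations: [B, C]
    winner = activations.argmax(dim=1)                  # predicted class per example
    winning_pose = poses[torch.arange(x.size(0)), winner]
    recon = decoder(winning_pose, winner)               # reconstruct from winning capsule only
    return torch.norm((recon - x).flatten(1), dim=1)    # per-example L2 distance

def flag_adversarial(x, capsule_model, decoder, threshold):
    """Flag inputs whose reconstruction error exceeds a threshold tuned on clean validation data."""
    return reconstruction_error(x, capsule_model, decoder) > threshold
```

The threshold would be chosen on clean validation images so that typical in-class inputs fall below it, while adversarial inputs, whose reconstructions from the (incorrectly) winning capsule differ markedly from the input, exceed it.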

Topics: Computer Science - Machine Learning, Computer Science - Cryptography and Security, Computer Science - Computer Vision and Pattern Recognition, Statistics - Machine Learning
Year: 2018
OAI identifier: oai:arXiv.org:1811.06969
