MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Existing attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. Designing general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from
the fields of cybersecurity and multi-agent systems and propose to leverage the
concept of Moving Target Defense (MTD) in designing a meta-defense for
'boosting' the robustness of an ensemble of deep neural networks (DNNs) for
visual classification tasks against such adversarial attacks. To classify an
input image, a trained network is picked randomly from this set of networks by
formulating the interaction between a Defender (who hosts the classification
networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg
Game (BSG). We empirically show that this approach, MTDeep, reduces
misclassification on perturbed images in various datasets such as MNIST,
FashionMNIST, and ImageNet while maintaining high classification accuracy on
legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than these mechanisms can afford on their own. Lastly, to quantify the increase in robustness of an ensemble-based classification system when we use MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability.
Comment: Accepted to the Conference on Decision and Game Theory for Security (GameSec), 201
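To make the randomized selection concrete, below is a minimal Python sketch of MTDeep-style ensemble switching. It simplifies the Bayesian Stackelberg Game to its purely adversarial special case, where the defender's equilibrium strategy is the maximin mixture over networks, solvable as a linear program; the accuracy matrix and the `classify` helper are hypothetical, not the paper's setup.

```python
# Minimal sketch of MTDeep-style randomized network selection (assumed
# setup: purely malicious users, so the BSG reduces to maximin over nets).
import numpy as np
from scipy.optimize import linprog

# acc[i, j] = accuracy of network i against attack j (hypothetical numbers)
acc = np.array([
    [0.12, 0.85, 0.80],   # network A under attacks crafted for A, B, C
    [0.90, 0.08, 0.77],   # network B
    [0.82, 0.79, 0.10],   # network C
])
n_nets, n_attacks = acc.shape

# Maximize v subject to: for every attack j, sum_i p_i * acc[i, j] >= v,
# p >= 0, sum(p) = 1.  Variables are [p_1..p_n, v]; linprog minimizes -v.
c = np.concatenate([np.zeros(n_nets), [-1.0]])
A_ub = np.hstack([-acc.T, np.ones((n_attacks, 1))])   # v - p @ acc[:, j] <= 0
b_ub = np.zeros(n_attacks)
A_eq = np.concatenate([np.ones(n_nets), [0.0]])[None, :]
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_nets + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p = np.clip(res.x[:n_nets], 0, None)
p /= p.sum()                                          # defender's mixed strategy

def classify(x, networks, rng=np.random.default_rng(0)):
    """Sample a network per the equilibrium mixture, then predict on x."""
    return networks[rng.choice(len(networks), p=p)](x)
```

A highly differentially immune ensemble (no attack transfers well across networks) makes the maximin value high; identical networks collapse the mixture's benefit, which is the intuition behind the paper's differential immunity measure.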
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to
adversarial examples: perturbed inputs specifically designed to produce
intentional errors in the learning algorithms at test time. Existing
input-agnostic adversarial perturbations exhibit interesting visual patterns
that are currently unexplained. In this paper, we introduce a structured
approach for generating Universal Adversarial Perturbations (UAPs) with
procedural noise functions. Our approach unveils the systemic vulnerability of
popular DCN models like Inception v3 and YOLO v3, with single noise patterns
able to fool a model on up to 90% of the dataset. Procedural noise allows us to
generate a distribution of UAPs with high universal evasion rates using only a
few parameters. Additionally, we propose Bayesian optimization to efficiently
learn procedural noise parameters to construct inexpensive untargeted black-box
attacks. We demonstrate that it can achieve an average of less than 10 queries
per successful attack, a 100-fold improvement on existing methods. We further
motivate the use of input-agnostic defences to increase the stability of models
to adversarial perturbations. The universality of our attacks suggests that DCN
models may be sensitive to aggregations of low-level class-agnostic features.
These findings give insight on the nature of some universal adversarial
perturbations and how they could be generated in other applications.
Comment: 16 pages, 10 figures. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19)
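The attack needs only a low-dimensional noise parameterization and a query-efficient search over it. The hedged Python sketch below substitutes a single rotated sinusoid for the paper's Perlin/Gabor noise functions and plain random search for its Bayesian optimization; `model`, the (N, H, W, C) image layout, and the parameter ranges are illustrative assumptions.

```python
# Sketch of a procedural-noise UAP search (assumptions: model(images)
# returns predicted labels; pixel values lie in [0, 1]).
import numpy as np

def sine_noise(h, w, wavelength, angle):
    """Low-parameter procedural pattern: a sinusoid along a rotated axis."""
    y, x = np.mgrid[0:h, 0:w]
    proj = x * np.cos(angle) + y * np.sin(angle)
    return np.sin(2 * np.pi * proj / wavelength)       # values in [-1, 1]

def evasion_rate(model, images, labels, delta):
    preds = model(np.clip(images + delta, 0.0, 1.0))
    return np.mean(preds != labels)

def search_uap(model, images, labels, eps=8 / 255, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    best_delta, best_rate = None, -1.0
    for _ in range(trials):                 # random search over two parameters
        wavelength = rng.uniform(4, 64)
        angle = rng.uniform(0, np.pi)
        noise = sine_noise(*images.shape[1:3], wavelength, angle)
        delta = eps * noise[None, :, :, None]   # broadcast over batch/channels
        rate = evasion_rate(model, images, labels, delta)
        if rate > best_rate:
            best_delta, best_rate = delta, rate
    return best_delta, best_rate
```

Because the perturbation is input-agnostic, each candidate is scored on a whole validation batch at once, which is what keeps the per-attack query budget so small relative to per-image attacks.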
Defending Black-box Classifiers by Bayesian Boundary Correction
Classifiers based on deep neural networks have recently been challenged by adversarial attacks, and their widespread vulnerability has spurred research into defending them from potential threats. Given a vulnerable classifier, existing defense methods are mostly white-box and often require re-training the victim under modified loss functions or training regimes. However, the model/data/training specifics of the victim are usually unavailable to the user, and re-training is unappealing, if not impossible, for reasons such as limited computational resources. To this end, we propose a new black-box defense
framework. It can turn any pre-trained classifier into a resilient one with
little knowledge of the model specifics. This is achieved by a new joint Bayesian treatment of the clean data, the adversarial examples, and the classifier that maximizes their joint probability. It is further equipped with a new post-training strategy that keeps the victim intact. We name our framework
Bayesian Boundary Correction (BBC). BBC is a general and flexible framework
that can easily adapt to different data types. We instantiate BBC for image
classification and skeleton-based human activity recognition, for both static
and dynamic data. Exhaustive evaluation shows that, compared with existing defense methods, BBC achieves superior robustness without severely hurting clean accuracy.
Comment: arXiv admin note: text overlap with arXiv:2203.0471
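As a rough illustration of the post-training idea, the sketch below freezes a pre-trained victim, queries it as a black box, and fits a small correction head on clean and perturbed inputs under a joint cross-entropy objective. This only loosely approximates "maximize the joint probability of clean data, adversarial examples, and classifier" and is not the authors' formulation; the random-noise surrogate for adversarial examples and all names are assumptions.

```python
# Illustrative BBC-flavored post-training wrapper (assumed interface:
# victim(x) returns logits; a real run would craft x_adv with a black-box
# attack rather than signed random noise).
import torch
import torch.nn as nn

class CorrectedClassifier(nn.Module):
    def __init__(self, victim, num_classes):
        super().__init__()
        self.victim = victim                  # frozen; queried as a black box
        for param in self.victim.parameters():
            param.requires_grad_(False)
        self.head = nn.Linear(num_classes, num_classes)  # boundary correction

    def forward(self, x):
        with torch.no_grad():                 # victim stays intact
            logits = self.victim(x)
        return self.head(logits)

def post_train(model, loader, epochs=5, eps=8 / 255, lr=1e-3):
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            # Signed random noise as a cheap stand-in for adversarial examples.
            x_adv = (x + eps * torch.randn_like(x).sign()).clamp(0, 1)
            loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Only the lightweight head is updated, which matches the abstract's two constraints: the victim's weights are never modified, and nothing about its internals beyond output dimensionality is required.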