Disentangling Adversarial Robustness and Generalization
Obtaining deep networks that are robust against adversarial examples and
generalize well is an open problem. A recent hypothesis even states that both
robust and accurate models are impossible, i.e., adversarial robustness and
generalization are conflicting goals. In an effort to clarify the relationship
between robustness and generalization, we assume an underlying, low-dimensional
data manifold and show that: 1. regular adversarial examples leave the
manifold; 2. adversarial examples constrained to the manifold, i.e.,
on-manifold adversarial examples, exist; 3. on-manifold adversarial examples
are generalization errors, and on-manifold adversarial training boosts
generalization; 4. regular robustness and generalization are not necessarily
contradicting goals. Together, these results imply that both robust and accurate
models are possible. However, different models (architectures, training
strategies etc.) can exhibit different robustness and generalization
characteristics. To confirm our claims, we present extensive experiments on
synthetic data (with known manifold) as well as on EMNIST, Fashion-MNIST and
CelebA.
Comment: Conference on Computer Vision and Pattern Recognition 201
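The on-manifold adversarial examples described above can be illustrated with a minimal numpy sketch. Here a linear map D(z) = A @ z stands in for a learned generative model of the data manifold, and a linear classifier stands in for the network; all names (A, w, b, z, eps) are illustrative assumptions, not from the paper. Perturbing the latent code z and decoding keeps the adversarial point on the manifold:

```python
import numpy as np

# Hedged sketch of an on-manifold adversarial example. A linear "decoder"
# D(z) = A @ z stands in for a learned generative model; a logistic unit
# stands in for the classifier. All names are illustrative assumptions.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 2))        # decoder: 2-d manifold embedded in 8-d input space
w = rng.normal(size=8)             # toy linear classifier weights
b = 0.0
z = rng.normal(size=2)             # latent code of a clean example
y = 1.0                            # true label

# Gradient of the binary cross-entropy loss wrt the *latent* code z,
# obtained by the chain rule through the decoder D.
p = sigmoid(w @ (A @ z) + b)
grad_z = (p - y) * (A.T @ w)

# Step in latent space, then decode: the perturbed point stays on the manifold.
eps = 0.1
z_adv = z + eps * np.sign(grad_z)
x_adv = A @ z_adv
```

Because the attack moves only along latent directions, the resulting x_adv lies on the modeled data manifold, which is why such examples behave like ordinary generalization errors rather than off-manifold artifacts.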
Efficient Two-Step Adversarial Defense for Deep Neural Networks
In recent years, deep neural networks have demonstrated outstanding
performance in many machine learning tasks. However, researchers have
discovered that these state-of-the-art models are vulnerable to adversarial
examples: legitimate examples altered by small perturbations that are
imperceptible to the human eye. Adversarial training, which augments the training
data with adversarial examples during the training process, is a well known
defense to improve the robustness of the model against adversarial attacks.
However, this robustness is effective only against the same attack method used
for adversarial training. Madry et al. (2017) suggest that iterative multi-step
adversarial attacks, and in particular projected gradient descent (PGD), may be
considered the universal first-order adversary, and that adversarial training
with PGD therefore confers resistance against many other first-order attacks.
However, the computational cost of adversarial
training with PGD and other multi-step adversarial examples is much higher than
that of the adversarial training with other simpler attack techniques. In this
paper, we show how strong adversarial examples can be generated only at a cost
similar to that of two runs of the fast gradient sign method (FGSM), allowing
defense against adversarial attacks with a robustness level comparable to that
of the adversarial training with multi-step adversarial examples. We
empirically demonstrate the effectiveness of the proposed two-step defense
approach against different attack methods and its improvements over existing
defense strategies.
Comment: 12 pages
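The cost argument above can be made concrete with a minimal numpy sketch: FGSM needs one input-gradient computation, so a two-step attack needs roughly two. The abstract does not specify the paper's exact two-step scheme, so this shows only a generic two-iteration FGSM on a toy logistic "network"; all names (w, b, eps, fgsm_step, two_step_attack) are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: FGSM and a generic two-step variant on a toy logistic unit.
# This is NOT the paper's defense, only an illustration of the cost claim:
# two gradient computations instead of one. All names are illustrative.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def input_grad(x, y, w, b):
    """Gradient of the binary cross-entropy loss wrt the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm_step(x, y, w, b, eps):
    """One FGSM step: move eps along the sign of the input gradient."""
    return x + eps * np.sign(input_grad(x, y, w, b))

def two_step_attack(x, y, w, b, eps):
    """Two FGSM steps of eps/2 each -- roughly twice the cost of one FGSM run."""
    x1 = fgsm_step(x, y, w, b, eps / 2)
    return fgsm_step(x1, y, w, b, eps / 2)

# Toy usage
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0
x_adv = two_step_attack(x, y, w, b, eps=0.2)
```

Recomputing the gradient at the intermediate point is what distinguishes a multi-step attack from a single FGSM run, at the stated cost of one extra gradient evaluation.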