2,929 research outputs found
Lower bounds on the robustness to adversarial perturbations
The input-output mappings learned by state-of-the-art neural networks are significantly discontinuous. It is possible to cause a neural network used for image recognition to misclassify its input by applying very specific, hardly perceptible perturbations to the input, called adversarial perturbations. Many hypotheses have been proposed to explain the existence of these peculiar samples, and several methods have been suggested to mitigate them, but a proven explanation remains elusive. In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of perturbations necessary to change the classification of neural networks. The proposed bounds can be computed efficiently, requiring time at most linear in the number of parameters and hyperparameters of the model for any given sample. This makes them suitable for use in model selection, when one wishes to find out which of several proposed classifiers is most robust to adversarial perturbations. They may also be used as a basis for developing techniques to increase the robustness of classifiers, since they enjoy the theoretical guarantee that no adversarial perturbation could possibly be any smaller than the quantities provided by the bounds. We experimentally verify the bounds on the MNIST and CIFAR-10 data sets and find no violations. Additionally, the experimental results suggest that very small adversarial perturbations may occur with non-zero probability on natural samples.
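The abstract does not reproduce the bounds themselves. As a minimal sketch of the idea, assuming the special case of a binary linear classifier f(x) = w·x + b, the smallest perturbation that can change the classification of x is exactly |f(x)| / ||w||_2, computable in time linear in the number of parameters; the function and variable names below are illustrative, not the paper's.

```python
import numpy as np

def linear_robustness_lower_bound(w, b, x):
    """Smallest L2 perturbation that can flip the sign of f(x) = w.x + b.

    For a linear classifier this distance-to-boundary bound is exact and
    costs O(number of parameters) to evaluate; the paper derives analogous
    (but not identical) bounds for neural networks.
    """
    margin = abs(float(np.dot(w, x)) + b)
    return margin / np.linalg.norm(w)

# Hypothetical model-selection use: prefer the classifier whose certified
# lower bound on this sample is larger.
rng = np.random.default_rng(0)
x = rng.normal(size=784)                     # e.g. a flattened 28x28 image
w1, w2 = rng.normal(size=784), rng.normal(size=784)
print(linear_robustness_lower_bound(w1, 0.0, x),
      linear_robustness_lower_bound(w2, 0.0, x))
```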
Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
Deep Neural Network-based systems are now the state-of-the-art in many
robotics tasks, but their application in safety-critical domains remains
dangerous without formal guarantees on network robustness. Small perturbations
to sensor inputs (from noise or adversarial examples) are often enough to
change network-based decisions, which was recently shown to cause an autonomous
vehicle to swerve into another lane. In light of these dangers, numerous
algorithms have been developed as defensive mechanisms from these adversarial
inputs, some of which provide formal robustness guarantees or certificates.
This work leverages research on certified adversarial robustness to develop an
online certifiably robust defense for deep reinforcement learning algorithms. The
proposed defense computes guaranteed lower bounds on state-action values during
execution to identify and choose a robust action under a worst-case deviation
in input space due to possible adversaries or noise. Moreover, the resulting
policy comes with a certificate of solution quality, even though the true state
and optimal action are unknown to the certifier due to the perturbations. The
approach is demonstrated on a Deep Q-Network policy and is shown to increase
robustness to noise and adversaries in pedestrian collision avoidance scenarios
and a classic control task. This work extends one of our prior works with new
performance guarantees, extensions to other RL algorithms, expanded results
aggregated across more scenarios, an extension into scenarios with adversarial
behavior, comparisons with a more computationally expensive method, and
visualizations that provide intuition about the robustness algorithm.
Comment: arXiv admin note: text overlap with arXiv:1910.1290
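The abstract does not spell out the action-selection rule in code; the sketch below is a minimal illustration, assuming certified lower bounds on the state-action values are already available from some bound-propagation method (the function and array names are hypothetical).

```python
import numpy as np

def robust_action(q_lower_bounds):
    """Pick the action whose certified worst-case Q-value is largest.

    q_lower_bounds[a] is a guaranteed lower bound on Q(s, a) over all
    states inside the assumed perturbation set around the observed state.
    How those bounds are computed (e.g. by propagating input bounds
    through the Q-network) is not shown here; this only sketches the
    action-selection step performed at execution time.
    """
    return int(np.argmax(q_lower_bounds))

# Hypothetical example: nominal Q-values favour action 1, but under
# worst-case sensor noise action 0 has the better guaranteed value.
q_nominal      = np.array([1.0, 1.2])
q_lower_bounds = np.array([0.8, 0.3])
print(robust_action(q_lower_bounds))   # -> 0
```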
Analysis of classifiers' robustness to adversarial perturbations
The goal of this paper is to analyze an intriguing phenomenon recently
discovered in deep networks, namely their instability to adversarial
perturbations (Szegedy et al., 2014). We provide a theoretical framework for
analyzing the robustness of classifiers to adversarial perturbations, and show
fundamental upper bounds on the robustness of classifiers. Specifically, we
establish a general upper bound on the robustness of classifiers to adversarial
perturbations, and then illustrate the obtained upper bound on the families of
linear and quadratic classifiers. In both cases, our upper bound depends on a
distinguishability measure that captures the notion of difficulty of the
classification task. Our results for both classes imply that in tasks involving
small distinguishability, no classifier in the considered set will be robust to
adversarial perturbations, even if good accuracy is achieved. Our theoretical
framework moreover suggests that the phenomenon of adversarial instability is
due to the low flexibility of classifiers, compared to the difficulty of the
classification task (captured by the distinguishability). Moreover, we show the
existence of a clear distinction between the robustness of a classifier to
random noise and its robustness to adversarial perturbations. Specifically, the
former is shown to be larger than the latter by a factor that is proportional
to \sqrt{d} (with d being the signal dimension) for linear classifiers. This
result gives a theoretical explanation for the discrepancy between the two
robustness properties in high dimensional problems, which was empirically
observed in the context of neural networks. To the best of our knowledge, our
results provide the first theoretical work that addresses the phenomenon of
adversarial instability recently observed for deep networks. Our analysis is
complemented by experimental results on controlled and real-world data.
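The \sqrt{d} gap can be checked numerically in the linear case. The sketch below assumes random Gaussian weights and inputs (not the paper's experimental setup) and compares the distance to the decision boundary along the worst-case direction with the distance along random directions; the median ratio grows roughly in proportion to \sqrt{d}.

```python
import numpy as np

rng = np.random.default_rng(0)

def robustness_ratio(d, trials=1000):
    """Median ratio of random-noise robustness to adversarial robustness
    for a random linear classifier w.x + b in dimension d."""
    w = rng.normal(size=d)
    x = rng.normal(size=d)
    margin = abs(w @ x + 1.0)                 # |w.x + b| with b = 1
    adv = margin / np.linalg.norm(w)          # worst-case (adversarial) distance
    ratios = []
    for _ in range(trials):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                # random unit direction
        rand = margin / abs(w @ u)            # distance to boundary along u
        ratios.append(rand / adv)
    return float(np.median(ratios))

for d in (10, 100, 1000):
    print(d, robustness_ratio(d), np.sqrt(d))  # ratio scales roughly like sqrt(d)
```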
- …