The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
We investigate conditions under which test statistics exist that can reliably
detect examples that have been adversarially manipulated in a white-box
attack. These statistics can be easily computed and calibrated by randomly
corrupting inputs. They exploit certain anomalies that adversarial attacks
introduce, in particular if they follow the paradigm of choosing perturbations
optimally under p-norm constraints. Access to the log-odds is the only
requirement to defend models. We justify our approach empirically, but also
provide conditions under which detectability via the suggested test statistics
is guaranteed. In our experiments, we show that it is even possible to correct
test-time predictions for adversarial attacks with high accuracy.
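The core idea of calibrating a statistic from randomly corrupted inputs and monitoring the induced log-odds shift can be illustrated with a minimal sketch; the toy linear model `f` and the helper `noise_shift_statistic` below are illustrative stand-ins under simplified assumptions, not the paper's implementation.

```python
import random

def log_odds(logits, y, z):
    # Log-odds between the predicted class y and a candidate class z.
    return logits[y] - logits[z]

def noise_shift_statistic(f, x, y, z, sigma=0.1, n=200, seed=0):
    """Average change in log-odds when the input is randomly corrupted.
    The observation exploited here: adversarial inputs sit at atypical
    points, so this shift behaves anomalously for them, while clean
    inputs of a (locally) linear model show a shift close to zero."""
    rng = random.Random(seed)
    base = log_odds(f(x), y, z)
    total = 0.0
    for _ in range(n):
        x_noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        total += log_odds(f(x_noisy), y, z) - base
    return total / n

# Toy two-class linear model (hypothetical, for illustration only).
W = [[1.0, -0.5], [-1.0, 0.5]]
def f(x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

stat = noise_shift_statistic(f, [0.2, 0.1], y=0, z=1)
```

A detector would flag inputs whose statistic deviates strongly from a calibration distribution gathered on held-out clean data, which is what makes the test easy to compute and calibrate.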
Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples
Many defenses have recently been proposed at venues like NIPS, ICML, ICLR and
CVPR. These defenses focus mainly on mitigating white-box attacks and do not
properly examine black-box attacks. In this paper, we expand upon the
analysis of these defenses to include adaptive black-box adversaries. Our
evaluation is done on nine defenses including Barrage of Random Transforms,
ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error
Correcting Codes, Distribution Classifier Defense, K-Winner Take All and Buffer
Zones. Our investigation is done using two black-box adversarial models and six
widely studied adversarial attacks for CIFAR-10 and Fashion-MNIST datasets. Our
analyses show that most recent defenses (7 out of 9) provide only marginal
improvements in security compared to undefended networks. For
every defense, we also show the relationship between the amount of data the
adversary has at their disposal, and the effectiveness of adaptive black-box
attacks. Overall, our results paint a clear picture: defenses need both
thorough white-box and black-box analyses to be considered secure. We provide
this large-scale study and analysis to motivate the field to move towards the
development of more robust black-box defenses.
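The adaptive black-box threat model studied above (an adversary that queries the defended model, trains a surrogate on the returned labels, and transfers attacks crafted on the surrogate) can be sketched as follows; the nearest-centroid surrogate and the hypothetical `victim` boundary are deliberate simplifications to keep the example self-contained.

```python
import random

def victim(x):
    # Hypothetical black-box victim: sign of a hidden linear boundary.
    return 1 if 2.0 * x[0] - 1.0 * x[1] > 0 else 0

def fit_surrogate(samples):
    """Nearest-centroid surrogate trained purely from victim label
    queries, standing in for the adversary's surrogate model training."""
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    counts = {0: 0, 1: 0}
    for x in samples:
        y = victim(x)  # one black-box query per sample
        counts[y] += 1
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
    return {c: [s / max(counts[c], 1) for s in sums[c]] for c in (0, 1)}

def transfer_attack(x, centroids, eps=0.5):
    """Perturb x toward the opposite-class centroid of the surrogate
    and hope the perturbation transfers to the black-box victim."""
    y = victim(x)
    target = centroids[1 - y]
    direction = [t - xi for t, xi in zip(target, x)]
    norm = max(sum(d * d for d in direction) ** 0.5, 1e-9)
    return [xi + eps * d / norm for xi, d in zip(x, direction)]

rng = random.Random(0)
data = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
centroids = fit_surrogate(data)   # more queried data -> better surrogate
x0 = [0.1, 0.05]
x_adv = transfer_attack(x0, centroids)
flipped = victim(x_adv) != victim(x0)
```

The size of `data` is where the data-dependence discussed in the abstract shows up: with fewer queries the surrogate's centroids are noisier and the transfer success rate of the adaptive attack drops.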
Masked Language Model Based Textual Adversarial Example Detection
Adversarial attacks are a serious threat to the reliable deployment of
machine learning models in safety-critical applications. They can misguide
current models to predict incorrectly by slightly modifying the inputs.
Recently, substantial work has shown that adversarial examples tend to deviate
from the underlying data manifold of normal examples, whereas pre-trained
masked language models can fit the manifold of normal NLP data. To explore how
to use the masked language model in adversarial detection, we propose a novel
textual adversarial example detection method, namely Masked Language
Model-based Detection (MLMD), which can produce clearly distinguishable signals
between normal examples and adversarial examples by exploring the changes in
manifolds induced by the masked language model. MLMD features plug-and-play
usage (i.e., no need to retrain the victim model) for adversarial defense and
is agnostic to classification tasks, victim model architectures, and
to-be-defended attack methods. We evaluate MLMD on various benchmark textual
datasets, widely studied machine learning models, and state-of-the-art (SOTA)
adversarial attacks, covering a broad range of settings. Experimental results show
that MLMD can achieve strong performance, with detection accuracy up to 0.984,
0.967, and 0.901 on AG-NEWS, IMDB, and SST-2 datasets, respectively.
Additionally, MLMD is superior, or at least comparable to, the SOTA detection
defenses in detection accuracy and F1 score. Among many defenses based on the
off-manifold assumption of adversarial examples, this work offers a new angle
for capturing the manifold change. The code for this work is openly accessible
at \url{https://github.com/mlmddetection/MLMDdetection}.
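The mask-reconstruct-compare signal behind MLMD can be sketched minimally as below; the frequency-based `mlm_fill` is a toy stand-in for a pre-trained masked language model, and the keyword-based `victim` classifier is hypothetical, so this illustrates the shape of the pipeline rather than the paper's system.

```python
from collections import Counter

CORPUS = ["the movie was good", "the film was great", "the plot was bad"]

def victim(text):
    # Hypothetical keyword sentiment classifier (1 = positive).
    return 1 if any(w in text.split() for w in ("good", "great")) else 0

def mlm_fill(tokens, i):
    """Toy stand-in for a masked language model: fill position i with
    the word most often observed at that position in a 'normal' corpus."""
    counts = Counter(s.split()[i] for s in CORPUS if len(s.split()) > i)
    return counts.most_common(1)[0][0]

def manifold_change_score(text):
    """Mask each token, reconstruct it, and measure how often the
    victim's prediction changes: the MLMD-style detection signal.
    Adversarial texts sit off the normal-data manifold, so projecting
    them back via the masked LM flips the prediction more often."""
    tokens = text.split()
    base = victim(text)
    changes = 0
    for i in range(len(tokens)):
        filled = tokens[:i] + [mlm_fill(tokens, i)] + tokens[i + 1:]
        if victim(" ".join(filled)) != base:
            changes += 1
    return changes / len(tokens)

clean_score = manifold_change_score("the movie was good")
adv_score = manifold_change_score("the movie was go0d")  # word substitution
```

A threshold on this score, calibrated on clean data, then separates normal from adversarial inputs, with no retraining of the victim model, mirroring the plug-and-play property claimed above.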
Bit Error Robustness for Energy-Efficient DNN Accelerators
Deep neural network (DNN) accelerators have received considerable attention in recent years due to the energy they save compared to mainstream hardware. Low-voltage operation of DNN accelerators allows energy consumption to be reduced further, but causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random bit errors in (quantized) DNN weights. This leads to high energy savings from both low-voltage operation and low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already a quite effective way to achieve robustness against bit errors. Moreover, we specifically discuss the involved trade-offs regarding accuracy, robustness, and precision: without losing more than 1% in accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30% are possible at the cost of 2.5% in accuracy, even for 4-bit DNNs.
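Random bit error injection into clipped, fixed-point-quantized weights can be sketched as below; the helper names and the error rate are illustrative choices, not the paper's RandBET training procedure, which additionally trains the network under such injected errors.

```python
import random

def quantize(w, w_max, bits=8):
    """Clip a weight to [-w_max, w_max], then map it to a fixed-point code."""
    levels = 2 ** bits - 1
    w = max(-w_max, min(w_max, w))            # weight clipping
    return round((w + w_max) / (2 * w_max) * levels)

def dequantize(q, w_max, bits=8):
    levels = 2 ** bits - 1
    return q / levels * (2 * w_max) - w_max

def inject_bit_errors(q, p, bits=8, rng=None):
    """Flip each stored bit independently with probability p, a crude
    model of low-voltage SRAM faults."""
    rng = rng or random.Random(0)
    for b in range(bits):
        if rng.random() < p:
            q ^= 1 << b
    return q

rng = random.Random(42)
weights = [rng.gauss(0.0, 0.3) for _ in range(1000)]
w_max = 0.5
codes = [quantize(w, w_max) for w in weights]
noisy = [dequantize(inject_bit_errors(q, p=0.01, rng=rng), w_max)
         for q in codes]
# Clipping bounds the damage: no faulty weight can leave [-w_max, w_max].
worst = max(abs(n) for n in noisy)
```

Because any corruption of the stored code moves the dequantized value by at most 2*w_max, aggressive clipping directly bounds the worst-case per-weight perturbation, which matches the abstract's point that weight clipping alone is already quite effective.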