183 research outputs found
A Broad Spectrum Defense Against Adversarial Examples
Machine learning models are increasingly employed in making critical decisions across a wide array of applications. As our dependence on these models increases, it is vital to recognize their vulnerability to malicious attacks from determined adversaries. In response to these adversarial attacks, new defensive mechanisms have been developed to ensure the security of machine learning models and the accuracy of the decisions they make. However, many of these mechanisms are reactionary, designed to defend specific models against a specific, known attack or family of attacks. This reactionary approach does not generalize to future, yet-to-be-developed attacks. In this work, we developed Broad Spectrum Defense (BSD) as a defensive mechanism to secure any model against a wide range of attacks. BSD is not reactionary, and unlike most other approaches, it does not train its detectors on adversarial data, thereby removing an inherent bias present in other defenses that rely on access to adversarial data. An extensive set of experiments showed that BSD outperforms existing detector-based methods such as MagNet and Feature Squeezing. We believe BSD will inspire a new direction in adversarial machine learning toward a robust defense capable of generalizing to existing and future attacks.
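The abstract does not describe BSD's internals, but its central property, detectors trained only on clean data, can be illustrated with a generic anomaly-detection sketch. The function name fit_clean_detector and the PCA reconstruction-error approach below are illustrative assumptions, not BSD's actual construction: the detector fits a principal subspace to clean inputs and flags anything that reconstructs poorly, without ever touching adversarial examples.

```python
import numpy as np

def fit_clean_detector(clean_x, n_components=32, quantile=0.99):
    """Fit a reconstruction-error detector using ONLY clean inputs.

    Hypothetical sketch: PCA stands in for whatever detector BSD
    actually uses; the point is that no adversarial data is needed.
    """
    X = clean_x.reshape(len(clean_x), -1).astype(np.float64)
    mu = X.mean(axis=0)
    # Principal subspace of the clean data.
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    basis = vt[:n_components]

    def recon_error(x):
        x_flat = x.reshape(len(x), -1)
        z = (x_flat - mu) @ basis.T
        x_hat = z @ basis + mu
        return np.linalg.norm(x_flat - x_hat, axis=1)

    # Threshold chosen from clean data alone (e.g., 99th percentile).
    tau = np.quantile(recon_error(X), quantile)
    return lambda x: recon_error(x) > tau  # True = flag as suspicious

# Usage: detector = fit_clean_detector(train_images)
#        flags = detector(test_images)
```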
DeepSearch: A Simple and Effective Blackbox Attack for Deep Neural Networks
Although deep neural networks have been very successful in image-classification tasks, they are prone to adversarial attacks. A wide variety of techniques for generating adversarial inputs has emerged, such as black- and whitebox attacks on neural networks. In this paper, we present DeepSearch, a novel fuzzing-based, query-efficient, blackbox attack for image classifiers. Despite its simplicity, DeepSearch is shown to be more effective in finding adversarial inputs than state-of-the-art blackbox approaches, and it additionally generates the most subtle adversarial inputs in comparison to these approaches.
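The abstract does not spell out DeepSearch's search strategy, so the sketch below shows only the general shape of a score-based, query-limited blackbox fuzzing attack: start from a corner of the L_inf ball and greedily keep coordinate flips that lower the model's confidence in the true class. The model interface (a function returning class probabilities for a batch) and the name fuzz_attack are assumptions for illustration, not DeepSearch's actual algorithm.

```python
import numpy as np

def fuzz_attack(model, x, true_label, eps=0.03, queries=1000, rng=None):
    """Toy score-based blackbox attack via random coordinate flips.

    Illustrative only: `model` is assumed to map a batch of inputs in
    [0, 1] to class probabilities; each query mutates one coordinate of
    an L_inf perturbation and keeps it if the true-class score drops.
    """
    rng = rng or np.random.default_rng(0)
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)  # start at a corner
    best = model(np.clip(x + delta, 0, 1)[None])[0, true_label]
    for _ in range(queries):
        idx = rng.integers(delta.size)        # pick one coordinate
        old = delta.flat[idx]
        delta.flat[idx] = -old                # flip its sign
        score = model(np.clip(x + delta, 0, 1)[None])[0, true_label]
        if score < best:
            best = score                      # keep the mutation
        else:
            delta.flat[idx] = old             # revert
    return np.clip(x + delta, 0, 1)
```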
A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories
Deep neural networks are vulnerable to adversarial attacks.
Comment: Accepted by AAAI 2020
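The abstract gives no algorithmic detail, so as a rough illustration of attacks that reuse gradient history, the sketch below implements the standard momentum iterative method (MI-FGSM), in which a decaying buffer of past gradients steers each step. This is a well-known baseline, not the paper's ensemble mechanism; grad_fn and all parameter names are assumptions.

```python
import numpy as np

def momentum_attack(grad_fn, x, eps=0.03, steps=10, mu=1.0):
    """Iterative L_inf attack with a momentum buffer over past gradients
    (MI-FGSM style). `grad_fn(x)` is assumed to return the loss gradient
    with respect to x for the target model.
    """
    alpha = eps / steps
    g = np.zeros_like(x)                  # "memory" of past gradients
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # Accumulate the L1-normalized gradient into the momentum term.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)  # stay in the valid pixel range
    return x_adv
```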