607 research outputs found
Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers
In this paper, we present a black-box attack against API call based machine
learning malware classifiers, focusing on generating adversarial sequences
combining API calls and static features (e.g., printable strings) that will be
misclassified by the classifier without affecting the malware functionality. We
show that this attack is effective against many classifiers due to the
transferability principle between RNN variants, feed forward DNNs, and
traditional machine learning classifiers such as SVM. We also implement GADGET,
a software framework to convert any malware binary to a binary undetected by
malware classifiers, using the proposed attack, without access to the malware
source code.
Comment: Accepted as a conference paper at RAID 2018
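To make the transferability idea above concrete, here is a minimal, hypothetical sketch of a black-box evasion loop (it is not the GADGET framework): semantically inert API calls are greedily inserted into a malware call trace until a query-only classifier stops flagging it. The oracle, the candidate no-op calls, and the trace below are toy stand-ins invented for illustration; a real attack would rank insertion points with a local surrogate model rather than by exhaustive queries.

from typing import Callable, List

def greedy_api_insertion(
    sequence: List[str],
    noop_calls: List[str],
    is_flagged: Callable[[List[str]], bool],  # black-box oracle: True means "malware"
    max_rounds: int = 20,
) -> List[str]:
    """Insert no-op API calls until the oracle stops flagging the trace."""
    adv = list(sequence)
    for _ in range(max_rounds):
        if not is_flagged(adv):
            return adv  # evasion succeeded without touching the original calls
        # Try every candidate call at every position; keep the first insertion
        # that flips the oracle's decision.
        for call in noop_calls:
            for pos in range(len(adv) + 1):
                trial = adv[:pos] + [call] + adv[pos:]
                if not is_flagged(trial):
                    return trial
        adv.insert(1, noop_calls[0])  # no single insertion flipped the oracle: pad and retry
    return adv

# Toy oracle: flags a trace if WriteProcessMemory is immediately followed by
# CreateRemoteThread (a crude process-injection signature).
def toy_oracle(seq: List[str]) -> bool:
    return any(a == "WriteProcessMemory" and b == "CreateRemoteThread"
               for a, b in zip(seq, seq[1:]))

trace = ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread", "CloseHandle"]
print(greedy_api_insertion(trace, ["Sleep", "GetTickCount"], toy_oracle))

The key constraint the abstract refers to is that inserted calls must leave the program's behaviour unchanged, which is why only additions, never removals or reorderings of the original calls, are considered.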
Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection
Machine learning based solutions have been successfully employed for
automatic detection of malware in Android applications. However, machine
learning models are known to lack robustness against inputs crafted by an
adversary. So far, adversarial examples have only been able to deceive Android
malware detectors that rely on syntactic features, and the perturbations could
only be implemented by modifying the Android manifest. As recent Android
malware detectors rely more on semantic features extracted from Dalvik bytecode
than on the manifest, existing attack and defense methods are no longer
effective. In this paper, we introduce a new, highly effective attack that
generates adversarial examples of Android malware and evades detection by
current models. To
this end, we propose a method of applying optimal perturbations onto an Android
APK using a substitute model. Based on the transferability concept, the
perturbations that successfully deceive the substitute model are likely to
deceive the original models as well. We develop an automated tool that
generates the adversarial examples and applies the attacks without human
intervention. In
contrast to existing works, the adversarial examples crafted by our method can
also deceive recent machine learning based detectors that rely on semantic
features such as control-flow graphs. The perturbations can also be implemented
directly onto the APK's Dalvik bytecode rather than the Android manifest in
order to evade recent detectors. We evaluated the proposed manipulation methods
for crafting adversarial examples using the same datasets that Drebin and
MaMaDroid (5,879 malware samples) used. Our results show that the malware
detection rate decreased from 96% to 1% for MaMaDroid and from 97% to 1% for
Drebin, with only a small distortion introduced by our adversarial example
manipulation method.
Comment: 15 pages, 11 figures
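As a rough, self-contained illustration of the substitute-model recipe summarised above (it is not the paper's tool; the data, features, and stand-in detector are synthetic), the sketch below trains a local differentiable substitute on binary Drebin-style feature vectors and uses its weights to choose addition-only perturbations, the kind that can be realised by adding code to an APK without breaking its functionality.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 apps x 20 binary features, plus a hidden linear rule
# playing the role of the unseen target detector (label 1 = malware).
X = rng.integers(0, 2, size=(200, 20)).astype(float)
w_target = rng.normal(size=20)
target_detects = lambda x: float(x @ w_target > 0.0)
y = np.array([target_detects(row) for row in X])

# Local substitute model fitted to the target's observed decisions.
substitute = LogisticRegression(max_iter=1000).fit(X, y)

def craft_additive_example(x, budget=5):
    """Flip absent features to 1 (never remove anything, so the app keeps working),
    preferring features whose substitute weights most reduce the malware score."""
    adv = x.copy()
    weights = substitute.coef_[0]
    for idx in np.argsort(weights):           # most negative weights first
        if budget == 0:
            break
        if adv[idx] == 0 and weights[idx] < 0:
            adv[idx] = 1.0
            budget -= 1
    return adv

x_mal = X[y == 1][0]
x_adv = craft_additive_example(x_mal)
print("substitute malware score:", substitute.predict_proba([x_mal])[0, 1],
      "->", substitute.predict_proba([x_adv])[0, 1])
print("target detector verdict: ", target_detects(x_mal), "->", target_detects(x_adv))

Whether a perturbation crafted this way transfers depends on how well the substitute approximates the target; the detection-rate drops reported in the abstract correspond to perturbations that do transfer to the real MaMaDroid and Drebin models.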
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, has been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep learning
algorithms up to more recent work aimed at understanding the security properties
of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 2018
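As a concrete, generic example of the test-time perturbations discussed above (invented for illustration, not drawn from the paper), the snippet below applies the fast gradient sign method to a toy logistic model: the input is nudged in the direction that increases the loss, which is the canonical way such "wild patterns" are crafted.

import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=8), 0.1                  # toy classifier: p = sigmoid(w.x + b)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=8)                          # a legitimate input
y = 1.0                                         # its true label
p = sigmoid(w @ x + b)
grad_x = (p - y) * w                            # gradient of the logistic loss w.r.t. x
x_adv = x + 0.25 * np.sign(grad_x)              # FGSM: a small signed step up the loss

print("score on clean input:    ", sigmoid(w @ x + b))
print("score on perturbed input:", sigmoid(w @ x_adv + b))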
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses
It has been shown that adversaries can craft inputs to neural networks that are
similar to legitimate inputs but are purposely designed to cause the network to
misclassify them. These adversarial
examples are crafted, for example, by calculating gradients of a carefully
defined loss function with respect to the input. As a countermeasure, some
researchers have tried to design robust models by blocking or obfuscating
gradients, even in white-box settings. Another line of research proposes
introducing a separate detector to attempt to detect adversarial examples. This
approach also makes use of gradient obfuscation techniques, for example, to
prevent the adversary from trying to fool the detector. In this paper, we
introduce stochastic substitute training, a gray-box approach that can craft
adversarial examples for defenses that obfuscate gradients. Against defenses
that try to make models more robust, our technique lets an adversary craft
adversarial examples with no knowledge of the defense. Against defenses that
attempt to detect adversarial examples, an adversary needs only very limited
information about the defense to craft adversarial examples. We demonstrate our
technique by applying it against two
defenses which make models more robust and two defenses which detect
adversarial examples.
Comment: Accepted by AISec '18: 11th ACM Workshop on Artificial Intelligence
and Security. Source code at https://github.com/S-Mohammad-Hashemi/SS
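The sketch below is a loose rendering of the gray-box recipe the abstract describes, not the authors' exact procedure or their released code: query a gradient-obfuscated target on noise-perturbed inputs, fit a smooth substitute to its answers, and craft adversarial examples from the substitute's gradients in the hope that they transfer. The stand-in defended model, the data, and all step sizes are made up for illustration; the intuition is that labelling noisy queries smooths the obfuscated decision surface, so the substitute's gradients stay informative.

import numpy as np

rng = np.random.default_rng(2)

def defended_target(x):
    """Stand-in defended model: hard rounding makes its gradients useless."""
    return (np.round(x, 1) @ np.ones(x.shape[-1]) > 0.0).astype(float)

# 1. Stochastic queries: label noisy copies of the data with the target model.
X = rng.normal(size=(500, 4))
X_noisy = X + 0.3 * rng.normal(size=X.shape)
y = defended_target(X_noisy)

# 2. Fit a differentiable substitute (logistic regression by gradient descent).
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = sigmoid(X_noisy @ w + b)
    w -= 0.5 * X_noisy.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

# 3. FGSM on the substitute, then check whether the example transfers.
x = X[defended_target(X) == 1.0][0]            # a sample the target classifies as 1
grad_x = (sigmoid(x @ w + b) - 1.0) * w        # loss gradient at label 1
x_adv = x + 0.6 * np.sign(grad_x)              # whether it flips depends on x's margin
print("target on clean input:      ", defended_target(x[None])[0])
print("target on adversarial input:", defended_target(x_adv[None])[0])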