457 research outputs found
Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers
In this paper, we present a black-box attack against API call based machine
learning malware classifiers, focusing on generating adversarial sequences
combining API calls and static features (e.g., printable strings) that will be
misclassified by the classifier without affecting the malware functionality. We
show that this attack is effective against many classifiers due to the
transferability principle between RNN variants, feed-forward DNNs, and
traditional machine learning classifiers such as SVM. We also implement GADGET,
a software framework to convert any malware binary to a binary undetected by
malware classifiers, using the proposed attack, without access to the malware
source code.
Comment: Accepted as a conference paper at RAID 201
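The core constraint in this attack is that inserted API calls must not change the malware's behavior. The following toy sketch (all names and the scoring rule are illustrative assumptions, not the paper's GADGET implementation) shows the black-box evasion loop: semantic no-op calls are inserted into the sequence until a stand-in classifier's decision flips, while the original call order is preserved.

```python
import random

# Toy stand-in for an API-call-based malware classifier: it flags a
# sequence when the fraction of "suspicious" calls exceeds a threshold.
SUSPICIOUS = {"VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"}

def toy_score(seq):
    return sum(call in SUSPICIOUS for call in seq) / len(seq)

def is_malicious(seq, threshold=0.5):
    return toy_score(seq) >= threshold

def evade(seq, noop_calls, threshold=0.5, max_inserts=100):
    """Insert semantic no-op API calls at random positions until the
    black-box decision flips. Only insertions are allowed (never
    deletions or reorderings), mirroring the requirement that the
    adversarial sequence keeps the malware's functionality intact."""
    adv = list(seq)
    for _ in range(max_inserts):
        if not is_malicious(adv, threshold):
            break
        pos = random.randrange(len(adv) + 1)
        adv.insert(pos, random.choice(noop_calls))
    return adv

malware = ["VirtualAllocEx", "WriteProcessMemory", "CreateRemoteThread"]
adversarial = evade(malware, noop_calls=["GetTickCount", "Sleep"])
```

A real attack would query a surrogate model trained on the attacker's own data and rely on transferability, as the paper does; this sketch only captures the insertion-based search.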
IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection
As an important security tool, the intrusion detection system is responsible
for defending networks against attacks carried out by malicious traffic.
Machine learning algorithms have accelerated the development of intrusion
detection systems, but the robustness of such systems is questionable when
they face adversarial attacks. To improve detection systems, more potential
attack approaches should be studied. In this paper, IDSGAN, a framework based
on generative adversarial networks, is proposed to generate adversarial
attacks that can deceive and evade an intrusion detection system. Because the
internal structure of the detection system is unknown to attackers, the
adversarial examples perform black-box attacks against it. IDSGAN uses a
generator to transform original malicious traffic into adversarial malicious
traffic, while a discriminator classifies traffic examples and simulates the
black-box detection system. More significantly, only the nonfunctional
features of the attacks are modified, which guarantees the validity of the
intrusion. On the NSL-KDD dataset, the model is shown to be effective at
attacking many detection systems across different attack categories.
Moreover, the robustness of IDSGAN is verified by varying the number of
unmodified features.
Comment: 8 pages, 5 figures
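The key constraint IDSGAN enforces is that only nonfunctional features may change, so the perturbed record still represents a valid intrusion. The sketch below illustrates that masking idea in isolation (the feature names, the stand-in detector, and the random search are illustrative assumptions; IDSGAN itself uses a GAN generator rather than random perturbation).

```python
import random

# NSL-KDD-style record: functional features define the attack's semantics
# and must stay fixed; nonfunctional numeric features are free to perturb.
FUNCTIONAL = {"protocol_type", "service", "flag"}

def toy_detector(record):
    # Stand-in black-box IDS: flags traffic with high error-rate statistics.
    return record["serror_rate"] > 0.5 or record["rerror_rate"] > 0.5

def perturb(record, steps=50):
    """Randomly nudge only nonfunctional features until the black-box
    detector is evaded. Functional features are never touched, so the
    record remains a valid attack (the masking constraint of IDSGAN)."""
    adv = dict(record)
    mutable = [k for k in adv
               if k not in FUNCTIONAL and isinstance(adv[k], float)]
    for _ in range(steps):
        if not toy_detector(adv):
            break
        key = random.choice(mutable)
        adv[key] = max(0.0, adv[key] - random.uniform(0.0, 0.3))
    return adv

attack = {"protocol_type": "tcp", "service": "http", "flag": "S0",
          "serror_rate": 0.9, "rerror_rate": 0.8}
adversarial = perturb(attack)
```

In the paper, the generator learns this transformation end to end and the discriminator imitates the black-box IDS; the invariant shown here (functional features untouched) is what both share.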
Artificial intelligence in the cyber domain: Offense and defense
Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and help adversaries improve methods of attack. However, malicious actors are aware of the new prospects too and will probably attempt to use them for nefarious purposes. This survey paper aims at providing an overview of how artificial intelligence can be used in the context of cybersecurity in both offense and defense.
Adversarial Attacks on Remote User Authentication Using Behavioural Mouse Dynamics
Mouse dynamics is a potential means of authenticating users. Typically, the
authentication process is based on classical machine learning techniques, but
recently, deep learning techniques have been introduced for this purpose.
Although prior research has demonstrated how machine learning and deep learning
algorithms can be bypassed by carefully crafted adversarial samples, there has
been very little research performed on the topic of behavioural biometrics in
the adversarial domain. In an attempt to address this gap, we built a set of
attacks, which are applications of several generative approaches, to construct
adversarial mouse trajectories that bypass authentication models. These
generated mouse sequences will serve as the adversarial samples in the context
of our experiments. We also present an analysis of the attack approaches we
explored, explaining their limitations. In contrast to previous work, we
consider the attacks in a more realistic and challenging setting in which an
attacker has access to recorded user data but does not have access to the
authentication model or its outputs. We explore three different attack
strategies: 1) statistics-based, 2) imitation-based, and 3) surrogate-based; we
show that they are able to evade the functionality of the authentication
models, thereby impacting their robustness adversely. We show that
imitation-based attacks often perform better than surrogate-based attacks,
unless the attacker can guess the architecture of the authentication model.
For such cases, we propose a potential detection mechanism against
surrogate-based attacks.
Comment: Accepted in 2019 International Joint Conference on Neural Networks
(IJCNN). Update of DO
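The statistics-based strategy matches the paper's threat model most directly: the attacker has recorded user data but no access to the authentication model. A minimal sketch of that idea (function names and the displacement-statistics model are illustrative assumptions, not the paper's implementation) is to estimate the victim's per-step displacement statistics and sample a forged trajectory from them.

```python
import random
import statistics

def fit_stats(trajectories):
    """Estimate per-step displacement statistics (mean and standard
    deviation of dx and dy) from recorded user mouse trajectories,
    the attacker's only resource in this threat model."""
    dx = [b[0] - a[0] for t in trajectories for a, b in zip(t, t[1:])]
    dy = [b[1] - a[1] for t in trajectories for a, b in zip(t, t[1:])]
    return (statistics.mean(dx), statistics.stdev(dx),
            statistics.mean(dy), statistics.stdev(dy))

def forge_trajectory(stats, start=(0.0, 0.0), length=50):
    """Sample a synthetic mouse trajectory whose step-displacement
    distribution matches the victim's recorded statistics."""
    mx, sx, my, sy = stats
    x, y = start
    path = [(x, y)]
    for _ in range(length - 1):
        x += random.gauss(mx, sx)
        y += random.gauss(my, sy)
        path.append((x, y))
    return path
```

The imitation-based and surrogate-based strategies compared in the paper replace this simple generative model with a learned imitator or a trained surrogate of the authentication model, respectively.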