2 research outputs found
NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion
With recent developments in artificial intelligence and machine learning,
anomalies in network traffic can be detected using machine learning approaches.
Before the rise of machine learning, network anomalies, which could indicate an
attack, were detected using well-crafted rules. An attacker with knowledge of
cyber-defence could make educated guesses to predict, sometimes accurately,
which particular features of network traffic data the cyber-defence mechanism
examines, and with this information circumvent a rule-based cyber-defence
system. After the advance of machine learning for network anomaly detection,
however, it is no longer easy for a human to work out how to bypass a
cyber-defence system. Recently, adversarial attacks have become an increasingly
common way to defeat machine learning algorithms. In this paper, we show that
even if we build a classifier and train it with adversarial examples for
network data, we can still use adversarial attacks to successfully break the
system. We propose a Generative Adversarial Network (GAN) based algorithm to
generate data to train an efficient neural-network-based classifier, and we
subsequently break the system using adversarial attacks.
Comment: 6 pages, 2 figures. 6th IEEE International Conference on Big Data Security on Cloud (BigDataSecurity 2020)
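The kind of gradient-based adversarial attack described above can be sketched with FGSM (the fast gradient sign method) against a toy detector. This is a minimal illustration, not the paper's method: the weights, feature values, and the use of plain logistic regression in place of the GAN-trained neural classifier are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM step: move x by eps in the sign of the loss gradient w.r.t. x.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (sigmoid(w.x + b) - y_true) * w.
    """
    grad = (sigmoid(w @ x + b) - y_true) * w
    return x + eps * np.sign(grad)

# Toy "malicious traffic" feature vector and detector weights (assumed values).
w = np.array([1.5, -0.8, 2.0])
b = -0.5
x = np.array([0.9, 0.1, 0.8])  # detector scores this as malicious (label 1)

score_before = sigmoid(w @ x + b)          # above 0.5: flagged
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.6)
score_after = sigmoid(w @ x_adv + b)       # pushed below the 0.5 threshold
```

The same one-step perturbation works against the toy model even after it has seen adversarial examples during training, which is the core observation the abstract makes for its GAN-trained classifier.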
Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors
Machine learning (ML), and especially deep learning (DL), techniques have been
increasingly used in anomaly-based network intrusion detection systems (NIDS).
However, ML/DL has been shown to be extremely vulnerable to adversarial
attacks, especially in such security-sensitive systems. Many adversarial
attacks have been proposed to evaluate the robustness of ML-based NIDSs.
Unfortunately, existing attacks have mostly focused on feature-space and/or
white-box attacks, which make impractical assumptions in real-world scenarios,
leaving the study of practical gray/black-box attacks largely unexplored.
To bridge this gap, we conduct the first systematic study of the
gray/black-box traffic-space adversarial attacks to evaluate the robustness of
ML-based NIDSs. Our work improves on previous attacks in the following aspects:
(i) practical: the proposed attack can automatically mutate original traffic
with extremely limited knowledge and affordable overhead while preserving its
functionality; (ii) generic: the proposed attack is effective for evaluating
the robustness of various NIDSs using diverse ML/DL models and
non-payload-based features; (iii) explainable: we propose a method to explain
the fragile robustness of ML-based NIDSs. Based on this, we also propose a defense scheme
against adversarial attacks to improve system robustness. We extensively
evaluate the robustness of various NIDSs using diverse feature sets and ML/DL
models. Experimental results show that our attack is effective (e.g., >97%
evasion rate in half of the cases for Kitsune, a state-of-the-art NIDS) with
affordable execution cost, and that the proposed defense method can effectively
mitigate such attacks (the evasion rate is reduced by >50% in most cases).
Comment: This article has been accepted for publication by IEEE JSA
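The gray/black-box, traffic-space setting the abstract describes can be sketched as a query-based evasion loop: repeatedly mutate non-payload traffic features and keep mutations that lower the detector's anomaly score. Everything here is a stand-in assumption for illustration: the linear `detector_score` replaces a real NIDS oracle such as Kitsune, and the feature names and mutation operator are hypothetical.

```python
import random

def detector_score(features):
    # Stand-in anomaly score; a real NIDS would be queried as an opaque oracle.
    return 0.6 * features["pkt_rate"] + 0.4 * features["avg_pkt_size"]

def mutate(features, rng):
    # Mutate only traffic-shape features (rate, size padding), leaving the
    # payload, and hence the attack's functionality, untouched.
    mutated = dict(features)
    key = rng.choice(["pkt_rate", "avg_pkt_size"])
    mutated[key] = max(0.0, mutated[key] + rng.uniform(-0.1, 0.1))
    return mutated

def evade(features, threshold=0.5, budget=200, seed=0):
    """Random-search evasion: accept mutations that lower the anomaly score,
    stopping once the score falls below the detection threshold."""
    rng = random.Random(seed)
    best = features
    for _ in range(budget):
        candidate = mutate(best, rng)
        if detector_score(candidate) < detector_score(best):
            best = candidate
        if detector_score(best) < threshold:
            break
    return best

malicious = {"pkt_rate": 0.9, "avg_pkt_size": 0.8}  # scores above threshold
evasive = evade(malicious)
```

Only score queries are needed, matching the limited-knowledge requirement; the query budget corresponds to the "affordable overhead" the abstract claims.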