IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection
As an important security tool, the intrusion detection system bears the
responsibility of defending against network attacks carried out by malicious
traffic. Nowadays, with the help of machine learning algorithms, intrusion
detection systems have developed rapidly. However, the robustness of such
systems is questionable when they face adversarial attacks. To improve
detection systems, more potential attack approaches should be researched. In
this paper, IDSGAN, a framework based on generative adversarial networks, is
proposed to generate adversarial attacks that can deceive and evade the
intrusion detection system. Since the internal structure of the detection
system is unknown to attackers, the adversarial examples perform black-box
attacks against it. IDSGAN leverages a generator to transform original
malicious traffic into adversarial malicious traffic, while a discriminator
classifies traffic examples and simulates the black-box detection system. More
significantly, only part of the attacks' nonfunctional features is modified,
which guarantees the validity of the intrusion. On the NSL-KDD dataset, the
model is shown to successfully attack multiple detection systems across
different attack categories, achieving excellent results. Moreover, the
robustness of IDSGAN is verified by varying the number of unmodified features.
Comment: 8 pages, 5 figures
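The key constraint the abstract describes is that the generator may perturb only the attack's nonfunctional features, leaving functional features intact so the intrusion remains valid. A minimal sketch of that constraint, with hypothetical feature indices (the actual IDSGAN generator is a neural network; the split below is an assumed example, not the paper's feature list):

```python
import numpy as np

# Hypothetical feature split (assumed for illustration): functional features
# must stay intact for the attack to work; nonfunctional ones may be altered.
FUNCTIONAL_IDX = [0, 1, 2]      # e.g. protocol / flag features (assumed)
NONFUNCTIONAL_IDX = [3, 4, 5]   # e.g. timing / volume statistics (assumed)

def generate_adversarial(traffic, perturb):
    """Return an adversarial copy of `traffic` in which only the
    nonfunctional features are shifted by `perturb` (clipped to [0, 1]).
    In IDSGAN proper, `perturb` would come from a trained generator."""
    adv = traffic.copy()
    adv[NONFUNCTIONAL_IDX] = np.clip(
        traffic[NONFUNCTIONAL_IDX] + perturb, 0.0, 1.0)
    return adv

record = np.array([1.0, 0.0, 1.0, 0.2, 0.5, 0.9])
adv = generate_adversarial(record, np.array([0.3, -0.2, 0.3]))
# Functional features are untouched, so the intrusion stays valid.
assert np.array_equal(adv[FUNCTIONAL_IDX], record[FUNCTIONAL_IDX])
```

In training, the generator would be rewarded when the (simulated) detector misclassifies `adv` as benign, while the masking above keeps the attack semantics unchanged.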
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
Randomized classifiers have been shown to provide a promising approach for
achieving certified robustness against adversarial attacks in deep learning.
However, most existing methods only leverage Gaussian smoothing noise and only
work for a limited class of perturbations. We propose a general framework of
adversarial certification with non-Gaussian noise and for more general types
of attacks, from a unified functional optimization perspective. Our new
framework allows us to identify a key trade-off between accuracy and
robustness via the design of smoothing distributions, helping to design new
families of non-Gaussian smoothing distributions that work more efficiently
for different settings, including attacks under several l_p norms. Our
proposed methods achieve better certification results than previous works and
provide a new perspective on randomized smoothing certification.
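The Gaussian baseline that this abstract generalizes works by querying a base classifier on many noisy copies of the input and certifying the majority class. A minimal Monte Carlo sketch of that standard scheme (the toy classifier and parameters below are assumptions for illustration, not the paper's method; the radius formula is the well-known Gaussian-smoothing bound of Cohen et al.):

```python
from statistics import NormalDist

import numpy as np

def smoothed_predict(base_classifier, x, sigma, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier g(x): the majority
    class of base_classifier(x + noise) under Gaussian noise N(0, sigma^2 I).
    Returns the predicted label and the per-class vote counts."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    votes = np.bincount([base_classifier(x + n) for n in noise], minlength=2)
    return int(np.argmax(votes)), votes

# Toy binary base classifier (assumed): class 1 iff first coordinate > 0.
clf = lambda z: int(z[0] > 0)

sigma, n = 0.5, 1000
label, votes = smoothed_predict(clf, np.array([0.8, 0.0]), sigma, n_samples=n)
p_hat = votes.max() / n
# Certified l2 radius for Gaussian smoothing: sigma * Phi^{-1}(p_hat).
radius = sigma * NormalDist().inv_cdf(p_hat)
```

Swapping the Gaussian for other smoothing distributions, as the abstract proposes, changes both the achievable radius and the accuracy of the smoothed classifier, which is exactly the trade-off the paper's functional-optimization view makes explicit.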
…