
    Adversarial Machine Learning in Network Intrusion Detection Systems

    Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. Such examples have achieved a great deal of success in several domains, such as image recognition, speech recognition, and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithms) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available data sets, NSL-KDD and UNSW-NB15, and contrast them with a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning-based NIDS in the face of adversarial perturbation. Comment: 25 pages, 6 figures, 4 tables.
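
    The abstract mentions particle swarm optimization as one of the tools for crafting evading traffic. The sketch below is only an illustration of that general idea, not the authors' implementation: it assumes a scikit-learn-style binary classifier `model` exposing `predict_proba` (class 1 = attack), a single attack feature vector `x_attack`, and per-feature bounds `lo`/`hi`, and it trades off the model's attack probability against the size of the perturbation.

```python
# Minimal sketch (not the paper's code): PSO searches for a small perturbation
# of a flow's feature vector so the NIDS classifier no longer flags it.
import numpy as np

def pso_evasion(model, x_attack, lo, hi, n_particles=30, iters=50,
                w=0.7, c1=1.5, c2=1.5, lam=0.1, rng=np.random.default_rng(0)):
    """Return a perturbed copy of x_attack that the model scores as benign."""
    dim = x_attack.shape[0]
    # Particles are perturbation vectors delta; start near zero.
    pos = rng.normal(scale=0.05, size=(n_particles, dim))
    vel = np.zeros_like(pos)

    def fitness(delta):
        x_adv = np.clip(x_attack + delta, lo, hi)              # keep features valid
        p_attack = model.predict_proba(x_adv[None, :])[0, 1]   # prob. of "attack"
        return p_attack + lam * np.linalg.norm(delta)          # evade with small change

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    return np.clip(x_attack + gbest, lo, hi)
```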

    Dimensionality reduction in generating decision-based adversarial examples

    As the popularity of machine learning continues to rise, its reliability is of utmost importance. Despite enormous progress and excellent performance, machine learning has been discovered to be vulnerable to so-called adversarial examples. Although they arise from a naturally occurring property of machine learning models, they can be generated deliberately and exploited by malicious attackers. In the decision-based attack category, the key to more efficient generation of adversarial examples has been dimensionality reduction. However, the effect that different dimensionality reduction methods and their magnitudes have on this generation process has been poorly researched. The thesis provides a broad view of the phenomenon and discusses the often exaggerated security concerns raised by adversarial examples before focusing more specifically on the decision-based attack scenario. We explore three main research questions. First, we examine the current state and challenges of decision-based attacks in the scientific literature. Second, we present several dimensionality reduction methods suitable for this purpose. Lastly, we implement tests to quantify how the dimensionality reduction methods, with differently sized subspaces, affect adversarial example generation. The study showcases these differences and identifies the turning point after which dimensionality reduction becomes detrimental. We provide novel ways to perform dimensionality reduction and discuss the advantages and disadvantages of the methods. We demonstrate that there is room for improvement in generating decision-based adversarial examples by applying more extensive dimensionality reduction than is customary in the scientific literature.
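
    To make the decision-based setting concrete, the following is a rough sketch, under my own assumptions rather than the thesis code, of a hard-label random-search attack in which candidate perturbations are sampled in a reduced subspace (a coarse grid upsampled to the input size). `predict_label` is a hypothetical placeholder returning a class id for a grayscale image in [0, 1] whose side lengths are divisible by `coarse_dim`.

```python
# Illustrative sketch only: a decision-based attack that queries hard labels
# and draws its search directions in a low-dimensional subspace.
import numpy as np

def upsample(coarse, shape):
    """Nearest-neighbour upsampling of a coarse (h, w) noise grid to `shape`."""
    reps = (shape[0] // coarse.shape[0], shape[1] // coarse.shape[1])
    return np.kron(coarse, np.ones(reps))

def decision_attack(predict_label, x, true_label, coarse_dim=8,
                    steps=1000, eps=0.5, rng=np.random.default_rng(0)):
    """Shrink an adversarial perturbation while the label stays wrong."""
    # Start from a heavily perturbed input, assumed already misclassified.
    x_adv = np.clip(x + rng.normal(scale=eps, size=x.shape), 0.0, 1.0)
    for _ in range(steps):
        # Sample the candidate step in the reduced subspace.
        coarse = rng.normal(size=(coarse_dim, coarse_dim))
        step = upsample(coarse, x.shape)
        step *= 0.05 / (np.linalg.norm(step) + 1e-12)
        # Move slightly toward the original input plus the sampled direction.
        candidate = np.clip(x_adv + 0.01 * (x - x_adv) + step, 0.0, 1.0)
        # Accept only candidates that stay adversarial and reduce the distance.
        if (predict_label(candidate) != true_label and
                np.linalg.norm(candidate - x) < np.linalg.norm(x_adv - x)):
            x_adv = candidate
    return x_adv
```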

    Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness

    Deep neural networks have been achieving state-of-the-art performance across a wide variety of applications and, due to this outstanding performance, are being deployed in safety- and security-critical systems. However, in recent years deep neural networks have been shown to be very vulnerable to optimally crafted input samples called adversarial examples. Although the adversarial perturbations are imperceptible to humans, especially in the domain of computer vision, they have been very successful in fooling strong deep models. This vulnerability to adversarial attacks limits the widespread deployment of deep models in safety-critical applications, and as a result adversarial attack and defense algorithms have drawn great attention in the literature. Many defense algorithms have been proposed to counter adversarial attacks, and many of them rely on adversarial training (adding perturbations during the training stage). Alongside other defense approaches, there has been recent interest in improving adversarial robustness by introducing perturbations during the training process; however, such methods use fixed, pre-defined perturbations and require significant hyper-parameter tuning, which makes them difficult to apply in a general fashion. In this work, we introduce Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks. More specifically, we introduce novel perturbation-injection modules that are incorporated at each layer to perturb the feature space and increase uncertainty in the network. This feature perturbation is performed at both the training and the inference stages. Furthermore, inspired by the Expectation-Maximization approach, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed Learn2Perturb method yields deep neural networks that are 4–7 percent more robust against L∞ FGSM and PGD adversarial attacks and significantly outperforms the state of the art against the L2 C&W attack and a wide range of well-known black-box attacks.
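
    As a rough reading of the abstract (not the authors' code), the PyTorch sketch below shows what a learnable, per-channel noise-injection layer attached to a conv block could look like, active at both training and inference, with the alternating network/noise updates only hinted at via two parameter groups. All module and parameter names here are my own placeholders.

```python
# Sketch of a Learn2Perturb-style perturbation-injection idea, simplified.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseInjection(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # One learnable noise scale per channel; softplus keeps it positive.
        self.raw_sigma = nn.Parameter(torch.full((1, channels, 1, 1), -3.0))

    def forward(self, x):
        sigma = F.softplus(self.raw_sigma)
        return x + sigma * torch.randn_like(x)   # perturb the feature space

class PerturbedBlock(nn.Module):
    """Conv block with a noise-injection module after the activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.noise = NoiseInjection(out_ch)

    def forward(self, x):
        return self.noise(F.relu(self.bn(self.conv(x))))

# Alternating updates (simplified): one optimizer for the network weights and
# one for the noise parameters, stepped on alternating iterations.
def make_optimizers(model, lr=0.1):
    noise_params = [p for n, p in model.named_parameters() if "raw_sigma" in n]
    net_params = [p for n, p in model.named_parameters() if "raw_sigma" not in n]
    return (torch.optim.SGD(net_params, lr=lr, momentum=0.9),
            torch.optim.SGD(noise_params, lr=lr))
```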

    A survey of uncertainty in deep neural networks

    Over the last decade, neural networks have reached almost every field of science and become a crucial part of various real-world applications. Due to this increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates, or suffer from over- or under-confidence, i.e. they are badly calibrated. To overcome this, many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and various approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. For that, a comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks (BNNs), ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty, approaches for calibrating neural networks, and give an overview of existing baselines and available implementations. Examples from the wide spectrum of challenges in medical image analysis, robotics, and earth observation give an idea of the needs and challenges regarding uncertainties in practical applications of neural networks. Additionally, the practical limitations of uncertainty quantification methods in neural networks for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
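
    As a small illustration of the kind of uncertainty measures such surveys discuss (not code from the survey itself), the snippet below computes the predictive entropy and the mutual information from the softmax outputs of several stochastic forward passes, such as an ensemble, MC dropout, or test-time augmentation; the split into an expected-entropy (data) term and a mutual-information (model) term follows the common reducible/irreducible reading.

```python
# Common uncertainty measures from N stochastic softmax predictions.
import numpy as np

def uncertainty_measures(probs):
    """probs: array of shape (n_samples, n_classes); each row sums to 1."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    # Expected entropy of the individual predictions (data/aleatoric part).
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Mutual information: the remainder, attributed to the model (epistemic).
    mutual_information = predictive_entropy - expected_entropy
    return predictive_entropy, mutual_information

# Example: five ensemble members that largely agree -> low mutual information.
members = np.array([[0.70, 0.20, 0.10],
                    [0.60, 0.30, 0.10],
                    [0.80, 0.10, 0.10],
                    [0.70, 0.20, 0.10],
                    [0.65, 0.25, 0.10]])
print(uncertainty_measures(members))
```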