A Sensitivity Analysis of Poisoning and Evasion Attacks in Network Intrusion Detection System Machine Learning Models

Abstract

As the demand for data has increased, we have witnessed a surge in the use of machine learning to help industry and government make sense of massive amounts of data and, subsequently, make predictions and decisions. For the military, this surge has manifested itself in the Internet of Battlefield Things. The pervasive nature of data on today's battlefield will allow machine learning models to increase soldier lethality and survivability. However, machine learning models are predicated upon the assumptions that the data on which they are trained is truthful and that the models themselves are not compromised. These assumptions about the quality of data and models cannot remain the status quo as attackers develop novel methods to exploit machine learning models for their benefit. These novel attack methods are collectively described as adversarial machine learning (AML). Such attacks allow an attacker to covertly alter a machine learning model before or after training in order to degrade the model's ability to detect malicious activity. In this paper, we show how AML, by poisoning training data sets and evading well-trained models, affects machine learning models' ability to function as Network Intrusion Detection Systems (NIDS). Finally, we highlight why evasion attacks are especially effective in this setting and discuss some of the causes for this degradation of model effectiveness.
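To make the two attack classes named above concrete, the following is a minimal, hedged sketch (not taken from the paper) of a label-flipping poisoning attack and a simple gradient-based evasion perturbation against a linear classifier. It uses synthetic data as a stand-in for NIDS traffic features; the data set, model choice, and perturbation budget are illustrative assumptions, not the authors' experimental setup.

```python
# Illustrative sketch only: poisoning (label flipping) and evasion (FGSM-style
# perturbation) against a logistic-regression "NIDS" trained on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def fit(labels):
    """Train a simple detector on the (possibly poisoned) training labels."""
    return LogisticRegression(max_iter=1000).fit(X_tr, labels)

# --- Poisoning: flip 20% of the training labels before training ---
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

clean_model = fit(y_tr)
poisoned_model = fit(y_poisoned)
print("clean model accuracy:   ", accuracy_score(y_te, clean_model.predict(X_te)))
print("poisoned model accuracy:", accuracy_score(y_te, poisoned_model.predict(X_te)))

# --- Evasion: perturb test inputs against the well-trained (clean) model ---
# For logistic regression, the gradient of the log-loss w.r.t. the input x is
# (sigmoid(w.x + b) - y) * w, so its sign gives the loss-increasing direction.
w, b = clean_model.coef_[0], clean_model.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X_te @ w + b)))      # predicted P(y = 1)
grad = (p - y_te)[:, None] * w[None, :]        # per-sample input gradient
eps = 0.5                                      # assumed perturbation budget
X_adv = X_te + eps * np.sign(grad)             # small adversarial perturbation
print("accuracy on adversarial inputs:", accuracy_score(y_te, clean_model.predict(X_adv)))
```

Under these assumptions, the poisoned model's test accuracy degrades relative to the clean model, while the evasion perturbation degrades the clean model's accuracy at inference time without touching the training data, mirroring the pre- and post-training attack surfaces the abstract describes.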
