Investigating adversarial attacks against Random Forest-based network attack detection systems

Abstract

A significant research effort in cybersecurity currently deals with Machine Learning-based attack detection. It aims to provide autonomous attack detection systems that require fewer human expert resources and are therefore less expensive in time and money. Such systems are able to learn autonomously about benign and malicious traffic, and to classify further traffic samples accordingly. In this context, attackers have started designing adversarial learning approaches in order to craft new attacks able to evade Machine Learning-based detection systems. The work presented in this paper shows how easy it is to modify existing attacks so that they evade Machine Learning-based attack detectors. The Random Forest algorithm was selected for this work because it is widely regarded as one of the best Machine Learning algorithms for cybersecurity, and because it provides information on how a decision is made. Indeed, analyzing the underlying Random Forest trees helps explain the limits of this Machine Learning algorithm, and provides information that could be helpful for making attack detection somewhat explainable. Several other Machine Learning algorithms, such as SVM, kNN and LSTM, were also selected in order to evaluate their ability to detect the adversarial attacks presented in this paper.
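To make the evasion idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: a Random Forest is trained on synthetic two-feature "flows" (the features, data distributions, and perturbation strategy are all hypothetical), and a malicious sample is then incrementally nudged toward the benign region until the forest stops flagging it.

```python
# Illustrative sketch only: Random Forest traffic classifier plus a naive
# feature-perturbation evasion. All data and feature choices are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "benign" (label 0) and "malicious" (label 1) flows with two
# made-up features (e.g. packet rate, mean payload size).
benign = rng.normal(loc=[10.0, 200.0], scale=[2.0, 30.0], size=(500, 2))
malicious = rng.normal(loc=[80.0, 50.0], scale=[5.0, 10.0], size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Take one malicious sample and move it in small steps toward the benign
# mean until the forest classifies it as benign (a crude stand-in for
# modifying an attack's observable traffic features).
sample = malicious[0].copy()
target = benign.mean(axis=0)
steps = 0
while clf.predict(sample.reshape(1, -1))[0] == 1 and steps < 100:
    sample += 0.05 * (target - sample)  # 5% step toward the benign region
    steps += 1

print("evaded:", clf.predict(sample.reshape(1, -1))[0] == 0, "after", steps, "steps")
```

The sketch shows the core weakness the paper exploits: the detector's decision regions are fixed after training, so an attacker who can modify observable features can walk a sample across the decision boundary.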