
    In-depth comparative evaluation of supervised machine learning approaches for detection of cybersecurity threats

    This paper describes the process and results of analyzing CICIDS2017, a modern, labeled data set for testing intrusion detection systems. The data set is divided into several days, each pertaining to different attack classes (DoS, DDoS, infiltration, botnet, etc.). A pipeline has been created that includes nine supervised learning algorithms. The goal was binary classification of benign versus attack traffic. Cross-validated parameter optimization, using a voting mechanism that includes five classification metrics, was employed to select optimal parameters. These results were interpreted to discover whether certain parameter choices were dominant for most (or all) of the attack classes. Ultimately, every algorithm was retested with optimal parameters to obtain the final classification scores. During the review of these results, execution time, both on consumer- and corporate-grade equipment, was taken into account as an additional requirement. The work detailed in this paper establishes a novel supervised machine learning performance baseline for CICIDS2017.
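
    The parameter-voting step lends itself to a short sketch. The snippet below is a minimal, hypothetical illustration assuming scikit-learn, a single decision tree standing in for the nine algorithms, an arbitrary parameter grid, and five common metrics; the paper's actual grids and metric set may differ.

```python
# Minimal sketch of cross-validated parameter optimization with a metric vote.
# Assumptions: binary labels (0 = benign, 1 = attack), a numeric feature matrix X,
# and an illustrative learner, grid and metric set (not the paper's exact choices).
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

METRICS = ["accuracy", "balanced_accuracy", "precision", "recall", "f1"]

def vote_best_params(X, y):
    """Run one grid search per metric and let the metrics vote on the parameters."""
    param_grid = {"max_depth": [5, 10, 20, None], "criterion": ["gini", "entropy"]}
    votes = {}
    for metric in METRICS:
        search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                              param_grid, scoring=metric, cv=5)
        search.fit(X, y)
        key = tuple(sorted(search.best_params_.items()))
        votes[key] = votes.get(key, 0) + 1
    # The parameter set preferred by the most metrics wins the vote.
    return dict(max(votes, key=votes.get))
```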

    Classification hardness for supervised learners on 20 years of intrusion detection data

    This article consolidates analysis of established (NSL-KDD) and new intrusion detection datasets (ISCXIDS2012, CICIDS2017, CICIDS2018) through the use of supervised machine learning (ML) algorithms. The uniformity in analysis procedure opens up the option to compare the obtained results. It also provides a stronger foundation for the conclusions about the efficacy of supervised learners on the main classification task in network security. This research is motivated in part to address the lack of adoption of these modern datasets. The work starts with a broad scope, classification by algorithms from different families on both established and new datasets, to expand the existing foundation and reveal the most opportune avenues for further inquiry. After obtaining baseline results, the classification task was increased in difficulty by reducing the available data to learn from, both horizontally and vertically. The data reduction has been included as a stress-test to verify whether the very high baseline results hold up under increasingly harsh constraints. Ultimately, this work contains the most comprehensive set of results on the topic of intrusion detection through supervised machine learning. Researchers working on algorithmic improvements can compare their results to this collection, knowing that all results reported here were gathered through a uniform framework. This work's main contributions are the outstanding classification results on the current state-of-the-art datasets for intrusion detection and the conclusion that these methods show remarkable resilience in classification performance even when aggressively reducing the amount of data to learn from.
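
    The horizontal and vertical data reduction can be sketched briefly. The snippet below is an illustrative stress-test under assumed names (a pandas DataFrame with a "Label" column, a random forest learner); the article's actual reduction schedule and learners may differ.

```python
# Minimal sketch of the horizontal (fewer rows) / vertical (fewer features)
# reduction stress-test. Column name "Label", the split sizes and the learner
# are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def stress_test(df: pd.DataFrame, row_frac: float, n_features: int) -> float:
    X, y = df.drop(columns=["Label"]), df["Label"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.7, stratify=y, random_state=0)
    # Horizontal reduction: keep only a fraction of the training rows.
    X_train = X_train.sample(frac=row_frac, random_state=0)
    y_train = y_train.loc[X_train.index]
    # Vertical reduction: keep only the first n_features columns.
    cols = X_train.columns[:n_features]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train[cols], y_train)
    return balanced_accuracy_score(y_test, model.predict(X_test[cols]))
```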

    The relevance of material and processing parameters on the thermal conductivity of thermoplastic composites

    Thermoplastic composites show vast promise as an alternative for thermal management applications in the scope of the development of next-generation electronics and heat exchangers. Their low cost, reduced weight, and corrosion resistance make them an attractive replacement for traditionally used metals, provided their thermal conductivity (TC) can be sufficiently increased by designing the material (e.g., filler type and shape) and processing (e.g., dispersion quality, mixing, and shaping) parameters. In the present contribution, the relevance of both types of parameters is discussed, and guidelines are formulated for future research to increase the TC of thermoplastic polymer composites. POLYM. ENG. SCI., 58:466-474, 2018. (c) 2017 Society of Plastics Engineers.

    Discovery and characterization of flaws in machine-learned network intrusion detection systems and data sets

    Laurens D'hooge's doctoral research is situated at the intersection of data science and network security. The explosive growth of cyberattacks, in both volume and variety, creates major challenges for classical network security products, which by design cannot evolve as quickly or as scalably. Although research has been working on alternative methods that promise a more robust, more general solution for 15 years, a (commercial) breakthrough has yet to materialize. Over the course of the research trajectory in this doctoral thesis, it becomes clear why. Academic research on network intrusion detection suffers from a lack of realism in the evaluation of new detection models, which therefore perform well under laboratory conditions but nowhere else. An important contribution to that seemingly good detection stems from the state-of-the-art datasets with which all researchers in the field validate their methods. These are shown to be of considerably lower quality than previously assumed. The added value of ever larger and more expensive statistical models is also put to the test. A rebalancing of the research field towards better datasets and a closed feedback loop between academic research and its practical effectiveness will be essential for this research to finally break through and secure the computer networks of the future.

    The importance of establishing baselines in ML classification tasks

    Machine learning is finding its way into more and more application domains, often reaching outstanding results. However, most of its premier results have been achieved in vision and language tasks. The new methods which power those results are then eagerly adopted in a broader set of research fields to solve other tasks, typically with other types of data. This demonstration will show that the blind adoption of computationally expensive methods has caused more harm than good in my research field, (network/host) intrusion detection, and in the application of data science to cybersecurity tasks more broadly. Forgoing the establishment of proper baselines with simple methods has deluded many researchers into hailing their methods as improvements over the state-of-the-art, when in fact they are not. This demo is targeted at researchers who are applying ML to any task with structured input data.
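
    A baseline check of the kind argued for here can be very small. The sketch below assumes scikit-learn and structured features X with labels y; the two cheap baselines and the metric are illustrative choices, not the demo's exact setup.

```python
# Minimal sketch of establishing simple baselines before adopting an expensive model.
# The metric and the two baseline learners are illustrative assumptions.
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def report_baselines(X, y):
    baselines = [
        ("majority class", DummyClassifier(strategy="most_frequent")),
        ("logistic regression", LogisticRegression(max_iter=1000)),
    ]
    for name, model in baselines:
        scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
    # Any claimed improvement should first be compared against these numbers.
```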

    Hierarchical feature block ranking for data-efficient intrusion detection modeling

    The intrusion detection field has been increasing its adoption of newer datasets after relying mainly on KDD99 and NSL-KDD. Both the height and the width of the newer datasets have increased substantially, since they are geared towards evaluation by machine learning methods. The feature sets, however, are most often statistics derived either from the packets or, more commonly, from the (reconstructed) flows. The ease with which connected clusters of features can be extracted, as well as the tendency to be overinclusive to provide researchers with as much data as possible, has introduced significant bloat in the datasets. In order to improve the effective and efficient use of the datasets, this article proposes a hybrid feature selection mechanism based on a first-pass filter method and a second-pass embedded method, with a central role for statistical testing to identify hierarchies of dominant feature sets. The non-destructive approach allows the hierarchies to be inspected, interpreted, and related to each other. The proposed approach is validated by constructing the feature hierarchies at three different resolutions for all recent datasets published by the Canadian Institute for Cybersecurity (IDS2017, DoS2017, IDS2018 and DDoS2019; millions of samples, 76 features). Three standard supervised learners were given increasing access to the features (blocks) according to their hierarchical position. The results show that attack classes with a clear network component can be detected with cross-validated balanced accuracy, precision, and recall above 99%, even when the classification model has been built from just 1 to 4 features, while additionally operating under a very restrictive sampling regimen: training (0.8%), validation (0.2%), and testing (99%). When selecting models only for classification performance, more attack classes are detected more reliably, and while this increases feature use to an average of 12, this is still preferable to using the datasets' standard set of 76 features.
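
    The two-pass (filter, then embedded) selection can be sketched as follows. The univariate ANOVA filter, the tree-based importance ranking, and the cut-off below are illustrative stand-ins for the article's statistical-testing procedure; X is assumed to be a numeric array and y the attack labels.

```python
# Minimal sketch of a first-pass filter plus second-pass embedded feature ranking,
# in the spirit of the hierarchy-building procedure described above. The specific
# filter (ANOVA F-test), ranker (extra-trees importances) and k are assumptions.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def two_pass_ranking(X, y, k_filter=20):
    # First pass (filter): keep the k features with the strongest univariate signal.
    filt = SelectKBest(score_func=f_classif, k=k_filter).fit(X, y)
    kept = np.flatnonzero(filt.get_support())
    # Second pass (embedded): rank the survivors by learned importance.
    forest = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X[:, kept], y)
    order = np.argsort(forest.feature_importances_)[::-1]
    return kept[order]  # feature indices, most dominant first
```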

    Establishing the contaminating effect of metadata feature inclusion in machine-learned network intrusion detection models

    Modern datasets in intrusion detection are designed to be evaluated by machine learning techniques and often contain metadata features which ought to be removed prior to training. Unfortunately, many published articles include (at least) one such metadata feature in their models, namely destination port. In this article, it is shown experimentally that this feature acts as a prime target for shortcut learning. When used as the only predictor, destination port can separate ten state-of-the-art intrusion detection datasets (CIC collection, UNSW-NB15, CIDDS collection, CTU-13, NSL-KDD and ISCX-IDS2012) with 70 to 100% accuracy on class-balanced test sets. Any model that includes this feature will learn this strong relationship during training, a relationship which is only meaningful within the dataset. Dataset authors can take countermeasures against this influence, but when these are applied properly, the feature becomes non-informative and could just as easily not have been part of the dataset in the first place. Consequently, this is the central recommendation in this article: dataset users should not include destination port (or any other metadata feature) in their models, and dataset authors should avoid giving their users the opportunity to use them.
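
    The single-feature shortcut test is easy to reproduce in outline. The sketch below assumes a pandas DataFrame with "Destination Port" and "Label" columns and uses a stratified split as a simplification of the article's class-balanced test sets.

```python
# Minimal sketch of measuring how well destination port alone separates the classes.
# Column names and the decision-tree learner are illustrative assumptions; the
# article evaluates on class-balanced test sets, approximated here by stratification.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def port_only_accuracy(df: pd.DataFrame) -> float:
    X, y = df[["Destination Port"]], df["Label"]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)
    model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    # A high score here signals a dataset-specific shortcut, not real detection ability.
    return accuracy_score(y_te, model.predict(X_te))
```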

    Unsupervised machine learning techniques for network intrusion detection on modern data

    The rapid growth of the internet, connecting billions of people and businesses, brings with it an increased risk of misuse. Handling this misuse requires adaptive techniques detecting known as well as unknown (zero-day) attacks. The latter proved most challenging in recent studies, where supervised machine learning techniques excelled at detecting known attacks but failed to recognize unknown patterns. Therefore, this paper focuses on anomaly-based detection of malicious behavior on the network by using flow-based features. Four unsupervised methods are evaluated, of which two employ a self-supervised learning approach. A realistic modern dataset, CIC-IDS-2017, containing multiple different attack types, is used to evaluate the proposed models in terms of classification performance and computational complexity. The results show that an autoencoder, a method from the field of deep learning, yields the highest area under the receiver operating characteristic curve (AUROC) at 0.978 while maintaining an acceptable computational complexity, followed by the one-class support vector machine, isolation forest, and principal component analysis.
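
    The evaluation setup for the unsupervised models can be sketched with one of them. The snippet below uses an isolation forest as a stand-in for the four compared methods and assumes a numeric flow-feature matrix X with binary labels y (1 = attack); the paper's exact training protocol may differ.

```python
# Minimal sketch of anomaly-based detection scored with AUROC. The isolation forest,
# the benign-only training and the split are illustrative assumptions.
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def isolation_forest_auroc(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    # Fit on benign traffic only, as anomaly detectors commonly are.
    model = IsolationForest(n_estimators=100, random_state=0).fit(X_tr[y_tr == 0])
    # score_samples returns higher values for more normal points, so negate it
    # to obtain an anomaly score suitable for AUROC.
    anomaly_score = -model.score_samples(X_te)
    return roc_auc_score(y_te, anomaly_score)
```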

    Towards model generalization for intrusion detection : unsupervised machine learning techniques

    Through the ongoing digitization of the world, the number of connected devices is continuously growing, without any foreseen decline in the near future. In particular, these devices increasingly include critical systems such as power grids and medical institutions, possibly causing tremendous consequences in the case of a successful cybersecurity attack. A network intrusion detection system (NIDS) is one of the main components to detect ongoing attacks by differentiating normal from malicious traffic. Anomaly-based NIDS, and more specifically unsupervised methods, have previously proved promising for their ability to detect known as well as zero-day attacks without the need for a labeled dataset. Despite decades of development by researchers, anomaly-based NIDS are only rarely employed in real-world applications, most likely due to the lack of generalization power of the proposed models. This article first evaluates four unsupervised machine learning methods on two recent datasets and then defines their generalization strength using a novel inter-dataset evaluation strategy estimating their adaptability. Results show that all models can achieve high classification scores on an individual dataset but fail to directly transfer those to a second, unseen but related dataset. Specifically, the accuracy dropped by 25.63% on average in an inter-dataset setting compared to the conventional evaluation approach. This generalization challenge can be observed and tackled in future research with the help of the evaluation strategy proposed in this paper.
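
    The inter-dataset evaluation idea can be sketched concisely: fit on one dataset, then score both its own held-out split and a second, unseen dataset. The isolation forest, the accuracy metric, and the label convention (0 = benign, 1 = attack) below are illustrative assumptions, not the article's exact protocol.

```python
# Minimal sketch of intra- vs. inter-dataset evaluation for an unsupervised detector.
# The learner and metric are illustrative; the gap between the two scores is the point.
from sklearn.ensemble import IsolationForest
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def intra_vs_inter(X_a, y_a, X_b, y_b):
    X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.3, random_state=0)
    model = IsolationForest(n_estimators=100, random_state=0).fit(X_tr[y_tr == 0])
    to_label = lambda pred: (pred == -1).astype(int)  # -1 (anomaly) -> attack class 1
    intra = accuracy_score(y_te, to_label(model.predict(X_te)))  # same-dataset score
    inter = accuracy_score(y_b, to_label(model.predict(X_b)))    # unseen-dataset score
    # The drop from `intra` to `inter` quantifies the generalization gap.
    return intra, inter
```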