380 research outputs found

    A Few-Shot Learning-Based Siamese Capsule Network for Intrusion Detection with Imbalanced Training Data

    Network intrusion detection remains one of the major challenges in cybersecurity. In recent years, many machine-learning-based methods have been designed to capture dynamic and complex intrusion patterns and improve the performance of intrusion detection systems. However, two issues, imbalanced training data and new unknown attacks, still hinder the development of a reliable network intrusion detection system. In this paper, we propose a novel few-shot learning-based Siamese capsule network to tackle the scarcity of abnormal network traffic training data and enhance the detection of unknown attacks. Specifically, the well-designed deep learning network excels at capturing dynamic relationships across traffic features. In addition, an unsupervised subtype sampling scheme is seamlessly integrated with the Siamese network to improve the detection of network intrusion attacks under imbalanced training data. Experimental results demonstrate that, once the sampling scheme is applied, the metric-learning framework is better suited than other supervised learning methods to extract the subtle and distinctive features needed to identify both known and unknown attacks. Compared to state-of-the-art methods, our proposed method achieves superior performance in detecting both types of attacks.
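    The metric-learning idea underlying Siamese models like the one above can be sketched with a contrastive loss: same-class pairs are pulled together in embedding space, different-class pairs are pushed at least a margin apart. The toy embeddings, margin value, and loss form below are illustrative assumptions, not the paper's actual capsule architecture.

    ```python
    import numpy as np

    def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
        """Contrastive loss for Siamese metric learning: penalise distance
        between same-class pairs, and penalise different-class pairs only
        when they fall closer than `margin`."""
        d = np.linalg.norm(emb_a - emb_b)
        if same_class:
            return 0.5 * d ** 2
        return 0.5 * max(0.0, margin - d) ** 2

    # Hypothetical 2-D embeddings: two benign-traffic vectors and one attack vector.
    benign_1 = np.array([0.10, 0.20])
    benign_2 = np.array([0.15, 0.25])
    attack   = np.array([0.70, 0.80])

    loss_same = contrastive_loss(benign_1, benign_2, same_class=True)
    loss_diff = contrastive_loss(benign_1, attack, same_class=False)
    ```

    Training on pair labels rather than class labels is what lets such a model compare traffic samples it was never trained to classify, which is the property the abstract relies on for unknown attacks.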

    Leveraging siamese networks for one-shot intrusion detection model

    The use of supervised Machine Learning (ML) to enhance Intrusion Detection Systems (IDS) has been the subject of significant research. Supervised ML is based upon learning by example, demanding significant volumes of representative instances for effective training, and the model must be retrained for every unseen cyber-attack class. However, retraining the models in situ renders the network susceptible to attacks during the time window required to acquire a sufficient volume of data. Although anomaly detection systems provide a coarse-grained defence against unseen attacks, these approaches are significantly less accurate and suffer from high false-positive rates. Here, a complementary approach referred to as "One-Shot Learning", whereby a limited number of examples of a new attack class are used to identify that class among many, is detailed. The model grants a classification opportunity for cyber-attack classes that were not seen during training, without retraining. A Siamese Network is trained to differentiate between classes based on pair similarity rather than raw features, allowing it to identify new and previously unseen attacks. The performance of a pre-trained model in classifying new attack classes from only one example is evaluated using three mainstream IDS datasets: CICIDS2017, NSL-KDD, and KDD Cup'99. The results confirm the adaptability of the model in classifying unseen attacks and the trade-off between performance and the need for distinctive class representations.
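    The one-shot classification step described above can be sketched as nearest-neighbour matching against a support set holding one labelled embedding per class: the query is assigned the class of the closest support example. The class names and embeddings below are hypothetical stand-ins for the output of a trained Siamese encoder.

    ```python
    import numpy as np

    def one_shot_classify(query_emb, support):
        """Assign the query to the class whose single support embedding is
        nearest in Euclidean distance. `support` maps class name -> embedding
        produced by a (here assumed pre-trained) Siamese encoder."""
        return min(support, key=lambda cls: np.linalg.norm(query_emb - support[cls]))

    # Hypothetical support set: one embedding per class, including a class
    # that was never seen during training.
    support = {
        "benign":           np.array([0.0, 0.1]),
        "dos":              np.array([1.0, 0.9]),
        "unseen_attack":    np.array([0.5, -0.4]),
    }

    query = np.array([0.9, 1.0])
    predicted = one_shot_classify(query, support)  # nearest support example is "dos"
    ```

    Because classification reduces to distance comparison, adding a brand-new attack class only requires inserting one labelled embedding into the support set, with no retraining, which is the adaptability property the abstract evaluates.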

    LIPSNN: A Light Intrusion-Proving Siamese Neural Network Model for Facial Verification

    Facial verification has experienced a breakthrough in recent years, not only because of the improved accuracy of verification systems but also because of their increased use. One of the main reasons for this has been the appearance of new Deep Learning models for the task. This extension in the use of facial verification has had a high impact owing to the importance of its applications, especially in security, but its use could spread significantly further if the complex computation required by Deep Learning models, which usually must run on machines with specialised hardware, were reduced. That would allow facial verification to run on devices with low computing resources, such as smartphones or tablets. To address this problem, this paper proposes a new neural model, called the Light Intrusion-Proving Siamese Neural Network (LIPSNN). This new light model, based on Siamese Neural Networks, is fully presented: from the description of its two-block architecture, through its development, including its training on the well-known Labeled Faces in the Wild (LFW) dataset, to its benchmarking against other traditional and deep learning models for facial verification, in order to assess its suitability for facial recognition on low-computing-resource systems. For this comparison, the attributes parameters, storage, accuracy, and precision have been used, and from the results obtained it can be concluded that LIPSNN can be an alternative to existing models for running facial verification on low-computing-resource devices.

    Improved Bidirectional GAN-Based Approach for Network Intrusion Detection Using One-Class Classifier

    Existing generative adversarial networks (GANs), primarily used for creating fake image samples from natural images, demand a strong dependence (i.e., the training strategies of the generators and the discriminators need to remain in sync) for the generators to produce fake samples realistic enough to "fool" the discriminators. We argue that this strong dependency, required for GAN training on images, does not necessarily carry over to GAN models for network intrusion detection tasks. This is because network intrusion inputs have a simpler feature structure, with relatively low dimensionality, discrete feature values, and smaller input size, compared to the existing GAN-based anomaly detection tasks proposed on images. To address this issue, we propose a new Bidirectional GAN (Bi-GAN) model that is better equipped for network intrusion detection, with reduced overheads from excessive training. In our proposed method, the training iterations of the generator (and accordingly the encoder) are increased separately from the training of the discriminator until a condition on the cross-entropy loss is satisfied. Our empirical results show that this training strategy greatly improves the performance of both the generator and the discriminator, even in the presence of imbalanced classes. In addition, our model offers a new construct of a one-class classifier built from the trained encoder–discriminator. The one-class classifier detects anomalous network traffic from binary classification results instead of calculating expensive and complex anomaly scores (or thresholds). Our experiments illustrate that the proposed method is highly effective for network intrusion detection tasks and outperforms other similar generative methods on two datasets: NSL-KDD and CIC-DDoS2019.
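    The one-class decision described above, a binary verdict from the trained encoder–discriminator rather than a tuned anomaly-score threshold, can be sketched as follows. The linear encoder and discriminator stubs are hypothetical placeholders for the trained Bi-GAN components, not the paper's networks.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def one_class_detect(x, encode, discriminate):
        """One-class decision in the spirit of the abstract: encode the input,
        let the discriminator emit a probability of 'normal', and use the
        binary outcome directly instead of a separate anomaly score."""
        prob_normal = sigmoid(discriminate(encode(x)))
        return "normal" if prob_normal >= 0.5 else "anomalous"

    # Hypothetical stand-ins for the trained encoder and discriminator
    # (untrained linear stubs chosen only to make the example run).
    W_enc = np.eye(2)
    w_disc = np.array([-1.0, -1.0])
    encode = lambda x: W_enc @ x
    discriminate = lambda z: float(w_disc @ z) + 1.0

    verdict_low  = one_class_detect(np.array([0.1, 0.2]), encode, discriminate)  # "normal"
    verdict_high = one_class_detect(np.array([2.0, 2.0]), encode, discriminate)  # "anomalous"
    ```

    Collapsing detection to a fixed binary decision is what removes the score-calibration step: the 0.5 cut-off comes for free from the discriminator's probabilistic output.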