121 research outputs found

    Insider Threat Detection Using Supervised Machine Learning Algorithms on an Extremely Imbalanced Dataset

    An insider threat can take on many forms and fall under different categories. These include the malicious insider, the careless/unaware/uneducated/naïve employee, and the third-party contractor. A malicious insider, who may be a criminal agent recruited as a legitimate candidate or a disgruntled employee seeking revenge, is likely the most difficult category to detect, prevent and mitigate. Some malicious insiders misuse their positions of trust by disrupting normal operations, while others transfer confidential or vital information about the victim organisation, which can damage the employer's market position and/or reputation. In addition, some simply lose their credentials (i.e. usernames and passwords), which can then be abused or stolen by an external hacker to breach the network under their name. Additionally, malicious insiders have free rein to roam a victim organisation unconstrained, which can lead to successfully collecting personal information of other colleagues and/or clients, or even installing malicious software into the system/network.

    Machine learning techniques have been studied in the published literature as a promising solution for such threats. However, they can be biased and/or inaccurate when the associated dataset is hugely imbalanced. In this case, an inaccurate classification could result in a huge cost to individuals and/or organisations. Therefore, this paper addresses insider threat detection on an extremely imbalanced dataset, employing a popular balancing technique known as spread subsample. Our results show that although balancing our dataset with this technique did not improve performance metrics such as classification accuracy, true positive rate, false positive rate, precision, recall, and f-measure, it did reduce the time taken to build the model and the time taken to test it. Additionally, we observed that running our chosen classifiers with parameters other than the defaults has an impact in both the balanced and imbalanced scenarios, but the impact is significantly stronger when using the imbalanced dataset.
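    The spread-subsample technique mentioned above caps how far majority classes may outnumber the minority class. The abstract does not give an implementation, so the following is a minimal illustrative sketch (not the paper's actual code, which uses Weka's SpreadSubsample filter); the function name and parameters are our own:

    ```python
    import random
    from collections import defaultdict

    def spread_subsample(X, y, max_spread=1.0, seed=42):
        """Undersample majority classes so that no class exceeds
        `max_spread` times the size of the smallest class
        (max_spread=1.0 yields a fully balanced dataset)."""
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for xi, yi in zip(X, y):
            by_class[yi].append(xi)
        # Cap each class at max_spread * size of the rarest class.
        min_count = min(len(items) for items in by_class.values())
        cap = int(min_count * max_spread)
        X_out, y_out = [], []
        for label, items in by_class.items():
            keep = items if len(items) <= cap else rng.sample(items, cap)
            X_out.extend(keep)
            y_out.extend([label] * len(keep))
        return X_out, y_out
    ```

    Because it only discards majority-class samples, this keeps every rare positive example (e.g. the few malicious-insider records) while shrinking the training set, which is consistent with the paper's finding of faster model build and test times.
    
    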

    Deep learning with focal loss approach for attacks classification

    The rapid development of deep learning improves the detection and classification of attacks on intrusion detection systems. However, the imbalanced-data issue increases the complexity of the architecture model. This study proposes a novel deep learning model to overcome the problem of classifying multi-class attacks. The model consists of two stages. The pre-tuning stage uses automatic feature extraction with a deep autoencoder. The second stage is fine-tuning using a deep neural network classifier with fully connected layers. To reduce the effect of imbalanced class data, feature extraction was implemented with the deep autoencoder and an improved focal loss function was used in the classifier. The model was evaluated with three loss functions: cross-entropy, weighted cross-entropy, and focal loss. The results show that the approach corrects the class imbalance in deep-learning-based classification. Attack classification using automatic feature extraction with focal loss on the CSE-CIC-IDS2018 dataset yielded a high-quality classifier with 98.38% precision, 98.27% sensitivity, and 99.82% specificity.
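    Focal loss, as used in the study above, down-weights the contribution of easy, well-classified examples so that training focuses on the hard (often minority-class) ones. The abstract does not specify its "improved" variant, so here is a minimal sketch of the standard binary focal loss with the common alpha/gamma parameterisation; the function name and defaults are illustrative assumptions:

    ```python
    import math

    def focal_loss(p, y, gamma=2.0, alpha=0.25):
        """Binary focal loss for one prediction.
        p: predicted probability of the positive class; y: true label (0 or 1).
        The (1 - p_t)**gamma factor shrinks the loss for confident,
        correct predictions, countering class imbalance."""
        p_t = p if y == 1 else 1.0 - p          # probability of the true class
        alpha_t = alpha if y == 1 else 1.0 - alpha
        # Clamp to avoid log(0) on extreme predictions.
        return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
    ```

    With gamma = 0 and alpha = 0.5 this reduces to (scaled) cross-entropy; raising gamma increasingly suppresses the loss from examples the model already classifies confidently.
    
    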