4 research outputs found

    Novel Approach for Intrusion Detection Using Simulated Annealing Algorithm Combined with Hopfield Neural Network

    With the continued increase in Internet usage, the risk of encountering online threats remains high. This study proposes a new approach for intrusion detection that achieves better outcomes, with higher accuracy rates, than similar approaches. The proposed approach combines the Simulated Annealing algorithm [1] with a Hopfield Neural Network [2] for supervised learning, improving performance by increasing the rate of true detections and reducing the error rate caused by false detections. The approach is evaluated on the KDD99 intrusion detection data set [3]. Experimental tests demonstrate its potential to rapidly detect intrusion behaviours with high precision and efficiency, achieving a 99.16% accuracy rate and a 0.3% false-positive rate.
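    The abstract does not include an implementation, but the simulated-annealing component it builds on follows a standard acceptance rule (take worse moves with probability exp(-dE/T) while the temperature cools). A minimal generic sketch, not the authors' method, with a toy cost function and all names chosen for illustration:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-dE/T), cool T each step."""
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        cand = neighbor(x)
        d = cost(cand) - cost(x)
        if d < 0 or random.random() < math.exp(-d / t):
            x = cand
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling
    return best, best_cost

# Toy usage: minimise (x - 3)^2 starting from x = 0
random.seed(0)
sol, c = simulated_annealing(lambda x: (x - 3) ** 2,
                             lambda x: x + random.uniform(-0.5, 0.5),
                             x0=0.0)
```

    In the paper's setting the state being optimised would be the Hopfield network's weights rather than a scalar, but the acceptance/cooling loop is the same.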

    Defining Generic Attributes for IDS Classification

    The detection accuracy of an Intrusion Detection System (IDS) depends on classifying network traffic based on data features. Using all features for classification consumes more computation time and computer resources. Some of these features may be redundant or irrelevant; they therefore affect the detection of traffic anomalies and the overall performance of the IDS. The literature has proposed different algorithms and techniques to define the most relevant subsets of the KDD Cup 1999 features that can achieve high detection accuracy while maintaining the same performance as the full feature set. However, these algorithms and techniques did not produce optimal solutions, even when they utilised the same datasets. In this paper, a new approach is proposed to analyse the research conducted on KDD Cup 1999 feature selection, in order to determine whether effective generic features of the common KDD Cup 1999 dataset can be defined for constructing an efficient classification model. The approach does not rely on algorithms, which shortens the computational cost and reduces computer resource usage. Its essence is to select the most frequent features of each class, and of all classes, across all studies, and then apply a threshold to define the most significant generic features. The results revealed two feature sets containing 7 and 8 features. Classification accuracy using the eight features is almost the same as using all dataset features.
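    The frequency-then-threshold idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the per-study feature sets and the 2/3 threshold below are hypothetical, though the feature names are drawn from KDD Cup 1999.

```python
from collections import Counter

def generic_features(selected_sets, threshold):
    """Keep features whose selection frequency across studies meets the threshold."""
    counts = Counter(f for s in selected_sets for f in set(s))
    n = len(selected_sets)
    return sorted(f for f, c in counts.items() if c / n >= threshold)

# Hypothetical feature subsets chosen by three prior studies
studies = [
    {"src_bytes", "dst_bytes", "count", "service"},
    {"src_bytes", "count", "flag", "service"},
    {"src_bytes", "dst_bytes", "count", "duration"},
]
# Keep features selected by at least two of the three studies
common = generic_features(studies, threshold=2 / 3)
```

    Here `flag` and `duration` appear in only one study each, so they fall below the threshold and are dropped.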

    Anomaly-based network intrusion detection enhancement by prediction threshold adaptation of binary classification models

    Network traffic exhibits a high level of variability over short periods of time. This variability impacts negatively on the performance (accuracy) of anomaly-based network Intrusion Detection Systems (IDS) that are built using predictive models in a batch-learning setup. This thesis investigates how adapting the discriminating threshold of model predictions, specifically to the evaluated traffic, improves the detection rates of these Intrusion Detection models. Specifically, this thesis studied the adaptability features of three well-known Machine Learning algorithms: C5.0, Random Forest, and Support Vector Machine. The ability of these algorithms to adapt their prediction thresholds was assessed and analysed under different scenarios that simulated real-world settings using the prospective sampling approach. A new dataset (STA2018) was generated for this thesis and used for the analysis. This thesis has demonstrated empirically the importance of threshold adaptation in improving the accuracy of detection models when training and evaluation (test) traffic have different statistical properties. Further investigation was undertaken to analyse the effects of feature-selection and data-balancing processes on a model's accuracy when evaluation traffic with different significant features was used. The effects of threshold adaptation on reducing the accuracy degradation of these models were statistically analysed. The results showed that, of the three compared algorithms, Random Forest was the most adaptable and had the highest detection rates. This thesis then extended the analysis to apply threshold adaptation on sampled traffic subsets, using different sample sizes, sampling strategies and label error rates. This investigation showed the robustness of the Random Forest algorithm in identifying the best threshold: it needed a sample of only 0.05% of the original evaluation traffic to identify a discriminating threshold with an overall accuracy rate of nearly 90% of the optimal threshold.
    "This research was supported and funded by the Government of the Sultanate of Oman represented by the Ministry of Higher Education and the Sultan Qaboos University." -- p. i
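    Threshold adaptation for a binary classifier amounts to searching, on a sample of the evaluated traffic, for the score cut-off that maximises accuracy. A minimal sketch of that search, with hypothetical scores and labels rather than anything from the thesis's STA2018 dataset:

```python
def best_threshold(scores, labels, grid=None):
    """Pick the discriminating threshold that maximises accuracy on a sample.
    `scores` are model prediction scores in [0, 1]; `labels` are 0/1 truths."""
    grid = grid or [i / 100 for i in range(1, 100)]

    def acc(t):
        # Predict attack (1) when score >= t; count matches with the labels.
        return sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)

    return max(grid, key=acc)

# Hypothetical model scores and true labels (1 = attack)
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   1,   1,   1,   1]
t = best_threshold(scores, labels)  # any cut-off in (0.35, 0.4] separates perfectly
```

    Re-running this search on a small sample of each new traffic window is what lets the fixed batch-trained model track the traffic's shifting statistics.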