
    Adaptive Learning Based Whale Optimization and Convolutional Neural Network Algorithm for Distributed Denial of Service Attack Detection in Software Defined Network Environment

    SDNs (Software Defined Networks) have emerged as a game-changing network concept. They can fulfill the ever-increasing needs of future networks and are increasingly being employed in data centres and operator networks. They do, however, confront certain fundamental security concerns, such as DDoS (Distributed Denial of Service) attacks. To address these concerns, this paper proposes the ALWO+CNN method, which combines ALWO (Adaptive Learning based Whale Optimization) with CNNs (Convolutional Neural Networks). Initially, preprocessing is performed using the KMC (K-Means Clustering) algorithm, which significantly reduces noisy data. The preprocessed data is then used in the feature selection step, carried out by ALWO, whose purpose is to retain the important characteristics of the dataset and discard the superfluous ones, thereby enhancing DDoS classification accuracy. The selected characteristics are then used in the classification step, where CNNs identify and categorize DDoS attacks efficiently. Finally, the ALWO+CNN algorithm leverages the rate and asymmetry properties of the flows to detect suspicious flows flagged by the detection trigger mechanism, and the controller then takes the necessary steps to defend against the DDoS attack. The ALWO+CNN algorithm greatly improves detection accuracy and efficiency, as well as preventing DDoS attacks on SDNs. The experimental results show that the proposed ALWO+CNN method outperforms current algorithms in terms of accuracy, precision, recall, F-measure, and computational complexity.
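
    The following is a minimal, hypothetical sketch of the three-stage pipeline described above, assuming a tabular flow dataset. K-means is used to filter samples far from their cluster centroid (one plausible reading of "reduce noisy data"), a mutual-information ranking stands in for the ALWO feature-selection step since the optimizer itself is not reproduced here, and a small 1-D CNN performs the final classification. All variable names, thresholds, and the CNN layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the ALWO+CNN pipeline stages; not the authors' code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif
import tensorflow as tf

# Placeholder flow data: 2000 flows, 40 features, binary label (1 = DDoS).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))
y = rng.integers(0, 2, size=2000)

# Stage 1 -- K-means preprocessing: drop samples far from their cluster
# centroid, treating them as noise.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.percentile(dist, 95)
X, y = X[keep], y[keep]

# Stage 2 -- feature selection: the paper uses an adaptive-learning whale
# optimizer (ALWO); a simple mutual-information ranking stands in for it here.
scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:16]
X_sel = X[:, top]

# Stage 3 -- CNN classification on the selected features.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_sel.shape[1], 1)),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_sel[..., None], y, epochs=3, batch_size=64, verbose=0)
```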

    Network-Aware AutoML Framework for Software-Defined Sensor Networks

    As current solutions for detecting distributed denial of service (DDoS) attacks need additional infrastructure to handle high aggregate data rates, they are not suitable for sensor networks or the Internet of Things. Besides, the security architecture of software-defined sensor networks needs to account for the vulnerabilities of both software-defined networks and sensor networks. In this paper, we propose a network-aware automated machine learning (AutoML) framework which detects DDoS attacks in software-defined sensor networks. Our framework selects an ideal machine learning algorithm to detect DDoS attacks in network-constrained environments, using metrics such as variable traffic load, heterogeneous traffic rate, and detection time while preventing over-fitting. Our contributions are two-fold: (i) we first investigate the trade-off between the efficiency of ML algorithms and the network/traffic state in the scope of DDoS detection; (ii) we design and implement a software architecture containing open-source network tools, with the deployment of multiple ML algorithms. Lastly, we show that under denial of service attacks, our framework ensures that traffic packets are still delivered within the network, albeit with additional delay.
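
    A hedged sketch of the core idea, choosing a detector by trading off accuracy against per-flow inference time as a proxy for suitability in a constrained network. The candidate models, the latency weighting, and the placeholder data are assumptions, not the framework's actual selection policy.

```python
# Illustrative model-selection loop; weights and candidates are assumptions.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))              # placeholder flow features
y = rng.integers(0, 2, size=5000)            # placeholder attack labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

candidates = {
    "knn": KNeighborsClassifier(n_neighbors=5),
    "rf": RandomForestClassifier(n_estimators=50, random_state=1),
    "logreg": LogisticRegression(max_iter=1000),
}

def score(model, latency_weight=0.5):
    """Higher is better: accuracy penalized by per-flow detection time."""
    model.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    pred = model.predict(X_te)
    per_flow_ms = 1000 * (time.perf_counter() - t0) / len(X_te)
    return accuracy_score(y_te, pred) - latency_weight * per_flow_ms

best = max(candidates, key=lambda name: score(candidates[name]))
print("selected detector:", best)
```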

    Towards Effective Detection of Botnet Attacks using BoT-IoT Dataset

    In the world of cybersecurity, intrusion detection systems (IDS) have leveraged the power of artificial intelligence for the efficient detection of attacks. This is done by applying supervised machine learning (ML) techniques to labeled datasets. A growing body of literature has been devoted to the use of the BoT-IoT dataset for ML-based IDS frameworks. A small number of related works have recognized the need for a balanced dataset and applied techniques to alleviate the issue of imbalance; however, a significant portion of related research failed to treat the imbalance in the BoT-IoT dataset. A lack of unanimity was also observed in the literature regarding the taxonomy of balancing techniques. The study presented here seeks to explore the degree to which the imbalance of the dataset has been treated and to determine the taxonomy of techniques used. In this thesis, a comparative analysis is performed using a small subset of the entire dataset to determine the threshold sample limit at which the model achieves its highest accuracy. In addition, a study was conducted to determine the extent to which each feature of the dataset has an impact on the threshold performance. The study is implemented on the BoT-IoT dataset using three supervised ML classifiers: K-Nearest Neighbor, Random Forest, and Logistic Regression. The four principal findings of this thesis are: existing taxonomies are not consistently understood and the imbalance of the dataset is often left untreated; high performance across all metrics can be achieved on a highly imbalanced dataset; the model is able to achieve the threshold performance using a small subset of samples; and certain features had varying impact on the threshold value under different techniques.
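
    A minimal sketch of the threshold-sample-limit idea: sweep training-subset sizes for the three classifiers named above and report the smallest size whose accuracy is within a tolerance of the best observed. The BoT-IoT loading, the sweep grid, and the tolerance are placeholders, not the thesis' exact procedure.

```python
# Sketch of a training-subset-size sweep; data and tolerance are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
X = rng.normal(size=(20000, 15))             # stand-in for BoT-IoT features
y = (X[:, 0] + rng.normal(scale=0.5, size=20000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)

models = {
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=2),
    "LR": LogisticRegression(max_iter=1000),
}

sizes = [200, 500, 1000, 2000, 5000, 10000]
for name, model in models.items():
    accs = []
    for n in sizes:
        model.fit(X_tr[:n], y_tr[:n])
        accs.append(accuracy_score(y_te, model.predict(X_te)))
    # Threshold sample limit: first size within 0.5% of the best accuracy seen.
    best = max(accs)
    threshold = sizes[next(i for i, a in enumerate(accs) if a >= best - 0.005)]
    print(f"{name}: threshold sample limit ~{threshold}, accuracies {np.round(accs, 3)}")
```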

    Evaluation of machine learning techniques for intrusion detection in software defined networking

    The widespread growth of the Internet paved the way for a new network architecture, a need that was filled by Software Defined Networking (SDN). SDN separates the control and data planes to overcome the challenges that came along with the rapid growth and complexity of the network architecture. However, centralizing the new architecture also introduced new security challenges and created the demand for stronger security measures. The focus here is on an Intrusion Detection System (IDS) for Distributed Denial of Service (DDoS) attacks, which are a serious threat to the network. There are several ways of detecting an attack, and with the rapid growth of machine learning (ML) and artificial intelligence, this study evaluates several ML algorithms for detecting DDoS attacks on the system. Several factors affect the performance of ML-based IDS in SDN; feature selection, the training dataset, and the implementation of the classifying models are some of the most important, as is the balance between resource usage and the performance of the implemented model. The model implemented in the thesis uses a dataset created from the traffic flow within the system, and the models used are Support Vector Machine (SVM), Naive Bayes, Decision Tree, and Logistic Regression. The accuracy of the models has been over 95%, apart from Logistic Regression, which achieved 90% accuracy. The ML-based algorithm has been more accurate than the non-ML-based algorithm: it learns from different features of the traffic flow to differentiate between normal traffic and attack traffic. Most previously implemented ML-based IDS are based on public datasets; using a dataset created from the flow of the experimental environment allows the model to be trained on real-time data. The experiment only detects the attack traffic and does not take any action against it, but these promising results can be used for further development of the model.
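
    A short sketch of the comparison the thesis describes: cross-validated accuracy for the four classifiers on a table of flow features. The feature names and synthetic data below are assumptions standing in for the traffic captured from the thesis' own testbed.

```python
# Minimal comparison of the four classifiers on placeholder flow features.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Example flow features: packet rate, byte rate, flow duration, src-IP entropy.
X = rng.normal(size=(4000, 4))
y = (X[:, 0] + X[:, 3] > 0.5).astype(int)    # placeholder "attack" label

for name, clf in [
    ("SVM", SVC(kernel="rbf")),
    ("Naive Bayes", GaussianNB()),
    ("Decision Tree", DecisionTreeClassifier(random_state=3)),
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {acc.mean():.3f} (+/- {acc.std():.3f})")
```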

    Machine Learning-Based Anomaly Detection in Cloud Virtual Machine Resource Usage

    Anomaly detection is an important activity in cloud computing systems because it aids in the identification of odd behaviours or actions that may result in software glitches, security breaches, and performance difficulties. Detecting aberrant resource utilization trends in virtual machines (VMs) is a typical application of anomaly detection in cloud computing. Currently, the most serious cyber threat is the distributed denial-of-service attack, which degrades the afflicted server's resources and network resources, such as bandwidth and buffer size, restricting the server's capacity to provide resources to legitimate customers. To recognize attacks and normal events, machine learning techniques such as Quadratic Support Vector Machines (QSVM) and Random Forest, and neural network models such as MLPs and Autoencoders, are employed. Various machine learning algorithms are applied to the optimised NSL-KDD dataset to provide an efficient and accurate predictor of network intrusions. In this research, we propose a neural network based model, experiment with various central and spiral rearrangements of the features for distinguishing between different types of attacks, and support our approach of better preserving feature structure with image representations. The results are analysed and compared to existing models and prior research. The outcomes of this study have practical implications for improving the security and performance of cloud computing systems, specifically in the area of identifying and mitigating network intrusions.
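
    The abstract mentions spiral rearrangements of features into image representations. Below is one plausible spiral fill of a feature vector into a square grid, written as a hedged numpy sketch; the paper's exact central and spiral layouts are not specified here, so the function names and the 6x6 example are assumptions.

```python
# One possible spiral rearrangement of a feature vector into a square "image";
# not the paper's exact layout.
import numpy as np

def spiral_indices(n):
    """Return (row, col) coordinates visiting an n x n grid in an inward spiral."""
    coords, top, bottom, left, right = [], 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        coords += [(top, c) for c in range(left, right + 1)]
        coords += [(r, right) for r in range(top + 1, bottom + 1)]
        if top < bottom:
            coords += [(bottom, c) for c in range(right - 1, left - 1, -1)]
        if left < right:
            coords += [(r, left) for r in range(bottom - 1, top, -1)]
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return coords

def to_spiral_image(features, n):
    """Place a length n*n feature vector into an n x n grid along the spiral."""
    img = np.zeros((n, n), dtype=float)
    for value, (r, c) in zip(features, spiral_indices(n)):
        img[r, c] = value
    return img

# Example: 36 NSL-KDD-style features mapped onto a 6x6 grid for an image-based model.
features = np.arange(36, dtype=float)
print(to_spiral_image(features, 6))
```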

    Guarding the Cloud: An Effective Detection of Cloud-Based Cyber Attacks using Machine Learning Algorithms

    Cloud computing has gained significant popularity due to its reliability and scalability, making it a compelling area of research. However, this technology is not without its challenges, including network connectivity dependencies, downtime, vendor lock-in, limited control, and, most importantly, its vulnerability to attacks. Guarding the cloud is therefore the objective of this paper, which focuses, in a novel approach, on two prevalent cloud attacks: Distributed Denial-of-Service (DDoS) attacks and Man-in-the-Cloud (MitC) computing attacks. To detect these malicious activities, machine learning algorithms, namely Decision Trees, Support Vector Machine (SVM), Naive Bayes, and K-Nearest Neighbors (KNN), are utilized. Experimental simulations of DDoS and MitC attacks are conducted within a cloud environment, and the resulting data is compiled into a dataset for training and evaluating the machine learning algorithms. The study reveals the effectiveness of these algorithms in accurately identifying and classifying malicious activities, effectively distinguishing them from legitimate network traffic. The findings highlight the Decision Tree algorithm as having the most promising potential for guarding the cloud and mitigating the impact of various cyber threats.
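
    A hedged sketch of the kind of three-class comparison the paper describes (benign vs. DDoS vs. MitC), reporting per-class metrics for the four named algorithms. The synthetic features and labels are placeholders for the paper's simulated cloud traffic.

```python
# Hypothetical three-class comparison (benign / DDoS / MitC) on placeholder data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(4)
X = rng.normal(size=(6000, 10))
y = rng.integers(0, 3, size=6000)            # 0 = benign, 1 = DDoS, 2 = MitC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=4)

for name, clf in [
    ("Decision Tree", DecisionTreeClassifier(random_state=4)),
    ("SVM", SVC()),
    ("Naive Bayes", GaussianNB()),
    ("KNN", KNeighborsClassifier()),
]:
    clf.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, clf.predict(X_te),
                                target_names=["benign", "ddos", "mitc"]))
```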

    TPAAD: two‐phase authentication system for denial of service attack detection and mitigation using machine learning in software‐defined network.

    Software-defined networking (SDN) has received considerable attention and adoption owing to its inherent advantages, such as enhanced scalability, increased adaptability, and the ability to exercise centralized control. However, the control plane of the system is vulnerable to denial-of-service (DoS) attacks, which are a primary focus for attackers and have the potential to cause substantial delays and packet loss. In this study, we present a novel system called Two-Phase Authentication for Attack Detection that aims to enhance the security of SDN by mitigating DoS attacks. The methodology involves packet filtration and machine learning classification, followed by the targeted restriction of malicious network traffic: instead of completely deactivating the host, the emphasis lies on blocking the harmful communication. Support vector machine and K-nearest neighbours algorithms were utilized for efficient detection on the CICDoS 2017 dataset, and the deployed model was used within an environment designed for the identification of threats in SDN. Based on observations of the banned queue, our system allows a host to reconnect when it is no longer contributing to malicious traffic. The experiments were run on Ubuntu in VMware, and an SDN environment was created using Mininet and the RYU controller. The results of the tests demonstrated enhanced performance in various aspects, including the reduction of false positives, the minimization of central processing unit utilization and control channel bandwidth consumption, the improvement of the packet delivery ratio, and the decrease in the number of flow requests submitted to the controller. These results confirm that our Two-Phase Authentication for Attack Detection architecture identifies and mitigates SDN DoS attacks with low overhead.
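
    A toy sketch of the two-phase logic with a banned queue: phase one is a rate-based filtration check, phase two is an ML verdict, and a banned host is readmitted once it stops sending malicious traffic. The thresholds, the classifier stub, and the readmission rule are illustrative assumptions, not the TPAAD implementation.

```python
# Toy sketch of two-phase detection with a banned queue; values are assumed.
from collections import defaultdict

PACKET_RATE_LIMIT = 500          # phase-1 filtration threshold (pkts/s), assumed
CLEAN_WINDOWS_TO_UNBAN = 3       # readmit after this many benign windows, assumed

banned = set()
clean_streak = defaultdict(int)

def ml_is_malicious(flow_features):
    """Stand-in for the SVM/KNN classifier trained on CICDoS 2017."""
    return flow_features["syn_ratio"] > 0.8

def handle_window(host, packet_rate, flow_features):
    """Process one measurement window for a host and update the banned queue."""
    malicious = packet_rate > PACKET_RATE_LIMIT or ml_is_malicious(flow_features)
    if malicious:
        banned.add(host)
        clean_streak[host] = 0
        return "drop traffic"
    if host in banned:
        clean_streak[host] += 1
        if clean_streak[host] >= CLEAN_WINDOWS_TO_UNBAN:
            banned.discard(host)          # host reconnects once it behaves
            return "readmitted"
        return "still banned"
    return "forward traffic"

print(handle_window("10.0.0.5", 900, {"syn_ratio": 0.95}))   # -> drop traffic
print(handle_window("10.0.0.5", 40, {"syn_ratio": 0.1}))     # -> still banned
```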

    Reliable Machine Learning Model for IIoT Botnet Detection

    Due to the growing number of Internet of Things (IoT) devices, network attacks such as denial of service (DoS) and flooding are on the rise, creating security and reliability issues. As a result of these attacks, IoT devices suffer from denial of service and network disruption. Researchers have implemented different techniques to identify attacks aimed at vulnerable IoT devices. In this study, we propose a novel feature selection algorithm, FGOA-kNN, based on a hybrid of filter and wrapper selection approaches to select the most relevant features. The approach first ranks the features with the help of clustering and then applies the Grasshopper Optimization Algorithm (GOA) to reduce the set of top-ranked features. Moreover, a proposed algorithm, IHHO, selects and adapts the neural network's hyperparameters to detect botnets efficiently. The proposed Harris Hawks algorithm is enhanced with three modifications that improve the global search for optimal solutions. To tackle the problem of population diversity, a chaotic map function is utilized for initialization. The escape energy of the hawks is updated with a new nonlinear formula to avoid local minima and better balance exploration and exploitation. Furthermore, the exploitation phase of HHO is enhanced using a new elite operator, ROBL. The proposed model combines unsupervised clustering and supervised approaches to detect intrusion behaviors. The N-BaIoT dataset is utilized to validate the proposed model, and many recent techniques were used to assess and compare its performance. The results demonstrate that the proposed model is better than the other variants at detecting multiclass botnet attacks.
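
    To give a concrete feel for two of the ingredients mentioned above, here is a hedged sketch of a logistic-map chaotic initialization and the classical HHO escape-energy schedule. The paper's own IHHO formulas (its nonlinear escape-energy update and the ROBL elite operator) are not reproduced; the function names and parameter values below are assumptions.

```python
# Generic ingredients only: logistic-map initialization and classical HHO
# escape energy; the paper's IHHO modifications are not reproduced here.
import numpy as np

def logistic_map_population(pop_size, dim, lower, upper, r=4.0, seed=0.7):
    """Initialize a swarm with a chaotic logistic map instead of uniform noise."""
    x = seed
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = r * x * (1.0 - x)            # logistic map in (0, 1)
            pop[i, j] = lower + x * (upper - lower)
    return pop

def escape_energy(t, max_iter, e0):
    """Classical HHO escape energy with linear decay; the paper replaces this
    with a new nonlinear formula to balance exploration and exploitation."""
    return 2.0 * e0 * (1.0 - t / max_iter)

pop = logistic_map_population(pop_size=20, dim=10, lower=-1.0, upper=1.0)
print(pop.shape, escape_energy(t=50, max_iter=100, e0=np.random.uniform(-1, 1)))
```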

    A new proactive feature selection model based on the enhanced optimization algorithms to detect DRDoS attacks

    Cyberattacks have grown steadily over the last few years. The distributed reflection denial of service (DRDoS) attack, a new variant of the distributed denial of service (DDoS) attack, has been on the rise. DRDoS attacks are more difficult to mitigate due to the dynamics and attack strategy of this type of attack. The number of features influences the performance of an intrusion detection system that investigates the behavior of traffic; therefore, a feature selection model improves the accuracy of the detection mechanism and also reduces detection time by reducing the number of features. The proposed model, called the proactive feature selection (PFS) model, aims to detect DRDoS attacks based on feature selection. This model uses a nature-inspired optimization algorithm for feature subset selection. Three machine learning algorithms, i.e., k-nearest neighbor (KNN), random forest (RF), and support vector machine (SVM), were evaluated as potential classifiers for the selected features. We used the CICDDoS2019 dataset for evaluation purposes, and the performance of each classifier was compared to previous models. The results indicate that the suggested model works better than current approaches, providing a higher detection rate (DR), a lower false-positive rate (FPR), and increased detection accuracy (DA). The PFS model detects DRDoS attacks with an accuracy of 89.59%.
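
    A minimal sketch of the wrapper idea behind this kind of feature selection: a binary mask over the feature columns is scored by the cross-validated accuracy of a classifier (KNN here) on the selected features, and an optimizer searches over masks. The specific nature-inspired optimizer is not named in the abstract, so a random search stands in for it, and the synthetic data is a placeholder for CICDDoS2019.

```python
# Wrapper-style fitness for a feature subset; the optimizer and dataset are
# simple placeholders, not the PFS model itself.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(3000, 30))              # stand-in for CICDDoS2019 features
y = (X[:, 2] - X[:, 7] > 0).astype(int)

def fitness(mask, alpha=0.01):
    """Accuracy of KNN on the selected features, with a small per-feature penalty."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - alpha * mask.sum() / mask.size

# Random search stands in for the nature-inspired optimizer to show how the
# fitness function is used.
best_mask, best_fit = None, -1.0
for _ in range(20):
    mask = rng.random(X.shape[1]) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f
print(f"selected {best_mask.sum()} features, fitness {best_fit:.3f}")
```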