
    Deep learning: enhancing the security of software-defined networks

    Software-defined networking (SDN) is a communication paradigm that promotes network flexibility and programmability by separating the control plane from the data plane. SDN consolidates the logic of network devices into a single entity known as the controller. This raises significant security challenges related to the architecture and its associated characteristics, such as programmability and centralisation; notably, security flaws pose a risk to controller integrity, confidentiality and availability. By detaching the control logic from switching and routing devices, SDN forms a central plane, the network controller, that facilitates communication between applications and devices. The architecture enhances network resilience, simplifies management procedures and supports network policy enforcement. However, it is vulnerable to new attack vectors that can target the controller. Current security solutions rely on traditional measures such as firewalls or intrusion detection systems (IDS). An IDS can use two different approaches: signature-based or anomaly-based detection. The signature-based approach is incapable of detecting zero-day attacks, while anomaly-based detection suffers from high false-positive and false-negative alarm rates. Inaccurate alarms may have significant consequences, particularly for threats that target the controller. Thus, improving the accuracy of the IDS will enhance controller security and, subsequently, SDN security. A centralised entity that controls the entire network is a primary target for intruders. The controller sits at a central point between the application and data planes and communicates with them through two interfaces, known as the northbound and southbound interfaces respectively. Communications between the controller and the application and data planes are prone to various attacks, such as eavesdropping and tampering.
    The controller software is vulnerable to attacks such as buffer and stack overflows, which enable remote code execution and can result in attackers taking control of the entire network. Additionally, traditional network attacks become more destructive in SDN, since a successful attack on the controller affects the whole network. This thesis introduces a threat detection approach aimed at improving the accuracy and efficiency of the IDS, which is essential for controller security. To evaluate the effectiveness of the proposed framework, an empirical study of SDN controller security was conducted to identify, formalise and quantify security concerns related to the SDN architecture, specifically threats originating from the existence of the control plane. The framework comprises two stages, using deep learning (DL) algorithms and clustering algorithms respectively. DL algorithms were used to reduce the dimensionality of the inputs, which were then forwarded to clustering algorithms in the second stage. Features were compressed to a single value, simplifying and improving the performance of the clustering algorithm. Rather than using the raw output of the neural network, the framework applies a distinctive dimensionality-reduction technique that represents an entire input record by a single value: its reconstruction error. Using a DL algorithm in the pre-training stage helped address the dimensionality problem associated with k-means clustering, and using unsupervised algorithms facilitated the discovery of new attacks. Further, the study compares generative energy-based models (restricted Boltzmann machines) with non-probabilistic models (autoencoders). The framework was implemented in TensorFlow across four scenarios, and simulation results were statistically analysed using a confusion matrix and compared with similar related works.
    The proposed framework, adapted from existing similar approaches, produced promising outcomes and is a strong candidate for deployment in modern threat detection systems in SDN. It was implemented in TensorFlow and benchmarked on the KDD99 dataset. Simulation results showed that using the DL algorithm to reduce dimensionality significantly improved detection accuracy and reduced false-positive and false-negative alarm rates, and that the framework consistently outperformed the competing approaches evaluated. This improvement is a further step towards a reliable IDS that enhances the security of SDN controllers.
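
    The two-stage idea described above can be sketched in miniature. This is a toy illustration, not the thesis implementation: a linear autoencoder (equivalent to PCA) stands in for the deep model, the data are synthetic stand-ins for KDD99 records, and k-means with k=2 runs on the one reconstruction-error scalar per record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flow records: "normal" traffic lies near a 2-D
# subspace of a 10-D feature space; "attack" records fall outside it.
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(80, 2)) @ basis + 0.05 * rng.normal(size=(80, 10))
attack = 3.0 * rng.normal(size=(20, 10))
X = np.vstack([normal, attack])

# Stage 1: a linear autoencoder (equivalent to PCA) compresses each
# record to ONE scalar: its reconstruction error.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2]                                  # 2 latent units
err = np.linalg.norm(Xc - Xc @ W.T @ W, axis=1)

# Stage 2: 1-D k-means (k=2) on the reconstruction errors.
centers = np.array([err.min(), err.max()])
for _ in range(20):
    labels = np.abs(err[:, None] - centers).argmin(axis=1)
    centers = np.array([err[labels == k].mean() for k in (0, 1)])

# Records in the high-error cluster are flagged as suspected attacks.
flagged = labels == centers.argmax()
```

    In the thesis, stage one would instead be a trained RBM or deep autoencoder, but the interface to stage two is the same: a single reconstruction-error value per record feeding the clustering algorithm.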

    A novel algorithm for software defined networks model to enhance the quality of services and scalability in wireless network

    Software-defined networks (SDN) have replaced the traditional network architecture by separating the control plane from the forwarding plane. SDN technology pools computing resources to provide a more effective service than the aggregated usage of individual Internet resources. Resource breakdown during allocation is a major concern in cloud computing due to the diverse and highly complex architecture of its resources. Such breakdowns delay job completion and negatively affect quality of service (QoS). To promote error-free task scheduling, this study presents a promising fault-tolerant scheduling technique. For optimum QoS, the suggested restricted Boltzmann machine (RBM) approach takes into account key characteristics such as the current consumption of resources and their failure rate. The efficiency of the proposed approach is verified in the MATLAB toolbox using widely adopted measures such as resource consumption, average processing time, throughput and success rate.
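
    The placement idea, scoring resources by current utilization and failure rate, can be sketched with a hypothetical greedy heuristic. The resource names, weights, and per-task load below are illustrative assumptions, and the paper's RBM model itself is not reproduced; only the two inputs it considers are used.

```python
# Hypothetical fault-aware task placement: each resource is scored by its
# current utilization and historical failure rate; each task goes to the
# lowest-risk resource (a simple stand-in for the RBM-based scoring).

def risk(util, fail_rate, w_util=0.5, w_fail=0.5):
    """Combined risk score; lower means a safer placement."""
    return w_util * util + w_fail * fail_rate

def schedule(task_ids, resources):
    """Greedily assign each task to the currently lowest-risk resource."""
    placement = {}
    for t in task_ids:
        name = min(resources, key=lambda r: risk(resources[r]["util"],
                                                 resources[r]["fail"]))
        placement[t] = name
        resources[name]["util"] += 0.1   # assume each task adds ~10% load
    return placement

resources = {
    "vm1": {"util": 0.2, "fail": 0.30},   # lightly loaded but failure-prone
    "vm2": {"util": 0.6, "fail": 0.01},   # reliable but busy
    "vm3": {"util": 0.1, "fail": 0.05},   # lightly loaded and reliable
}
plan = schedule(["t1", "t2", "t3"], resources)
```

    A learned model would replace the fixed weights in `risk` with scores inferred from historical breakdown data.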

    Effective and Secure Healthcare Machine Learning System with Explanations Based on High Quality Crowdsourcing Data

    Affordable cloud computing technologies allow users to efficiently outsource, store, and manage their Personal Health Records (PHRs) and share them with their caregivers or physicians. With the exponential growth of stored large-scale clinical data and the growing need for personalized care, researchers are keen on developing data mining methodologies to uncover hidden patterns in such data. While studies have shown that these advances can significantly improve the performance of various healthcare applications for clinical decision making and personalized medicine, the collected medical datasets are highly ambiguous and noisy. Thus, it is essential to develop better tools for disease progression and survival rate predictions, where the dataset needs to be cleaned before it is used and useful feature selection techniques need to be employed before prediction models can be constructed. In addition, predictions without explanations prevent medical personnel and patients from adopting such healthcare deep learning models, so any prediction model must come with explanations. Finally, despite the efficiency of machine learning systems and their outstanding prediction performance, reusing pre-trained models remains a risk, since most machine learning modules contributed and maintained by third parties lack proper checking to ensure that they are robust to various adversarial attacks; mechanisms for detecting such attacks are needed. In this thesis, we address all of the above issues: (i) Privacy-Preserving Disease Treatment & Complication Prediction System (PDTCPS): a privacy-preserving disease treatment and complication prediction scheme is proposed, which allows authorized users to conduct searches for disease diagnosis, personalized treatments, and prediction of potential complications.
    (ii) Incentivizing High-Quality Crowdsourcing Data for Disease Prediction: a new incentive model with individual rationality and platform profitability features is developed to encourage different hospitals to share high-quality data so that better prediction models can be constructed. We also explore how data cleaning and feature selection techniques affect the performance of the prediction models. (iii) Explainable Deep Learning Based Medical Diagnostic System: a deep learning based medical diagnosis system (DL-MDS) is presented, which integrates heterogeneous medical data sources to produce better disease diagnoses, with explanations, for authorized users who submit personalized health-related queries. (iv) Attacks on RNN-Based Healthcare Learning Systems and Their Detection & Defense Mechanisms: potential attacks on Recurrent Neural Network (RNN) based ML systems are identified, and low-cost detection and defense schemes are designed to prevent such adversarial attacks. Finally, we conduct extensive experiments using both synthetic and real-world datasets to validate the feasibility and practicality of the proposed systems.
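
    The cleaning and feature-selection step mentioned in (ii) can be illustrated with a minimal NumPy sketch. The data are synthetic, and the two techniques shown (a variance threshold for cleaning, correlation ranking for selection) are common stand-ins rather than the thesis's actual methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clinical matrix: 200 patients x 4 features; only feature 0 is
# predictive of the binary outcome, and feature 3 is near-constant noise.
y = rng.integers(0, 2, size=200)
X = np.column_stack([
    y + 0.3 * rng.normal(size=200),                    # informative
    rng.normal(size=200),                              # irrelevant
    rng.normal(size=200),                              # irrelevant
    np.full(200, 5.0) + 1e-6 * rng.normal(size=200),   # near-constant
])

# Cleaning step: drop features whose variance is below a threshold.
keep = X.var(axis=0) > 1e-3
Xc = X[:, keep]

# Selection step: rank remaining features by |correlation| with the label.
corr = np.array([abs(np.corrcoef(Xc[:, j], y)[0, 1])
                 for j in range(Xc.shape[1])])
ranked = np.argsort(corr)[::-1]   # best feature first
```

    A prediction model would then be trained only on the top-ranked columns, which is the effect the thesis measures when comparing cleaned against raw data.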

    Intelligent Computing for Big Data

    Recent advances in artificial intelligence have the potential to further develop current big data research. The Special Issue on ‘Intelligent Computing for Big Data’ highlighted a number of recent studies related to the use of intelligent computing techniques in the processing of big data for text mining, autism diagnosis, behaviour recognition, and blockchain-based storage.

    Machine Learning Algorithms for Smart Data Analysis in Internet of Things Environment: Taxonomies and Research Trends

    Machine learning techniques will contribute to making Internet of Things (IoT) applications among the most significant sources of new data in the future. In this context, network systems are endowed with the capacity to access a variety of experimental data across a plethora of network devices, study the data, obtain knowledge, and make informed decisions based on the datasets at their disposal. This study is limited to supervised and unsupervised machine learning (ML) techniques, regarded as the bedrock of IoT smart data analysis. It reviews and discusses substantial issues related to supervised and unsupervised ML techniques, highlights the advantages and limitations of each algorithm, and discusses research trends and recommendations for further study.
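
    The supervised/unsupervised distinction the survey is organised around can be shown on the same toy sensor data: a nearest-centroid classifier learns from labels, while k-means recovers similar structure without them. All data and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups of IoT sensor readings (e.g. temperature, humidity).
a = rng.normal([20.0, 30.0], 1.0, size=(50, 2))
b = rng.normal([28.0, 70.0], 1.0, size=(50, 2))
X, y = np.vstack([a, b]), np.array([0] * 50 + [1] * 50)

# Supervised: nearest-centroid classifier built from labelled data.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
pred_sup = np.linalg.norm(X[:, None] - centroids, axis=2).argmin(axis=1)

# Unsupervised: k-means recovers the same grouping without any labels.
centers = X[[0, -1]].astype(float)          # seed from two data points
for _ in range(10):
    lab = np.linalg.norm(X[:, None] - centers, axis=2).argmin(axis=1)
    centers = np.array([X[lab == k].mean(axis=0) for k in (0, 1)])
```

    The trade-off the survey discusses is visible here: the supervised model needs labelled data but yields named classes, while the clustering output must still be interpreted.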

    IoT Crawler with Behavior Analyzer at Fog layer for Detecting Malicious Nodes

    The limitations in terms of power and processing in IoT (Internet of Things) nodes make them easy prey for malicious attacks, threatening business and industry. Detecting malicious nodes before they trigger an attack is therefore highly desirable. This paper introduces a special-purpose IoT crawler that works as an inspector to catch malicious nodes. The crawler is deployed in the fog layer to inherit its capabilities and to act as an intermediary between the things and the cloud computing nodes. It collects data streams from IoT nodes according to a priority criterion. A behavior analyzer with a machine learning core then detects malicious nodes based on the node behavior extracted from the collected data streams. The performance of the behavior analyzer was investigated using three machine learning algorithms: AdaBoost, random forest and extra trees. On the tested data, the analyzer produced its best testing accuracy with extra trees, achieving 98.3%, compared with AdaBoost and random forest.
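
    A minimal sketch of the three-way classifier comparison, assuming scikit-learn is available and using synthetic stand-ins for the crawler's behaviour features (packet rate, mean payload size, destination fan-out are hypothetical names; the paper's real feature set and data are its own).

```python
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              ExtraTreesClassifier)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic node-behaviour features; malicious nodes skew all three upward.
benign = rng.normal([10, 200, 3], [2, 40, 1], size=(300, 3))
malicious = rng.normal([40, 600, 20], [8, 100, 5], size=(60, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 300 + [1] * 60)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      random_state=0, stratify=y)

# Fit each ensemble and record its testing accuracy.
scores = {}
for name, clf in [("adaboost", AdaBoostClassifier(random_state=0)),
                  ("random_forest", RandomForestClassifier(random_state=0)),
                  ("extra_trees", ExtraTreesClassifier(random_state=0))]:
    clf.fit(Xtr, ytr)
    scores[name] = clf.score(Xte, yte)
```

    On real, harder-to-separate behaviour data, the accuracy gaps between the three ensembles become meaningful; on this toy data all three score near 100%.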

    A comparison of statistical machine learning methods in heartbeat detection and classification

    In health care, patients with heart problems require quick responsiveness in a clinical setting or in the operating theatre. Towards that end, automated classification of heartbeats is vital, as some heartbeat irregularities are time-consuming to detect. Therefore, analysis of electrocardiogram (ECG) signals is an active area of research. The methods proposed in the literature depend on the structure of a heartbeat cycle. In this paper, we use interval- and amplitude-based features together with a few samples from the ECG signal as a feature vector. We study a variety of classification algorithms, focusing especially on a type of arrhythmia known as ventricular ectopic beats (VEB). We compare the performance of the classifiers against algorithms proposed in the literature and make recommendations regarding features, sampling rate, and the choice of classifier to apply in a real-time clinical setting. The extensive study is based on the MIT-BIH arrhythmia database. Our main contributions are the evaluation of existing classifiers over a range of sampling rates, the recommendation of a detection methodology to employ in a practical setting, and the extension of the notion of a mixture of experts to a larger class of algorithms.
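
    The feature vector described, interval and amplitude features plus a few raw signal samples, might be assembled as follows. The signal, peak positions, and window length below are toy assumptions rather than MIT-BIH data, though the 360 Hz sampling rate matches that database.

```python
import numpy as np

# Hypothetical beat record: R-peak sample indices and a raw signal
# (toy stand-ins for MIT-BIH annotations), sampled at 360 Hz.
fs = 360
sig = np.zeros(4 * fs)
r_peaks = np.array([300, 660, 1080, 1380])   # note the late third beat
sig[r_peaks] = [1.0, 1.1, 1.8, 0.9]

def beat_features(sig, r_peaks, i, n_samples=4):
    """Interval + amplitude features for beat i, plus raw samples at the peak."""
    pre_rr = (r_peaks[i] - r_peaks[i - 1]) / fs     # seconds since last beat
    post_rr = (r_peaks[i + 1] - r_peaks[i]) / fs    # seconds to next beat
    amp = sig[r_peaks[i]]                           # R-peak amplitude
    window = sig[r_peaks[i] - n_samples // 2 : r_peaks[i] + n_samples // 2]
    return np.concatenate([[pre_rr, post_rr, amp], window])

fv = beat_features(sig, r_peaks, 2)   # 3 derived features + 4 raw samples
```

    A VEB-like beat typically shows a short pre-RR interval followed by a compensatory pause and an abnormal amplitude, which is why these interval and amplitude features carry most of the discriminative signal.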