
    Automatic Building of a Powerful IDS for the Cloud Based on Deep Neural Network by Using a Novel Combination of Simulated Annealing Algorithm and Improved Self-Adaptive Genetic Algorithm

    Cloud computing (CC) is the fastest-growing data hosting and computational technology, and it stands today as a satisfactory answer to the problem of data storage and computing. Most organizations are therefore migrating their services into the cloud because of its appealing features and tangible advantages. Nevertheless, providing the privacy and security needed to protect cloud assets and resources is still a very challenging issue. To address this issue, we propose a smart approach that automatically constructs an efficient and effective anomaly-based network IDS built on a Deep Neural Network (DNN), using a novel hybrid optimization framework, "ISAGASAA". The ISAGASAA framework combines our new self-adaptive heuristic search algorithm, the "Improved Self-Adaptive Genetic Algorithm" (ISAGA), with the Simulated Annealing Algorithm (SAA). Our approach uses ISAGASAA to seek the optimal or near-optimal combination of the most pertinent values of the parameters involved in building the DNN-based IDS or impacting its performance, so as to guarantee a high detection rate, high accuracy, and a low false alarm rate. The experimental results demonstrate the capability of our IDS to uncover intrusions with high detection accuracy and a low false alarm rate, and show its superiority over state-of-the-art methods.
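
    The paper's code is not reproduced here, but the core idea of ISAGASAA, a genetic search whose acceptance of offspring is tempered by a simulated-annealing criterion, can be sketched as below. The search space, placeholder fitness function, and cooling/mutation schedules are illustrative assumptions, not the authors' published settings.

```python
import math
import random

# Hypothetical search space for DNN-based IDS hyperparameters; the paper's
# exact parameter set and fitness function are not published here, so this
# only illustrates the GA + SA hybrid idea.
SPACE = {
    "hidden_layers": [1, 2, 3, 4],
    "neurons": [32, 64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.2, 0.4],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Placeholder: in the paper this would train a DNN IDS and combine
    # detection rate, accuracy, and false-alarm rate into one score.
    return -abs(ind["hidden_layers"] - 3) - abs(math.log10(ind["learning_rate"]) + 3)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def ga_sa_search(pop_size=20, generations=30, temp=1.0, cooling=0.9):
    pop = [random_individual() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    mut_rate = 0.3  # decayed below as a stand-in for ISAGA's self-adaptation
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            child = mutate(crossover(a, b), mut_rate)
            # SA-style acceptance: keep a worse child with a probability that
            # shrinks as the temperature cools, to escape local optima.
            delta = fitness(child) - fitness(a)
            if delta >= 0 or random.random() < math.exp(delta / temp):
                children.append(child)
            else:
                children.append(a)
        pop = children
        temp *= cooling
        mut_rate *= 0.98
        best = max([best] + pop, key=fitness)
    return best

print(ga_sa_search())
```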

    Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review

    Globally, the external Internet is increasingly being connected to contemporary industrial control systems. As a result, there is an immediate need to protect these networks from several threats. The key infrastructure of industrial activity can be protected from harm by using an intrusion detection system (IDS), a preventive mechanism that recognizes new kinds of dangerous threats and hostile activities. This study examines the most recent artificial intelligence (AI) techniques used to create IDSs for many kinds of industrial control networks, with a particular emphasis on IDSs based on deep transfer learning (DTL). DTL can be seen as a type of information fusion that merges and/or adapts knowledge from multiple domains to enhance performance on the target task, particularly when labeled data in the target domain is scarce. Publications issued after 2015 were taken into account. The selected publications were divided into three categories: DTL-only and IDS-only papers, covered in the introduction and background, and DTL-based IDS papers, which form the core of this review. Reading this review will give researchers a better grasp of the current state of DTL approaches used in IDSs across many different types of networks. Other useful information is also covered, such as the datasets used, the sort of DTL employed, the pre-trained network, the IDS techniques, the evaluation metrics (including accuracy/F-score and false alarm rate (FAR)), and the improvement gained. The algorithms and methods used in several studies are presented, clearly illustrating the principles of each DTL-based IDS subcategory.
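
    As an illustration of the general DTL pattern this review surveys, the following sketch pre-trains a small Keras classifier on a data-rich source network and then fine-tunes only a fresh head on scarce target-domain flows. The feature count, architecture, and dataset placeholders are assumptions for illustration, not any reviewed paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES = 41   # hypothetical flow-feature count (e.g. an NSL-KDD-style vector)

# Shared representation, to be pre-trained on the data-rich source network.
inputs = tf.keras.Input(shape=(N_FEATURES,))
x = layers.Dense(128, activation="relu")(inputs)
shared = layers.Dense(64, activation="relu", name="shared_repr")(x)
base = models.Model(inputs, shared)

# 1) Pre-train a source classifier end to end on abundant labeled flows.
source_model = models.Model(base.input,
                            layers.Dense(1, activation="sigmoid")(base.output))
source_model.compile(optimizer="adam", loss="binary_crossentropy",
                     metrics=["accuracy"])
# source_model.fit(X_source, y_source, epochs=10)   # large labeled source set

# 2) Transfer: freeze the shared layers and train only a new head on the
#    scarce labeled data from the target (e.g. industrial) network.
base.trainable = False
target_model = models.Model(base.input,
                            layers.Dense(1, activation="sigmoid")(base.output))
target_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                     loss="binary_crossentropy", metrics=["accuracy"])
# target_model.fit(X_target, y_target, epochs=20)   # few labeled target flows
```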

    Proceedings, MSVSCC 2018

    Proceedings of the 12th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 19, 2018 at VMASC in Suffolk, Virginia. 155 pp.

    Deep Learning-Powered Computational Intelligence for Cyber-Attacks Detection and Mitigation in 5G-Enabled Electric Vehicle Charging Station

    An electric vehicle charging station (EVCS) infrastructure is the backbone of transportation electrification. However, the EVCS has various cyber-attack vulnerabilities in software, hardware, the supply chain, and incumbent legacy technologies such as network, communication, and control. Therefore, proactively monitoring, detecting, and defending against these attacks is very important. State-of-the-art approaches are not agile and intelligent enough to detect, mitigate, and defend against various cyber-physical attacks on the EVCS system. To overcome these limitations, this dissertation designs, develops, implements, and tests data-driven, deep learning-powered computational intelligence to detect and mitigate cyber-physical attacks at the network and physical layers of 5G-enabled EVCS infrastructure. The application of 5G slicing to ensure security and the service level agreement (SLA) in the EVCS ecosystem has also been studied. Various cyber-attacks on the network in a standalone 5G-enabled EVCS environment have been considered, such as distributed denial of service (DDoS), false data injection (FDI), advanced persistent threats (APT), and ransomware. Mathematical models for these cyber-attacks have been developed, and the impact of the attacks on EVCS operation has been analyzed. Various deep learning-powered intrusion detection systems have been proposed to detect attacks using local electrical and network fingerprints. Furthermore, a novel detection framework has been designed and developed to deal with ransomware threats in the high-speed, high-dimensional, multimodal data and assets from eccentric stakeholders of the connected automated vehicle (CAV) ecosystem. To mitigate the adverse effects of cyber-attacks on EVCS controllers, novel data-driven digital clones based on Twin Delayed Deep Deterministic Policy Gradient (TD3) Deep Reinforcement Learning (DRL) have been developed. In addition, various brute-force and controller-clone-based methods have been devised and tested to aid in defending against and mitigating the impact of attacks on EVCS operation. The performance of the proposed mitigation method has been compared with that of a benchmark Deep Deterministic Policy Gradient (DDPG)-based digital clone approach. Simulation results obtained from Python, Matlab/Simulink, and NetSim demonstrate that the cyber-attacks are disruptive and detrimental to the operation of the EVCS, and that the proposed detection and mitigation methods are effective and perform better than conventional and benchmark techniques for the 5G-enabled EVCS.
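
    For readers unfamiliar with TD3, the mitigation algorithm named above, the following minimal PyTorch sketch shows its three signature mechanisms: twin critics, target-policy smoothing, and delayed actor updates. State/action dimensions and network sizes are illustrative assumptions and do not reproduce the dissertation's controller model.

```python
import torch
import torch.nn as nn

STATE, ACTION, MAX_A = 8, 2, 1.0   # assumed controller state/action dimensions

def mlp(inp, out, act=None):
    layers = [nn.Linear(inp, 256), nn.ReLU(), nn.Linear(256, out)]
    if act is not None:
        layers.append(act)
    return nn.Sequential(*layers)

actor = mlp(STATE, ACTION, nn.Tanh())
critic1, critic2 = mlp(STATE + ACTION, 1), mlp(STATE + ACTION, 1)
actor_t = mlp(STATE, ACTION, nn.Tanh()); actor_t.load_state_dict(actor.state_dict())
critic1_t = mlp(STATE + ACTION, 1); critic1_t.load_state_dict(critic1.state_dict())
critic2_t = mlp(STATE + ACTION, 1); critic2_t.load_state_dict(critic2.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_c = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()),
                         lr=3e-4)

def td3_update(batch, step, gamma=0.99, tau=0.005, policy_delay=2):
    s, a, r, s2, done = batch
    with torch.no_grad():
        noise = (0.2 * torch.randn_like(a)).clamp(-0.5, 0.5)  # target smoothing
        a2 = (actor_t(s2) * MAX_A + noise).clamp(-MAX_A, MAX_A)
        q_t = torch.min(critic1_t(torch.cat([s2, a2], 1)),
                        critic2_t(torch.cat([s2, a2], 1)))    # twin-critic min
        target = r + gamma * (1 - done) * q_t
    sa = torch.cat([s, a], 1)
    loss_c = ((critic1(sa) - target) ** 2).mean() + \
             ((critic2(sa) - target) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    if step % policy_delay == 0:                              # delayed actor update
        loss_a = -critic1(torch.cat([s, actor(s) * MAX_A], 1)).mean()
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        for net, tgt in [(actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)]:
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1 - tau).add_(tau * p.data)      # Polyak averaging

# Smoke test on a random replay batch (placeholders, not EVCS data).
B = 64
batch = (torch.randn(B, STATE), torch.rand(B, ACTION) * 2 - 1,
         torch.randn(B, 1), torch.randn(B, STATE), torch.zeros(B, 1))
td3_update(batch, step=0)
```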

    Automated Test Selection for Android Apps Based on APK and Activity Classification

    Several techniques exist for mobile test automation, from script-based techniques to automated test generation based on GUI models. Most techniques fall short of being adopted extensively by practitioners because of the very costly definition (and maintenance) of test cases. We present a novel testing framework for Android apps that allows a developer to write effective test scripts without having to know the implementation details and user interface of the app under test. The main goal of the framework is to generate adaptive tests that can be executed on a significant number of apps, or on different releases of the same app, without manual editing of the tests. The framework consists of: (1) a Test Scripting Language that allows the tester to write generic test scripts tailored to activity and app categories; (2) a State Graph Modeler that creates a model of the app's GUI, identifying activities (i.e., screens) and widgets; (3) an app classifier that determines the type of application under test; (4) an activity classifier that determines the purpose of each screen; (5) a test adapter that executes test scripts compatible with the specific app and activity, automatically tailoring them to the classes of the app and the activities under test. We empirically evaluated the components of our testing framework. The classifiers were able to outperform available approaches in the literature. The framework was able to correctly adapt high-level test cases to 28 out of 32 applications and to reduce the LOC of the test scripts by around 90%. We conclude that machine learning can be fruitfully applied to the creation of high-level, adaptive test cases for Android apps. Our framework is modular in nature and allows expansion through the addition of new commands to be executed on the classified apps and activities.
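
    A hedged sketch of the app-classification step (component 3): the paper's exact features and model are not given here, but a plain TF-IDF text classifier over strings extracted from an APK illustrates the idea. All package names, tokens, and categories below are toy examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: text extracted from an APK (package name, labels,
# widget ids). The paper's actual feature set and categories may differ.
apk_texts = [
    "com.example.shop cart checkout price product",
    "com.example.news article headline feed read",
    "com.example.mail inbox compose send attachment",
]
categories = ["shopping", "news", "email"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(apk_texts, categories)

print(clf.predict(["com.mystore.app cart checkout price"]))  # likely ['shopping']
```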

    Topology Optimization via Machine Learning and Deep Learning: A Review

    Topology optimization (TO) is a method of deriving an optimal design that satisfies given load and boundary conditions within a design domain. The method enables effective design without an initial design, but its use has been limited by high computational costs. At the same time, machine learning (ML) methodology, including deep learning, has made great progress in the 21st century, and accordingly, many studies have applied ML to TO to enable effective and rapid optimization. This study therefore reviews and analyzes previous research on ML-based TO (MLTO). Two different perspectives are used to review the studies: (1) the TO perspective and (2) the ML perspective. The TO perspective addresses "why" to use ML for TO, while the ML perspective addresses "how" to apply ML to TO. In addition, the limitations of current MLTO research and future research directions are examined.
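
    One recurring MLTO pattern is a learned surrogate that maps problem conditions directly to a density field, replacing the iterative solver at inference time. The sketch below, with an assumed grid size, input channels, and architecture, shows the shape of such a model; it is illustrative and not drawn from a specific reviewed paper.

```python
import torch
import torch.nn as nn

# Illustrative surrogate: a small encoder-decoder CNN mapping problem
# conditions (here 2 assumed channels, e.g. load and boundary masks on a
# 64x64 design grid) to a material-density field.
class TOSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),          # 64 -> 32
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 64
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),                  # density in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = TOSurrogate()
conditions = torch.rand(4, 2, 64, 64)   # batch of load/boundary masks
density = model(conditions)             # (4, 1, 64, 64) predicted designs
# Training would minimize e.g. MSE against densities produced by a classical
# SIMP optimizer: loss = ((density - simp_targets) ** 2).mean()
```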

    Towards Effectiveness Of Detecting IoT Botnet Attacks Using Supervised Machine Learning

    With the rapid explosion of the Internet of Things (IoT), IoT-enabled devices have reached every corner of the globe, connecting billions of devices to the global Internet. Among the threats IoT faces, botnets remain the one that can profit the most from its security weaknesses. Machine learning-based techniques to detect malicious traffic have been a widely researched topic in the last few years. Most studies simply use these techniques in an isolated fashion, where the machine learning algorithms are trained and tested on the same stream of labeled traffic. This study analyzes the use of supervised machine learning techniques from a practical perspective. To be practical and effective, a machine learning algorithm should go beyond this and be able to classify traffic coming from different sources, networks, or software. Moreover, network traffic datasets are usually on the order of millions of records, which are then used to train machine learning models. This study therefore also explores the impact of training set size, to find out how much data is actually required to train an effective machine learning model.
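
    The study's two practical questions, cross-source generalization and training-set size, can be framed as in the following sketch. The arrays stand in for flow features and labels from two different networks; the random placeholders and model choice are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# X_a/y_a and X_b/y_b are hypothetical placeholders for traffic features and
# botnet labels captured on two different networks.
rng = np.random.default_rng(0)
X_a, y_a = rng.normal(size=(5000, 20)), rng.integers(0, 2, 5000)
X_b, y_b = rng.normal(size=(2000, 20)), rng.integers(0, 2, 2000)

# (1) Cross-source evaluation: train on network A, test on network B.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_a, y_a)
print("cross-source F1:", round(f1_score(y_b, model.predict(X_b)), 3))

# (2) Learning curve over training-set size.
for n in [100, 500, 1000, 5000]:
    m = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_a[:n], y_a[:n])
    print(n, "samples -> F1:", round(f1_score(y_b, m.predict(X_b)), 3))
```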

    Deep Learning for Phishing Detection: Taxonomy, Current Challenges and Future Directions

    This work was supported in part by the Ministry of Higher Education under the Fundamental Research Grant Scheme under Grant FRGS/1/2018/ICT04/UTM/01/1, and in part by the Faculty of Informatics and Management, University of Hradec Kralove, through SPEV project under Grant 2102/2022. Phishing has become an increasing concern and has captured the attention of end-users as well as security experts. Existing phishing detection techniques still suffer from deficient performance accuracy and an inability to detect unknown attacks, despite decades of development and improvement. Motivated to solve these problems, many researchers in the cybersecurity domain have shifted their attention to phishing detection that capitalizes on machine learning techniques. Deep learning has emerged as a branch of machine learning that has become a promising solution for phishing detection in recent years. This study therefore proposes a taxonomy of deep learning algorithms for phishing detection by examining 81 selected papers using a systematic literature review approach. The paper first introduces the concepts of phishing and deep learning in the context of cybersecurity. Then, taxonomies of phishing detection and deep learning algorithms are provided to classify the existing literature into various categories. Next, taking the proposed taxonomy as a baseline, this study comprehensively reviews the state-of-the-art deep learning techniques and analyzes their advantages as well as disadvantages. Subsequently, the paper discusses various issues that deep learning faces in phishing detection and proposes future research directions to overcome these challenges. Finally, an empirical analysis is conducted to evaluate the performance of various deep learning techniques in a practical context, and to highlight the related issues that motivate researchers in their future work. The results obtained from the empirical experiment show that the common issues among most of the state-of-the-art deep learning algorithms are manual parameter tuning, long training time, and deficient detection accuracy.
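
    As a concrete instance of the kind of deep learning model such a taxonomy covers, the sketch below builds a character-level URL classifier in Keras. The vocabulary size, sequence length, architecture, and toy URLs are assumptions for illustration, not the configuration of any reviewed paper.

```python
import numpy as np
from tensorflow.keras import layers, models

MAX_LEN, VOCAB = 200, 128   # URLs padded/truncated to 200 ASCII code points

def encode(url):
    # Map each character to its code point, clip to the vocabulary, pad with 0.
    codes = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    return codes + [0] * (MAX_LEN - len(codes))

model = models.Sequential([
    layers.Embedding(VOCAB, 32),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),   # P(phishing)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

urls = ["http://paypa1-login.example.com/verify", "https://www.wikipedia.org/"]
labels = np.array([1, 0])                    # toy labels: 1 = phishing, 0 = benign
X = np.array([encode(u) for u in urls])
model.fit(X, labels, epochs=1, verbose=0)    # real training needs a large corpus
```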

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss, and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publication of the results, among others.
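
    As a small worked example of the multispectral analysis such studies rely on, the normalized difference vegetation index (NDVI) is computed per pixel as (NIR - Red) / (NIR + Red). The band arrays below are random placeholders standing in for satellite imagery (e.g. Sentinel-2 B8/B4 bands).

```python
import numpy as np

# Random placeholders for near-infrared and red reflectance bands.
nir = np.random.rand(256, 256).astype(np.float32)
red = np.random.rand(256, 256).astype(np.float32)

ndvi = (nir - red) / (nir + red + 1e-8)   # epsilon avoids divide-by-zero
print(ndvi.min(), ndvi.max())             # values fall in roughly [-1, 1]
```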

    MACHINE LEARNING AND DEEP LEARNING APPROACHES FOR GENE REGULATORY NETWORK INFERENCE IN PLANT SPECIES

    The construction of gene regulatory networks (GRNs) is vital for understanding the regulation of metabolic pathways, biological processes, and complex traits during plant growth and responses to environmental cues and stresses. The increasing availability of public databases has facilitated the development of numerous methods for inferring gene regulatory relationships between transcription factors and their targets. However, there is limited research on supervised learning techniques that utilize the regulatory relationships of plant species available in public databases. This study investigates the potential of machine learning (ML), deep learning (DL), and hybrid approaches for constructing GRNs in plant species, specifically Arabidopsis thaliana, poplar, and maize. Challenges arise from the limited training data for gene regulatory pairs, especially in less-studied species such as poplar and maize. Nonetheless, our results demonstrate that hybrid models integrating ML and artificial neural network (ANN) techniques significantly outperform traditional methods in predicting gene regulatory relationships. The best-performing hybrid models achieved over 95% accuracy on holdout test datasets, surpassing traditional ML and ANN models, and also showed good accuracy in lignin biosynthesis pathway analysis. Employing transfer learning techniques, this study has also successfully transferred known gene regulation knowledge from one species to another, substantially improving performance and demonstrating the viability of cross-species learning using deep learning-based approaches. This study contributes to the growing body of knowledge on GRN prediction and construction for plant species, highlighting the value of adopting hybrid models and transfer learning techniques. The study and its results help pave the way for future research on learning from known to unknown species and will be conducive to the advance of modern genomics and bioinformatics.
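
    Supervised GRN inference of the kind described here can be framed as link prediction over (transcription factor, target) pairs. The sketch below uses random placeholders and an assumed pairwise feature construction; it illustrates the setup, not the study's actual hybrid model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Each candidate (TF, target) pair becomes one feature vector built from the
# two genes' expression profiles, labeled 1 if the regulation is known.
# Shapes and feature construction are illustrative assumptions.
rng = np.random.default_rng(1)
n_pairs, n_conditions = 2000, 50
tf_expr = rng.normal(size=(n_pairs, n_conditions))      # TF expression profiles
tg_expr = rng.normal(size=(n_pairs, n_conditions))      # target expression profiles
X = np.hstack([tf_expr, tg_expr, tf_expr * tg_expr])    # pairwise features
y = rng.integers(0, 2, n_pairs)                         # known-interaction labels

clf = GradientBoostingClassifier().fit(X[:1500], y[:1500])
print("holdout accuracy:", accuracy_score(y[1500:], clf.predict(X[1500:])))
# Cross-species transfer in the deep-learning variant would pre-train the
# model on Arabidopsis pairs, then fine-tune on the smaller poplar/maize sets.
```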