
    Generative Neural Network-Based Defense Methods Against Cyberattacks for Connected and Autonomous Vehicles

    The rapid advancement of communication and artificial intelligence technologies is propelling the development of connected and autonomous vehicles (CAVs), revolutionizing the transportation landscape. However, increased connectivity and automation also present heightened potential for cyber threats. Recently, the emergence of generative neural networks (NNs) has unveiled a myriad of opportunities for complementing CAV applications, including generative NN-based cybersecurity measures to protect the CAVs in a transportation cyber-physical system (TCPS) from known and unknown cyberattacks. The goal of this dissertation is to explore the utility of the generative NNs for devising cyberattack detection and mitigation strategies for CAVs. To this end, the author developed (i) a hybrid quantum-classical restricted Boltzmann machine (RBM)-based framework for in-vehicle network intrusion detection for connected vehicles and (ii) a generative adversarial network (GAN)-based defense method for the traffic sign classification system within the perception module of autonomous vehicles. The author evaluated the hybrid quantum-classical RBM-based intrusion detection framework on three separate real-world Fuzzy attack datasets and compared its performance with a similar but classical-only approach (i.e., a classical computer-based data preprocessing and RBM training). The results showed that the hybrid quantum-classical RBM-based intrusion detection framework achieved an average intrusion detection accuracy of 98%, whereas the classical-only approach achieved an average accuracy of 90%. For the second study, the author evaluated the GAN-based adversarial defense method for traffic sign classification against different white-box adversarial attacks, such as the fast gradient sign method, the DeepFool, the Carlini and Wagner, and the projected gradient descent attacks. 
The author compared the performance of the GAN-based defense method with several traditional benchmark defense methods, such as Gaussian augmentation, JPEG compression, feature squeezing, and spatial smoothing. The findings indicated that the GAN-based adversarial defense method for traffic sign classification outperformed all the benchmark defense methods under all the white-box adversarial attacks the author considered for evaluation. Thus, the contribution of this dissertation lies in utilizing the generative ability of existing generative NNs to develop novel high-performing cyberattack detection and mitigation strategies that are feasible to deploy in CAVs in a TCPS environment.
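The RBM-based detection idea in the first study can be illustrated with a minimal, classical-only sketch (the dissertation's hybrid quantum-classical pipeline and real CAN datasets are not reproduced here; all data below is synthetic and illustrative): an RBM trained only on normal in-vehicle traffic reconstructs normal frames well, so a high reconstruction error flags a likely intrusion.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Bernoulli restricted Boltzmann machine trained with one-step
    contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible bias
        self.c = np.zeros(n_hid)   # hidden bias
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def _hidden(self, v):
        return self._sigmoid(v @ self.W + self.c)

    def _visible(self, h):
        return self._sigmoid(h @ self.W.T + self.b)

    def fit(self, X, epochs=50):
        for _ in range(epochs):
            h0 = self._hidden(X)                                  # positive phase
            h_sample = (rng.random(h0.shape) < h0).astype(float)  # sample hiddens
            v1 = self._visible(h_sample)                          # reconstruction
            h1 = self._hidden(v1)                                 # negative phase
            self.W += self.lr * (X.T @ h0 - v1.T @ h1) / len(X)
            self.b += self.lr * (X - v1).mean(axis=0)
            self.c += self.lr * (h0 - h1).mean(axis=0)

    def reconstruction_error(self, X):
        return np.mean((X - self._visible(self._hidden(X))) ** 2, axis=1)

# Train on "normal" CAN-like bit patterns; fuzzed (random) frames then
# reconstruct poorly and exceed a threshold set on normal traffic alone.
normal = (rng.random((500, 16)) < 0.1).astype(float)   # sparse normal frames
attack = (rng.random((50, 16)) < 0.8).astype(float)    # dense fuzzed frames
rbm = RBM(16, 8)
rbm.fit(normal)
threshold = np.percentile(rbm.reconstruction_error(normal), 99)
flags = rbm.reconstruction_error(attack) > threshold
```

Because the threshold is fit on normal traffic only, this scheme needs no labeled attack data, which is what makes generative models attractive against unknown attacks.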

    Human-assisted self-supervised labeling of large data sets

    There is a severe demand for, and shortage of, large, accurately labeled datasets to train supervised computational intelligence (CI) algorithms in domains like unmanned aerial systems (UAS) and autonomous vehicles. This has hindered our ability to develop and deploy various computer vision algorithms in and across environments and niche domains for tasks like detection, localization, and tracking. Herein, I propose a new human-in-the-loop (HITL)-based growing neural gas (GNG) algorithm to minimize human intervention while labeling large UAS data collections over a shared geospatial area. Specifically, I address human-driven events like new class identification and mistake correction. I also address algorithm-centric operations like new pattern discovery and self-supervised labeling. Pattern discovery and identification through self-supervised labeling is made possible through open set recognition (OSR). Herein, I propose a classifier with the ability to say "I don't know" to identify outliers in the data and bootstrap deep learning (DL) models, specifically convolutional neural networks (CNNs), with the ability to classify N+1 classes. The effectiveness of the algorithms is demonstrated using simulated, realistic, ray-traced low-altitude UAS data from the Unreal Engine. The results show that it is possible to increase speed and reduce mental fatigue over hand labeling large image datasets.
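The "I don't know" behavior at the heart of the open set recognition step can be sketched with a simple confidence-threshold rule. This is a stand-in for the dissertation's actual OSR classifier, and the threshold value and logits below are illustrative: a prediction is accepted only when the softmax confidence is high, and everything else is routed to the extra (N+1)th "unknown" class.

```python
import numpy as np

def classify_open_set(logits, threshold=0.9):
    """Return the predicted class index, or -1 ("I don't know") when the
    softmax confidence falls below the rejection threshold."""
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax probabilities
    k = int(p.argmax())
    return k if p[k] >= threshold else -1

# A confident in-distribution sample vs. an ambiguous outlier.
print(classify_open_set(np.array([8.0, 0.5, 0.2])))   # -> 0 (known class)
print(classify_open_set(np.array([1.1, 1.0, 0.9])))   # -> -1 (rejected)
```

In the HITL loop, samples mapped to -1 are exactly the ones surfaced to the human for new-class identification, while confident predictions feed self-supervised labeling.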

    Landing AI on Networks: An equipment vendor viewpoint on Autonomous Driving Networks

    The tremendous achievements of Artificial Intelligence (AI) in computer vision, natural language processing, games, and robotics have extended the reach of the AI hype to other fields: in telecommunication networks, the long-term vision is to let AI fully manage, and autonomously drive, all aspects of network operation. In this industry vision paper, we discuss challenges and opportunities of Autonomous Driving Networks (ADN) driven by AI technologies. To understand how AI can be successfully landed in current and future networks, we start by outlining challenges that are specific to the networking domain, putting them in perspective with advances that AI has achieved in other fields. We then present a system view, clarifying how AI can be fitted into the network architecture. We finally discuss current achievements as well as future promises of AI in networks, and outline a roadmap to avoid bumps in the road that leads to true large-scale deployment of AI technologies in networks.

    Tiny Machine Learning Environment: Enabling Intelligence on Constrained Devices

    Running machine learning (ML) algorithms on constrained devices at the extreme edge of the network is problematic due to the computational overhead of ML algorithms, the resources available on the embedded platform, and the application budget (i.e., real-time requirements, power constraints, etc.). This has required the development of specific solutions and development tools for what is now referred to as TinyML. In this dissertation, we focus on improving the deployment and performance of TinyML applications, taking into consideration the aforementioned challenges, especially memory requirements. This dissertation contributes to the construction of the Edge Learning Machine (ELM) environment, a platform-independent open-source framework that provides three main TinyML services, namely shallow ML, self-supervised ML, and binary deep learning on constrained devices. In this context, this work includes the following steps, which are reflected in the thesis structure. First, we present a performance analysis of state-of-the-art shallow ML algorithms, including dense neural networks, implemented on mainstream microcontrollers. The comprehensive analysis in terms of algorithms, hardware platforms, datasets, preprocessing techniques, and configurations shows performance similar to that of a desktop machine and highlights the impact of these factors on overall performance. Second, despite the assumption that the scarcity of resources limits TinyML to model inference, we have gone a step further and enabled self-supervised on-device training on microcontrollers and tiny IoT devices by developing the Autonomous Edge Pipeline (AEP) system. AEP achieves accuracy comparable to that of the typical TinyML paradigm, i.e., models trained on resource-abundant devices and then deployed on microcontrollers. Next, we present the development of a memory allocation strategy for convolutional neural network (CNN) layers that optimizes memory requirements.
This approach reduces the memory footprint without affecting accuracy or latency. Moreover, e-skin systems share the main requirements of the TinyML field: enabling intelligence with low memory, low power consumption, and low latency. Therefore, we designed an efficient Tiny CNN architecture for e-skin applications. The architecture leverages the memory allocation strategy presented earlier and provides better performance than existing solutions. A major contribution of the thesis is CBin-NN, a library of functions for implementing extremely efficient binary neural networks on constrained devices. The library outperforms state-of-the-art NN deployment solutions by drastically reducing memory footprint and inference latency. All the solutions proposed in this thesis have been implemented on representative devices and tested in relevant applications, and the results are reported and discussed. The ELM framework is open source, and this work is becoming a useful, versatile toolkit for the IoT and TinyML research and development community.
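The appeal of binary networks such as those CBin-NN targets can be seen in a small sketch (illustrative only, not the CBin-NN API, and written in Python rather than the C a microcontroller library would use): once inputs and weights are constrained to {-1, +1}, a dense layer's multiply-accumulate collapses to XNOR and popcount operations, which is what drives the memory and latency savings.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} (sign binarization; sign(0) -> +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dense(x, W):
    """Binary dense layer: with inputs and weights in {-1, +1}, each dot
    product equals n_matches - n_mismatches, which on hardware reduces to
    an XNOR over packed bits followed by a population count."""
    xb, Wb = binarize(x), binarize(W)
    return xb @ Wb.T.astype(np.int32)   # integer accumulation, no floats

rng = np.random.default_rng(0)
x = rng.normal(size=8)        # one 8-dimensional input
W = rng.normal(size=(4, 8))   # 4 output neurons
out = binary_dense(x, W)
# Each output lies in [-8, 8] and is even (a sum of eight +/-1 terms).
```

In a deployed library the +/-1 values would additionally be bit-packed so that 32 or 64 weights fit in one machine word, which is where the drastic footprint reduction comes from.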

    Artificial Intelligence and Machine Learning in Cybersecurity: Applications, Challenges, and Opportunities for MIS Academics

    The availability of massive amounts of data, fast computers, and superior machine learning (ML) algorithms has spurred interest in artificial intelligence (AI). It is no surprise, then, that we observe an increase in the application of AI in cybersecurity. Our survey of AI applications in cybersecurity shows most of the present applications are in the areas of malware identification and classification, intrusion detection, and cybercrime prevention. We should, however, be aware that AI-enabled cybersecurity is not without its drawbacks. Challenges to AI solutions include a shortage of good-quality data to train machine learning models, the potential for exploits via adversarial AI/ML, and limited human expertise in AI. However, the rewards in terms of increased accuracy of cyberattack predictions, faster response to cyberattacks, and improved cybersecurity make it worthwhile to overcome these challenges. We present a summary of the current research on the application of AI and ML to improve cybersecurity, challenges that need to be overcome, and research opportunities for academics in management information systems.