
    Enhanced Study of Deep Learning Algorithms for Web Vulnerability Scanner

    The detection of online vulnerabilities is among the most important tasks in network security. In this paper, deep learning methodologies for these challenging detection problems are investigated using convolutional neural networks, long short-term memory networks, and generative adversarial networks. Experimental results demonstrate that deep learning approaches can significantly outperform standard methods. In addition, we examine the various factors that affect performance. This work can provide researchers with useful direction when designing network architectures and parameters for identifying web attacks.
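    The abstract names CNN, LSTM, and GAN model families but no concrete architecture. As a rough sketch only, one plausible instantiation of the LSTM approach is a byte-level classifier over raw HTTP requests; the class name RequestLSTM and every hyperparameter below are our assumptions, not the paper's model.

        # Illustrative only: a byte-level LSTM classifier for HTTP requests.
        # All names and hyperparameters are assumptions; the abstract does not
        # publish the paper's architecture.
        import torch
        import torch.nn as nn

        class RequestLSTM(nn.Module):
            def __init__(self, vocab_size=256, embed_dim=64, hidden_dim=128, num_classes=2):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)  # raw request bytes as tokens
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.fc = nn.Linear(hidden_dim, num_classes)      # benign vs. malicious logits

            def forward(self, x):
                # x: (batch, seq_len) token ids of an encoded HTTP request
                _, (h, _) = self.lstm(self.embed(x))
                return self.fc(h[-1])                             # classify from final hidden state

        model = RequestLSTM()
        toy_request = torch.randint(0, 256, (1, 200))  # stand-in for a real encoded request
        logits = model(toy_request)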

    The Emerging Threat of AI-Driven Cyber Attacks: A Review

    Cyberattacks are becoming more sophisticated and ubiquitous. Cybercriminals are inevitably adopting Artificial Intelligence (AI) techniques to evade detection in cyberspace and cause greater damage without being noticed. Researchers in the cybersecurity domain have not yet studied AI-powered cyberattacks closely enough to understand the level of sophistication this type of attack possesses. This paper investigates the emerging threat of AI-powered cyberattacks and provides insights into the malicious use of AI in cyberattacks. The study was performed through a three-step process, selecting articles that focus on AI-driven cyberattacks according to quality, inclusion, and exclusion criteria. Searches in ACM, arXiv, Blackhat, Scopus, Springer, MDPI, IEEE Xplore, and other sources were executed to retrieve relevant articles. Out of the 936 papers that met our search criteria, a total of 46 articles were finally selected for this study. The results show that 56% of the AI-driven cyberattack techniques identified were demonstrated in the access and penetration phase; 12% each in the exploitation and command-and-control phases; 11% in the reconnaissance phase; and 9% in the delivery phase of the cybersecurity kill chain. The findings of this study show that existing cyber defence infrastructures will become inadequate to address the increasing speed and complex decision logic of AI-driven attacks. Hence, organizations need to invest in AI cybersecurity infrastructures to combat these emerging threats.

    GAN-CAN: A Novel Attack to Behavior-Based Driver Authentication Systems

    For many years, car keys have been the sole means of authentication in vehicles. Whether the access control process is physical or wireless, entrusting the ownership of a vehicle to a single token is prone to theft. Modern vehicles equipped with Controller Area Network (CAN) bus technology collect a wealth of sensor data in real time, covering aspects such as the vehicle, the environment, and the driver. This data can be processed and analyzed to gain valuable insights for human behavior analysis. For this reason, many researchers have started developing behavior-based authentication systems. Many Machine Learning (ML) and Deep Learning (DL) models have been explored for behavior-based driver authentication, but security has not been a primary focus in the design of these systems. By collecting data in a moving vehicle, DL models can recognize patterns in the data and identify drivers based on their driving behavior. This can serve as an anti-theft system, as a thief would exhibit a different driving style than the vehicle owner. However, the assumption that an attacker cannot replicate the legitimate driver's behavior fails under certain conditions.

    In this thesis, we propose GAN-CAN, the first attack capable of fooling state-of-the-art behavior-based driver authentication systems in a vehicle. Based on the adversary's knowledge, we propose different GAN-CAN implementations. Our attack leverages the lack of security in the CAN bus to inject suitably designed time-series data that mimics the legitimate driver. Our malicious time-series data is generated by integrating a modified reinforcement learning technique with Generative Adversarial Networks (GANs) and an adapted training process. Furthermore, we conduct a thorough investigation into the safety implications of the injected values throughout the attack, to guarantee that they do not in any way undermine the safety of the vehicle and the individuals inside it. We also formalize a real-world implementation of a driver authentication system, considering possible vulnerabilities and exploits.

    We tested GAN-CAN on an improved version of the most efficient driver behavior-based authentication model in the literature and prove that our attack can fool it with an attack success rate of up to 99%. We show how an attacker, without prior knowledge of the authentication system, can steal a car by deploying GAN-CAN in an off-the-shelf system in under 22 minutes. Moreover, by considering the safety of the injected values, we demonstrate that GAN-CAN can successfully deceive the authentication system without compromising the overall safety of the vehicle. This highlights the urgent need to address the security vulnerabilities present in behavior-based driver authentication systems. Finally, we suggest some possible countermeasures to the GAN-CAN attack.
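    The abstract does not disclose the GAN-CAN architecture. The sketch below is a speculative PyTorch illustration of the generator side of such a time-series GAN; the signal set, window length, and all layer sizes are assumed for illustration, and the thesis's actual design (including its RL-modified training) will differ.

        # Speculative generator for CAN-like driving time series; everything
        # here (signals, sizes, names) is an illustrative assumption.
        import torch
        import torch.nn as nn

        SEQ_LEN, N_SIGNALS, NOISE_DIM = 64, 4, 32  # e.g. speed, rpm, steering, brake

        class CANGenerator(nn.Module):
            def __init__(self):
                super().__init__()
                self.fc = nn.Linear(NOISE_DIM, 128)
                self.lstm = nn.LSTM(128, 128, batch_first=True)
                self.out = nn.Linear(128, N_SIGNALS)  # one value per signal per time step

            def forward(self, z):
                # z: (batch, NOISE_DIM) noise, repeated across time, decoded by the LSTM
                h = torch.relu(self.fc(z)).unsqueeze(1).repeat(1, SEQ_LEN, 1)
                seq, _ = self.lstm(h)
                return self.out(seq)  # (batch, SEQ_LEN, N_SIGNALS) synthetic driving window

        # A discriminator (not shown) would score windows as legitimate-driver or fake;
        # a trained generator's windows would then be injected onto the CAN bus.
        fake_windows = CANGenerator()(torch.randn(8, NOISE_DIM))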

    Semantically Selective Augmentation for Deep Compact Person Re-Identification

    We present a deep person re-identification approach that combines semantically selective deep data augmentation with clustering-based network compression to generate high-performance, lightweight, fast inference networks. In particular, we propose to augment limited training data by sampling from a deep convolutional generative adversarial network (DCGAN) whose discriminator is constrained by a semantic classifier to explicitly control the domain specificity of the generation process. Thereby, we encode information in the classifier network which can be utilized to steer adversarial synthesis, and which fuels our CondenseNet ID-network training. We provide a quantitative and qualitative analysis of the approach and its variants on a number of datasets, obtaining results that outperform the state of the art on the LIMA dataset for long-term monitoring in indoor living spaces.
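    As a hedged illustration of steering GAN synthesis with a semantic classifier, the sketch below adds a classifier term to the generator objective so that generated samples stay in the target ("person") domain. Note this is a simplified generator-side variant: the paper applies its constraint through the discriminator, and G, D, C, person_class, and sem_weight are all placeholders, not the published architecture.

        # Illustrative semantic steering of GAN synthesis: alongside the usual
        # adversarial term, a frozen semantic classifier C pushes generated
        # samples toward the target "person" class. G, D, C are assumed
        # callables returning logits; this is an approximation of the paper's
        # discriminator-side constraint, not its exact formulation.
        import torch
        import torch.nn.functional as F

        def generator_loss(G, D, C, z, person_class, sem_weight=0.5):
            fake = G(z)                                    # candidate augmentation images
            d_logits = D(fake)                             # discriminator realness logits
            adv = F.binary_cross_entropy_with_logits(
                d_logits, torch.ones_like(d_logits))       # try to fool the discriminator
            target = torch.full((fake.size(0),), person_class, dtype=torch.long)
            sem = F.cross_entropy(C(fake), target)         # stay in the person domain
            return adv + sem_weight * sem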

    DMRN+16: Digital Music Research Network One-day Workshop 2021

    DMRN+16: Digital Music Research Network One-day Workshop 2021
    Queen Mary University of London, Tuesday 21st December 2021

    Keynote 1: Prof. Sophie Scott, Director, Institute of Cognitive Neuroscience, UCL.
    Title: "Sound on the brain - insights from functional neuroimaging and neuroanatomy"
    Abstract: In this talk I will use functional imaging and models of primate neuroanatomy to explore how sound is processed in the human brain. I will demonstrate that sound is represented cortically in different parallel streams. I will expand this to show how this can impact the concept of auditory perception, which arguably incorporates multiple kinds of distinct perceptual processes. I will address the roles that subcortical processes play in this, as well as the contributions from hemispheric asymmetries.

    Keynote 2: Prof. Gus Xia, Assistant Professor at NYU Shanghai.
    Title: "Learning interpretable music representations: from human stupidity to artificial intelligence"
    Abstract: Gus has been leading the Music X Lab in developing intelligent systems that help people better compose and learn music. In this talk, he will show us the importance of music representation for both humans and machines, and how to learn better music representations via the design of inductive bias. Once we have interpretable music representations, the potential applications are limitless.