
    Intrusion Detection System for Platooning Connected Autonomous Vehicles

    The deployment of Connected Autonomous Vehicles (CAVs) in Vehicular Ad Hoc Networks (VANETs) requires secure wireless communication to ensure reliable connectivity and safety. However, this wireless communication is vulnerable to a variety of cyber-attacks, such as spoofing or jamming. In this paper, we describe an Intrusion Detection System (IDS) based on Machine Learning (ML) techniques designed to detect both spoofing and jamming attacks in a CAV environment. The IDS would reduce the risk of traffic disruption and accidents caused by cyber-attacks. The detection engine of the presented IDS is based on the ML algorithms Random Forest (RF), k-Nearest Neighbour (k-NN) and One-Class Support Vector Machine (OCSVM), combined with data fusion techniques in a cross-layer approach. To the best of the authors' knowledge, the proposed IDS is the first in the literature to use a cross-layer approach to detect both spoofing and jamming attacks against the communication of platooning connected vehicles. The evaluation results of the implemented IDS show a high accuracy of over 90% using training datasets containing both known and unknown attacks.
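    To make the described detection engine concrete, the following is a minimal sketch in Python with scikit-learn: three detectors (Random Forest, k-NN, One-Class SVM) trained on hypothetical cross-layer features and fused by a simple majority vote. The feature names, synthetic data and fusion rule are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch of an RF + k-NN + OCSVM detection engine with majority-vote fusion.
# Features and data are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical cross-layer features per message: [RSSI, packet delivery ratio, heading error, speed delta]
X_train = rng.normal(size=(500, 4))
y_train = rng.integers(0, 2, size=500)                 # 0 = benign, 1 = attack (spoofing/jamming)

rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
ocsvm = OneClassSVM(nu=0.1).fit(X_train[y_train == 0])  # one-class model fit on benign traffic only

def detect(x):
    """Fuse the three detectors by a simple majority vote (an assumed fusion rule)."""
    votes = [
        rf.predict([x])[0],
        knn.predict([x])[0],
        1 if ocsvm.predict([x])[0] == -1 else 0,         # OCSVM flags outliers as -1
    ]
    return int(sum(votes) >= 2)

print(detect(rng.normal(size=4)))
```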

    Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

    Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, has been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms. Comment: Accepted for publication in Pattern Recognition, 201
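    As a toy illustration of the test-time perturbations the survey discusses, the sketch below crafts an FGSM-style adversarial example against a linear classifier; the model, data and perturbation budget are assumptions made purely for illustration.

```python
# Minimal NumPy sketch of a test-time evasion ("adversarial example") against a toy
# linear classifier. Everything here is a placeholder, not an example from the survey.
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=10), 0.0              # toy linear classifier: predict sign(w.x + b)
x = rng.normal(size=10)
x = x if w @ x + b > 0 else -x               # ensure the clean sample is classified as +1

eps = 0.3
x_adv = x - eps * np.sign(w)                 # FGSM-style step against the decision score

print("clean score:", w @ x + b)
print("adversarial score:", w @ x_adv + b)   # pushed toward the opposite class
```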

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository. Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector Machine Applications
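    A hedged sketch of one of the attack families analysed here, gradient-based evasion of a trained SVM, could look like the following: start from a sample classified as malicious and take small steps that lower the decision score until it crosses to the benign side. The data, step size and stopping rule are illustrative assumptions, not the chapter's exact formulation.

```python
# Rough sketch of gradient-based evasion against a trained RBF SVM using a
# numerical gradient of the decision function. Data and parameters are toy assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)          # 0 = benign, 1 = malicious
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

def num_grad(f, x, h=1e-4):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

score = lambda z: clf.decision_function([z])[0]   # positive score = malicious class
x = X[y == 1][0].copy()                           # start from a malicious point
for _ in range(200):
    if score(x) < 0:                              # crossed to the benign side
        break
    g = num_grad(score, x)
    x -= 0.05 * g / (np.linalg.norm(g) + 1e-12)   # small normalized descent step

print("final decision score:", score(x))          # negative means classified as benign
```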

    Efficient Database Generation for Data-driven Security Assessment of Power Systems

    Power system security assessment methods require large datasets of operating points to train or test their performance. As historical data often contain a limited number of abnormal situations, simulation data are necessary to accurately determine the security boundary. Generating such a database is an extremely demanding task, which becomes intractable even for small system sizes. This paper proposes a modular and highly scalable algorithm for computationally efficient database generation. Using convex relaxation techniques and complex network theory, we discard large infeasible regions and drastically reduce the search space. We explore the remaining space with a highly parallelizable algorithm and substantially decrease computation time. Our method accommodates numerous definitions of power system security; here we focus on the combination of N-k security and small-signal stability. Demonstrating our algorithm on the IEEE 14-bus and NESTA 162-bus systems, we show that it outperforms existing approaches, requiring less than 10% of the time other methods require. Comment: Database publicly available at: https://github.com/johnnyDEDK/OPs_Nesta162Bus - Paper accepted for publication in IEEE Transactions on Power System
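    The generate-filter-evaluate pattern described above might be sketched as follows: sample candidate operating points, discard those a cheap surrogate check deems infeasible (standing in for the paper's convex relaxations), and run the expensive assessment on the survivors in parallel. The surrogate check and the "secure" criterion are placeholders, not the paper's actual N-k and small-signal formulation.

```python
# Hedged sketch of database generation: sample, prune with a cheap surrogate check,
# then evaluate the remaining points in parallel. All checks here are placeholders.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(3)

def cheap_relaxation_ok(op):
    # Placeholder feasibility proxy: total load below an assumed capacity bound.
    return op.sum() <= 5.0

def detailed_assessment(op):
    # Placeholder for the expensive security checks (e.g. N-k, small-signal stability).
    return {"operating_point": op.tolist(), "secure": bool(np.all(op < 1.2))}

if __name__ == "__main__":
    candidates = [rng.uniform(0, 1.5, size=5) for _ in range(10_000)]
    survivors = [op for op in candidates if cheap_relaxation_ok(op)]   # prune the search space
    with ProcessPoolExecutor() as pool:                                # embarrassingly parallel
        database = list(pool.map(detailed_assessment, survivors))
    print(len(database), "labelled operating points")
```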