
    Adversarial Machine Learning-Based Anticipation of Threats Against Vehicle-to-Microgrid Services

    In this paper, we study the expanding attack surface of Adversarial Machine Learning (AML) and the potential attacks against Vehicle-to-Microgrid (V2M) services. We present an anticipatory study of a multi-stage gray-box attack that can achieve a result comparable to a white-box attack. Adversaries aim to deceive the targeted Machine Learning (ML) classifier at the network edge into misclassifying the incoming energy requests from microgrids. With an inference attack, an adversary can collect real-time data from the communication between smart microgrids and a 5G gNodeB to train a surrogate (i.e., shadow) model of the targeted classifier at the edge. To anticipate the impact of an adversary's capability to collect real-time data instances, we study five cases, each representing a different amount of real-time data instances collected by the adversary. Out of six ML models trained on the complete dataset, K-Nearest Neighbour (K-NN) is selected as the surrogate model, and through simulations, we demonstrate that the multi-stage gray-box attack is able to mislead the ML classifier and cause an Evasion Increase Rate (EIR) of up to 73.2% using 40% less data than what a white-box attack needs to achieve a similar EIR. Comment: IEEE Global Communications Conference (Globecom), 2022, 6 pages, 2 figures, 4 tables.
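    A minimal sketch of the surrogate-model (gray-box) idea summarized above, assuming synthetic energy-request features, a random-forest stand-in for the edge classifier, and a simple gradient-free perturbation; none of these choices come from the paper itself.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for V2M energy-request features and labels.
    X = rng.normal(size=(2000, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Assumed target classifier deployed at the network edge.
    target = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Gray-box adversary: observes only a fraction of the traffic and trains a
    # K-NN surrogate (shadow) model on the target's predicted labels.
    frac = 0.4
    idx = rng.choice(len(X_train), int(frac * len(X_train)), replace=False)
    surrogate = KNeighborsClassifier(n_neighbors=5).fit(
        X_train[idx], target.predict(X_train[idx]))

    def perturb(x, eps=0.5, tries=50):
        """Gradient-free evasion against the surrogate: random perturbations in
        an eps-ball, keeping the first one the surrogate misclassifies."""
        base = surrogate.predict(x.reshape(1, -1))[0]
        for _ in range(tries):
            cand = x + rng.uniform(-eps, eps, size=x.shape)
            if surrogate.predict(cand.reshape(1, -1))[0] != base:
                return cand
        return x

    # Transfer the adversarial samples to the target and measure the extra
    # misclassifications they induce (a simplified reading of the EIR metric).
    X_adv = np.array([perturb(x) for x in X_test])
    clean_err = (target.predict(X_test) != y_test).mean()
    adv_err = (target.predict(X_adv) != y_test).mean()
    print(f"evasion increase (sketch): {adv_err - clean_err:.2%}")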

    Secure Estimation in V2X Networks with Injection and Packet Drop Attacks

    Vehicle-to-everything (V2X) communications are essential for facilitating cooperative intelligent transport system (C-ITS) components such as traffic safety and traffic efficiency applications. Sensing and telemetry are integral to the proper functioning of C-ITS. To this end, this paper examines how to ensure security in sensing systems for V2X networks. In particular, secure remote estimation of a Gauss-Markov process based on measurements made by a set of vehicles is considered. The measurements are collected by the individual vehicles and communicated via wireless links to a central fusion center. The system is attacked by malicious or compromised vehicles whose goal is to increase the estimation error. The attack is carried out through two mechanisms: false data injection (FDI) and garbage packet injection. This paper extends a previously proposed adaptive filtering algorithm for tackling FDI so that it handles both FDI and garbage packet injection, filtering out malicious observations and thus enabling secure estimates. The efficacy of the proposed filter is demonstrated numerically.
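    A minimal sketch of secure estimation via innovation gating, in the spirit of the adaptive filter described above; the scalar Gauss-Markov model, the 3-sigma gate, and the attack model are illustrative assumptions rather than the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)
    a, q, r = 0.95, 0.1, 0.2          # assumed dynamics and noise variances
    n_vehicles, n_steps = 5, 200
    attacker = 2                      # one compromised vehicle

    x, x_hat, p = 0.0, 0.0, 1.0
    sq_errors = []
    for k in range(n_steps):
        # True scalar Gauss-Markov process.
        x = a * x + rng.normal(scale=np.sqrt(q))

        # Each vehicle reports a noisy measurement; the attacker adds a bias
        # (FDI) or occasionally a huge outlier standing in for a garbage packet.
        z = x + rng.normal(scale=np.sqrt(r), size=n_vehicles)
        z[attacker] += 5.0 if rng.random() < 0.8 else 50.0

        # Time update at the fusion center.
        x_hat, p = a * x_hat, a * a * p + q

        # Measurement updates with innovation gating: reports whose normalized
        # innovation exceeds an assumed 3-sigma gate are filtered out.
        for i in range(n_vehicles):
            innovation = z[i] - x_hat
            s = p + r
            if innovation ** 2 / s > 9.0:
                continue
            gain = p / s
            x_hat += gain * innovation
            p *= (1 - gain)

        sq_errors.append((x - x_hat) ** 2)

    print(f"mean squared estimation error (sketch): {np.mean(sq_errors):.3f}")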

    A credibility score algorithm for malicious data detection in urban vehicular networks

    This paper introduces a method to detect malicious data in urban vehicular networks, where vehicles report their locations to road-side units controlling traffic signals at intersections. Malicious data can be injected by a selfish vehicle approaching a signalized intersection in order to get the green light immediately. Another source of malicious data is vehicles with malfunctioning sensors. Detection of the malicious data is based on a traffic model using cellular automata, which determines intervals representing the possible positions of vehicles. A credibility score algorithm is introduced to decide whether positions reported by particular vehicles are reliable and should be taken into account when controlling traffic signals. Extensive simulation experiments were conducted to verify the effectiveness of the proposed approach in realistic scenarios. The experimental results show that the proposed method detects malicious data with higher accuracy than the compared state-of-the-art methods. The improved detection accuracy makes it possible to mitigate the negative impact of malicious data on the performance of traffic signal control.
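    A minimal sketch of a credibility-score check against model-predicted position intervals, loosely following the idea above; the interval model (a crude stand-in for the cellular-automata traffic model), the score updates, and the acceptance threshold are illustrative assumptions.

    from dataclasses import dataclass

    V_MAX = 15.0       # assumed maximum travel distance per step [m]
    THRESHOLD = 0.5    # assumed credibility threshold for accepting reports

    @dataclass
    class VehicleState:
        last_pos: float      # last accepted position along the road [m]
        score: float = 1.0   # credibility score in [0, 1]

    def plausible_interval(state):
        """Interval of positions reachable since the last accepted report
        (a crude stand-in for the cellular-automata traffic model)."""
        return state.last_pos, state.last_pos + V_MAX

    def update_credibility(state, reported_pos):
        """Update the vehicle's score and return True if the reported position
        should be used for traffic-signal control."""
        lo, hi = plausible_interval(state)
        if lo <= reported_pos <= hi:
            state.score = min(1.0, state.score + 0.1)
            state.last_pos = reported_pos
        else:
            state.score = max(0.0, state.score - 0.3)  # penalize implausible jump
        return state.score >= THRESHOLD

    # Example: the last two reports claim an implausible jump toward the stop line.
    v = VehicleState(last_pos=120.0)
    for reported in [130.0, 144.0, 300.0, 310.0]:
        accepted = update_credibility(v, reported)
        print(reported, accepted, round(v.score, 2))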

    Self-reliant misbehavior detection in V2X networks

    The safety and efficiency of vehicular communications rely on the correctness of the data exchanged between vehicles. Location spoofing is a proven and powerful attack against Vehicle-to-everything (V2X) communication systems that can cause traffic congestion and other safety hazards. Recent work also demonstrates practical spoofing attacks that can confuse intelligent transportation systems at road intersections. In this work, we propose two self-reliant schemes, at the application layer and the physical layer, to detect such misbehaviors. These schemes can be run independently by each vehicle and do not rely on the assumption that the majority of vehicles are honest. We first propose a scheme that uses application-layer plausibility checks as feature vectors for machine learning models. Our results show that this scheme improves the precision of the plausibility checks by over 20% when they are used as feature vectors in KNN and SVM classifiers. We also show how to classify different types of known misbehaviors once they are detected. We then propose three novel physical-layer plausibility checks that leverage the received signal strength indicator (RSSI) of basic safety messages (BSMs). These plausibility checks use multi-step mechanisms that not only improve the detection rate but also reduce false positives. We comprehensively evaluate the performance of these plausibility checks on several types of attacks using the VeReMi dataset, which we enhance along the way. We show that the best-performing of the three physical-layer plausibility checks achieves an overall detection rate of 83.73% and a precision of 95.91%. The proposed application-layer and physical-layer plausibility checks provide a promising framework toward the deployment of self-reliant misbehavior detection systems.
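    A minimal sketch of an RSSI-based plausibility check combined with plausibility-check features fed to a KNN classifier, as outlined above; the log-distance path-loss model, tolerances, and synthetic labels are illustrative assumptions, not the paper's VeReMi-based evaluation.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)

    def expected_rssi(distance_m, tx_power_dbm=20.0, exponent=2.2, ref_loss_db=40.0):
        """Log-distance path-loss prediction of the RSSI expected at the
        distance implied by the sender's reported position (assumed model)."""
        return tx_power_dbm - ref_loss_db - 10 * exponent * np.log10(max(distance_m, 1.0))

    def rssi_plausible(reported_pos, own_pos, measured_rssi, tol_db=8.0):
        """Physical-layer check: does the measured RSSI of a BSM match the
        claimed transmitter-receiver distance within a tolerance?"""
        d = np.linalg.norm(np.asarray(reported_pos) - np.asarray(own_pos))
        return abs(measured_rssi - expected_rssi(d)) <= tol_db

    # Application-layer style features (one column per plausibility-check output),
    # generated synthetically here: [position jump, speed inconsistency, RSSI gap].
    n = 500
    honest = np.column_stack([rng.normal(0, 1, n), rng.normal(0, 1, n), rng.normal(0, 2, n)])
    spoofed = np.column_stack([rng.normal(4, 1, n), rng.normal(3, 1, n), rng.normal(10, 3, n)])
    X = np.vstack([honest, spoofed])
    y = np.r_[np.zeros(n), np.ones(n)]

    clf = KNeighborsClassifier(n_neighbors=7).fit(X, y)
    print("fraction of spoofed samples flagged (sketch):", clf.predict(spoofed).mean())
    print("RSSI check on one BSM:", rssi_plausible((100.0, 0.0), (0.0, 0.0), measured_rssi=-75.0))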