2,576 research outputs found

    Alibi framework for identifying reactive jamming nodes in wireless LAN

    Reactive jamming nodes are nodes of the network that have been compromised and become the source of jamming attacks. They are assumed to know all shared secrets and protocols used in the network; thus they can jam very effectively and are very stealthy. We propose a novel approach to identifying reactive jamming nodes in wireless LAN (WLAN). We rely on the half-duplex nature of nodes: they cannot transmit and receive at the same time. Thus, if a compromised node jams a packet, it cannot know the content of the jammed packet. More importantly, if an honest node receives a jammed packet, it can prove that it was not the one jamming the packet by showing the content of that packet. Such proofs of jammed packets are called "alibis" - the key concept of our approach. In this paper, we present an alibi framework to deal with reactive jamming nodes in WLAN. We propose the concept of alibi-safe topologies, on which our proposed identification algorithms are proved to correctly identify the attackers. We further propose a realistic protocol to implement the identification algorithm. The protocol includes a BBC-based timing channel for information exchange under jamming and a similarity-hashing technique to reduce the storage and network overhead. The framework is evaluated in a realistic TOSSIM simulation whose characteristics and parameters are based on real traces from our small-scale MICAz test-bed. The results show that in reasonably dense networks, the alibi framework can accurately identify both non-colluding and colluding reactive jamming nodes. Therefore, the alibi approach is a very promising way to deal with reactive jamming nodes.
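    Editor's note: a minimal sketch of the alibi idea described above, under made-up assumptions. Because radios are half-duplex, a node that was jamming while a packet was on the air cannot also know that packet's content, so presenting the content acts as an alibi. The class names, the use of SHA-256 in place of the paper's similarity hashing, and the suspect-set logic below are all illustrative, not the paper's protocol.

    ```python
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class JammedPacketReport:
        packet_id: int
        content_digest: str   # hash of the content the node claims to have received

    def digest(content: bytes) -> str:
        """Plain SHA-256 stands in for the paper's similarity hashing."""
        return hashlib.sha256(content).hexdigest()

    def verify_alibi(report: JammedPacketReport, sent_packets: dict) -> bool:
        """The verifier (e.g. the sender) knows what was actually transmitted.

        If the reported digest matches the transmitted content, the reporting
        node must have been receiving, not jamming, during that transmission.
        """
        sent = sent_packets.get(report.packet_id)
        return sent is not None and digest(sent) == report.content_digest

    def identify_suspects(nodes, alibis, sent_packets):
        """Nodes that cannot produce a valid alibi for a jammed packet stay suspect."""
        suspects = set(nodes)
        for node, report in alibis.items():
            if verify_alibi(report, sent_packets):
                suspects.discard(node)
        return suspects

    if __name__ == "__main__":
        sent = {1: b"hello world"}
        alibis = {"A": JammedPacketReport(1, digest(b"hello world")),  # A can show the content
                  "B": JammedPacketReport(1, digest(b"???"))}          # B cannot
        print(identify_suspects({"A", "B", "C"}, alibis, sent))        # {'B', 'C'} remain suspect
    ```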

    Dual Rate Control for Security in Cyber-physical Systems

    We consider malicious attacks on actuators and sensors of a feedback system, which can be modeled as additive, possibly unbounded, disturbances at the digital (cyber) part of the feedback loop. We precisely characterize the role of the unstable poles and zeros of the system in the ability to detect stealthy attacks in the context of a sampled-data implementation of the controller in feedback with the continuous (physical) plant. We show that, if there is a single sensor that is guaranteed to be secure and the plant is observable from that sensor, then there exists a class of multirate sampled-data controllers that ensures all attacks remain detectable. These dual-rate controllers sample the output faster than the zero-order-hold rate that operates on the control input; as such, they can even provide better nominal performance than single-rate controllers, at the price of faster sampling of the continuous output.
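    Editor's note: a minimal structural sketch of a dual-rate loop under made-up assumptions: a toy discrete plant, a control input updated at the slow rate and held by a zero-order hold, the secure output sampled N times faster, and an observer residual checked at every fast sample. This only illustrates the dual-rate loop and a residual test; it is not the paper's controller construction or its stealthiness analysis, and all matrices, gains, and thresholds are invented.

    ```python
    import numpy as np

    A = np.array([[1.05, 0.1],
                  [0.0,  0.9]])        # fast-rate plant model (mildly unstable)
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])         # the single secure sensor

    K = np.array([[4.0, 2.0]])         # state feedback, applied once per slow step
    L = np.array([[0.5], [0.3]])       # observer gain, applied every fast step
    N = 4                              # fast output samples per control update
    THRESHOLD = 1e-3                   # no noise in this toy, so any residual is suspicious

    def run(slow_steps=20, attack=lambda t: 0.0):
        x = np.zeros((2, 1))           # true plant state
        xhat = np.zeros((2, 1))        # observer state
        alarms = 0
        for k in range(slow_steps):
            u = -(K @ xhat).item()     # slow rate: compute and hold the control input
            for i in range(N):
                ua = u + attack(k * N + i)                  # additive actuator attack
                x = A @ x + B * ua
                y = (C @ x).item()                          # fast rate: secure output sample
                yhat = (C @ (A @ xhat + B * u)).item()      # prediction uses the commanded input
                residual = y - yhat
                xhat = A @ xhat + B * u + L * residual
                if abs(residual) > THRESHOLD:
                    alarms += 1
        return alarms

    print("alarms without attack:", run())                           # 0 in this noise-free toy
    print("alarms under actuator attack:", run(attack=lambda t: 0.2))
    ```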

    Modeling and Detecting False Data Injection Attacks against Railway Traction Power Systems

    Modern urban railways extensively use computerized sensing and control technologies to achieve safe, reliable, and well-timed operations. However, the use of these technologies may provide convenient leverage to cyber-attackers who have bypassed the air gaps and aim at causing safety incidents and service disruptions. In this paper, we study false data injection (FDI) attacks against railways' traction power systems (TPSes). Specifically, we analyze two types of FDI attacks on the train-borne voltage, current, and position sensor measurements - which we call the efficiency attack and the safety attack - that (i) maximize the system's total power consumption and (ii) mislead trains' local voltages to exceed given safety-critical thresholds, respectively. To counteract them, we develop a global attack detection (GAD) system that serializes a bad data detector and a novel secondary attack detector designed based on unique TPS characteristics. With intact position data of trains, our detection system can effectively detect the FDI attacks on trains' voltage and current measurements even if the attacker has full and accurate knowledge of the TPS, attack detection, and real-time system state. In particular, the GAD system features an adaptive mechanism that ensures low false positive and false negative rates in detecting the attacks under noisy system measurements. Extensive simulations driven by realistic train running profiles verify that a TPS setup is vulnerable to the FDI attacks, but these attacks can be detected effectively by the proposed GAD while ensuring a low false positive rate. (Comment: IEEE/IFIP DSN-2016 and ACM Trans. on Cyber-Physical Systems)
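    Editor's note: a toy, single-train illustration of the serialized detection idea: a bad data detector (here just a plausibility range check) followed by a secondary check that exploits trusted position data through a position-derived line model. The one-substation resistive circuit, constants, and thresholds are illustrative assumptions, not the paper's TPS model or its adaptive GAD mechanism.

    ```python
    V_SUB = 750.0              # substation no-load voltage [V]
    R_PER_KM = 0.03            # feeder/rail resistance [ohm/km]
    V_RANGE = (500.0, 900.0)   # plausible operating voltage [V]
    I_RANGE = (0.0, 4000.0)    # plausible operating current [A]
    SECONDARY_THRESHOLD = 5.0  # [V] tolerance of the model-based cross-check

    def expected_voltage(position_km: float, current_a: float) -> float:
        """Ohm's-law prediction of the train-borne voltage from trusted position data."""
        return V_SUB - current_a * R_PER_KM * position_km

    def bad_data_detector(v_meas: float, i_meas: float) -> bool:
        """Stage 1 stand-in: flag measurements outside plausible operating ranges."""
        return not (V_RANGE[0] <= v_meas <= V_RANGE[1]) or not (I_RANGE[0] <= i_meas <= I_RANGE[1])

    def secondary_detector(v_meas: float, i_meas: float, position_km: float) -> bool:
        """Stage 2: cross-check voltage and current against the position-derived model."""
        return abs(v_meas - expected_voltage(position_km, i_meas)) > SECONDARY_THRESHOLD

    def detect(v_meas: float, i_meas: float, position_km: float) -> bool:
        return bad_data_detector(v_meas, i_meas) or secondary_detector(v_meas, i_meas, position_km)

    if __name__ == "__main__":
        pos_km, i_true = 2.0, 1000.0
        v_true = expected_voltage(pos_km, i_true)      # ~690 V

        print(detect(v_true, i_true, pos_km))          # False: honest measurements pass
        print(detect(v_true + 60.0, i_true, pos_km))   # True: in-range falsified voltage is
                                                       # caught by the position-based cross-check
    ```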

    Stuck in Traffic (SiT) Attacks: A Framework for Identifying Stealthy Attacks that Cause Traffic Congestion

    Recent advances in wireless technologies have enabled many new applications in Intelligent Transportation Systems (ITS) such as collision avoidance, cooperative driving, congestion avoidance, and traffic optimization. Due to the vulnerability of wireless communication to interference and intentional jamming, ITS face new challenges in ensuring the reliability and safety of the overall system. In this paper, we expose a class of stealthy attacks - Stuck in Traffic (SiT) attacks - that aim to cause congestion by exploiting how drivers make decisions based on smart traffic signs. An attacker mounting a SiT attack solves a Markov Decision Process (MDP) to find optimal or suboptimal attack policies in which they interfere with a well-chosen subset of signals based on the state of the system. We apply Approximate Policy Iteration (API) algorithms to derive potent attack policies. We evaluate their performance on a number of systems and compare them to other attack policies, including random, myopic, and DoS policies. The generated policies, albeit suboptimal, significantly outperform the other attack policies, as they maximize the expected cumulative reward from the standpoint of the attacker.
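    Editor's note: a tiny tabular policy-iteration sketch of the attack-search idea described above (the paper uses approximate policy iteration on much larger state spaces). The state space, actions, transition probabilities, and reward below are invented: the state is a coarse congestion level of one road segment, the action is which smart sign (if any) to interfere with, and the attacker is rewarded for sustained congestion.

    ```python
    import numpy as np

    N_STATES = 5                     # congestion level 0 (free flow) .. 4 (gridlock)
    ACTIONS = ["no_jam", "jam_sign_A", "jam_sign_B"]
    GAMMA = 0.9

    def transition(s, a):
        """Return (probability, next_state) pairs; purely illustrative dynamics."""
        up = {"no_jam": 0.1, "jam_sign_A": 0.5, "jam_sign_B": 0.35}[a]  # congestion rises
        down = 0.4 if a == "no_jam" else 0.15                           # congestion clears
        stay = 1.0 - up - down
        return [(up, min(s + 1, N_STATES - 1)), (stay, s), (down, max(s - 1, 0))]

    def reward(s, a):
        return float(s)              # attacker is rewarded for the congestion level

    def policy_iteration():
        policy = ["no_jam"] * N_STATES
        V = np.zeros(N_STATES)
        while True:
            # policy evaluation (iterative sweeps)
            for _ in range(200):
                V = np.array([sum(p * (reward(s, policy[s]) + GAMMA * V[s2])
                                  for p, s2 in transition(s, policy[s]))
                              for s in range(N_STATES)])
            # greedy policy improvement
            new_policy = []
            for s in range(N_STATES):
                q = {a: sum(p * (reward(s, a) + GAMMA * V[s2]) for p, s2 in transition(s, a))
                     for a in ACTIONS}
                new_policy.append(max(q, key=q.get))
            if new_policy == policy:
                return policy, V
            policy = new_policy

    if __name__ == "__main__":
        policy, V = policy_iteration()
        for s, a in enumerate(policy):
            print(f"congestion level {s}: {a}  (value {V[s]:.2f})")
    ```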

    Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

    In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of training data (they know neither the decision boundaries nor the user's template). Based on their own templates, adversaries craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. Our method copes with these weak assumptions on the attacker, and we evaluate its effectiveness on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases with fewer than ten injection attempts. Even in black-box scenarios, we find that exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate this countermeasure against a set of intra-user variability factors that may present the same directionality characteristics, obtaining detection equal error rates between 7% and 14%, with over 99% of attacks detected after only two sample injections.
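    Editor's note: an illustrative sketch of a directionality check in the spirit of the detection idea above: legitimate template updates drift in varying directions in feature space, while a poisoning attacker must push the template consistently toward their own embedding. The thresholds, window size, embedding dimension, and synthetic data are assumptions, not the paper's detector or evaluation.

    ```python
    import numpy as np

    ALIGNMENT_THRESHOLD = 0.9   # cosine similarity considered "suspiciously aligned"
    WINDOW = 2                  # consecutive aligned updates needed to raise an alarm

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def is_poisoning(template_history):
        """template_history: list of template vectors after each accepted update."""
        directions = [b - a for a, b in zip(template_history, template_history[1:])]
        aligned = 0
        for d_prev, d_next in zip(directions, directions[1:]):
            if cosine(d_prev, d_next) > ALIGNMENT_THRESHOLD:
                aligned += 1
                if aligned >= WINDOW:
                    return True
            else:
                aligned = 0
        return False

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        user = rng.normal(size=64)
        attacker = user + 3.0 * rng.normal(size=64)

        # legitimate drift: small steps in varying directions around the user
        legit = [user + 0.1 * rng.normal(size=64) for _ in range(6)]

        # poisoning: intermediate samples that walk the template toward the attacker
        poisoned = [user + t * (attacker - user) for t in np.linspace(0.0, 0.5, 6)]

        print("legitimate updates flagged:", is_poisoning(legit))     # expected: False
        print("poisoning updates flagged: ", is_poisoning(poisoned))  # expected: True
    ```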