Modeling and performance evaluation of stealthy false data injection attacks on smart grid in the presence of corrupted measurements
False data injection (FDI) attacks cannot be detected by the traditional
anomaly detection techniques used in energy system state estimators. In
this paper, we demonstrate how FDI attacks can be constructed blindly, i.e.,
without system knowledge, including topological connectivity and line reactance
information. Our analysis reveals that existing FDI attacks become detectable
(and consequently unsuccessful) by the state estimator if the data contains
grossly corrupted measurements caused, for example, by device malfunctions or
communication errors. The proposed sparse-optimization-based stealthy attack
construction strategy overcomes this limitation by separating the gross errors
from the measurement matrix. Extensive theoretical modeling and experimental
evaluation show that the proposed technique is more stealthy (lower relative
error) and more efficient (fast enough to meet timing requirements) than other
methods on IEEE benchmark test systems.
Keywords: Smart grid, False data injection, Blind attack, Principal component
analysis (PCA). Journal of Computer and System Sciences, Elsevier, 201
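The blind construction described above can be illustrated with a small numerical sketch (all dimensions, noise levels, and variable names are illustrative assumptions, not the paper's setup): PCA on a batch of observed measurements recovers an approximation of the measurement matrix's column space, and an attack vector built inside that subspace leaves the bad-data residual essentially unchanged.

```python
import numpy as np

# Hypothetical sketch of a blind FDI attack: the attacker estimates the
# measurement subspace from observed data via PCA (no knowledge of the
# topology matrix H) and injects an attack vector lying in that subspace,
# so a residual-based bad-data detector is barely perturbed.
rng = np.random.default_rng(0)

n_states, n_meas, n_samples = 3, 8, 500
H = rng.standard_normal((n_meas, n_states))        # true (unknown) topology
X = rng.standard_normal((n_states, n_samples))     # state trajectories
Z = H @ X + 0.01 * rng.standard_normal((n_meas, n_samples))  # noisy measurements

# PCA: the top principal directions of Z approximately span col(H)
U, s, _ = np.linalg.svd(Z, full_matrices=False)
basis = U[:, :n_states]                            # estimated column space

# Attack vector constructed inside the estimated subspace
a = basis @ rng.standard_normal(n_states)

# Leakage test: project the attack onto the orthogonal complement of col(H);
# a perfectly stealthy attack would leave zero energy there.
P = H @ np.linalg.pinv(H)                          # projector onto col(H)
residual = np.linalg.norm((np.eye(n_meas) - P) @ a)
print(f"residual leakage of blind attack: {residual:.4f}")  # near zero
```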
Distributed watermarking for secure control of microgrids under replay attacks
The problem of replay attacks in the communication network between
Distributed Generation Units (DGUs) of a DC microgrid is examined. The DGUs are
regulated through a hierarchical control architecture, and are networked to
achieve secondary control objectives. Following analysis of the detectability
of replay attacks by a distributed monitoring scheme previously proposed, the
need for a watermarking signal is identified. Hence, conditions are given on
the watermark in order to guarantee detection of replay attacks, and such a
signal is designed. Simulations are then presented to demonstrate the
effectiveness of the technique.
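As a toy illustration of why a watermark enables replay detection (the signal model, amplitudes, and threshold below are assumptions for the sketch, not the paper's design): the plant's response carries the current secret watermark, while replayed measurements, recorded earlier, do not, so a simple correlation test separates the two.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch: a secret watermark is added to each control input; a
# monitor correlates received measurements with the watermark. Replayed
# (pre-recorded) data does not carry the current watermark, so the
# normalized correlation collapses.
n = 400
watermark = rng.choice([-1.0, 1.0], size=n) * 0.05   # secret excitation
noise = 0.02 * rng.standard_normal(n)

honest = watermark + noise                 # response reflects the watermark
replayed = 0.02 * rng.standard_normal(n)   # old data, recorded w/o watermark

def detect_replay(y, wm, threshold=0.5):
    """Flag replay when the normalized correlation with the watermark is low."""
    corr = np.dot(y, wm) / (np.linalg.norm(y) * np.linalg.norm(wm) + 1e-12)
    return corr < threshold

print(detect_replay(honest, watermark))    # False: watermark present
print(detect_replay(replayed, watermark))  # True: watermark missing
```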
On Ladder Logic Bombs in Industrial Control Systems
In industrial control systems, devices such as Programmable Logic Controllers
(PLCs) are commonly used to directly interact with sensors and actuators, and
perform local automatic control. PLCs run software on two different layers: a)
firmware (i.e. the OS) and b) control logic (processing sensor readings to
determine control actions). In this work, we discuss ladder logic bombs, i.e.
malware written in ladder logic (or one of the other IEC 61131-3-compatible
languages). Such malware would be inserted by an attacker into existing control
logic on a PLC, and either persistently change the behavior, or wait for
specific trigger signals to activate malicious behaviour. For example, the LLB
could replace legitimate sensor readings with manipulated values. We see the
concept of LLBs as a generalization of attacks such as the Stuxnet attack. We
introduce LLBs on an abstract level, and then demonstrate several designs based
on real PLC devices in our lab. In particular, we also focus on stealthy LLBs,
i.e. LLBs that are hard to detect by human operators manually validating the
program running in PLCs. In addition to introducing vulnerabilities on the
logic layer, we also discuss countermeasures and we propose two detection
techniques.
Comment: 11 pages, 14 figures, 2 tables, 1 algorithm
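A minimal Python model of the trigger-then-payload structure described above (the trigger value, spoofed reading, and function names are hypothetical; a real LLB would be written in ladder logic or another IEC 61131-3 language):

```python
# Illustrative model of a trigger-based ladder logic bomb: the malicious
# rung stays dormant until a trigger condition appears in the sensor
# readings, then overwrites the value passed to the legitimate control
# logic with a manipulated one.
TRIGGER_VALUE = 1234          # attacker-chosen trigger signal (hypothetical)
SPOOFED_READING = 42.0        # value reported instead of the real one

def llb_scan_cycle(sensor_reading, armed):
    """One PLC scan cycle with an embedded logic bomb.

    Returns (reading passed to the control logic, new armed state).
    """
    if sensor_reading == TRIGGER_VALUE:   # trigger rung: arm the bomb
        armed = True
    if armed:                             # payload rung: spoof the sensor
        return SPOOFED_READING, armed
    return sensor_reading, armed          # benign behaviour before trigger

armed = False
for raw in [10.0, 11.0, TRIGGER_VALUE, 12.0]:
    reading, armed = llb_scan_cycle(raw, armed)
    print(raw, "->", reading)
```

Before the trigger, the bomb is indistinguishable from benign logic, which is exactly what makes manual validation by operators difficult.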
Cyber-physical security of a smart grid infrastructure
© IEEE; reprinted with permission.
Real-time Adaptive Sensor Attack Detection and Recovery in Autonomous Cyber-physical Systems
Cyber-Physical Systems (CPS) tightly couple information technology with physical processes, which gives rise to new vulnerabilities, such as physical attacks, that are beyond conventional cyber attacks. Attackers may non-invasively compromise sensors and spoof the controller into performing unsafe actions. This issue is exacerbated by the increasing autonomy in CPS. While this fact has motivated many defense mechanisms against sensor attacks, a clear vision of the timing and usability (i.e., the false alarm rate) of attack detection remains elusive. Existing works tend to pursue the unachievable goal of minimizing the detection delay and the false alarm rate at the same time, even though there is a clear trade-off between the two metrics. Instead, this dissertation argues that attack detection should weight the two metrics differently depending on the system's state. For example, if the system is close to unsafe states, reducing the detection delay is preferable to lowering the false alarm rate, and vice versa. This dissertation proposes two real-time adaptive sensor attack detection frameworks. The frameworks dynamically adapt the detection delay and false alarm rate so as to meet a detection deadline and improve usability according to the system's status. We design and implement the proposed frameworks and validate them using realistic sensor data from automotive CPS to demonstrate their efficiency and efficacy.
Further, this dissertation proposes Recovery-by-Learning, a data-driven attack recovery framework that restores CPS after sensor attacks. The importance of attack recovery is underscored by the need to mitigate an attack's impact on a system and restore it so it can continue functioning. We propose a double sliding-window-based checkpointing protocol to remove compromised data and keep trustworthy data for state estimation.
Together, these components form a holistic attack-resilient solution for automotive cyber-physical systems.
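The state-aware trade-off argued for above can be sketched as follows (thresholds, drift, and the distance-to-unsafe metric are illustrative assumptions, not the dissertation's actual design): the alarm threshold of a CUSUM-style detector shrinks as the system approaches an unsafe state, so the same accumulated evidence triggers an alarm sooner exactly when detection delay matters most.

```python
def adaptive_threshold(distance_to_unsafe, t_min=1.0, t_max=8.0, scale=5.0):
    """Shrink the alarm threshold as the system nears the unsafe set."""
    frac = min(distance_to_unsafe / scale, 1.0)
    return t_min + frac * (t_max - t_min)

def cusum_step(stat, residual, drift=0.5):
    """One CUSUM update: accumulate residual magnitude above a drift term."""
    return max(0.0, stat + abs(residual) - drift)

stat, alarms = 0.0, []
# (distance to unsafe state, detector residual) at three instants
for dist, res in [(4.0, 0.4), (2.0, 2.0), (0.5, 2.0)]:
    stat = cusum_step(stat, res)
    thr = adaptive_threshold(dist)
    alarms.append(stat > thr)
    print(f"dist={dist}: stat={stat:.1f} threshold={thr:.1f} alarm={stat > thr}")
```

Note how the same statistic that stays below the relaxed threshold far from danger crosses the tightened threshold near the unsafe state.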
Replay-based Recovery for Autonomous Robotic Vehicles from Sensor Deception Attacks
Sensors are crucial for the autonomous operation of robotic vehicles (RVs).
Physical attacks on sensors such as sensor tampering or spoofing can feed
erroneous values to RVs through physical channels, which results in mission
failures. In this paper, we present DeLorean, a comprehensive diagnosis and
recovery framework for securing autonomous RVs from physical attacks. We
consider a strong form of physical attack called sensor deception attacks
(SDAs), in which the adversary targets multiple sensors of different types
simultaneously (even including all sensors). Under SDAs, DeLorean inspects the
attack induced errors, identifies the targeted sensors, and prevents the
erroneous sensor inputs from being used in the RV's feedback control loop. DeLorean
replays historic state information in the feedback control loop and recovers
the RV from attacks. Our evaluation on four real and two simulated RVs shows
that DeLorean can recover RVs from different attacks, and ensure mission
success in 94% of the cases (on average), without any crashes. DeLorean incurs
low performance, memory, and battery overheads.
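A minimal sketch of the replay idea (class and variable names are hypothetical, and real RV recovery would involve state estimation rather than a one-element lookback): trusted states are checkpointed while sensors are healthy, and once an attack is flagged, the last trusted state is replayed into the feedback loop in place of the spoofed readings.

```python
from collections import deque

# Hypothetical sketch of replay-based recovery: keep a short history of
# trusted states; when sensors are flagged as attacked, feed historic state
# information into the control loop instead of the live (spoofed) values.
class ReplayRecovery:
    def __init__(self, horizon=3):
        self.history = deque(maxlen=horizon)  # trusted state checkpoints

    def step(self, sensor_state, under_attack):
        if under_attack and self.history:
            return self.history[-1]      # replay the last trusted state
        self.history.append(sensor_state)
        return sensor_state

rr = ReplayRecovery()
trace = [(1.0, False), (2.0, False), (99.0, True), (98.0, True)]
used = [rr.step(s, atk) for s, atk in trace]
print(used)  # prints [1.0, 2.0, 2.0, 2.0]: spoofed 99.0/98.0 are replaced
```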
Tuning Windowed Chi-Squared Detectors for Sensor Attacks
A model-based windowed chi-squared procedure is proposed for identifying
falsified sensor measurements. We employ the widely-used static chi-squared and
the dynamic cumulative sum (CUSUM) fault/attack detection procedures as
benchmarks to compare the performance of the windowed chi-squared detector. In
particular, we characterize the state degradation that a class of attacks can
induce to the system while enforcing that the detectors do not raise alarms
(zero-alarm attacks). We quantify the advantage of using dynamic detectors
(windowed chi-squared and CUSUM detectors), which leverage the history of the
state, over a static detector (chi-squared) which uses a single measurement at
a time. Simulations using a chemical reactor are presented to illustrate the
performance of our tools.
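A windowed chi-squared test of the kind benchmarked above can be sketched in a few lines (scalar residuals, unit variance, and an attack bias of 1.2 are illustrative assumptions): summing squared residuals over a sliding window lets a persistent small bias, which a single-measurement static test would mostly miss, accumulate past the threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

window, sigma = 10, 1.0
residuals = rng.normal(0.0, sigma, 200)
residuals[100:] += 1.2            # persistent attack-induced bias after t=100

def windowed_chi2(res, t, w=window):
    """Sum of squared normalized residuals over the last w samples."""
    seg = res[max(0, t - w + 1): t + 1]
    return float(np.sum((seg / sigma) ** 2))

threshold = 23.2                  # approx. 99th percentile of chi2, 10 dof
alarms = [t for t in range(len(residuals))
          if windowed_chi2(residuals, t) > threshold]
detected = any(t >= 100 for t in alarms)
print("attack detected after t=100:", detected)
```

The window length controls the trade-off discussed above: longer windows accumulate more evidence of a small bias but react more slowly, which is why the windowed detector sits between the static chi-squared and CUSUM benchmarks.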