Secure State Estimation: Optimal Guarantees against Sensor Attacks in the Presence of Noise
Motivated by the need to secure cyber-physical systems against attacks, we
consider the problem of estimating the state of a noisy linear dynamical system
when a subset of sensors is arbitrarily corrupted by an adversary. We propose a
secure state estimation algorithm and derive (optimal) bounds on the achievable
state estimation error. In addition, as a result of independent interest, we
give a coding theoretic interpretation for prior work on secure state
estimation against sensor attacks in a noiseless dynamical system.Comment: A shorter version of this work will appear in the proceedings of ISIT
201
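The coding-theoretic view in the noiseless setting can be sketched in a toy example (an illustration under simplifying assumptions, not the paper's algorithm; the scalar state and the function name are hypothetical): attacked sensors act like symbol errors, and the state can be "decoded" by finding a large subset of mutually consistent measurements.

```python
# Toy sketch: noiseless scalar state x, p sensors y_i = c_i * x, at most
# q of which an adversary may corrupt.  The state is recovered by
# searching for p - q sensors that all agree on the same x.
from itertools import combinations

def decode_state(c, y, q):
    """Return an x consistent with at least len(y) - q sensors, else None."""
    p = len(y)
    for subset in combinations(range(p), p - q):
        candidates = {y[i] / c[i] for i in subset}
        if len(candidates) == 1:      # every sensor in the subset agrees
            return candidates.pop()
    return None

c = [1.0, 2.0, -1.0, 0.5]             # sensor gains
x_true = 3.0
y = [ci * x_true for ci in c]
y[1] = 10.0                           # adversary corrupts sensor 1
print(decode_state(c, y, q=1))        # recovers 3.0
```

The combinatorial search over sensor subsets is exactly what makes the problem expensive, which motivates the faster search techniques discussed in the related entries below.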
Active Perception in Adversarial Scenarios using Maximum Entropy Deep Reinforcement Learning
We pose an active perception problem where an autonomous agent actively
interacts with a second agent with potentially adversarial behaviors. Given the
uncertainty in the intent of the other agent, the objective is to collect
further evidence to help discriminate potential threats. The main technical
challenges are the partial observability of the agent intent, the adversary
modeling, and the corresponding uncertainty modeling. Note that an adversary
agent may act to mislead the autonomous agent by using a deceptive strategy
that is learned from past experiences. We propose an approach that combines
belief space planning, generative adversary modeling, and maximum entropy
reinforcement learning to obtain a stochastic belief space policy. By
accounting for various adversarial behaviors in the simulation framework and
minimizing the predictability of the autonomous agent's action, the resulting
policy is more robust to unmodeled adversarial strategies. This improved
robustness is empirically shown against an adversary that adapts to and
exploits the autonomous agent's policy, compared with a standard robust
approach based on a chance-constrained Partially Observable Markov Decision
Process (POMDP).
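The role of the maximum-entropy objective in reducing predictability can be sketched as follows (a minimal illustration with assumed details, not the paper's implementation): for a discrete action set, adding an entropy bonus yields a Boltzmann (softmax) policy pi(a) proportional to exp(Q(a)/alpha), where a larger temperature alpha makes the agent more stochastic and hence harder for an adversary to exploit.

```python
# Minimal maximum-entropy policy sketch (hypothetical names/values):
# a softmax over action values with temperature alpha.  Higher alpha
# -> higher policy entropy -> a less predictable agent.
import math

def softmax_policy(q_values, alpha):
    """Boltzmann policy over action values with temperature alpha."""
    m = max(q / alpha for q in q_values)
    w = [math.exp(q / alpha - m) for q in q_values]  # numerically stable
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

q = [1.0, 0.5, 0.2]
low_t = softmax_policy(q, alpha=0.1)     # nearly deterministic
high_t = softmax_policy(q, alpha=10.0)   # close to uniform
print(entropy(low_t) < entropy(high_t))  # True
```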
Secure state estimation against sensor attacks in the presence of noise
We consider the problem of estimating the state of a noisy linear dynamical system when an unknown subset of sensors is arbitrarily corrupted by an adversary. We propose a secure state estimation algorithm, and derive (optimal) bounds on the achievable state estimation error given an upper bound on the number of attacked sensors. The proposed state estimator involves Kalman filters operating over subsets of sensors to search for a sensor subset that is reliable for state estimation. To further improve the subset search time, we propose Satisfiability Modulo Theory-based techniques to exploit the combinatorial nature of searching over sensor subsets. Finally, as a result of independent interest, we give a coding-theoretic view of attack detection and state estimation against sensor attacks in a noiseless dynamical system.
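The subset-search idea in the noisy setting can be sketched in a simplified form (a static, scalar-state toy, whereas the paper runs Kalman filters over subsets of a dynamical system; all names here are hypothetical): estimate the state from each subset of p - q sensors by least squares and keep the subset with the smallest residual, since an attacked sensor inflates the residual of any subset containing it.

```python
# Toy noisy version of the subset search: least-squares estimate per
# sensor subset, keep the subset with the smallest residual.
from itertools import combinations

def secure_estimate(c, y, q):
    """Estimate scalar x from p sensors y_i = c_i * x + noise,
    at most q of which are attacked."""
    p = len(y)
    best = (float("inf"), None)
    for subset in combinations(range(p), p - q):
        num = sum(c[i] * y[i] for i in subset)
        den = sum(c[i] ** 2 for i in subset)
        x_hat = num / den                              # scalar least squares
        resid = sum((y[i] - c[i] * x_hat) ** 2 for i in subset)
        best = min(best, (resid, x_hat))
    return best[1]

c = [1.0, 1.0, 1.0, 1.0]
x_true = 2.0
y = [x_true + n for n in (0.01, -0.02, 0.015, 0.0)]   # noisy readings
y[2] += 50.0                                          # attacked sensor
x_hat = secure_estimate(c, y, q=1)
print(abs(x_hat - x_true) < 0.1)                      # attack rejected: True
```

The exhaustive loop over subsets is what the SMT-based techniques in the abstract are designed to avoid, by pruning inconsistent subsets instead of enumerating all of them.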
A Satisfiability Modulo Theory Approach to Secure State Reconstruction in Differentially Flat Systems Under Sensor Attacks
We address the problem of estimating the state of a differentially flat
system from measurements that may be corrupted by an adversarial attack. In
cyber-physical systems, malicious attacks can directly compromise the system's
sensors or manipulate the communication between sensors and controllers. We
consider attacks that only corrupt a subset of sensor measurements. We show
that the possibility of reconstructing the state under such attacks is
characterized by a suitable generalization of the notion of s-sparse
observability, previously introduced by some of the authors in the linear case.
We also extend our previous work on the use of Satisfiability Modulo Theory
solvers to estimate the state under sensor attacks to the context of
differentially flat systems. The effectiveness of our approach is illustrated
on the problem of controlling a quadrotor under sensor attacks.

Comment: arXiv admin note: text overlap with arXiv:1412.432
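The linear-case notion of s-sparse observability that this work generalizes can be illustrated as follows (a sketch under simplifying assumptions; the matrices and function names are hypothetical): a linear system (A, C) with p sensors is s-sparse observable when it remains observable after removing any 2s sensor rows, which characterizes when the state can be reconstructed despite up to s attacked sensors.

```python
# Illustrative s-sparse observability check for a linear system (A, C):
# (A, C) must stay observable after deleting any 2s rows of C.
from itertools import combinations

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def rank(M, tol=1e-9):
    """Rank via Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][col]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > tol:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_s_sparse_observable(A, C, s):
    n, p = len(A), len(C)
    for removed in combinations(range(p), 2 * s):
        rows = [C[i] for i in range(p) if i not in removed]
        obs, block = [], rows
        for _ in range(n):                 # stack C, CA, ..., CA^(n-1)
            obs += block
            block = mat_mul(block, A)
        if rank(obs) < n:
            return False
    return True

A = [[1.0, 1.0], [0.0, 1.0]]               # discrete-time double integrator
C = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 1.0]]
print(is_s_sparse_observable(A, C, s=1))   # True: 5 sensors tolerate 1 attack
```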