An Unknown Input Multi-Observer Approach for Estimation and Control under Adversarial Attacks
We address the problem of state estimation, attack isolation, and control of
discrete-time linear time-invariant systems under (potentially unbounded)
actuator and sensor false data injection attacks. Using a bank of unknown input
observers, each observer leading to an exponentially stable estimation error
(in the attack-free case), we propose an observer-based estimator that provides
exponential estimates of the system state in spite of actuator and sensor
attacks. Exploiting sensor and actuator redundancy, the estimation scheme is
guaranteed to work if a sufficiently small subset of sensors and actuators are
under attack. Using the proposed estimator, we provide tools for reconstructing
and isolating actuator and sensor attacks; and a control scheme capable of
stabilizing the closed-loop dynamics by switching off isolated actuators.
Simulation results are presented to illustrate the performance of our tools.
Comment: arXiv admin note: substantial text overlap with arXiv:1811.1015
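The redundancy-based selection idea can be sketched in a few lines. The snippet below is a simplified illustration, not the paper's unknown-input-observer construction: it runs one Luenberger observer per redundant position sensor on an invented double-integrator plant, injects an unbounded ramp attack on one sensor, and takes a component-wise median across the estimates, which rejects the single corrupted observer as long as a majority of sensors are attack-free.

```python
import numpy as np

# Double-integrator plant sampled at dt = 0.1 (illustrative model, not the
# paper's): state x = [position, velocity].
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
# Three redundant position sensors; each observer uses one of them.
C = np.array([1.0, 0.0])
L = np.array([0.5, 1.0])       # observer gain: A - L C is Schur stable

x = np.array([1.0, 0.5])       # true state
xhat = np.zeros((3, 2))        # one observer per sensor

for k in range(150):
    # Sensor 2 is under a growing false data injection attack.
    y = np.array([C @ x, C @ x, C @ x + 5.0 + 0.1 * k])
    for i in range(3):
        xhat[i] = A @ xhat[i] + L * (y[i] - C @ xhat[i])
    x = A @ x

# Component-wise median across observers rejects the single biased estimate.
x_med = np.median(xhat, axis=0)
print(np.linalg.norm(x_med - x))   # small despite the unbounded attack
```

The attacked observer converges to a heavily biased estimate, but since two of the three observers agree with the true state, the median recovers it; this mirrors (in toy form) the sufficiently-small-attacked-subset condition in the abstract.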
Distributed Fault Detection in Formation of Multi-Agent Systems with Attack Impact Analysis
Autonomous Underwater Vehicles (AUVs) are capable of performing a variety of deepwater marine applications as in multiple mobile robots and cooperative robot reconnaissance. Due to the environment that AUVs operate in, fault detection and isolation as well as the formation control of AUVs are more challenging than other Multi-Agent Systems (MASs). In this thesis, two main challenges are tackled.
We first investigate formation control and fault accommodation algorithms for AUVs in the presence of abnormal events such as faults and communication attacks in any of the team members. These undesirable events can prevent the entire team from achieving safe, reliable, and efficient performance while executing underwater mission tasks. For instance, AUVs may face unexpected actuator/sensor faults, and the communication between AUVs can be compromised, leaving the entire multi-agent system vulnerable to cyber-attacks. Moreover, a possible deception attack on the network system may have a negative impact on the environment and, more importantly, on national security. Furthermore, there are certain requirements for the speed, position, or depth of the AUV team. For these reasons, we propose a distributed fault detection scheme that is able to detect and isolate faults in AUVs while maintaining their formation under security constraints. The effects of faults and communication attacks are studied from a control-theoretic perspective.
Another contribution of this thesis is to study a state estimation problem for a linear dynamical system in the presence of a Bias Injection Attack (BIA). For this purpose, a Kalman Filter (KF) is used, where we show that the impact of an attack can be analyzed as the solution of a quadratically constrained problem for which the exact solution can be found efficiently. We also introduce a lower bound for the attack impact in terms of the number of compromised actuators and a combination of compromised sensors and actuators. The theoretical findings are accompanied by simulation results and numerical case studies.
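The bias-injection impact analysis can be illustrated numerically. Everything below is an invented example, not the thesis's setup: a 2-state, 2-sensor system with a steady-state Kalman filter in predictor form. A constant sensor bias a drives the estimation error to a steady value, and maximizing its energy under a stealth budget on the residual is a quadratically constrained problem solved exactly as a generalized eigenvalue problem.

```python
import numpy as np

# Illustrative 2-state, 2-sensor system (stand-in for the thesis's model).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.eye(2)
Q = 0.01 * np.eye(2)   # process noise covariance
R = 0.05 * np.eye(2)   # measurement noise covariance

# Steady-state Kalman gain via Riccati iteration; predictor gain L = A K.
P = np.eye(2)
for _ in range(500):
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    P = A @ (P - K @ C @ P) @ A.T + Q
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
L = A @ K

# A constant sensor bias a gives steady-state estimation error
#   e_inf = -G L a,  with  G = (I - (A - L C))^{-1},
# and steady-state residual r_inf = (I - C G L) a.
# Maximizing ||e_inf||^2 subject to ||r_inf||^2 <= 1 is a quadratically
# constrained problem; its exact optimum is the largest generalized
# eigenvalue of the pair (M, N) below.
G = np.linalg.inv(np.eye(2) - (A - L @ C))
M = (G @ L).T @ (G @ L)
N = (np.eye(2) - C @ G @ L).T @ (np.eye(2) - C @ G @ L)
impact = np.max(np.real(np.linalg.eigvals(np.linalg.solve(N, M))))
print(impact)  # worst-case squared estimation bias per unit residual energy
```

No feasible bias can exceed this value, so it bounds the damage a stealthy constant-bias attacker can inflict on the estimate under this (assumed) residual-energy detector.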
Securing Real-Time Internet-of-Things
Modern embedded and cyber-physical systems are ubiquitous. A large number of
critical cyber-physical systems have real-time requirements (e.g., avionics,
automobiles, power grids, manufacturing systems, industrial control systems,
etc.). Recent developments and new functionality require real-time embedded
devices to be connected to the Internet. This gives rise to the real-time
Internet-of-things (RT-IoT) that promises a better user experience through
stronger connectivity and efficient use of next-generation embedded devices.
However, RT-IoT systems are also increasingly becoming targets for
cyber-attacks, a threat exacerbated by this increased connectivity. This paper
gives an introduction to RT-IoT systems, an overview of current approaches,
and possible research challenges towards secure RT-IoT frameworks.
Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities
Robotics and Artificial Intelligence (AI) have been inextricably intertwined
since their inception. Today, AI-Robotics systems have become an integral part
of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These
systems are built upon three fundamental architectural elements: perception,
navigation and planning, and control. However, while the integration of
AI-Robotics systems has enhanced the quality of our lives, it has also presented a
serious problem - these systems are vulnerable to security attacks. The
physical components, algorithms, and data that make up AI-Robotics systems can
be exploited by malicious actors, potentially leading to dire consequences.
Motivated by the need to address the security concerns in AI-Robotics systems,
this paper presents a comprehensive survey and taxonomy across three
dimensions: attack surfaces, ethical and legal concerns, and Human-Robot
Interaction (HRI) security. Our goal is to provide users, developers and other
stakeholders with a holistic understanding of these areas to enhance the
overall AI-Robotics system security. We begin by surveying potential attack
surfaces and provide mitigating defensive strategies. We then delve into
ethical issues, such as dependency and psychological impact, as well as the
legal concerns regarding accountability for these systems. In addition, emerging
trends such as HRI are discussed, considering privacy, integrity, safety,
trustworthiness, and explainability concerns. Finally, we present our vision
for future research directions in this dynamic and promising field.