A Classification-based Approach for Approximate Reachability
Hamilton-Jacobi (HJ) reachability analysis has been developed over the past
decades into a widely applicable tool for verifying goal satisfaction and
safety in nonlinear systems. While HJ reachability can be
formulated very generally, computational complexity can be a serious impediment
for many systems of practical interest. Much prior work has been devoted to
computing approximate solutions to large reachability problems, yet many of
these methods may only apply to very restrictive problem classes, do not
generate controllers, and/or can be extremely conservative. In this paper, we
present a new method for approximating the optimal controller of the HJ
reachability problem for control-affine systems. While this is also a
restricted problem class, many dynamical systems of interest are, or can be
well approximated by, control-affine models. We explicitly avoid storing a
representation of the
reachability value function, and instead learn a controller as a sequence of
simple binary classifiers. We compare our approach to existing grid-based
methodologies in HJ reachability and demonstrate its utility on several
examples, including a physical quadrotor navigation task.
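The key structural fact the abstract relies on is that, for control-affine dynamics, the HJ-optimal input is bang-bang: its sign is determined by the sign of the value-function gradient projected onto the control directions, so each decision reduces to a binary classification of the state. The sketch below illustrates this idea on a double-integrator surrogate; the switching-surface formula used to generate labels and the logistic-regression classifier are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 1-D control-affine system: x_dot = v, v_dot = u, |u| <= 1.
# For such systems the HJ-optimal control is bang-bang, so its sign can be
# learned as a binary label per state. The label function below is a
# hand-picked surrogate (the minimum-time switching surface of a double
# integrator), used purely to generate training data for this sketch.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(1000, 2))  # samples of (x, v)

def optimal_sign(s):
    # Assumed switching rule: u = +1 below the surface x + v|v|/2 = 0,
    # u = -1 above it. This stands in for sign(grad V . g(x)).
    x, v = s
    return 1.0 if x + 0.5 * v * abs(v) < 0 else -1.0

labels = np.array([optimal_sign(s) for s in states])

# One simple binary classifier stands in for one element of the sequence
# of classifiers the paper learns over the time horizon.
clf = LogisticRegression().fit(states, labels)

def controller(s):
    # Apply the extreme input selected by the classifier.
    return float(clf.predict(np.asarray(s).reshape(1, -1))[0])

print(controller([0.5, 0.0]))  # far right of the origin: brake with u = -1
```

Because no value function is stored, the memory cost scales with the classifier size rather than with a state-space grid, which is the scalability advantage the abstract claims.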
FaSTrack: a Modular Framework for Real-Time Motion Planning and Guaranteed Safe Tracking
Real-time, guaranteed safe trajectory planning is vital for navigation in
unknown environments. However, real-time navigation algorithms typically
sacrifice robustness for computation speed. Conversely, provably safe
trajectory planning tends to be too computationally intensive for real-time
replanning. We propose FaSTrack, Fast and Safe Tracking, a framework that
achieves both real-time replanning and guaranteed safety. In this framework,
real-time computation is achieved by allowing any trajectory planner to use a
simplified \textit{planning model} of the system. The plan is tracked by the
system, represented by a more realistic, higher-dimensional \textit{tracking
model}. We precompute the tracking error bound (TEB) due to mismatch between
the two models and due to external disturbances. We also obtain the
corresponding tracking controller used to stay within the TEB. The
precomputation does not require prior knowledge of the environment. We
demonstrate FaSTrack using Hamilton-Jacobi reachability for precomputation and
three different real-time trajectory planners with three different
tracking-planning model pairs. Comment: Published in the IEEE Transactions on Automatic Control.
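The online use of the precomputed quantities can be summarized in two steps: the planner treats obstacles as inflated by the tracking error bound (TEB), and the tracker applies the precomputed controller to stay within that bound. The sketch below shows this loop with made-up numbers; the TEB radius and the proportional tracking law are placeholders for the HJ reachability precomputation described in the abstract.

```python
import numpy as np

# Hypothetical TEB radius (meters); in FaSTrack this comes from an offline
# HJ reachability computation over the planner-tracker relative dynamics.
TEB_RADIUS = 0.3

def inflate_obstacles(obstacles, radius=TEB_RADIUS):
    """Grow each circular obstacle (cx, cy, r) by the TEB, so any plan that
    is collision-free for the simple planning model remains safe for the
    higher-dimensional tracking model."""
    return [(cx, cy, r + radius) for (cx, cy, r) in obstacles]

def tracking_control(tracker_state, plan_point, gain=2.0):
    # Placeholder proportional law standing in for the precomputed optimal
    # tracking controller that keeps the error inside the TEB.
    return gain * (np.asarray(plan_point) - np.asarray(tracker_state[:2]))

obstacles = [(1.0, 1.0, 0.2)]
print(inflate_obstacles(obstacles))  # [(1.0, 1.0, 0.5)]
```

The point of the modularity claim is that `inflate_obstacles` is the only place the planner needs to know about the tracker: any real-time planner can be dropped in unchanged once obstacles are augmented by the TEB.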
Game-Theoretic Safety Assurance for Human-Centered Robotic Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must have the ability to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like our homes, cities, and roads makes it unviable to rely on common design-time assumptions, since these may be violated once the system is deployed. Instead, the next generation of robotic technologies will need to reason about safety online, constructing high-confidence assurances informed by ongoing observations of the environment and other agents, in spite of models of them being necessarily fallible.
This dissertation aims to lay down the necessary foundations to enable autonomous systems to ensure their own safety in complex, changing, and uncertain environments, by explicitly reasoning about the gap between their models and the real world. It first introduces a suite of novel robust optimal control formulations and algorithmic tools that permit tractable safety analysis in time-varying, multi-agent systems, as well as safe real-time robotic navigation in partially unknown environments; these approaches are demonstrated on large-scale unmanned air traffic simulation and physical quadrotor platforms. After this, it draws on Bayesian machine learning methods to translate model-based guarantees into high-confidence assurances, monitoring the reliability of predictive models in light of changing evidence about the physical system and surrounding agents. This principle is first applied to a general safety framework allowing the use of learning-based control (e.g.
reinforcement learning) for safety-critical robotic systems such as drones, and then combined with insights from cognitive science and dynamic game theory to enable safe human-centered navigation and interaction; these techniques are showcased on physical quadrotors, flying in unmodeled wind and among human pedestrians, and simulated highway driving. The dissertation ends with a discussion of challenges and opportunities ahead, including the bridging of safety analysis and reinforcement learning and the need to "close the loop" around learning and adaptation in order to deploy increasingly advanced autonomous systems with confidence.
Deep Learning for Abstraction, Control and Monitoring of Complex Cyber-Physical Systems
Cyber-Physical Systems (CPS) consist of digital devices that interact with some physical components. Their popularity and complexity are growing exponentially, giving birth to new, previously unexplored, safety-critical application domains. As CPS permeate our daily lives, it becomes imperative
to reason about their reliability. Formal methods provide rigorous techniques for verification, control and synthesis of safe and reliable CPS. However, these methods do not scale with the complexity of the system, so their applicability to real-world problems is limited. A promising strategy is to leverage deep learning techniques to tackle the scalability issue of formal methods, transforming infeasible problems into approximately solvable ones. The approximate models are trained over observations which are solutions of the formal problem. In this thesis, we focus on the following tasks, which are computationally challenging: the modeling and simulation of a complex stochastic model, the design of a safe and robust control policy for a system acting in a highly uncertain environment, and the runtime verification problem under full or partial observability. Our approaches, based on deep
learning, are indeed applicable to real-world complex and safety-critical systems acting under strict real-time constraints and in the presence of a significant
amount of uncertainty.