A New Simulation Metric to Determine Safe Environments and Controllers for Systems with Unknown Dynamics
We consider the problem of extracting safe environments and controllers for
reach-avoid objectives for systems with known state and control spaces, but
unknown dynamics. In a given environment, a common approach is to synthesize a
controller from an abstraction or a model of the system (potentially learned
from data). However, in many situations, the relationship between the dynamics
of the model and the actual system is not known, and hence it is
difficult to provide safety guarantees for the system. In such cases, the
Standard Simulation Metric (SSM), defined as the worst-case norm distance
between the model and the system output trajectories, can be used to modify a
reach-avoid specification for the system into a more stringent specification
for the abstraction. Nevertheless, the obtained distance, and hence the
modified specification, can be quite conservative. This limits the set of
environments for which a safe controller can be obtained. We propose SPEC, a
specification-centric simulation metric, which overcomes these limitations by
computing the distance using only the trajectories that violate the
specification for the system. We show that modifying a reach-avoid
specification with SPEC allows us to synthesize a safe controller for a larger
set of environments compared to SSM. We also propose a probabilistic method to
compute SPEC for a general class of systems. Case studies using simulators for
quadrotors and autonomous cars illustrate the advantages of the proposed metric
for determining safe environment sets and controllers.
Comment: 22nd ACM International Conference on Hybrid Systems: Computation and Control (2019)
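To make the contrast concrete, here is a minimal sketch of how the two metrics could be computed over a finite set of sampled model/system trajectory pairs. The array-based trajectory format, the sup-norm distance, and the violates_spec predicate are illustrative assumptions, not the paper's formal definitions.

```python
import numpy as np

def trajectory_distance(model_traj, system_traj):
    """Worst-case (sup-norm) distance between two output trajectories,
    given as (time_steps, output_dim) arrays."""
    return float(np.max(np.linalg.norm(model_traj - system_traj, axis=1)))

def ssm(pairs):
    """SSM: worst-case distance over *all* sampled trajectory pairs."""
    return max(trajectory_distance(m, s) for m, s in pairs)

def spec_metric(pairs, violates_spec):
    """SPEC (sketch): worst-case distance over only those pairs whose
    system trajectory violates the reach-avoid specification."""
    violating = [(m, s) for m, s in pairs if violates_spec(s)]
    if not violating:
        return 0.0  # no specification-violating behaviour was sampled
    return max(trajectory_distance(m, s) for m, s in violating)
```

Because spec_metric maximizes over a subset of the pairs that ssm considers, it can never exceed ssm, which is why the specification it induces for the abstraction is less conservative.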
Oracle-Guided Design and Analysis of Learning-Based Cyber-Physical Systems
We are in a world where autonomous systems, such as self-driving cars, surgical robots, and robotic manipulators, are becoming a reality. Such systems are considered safety-critical since they interact with humans on a regular basis. Hence, before such systems can be integrated into our day-to-day lives, we need to guarantee their safety. Recent success in machine learning (ML) and artificial intelligence (AI) has led to an increase in their use in real-world robotic systems, for example, complex perception modules in self-driving cars and deep reinforcement learning controllers in robotic manipulators. Although powerful, these components introduce an additional level of complexity when it comes to the formal analysis of autonomous systems. In this thesis, such systems are designated as Learning-Based Cyber-Physical Systems (LB-CPS). We take inspiration from the Oracle-Guided Inductive Synthesis (OGIS) paradigm to develop frameworks that can aid in achieving formal guarantees at different stages of an autonomous system's design and analysis pipeline. Furthermore, we show that to guarantee the safety of LB-CPS, the design (synthesis) and analysis (verification) processes must each consider feedback from the other. We consider five important parts of the design and analysis process and show a strong coupling among them, namely (i) Robust Control Synthesis from High-Level Safety Specifications; (ii) Diagnosis and Repair of Safety Requirements for Control Synthesis; (iii) Counterexample-Guided Data Augmentation for training high-accuracy ML models; (iv) Simulation-Guided Falsification and Verification against Adversarial Environments; and (v) Bridging the Model and Real-World Gap. Finally, we introduce VerifAI, a software toolkit for the design and analysis of AI-based systems, developed to provide a common formal platform for implementing design and analysis frameworks for LB-CPS.
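Item (iii) hints at a simple loop in the OGIS spirit: train a model, ask a falsification oracle for inputs on which it violates the specification, fold those counterexamples back into the training set, and repeat. The train, falsify, and dataset interfaces in this sketch are hypothetical stand-ins, not the thesis's actual API.

```python
def augment_until_safe(train, falsify, dataset, max_rounds=10):
    """Counterexample-guided data augmentation loop (illustrative sketch).

    train(dataset)  -> model trained on the current data
    falsify(model)  -> list of counterexample inputs, empty if none found
    """
    model = train(dataset)
    for _ in range(max_rounds):
        counterexamples = falsify(model)
        if not counterexamples:
            return model, dataset  # falsifier found no violation this round
        dataset = dataset + counterexamples  # augment the training set
        model = train(dataset)
    return model, dataset
```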
Active Sampling-based Binary Verification of Dynamical Systems
Nonlinear, adaptive, or otherwise complex control techniques are increasingly
relied upon to ensure the safety of systems operating in uncertain
environments. However, the nonlinearity of the resulting closed-loop system
complicates verification that the system does in fact satisfy its safety
requirements under all possible operating conditions. While analytical proof-based
techniques and finite abstractions can be used to provably verify the
closed-loop system's response at different operating conditions, they often
produce conservative approximations due to restrictive assumptions and are
difficult to construct in many applications. In contrast, popular statistical
verification techniques relax the restrictions and instead rely upon
simulations to construct statistical or probabilistic guarantees. This work
presents a data-driven statistical verification procedure that instead
constructs statistical learning models from simulated training data to separate
the set of possible perturbations into "safe" and "unsafe" subsets. Binary
evaluations of closed-loop system requirement satisfaction at various
realizations of the uncertainties are obtained through temporal logic
robustness metrics, which are then used to construct predictive models of
requirement satisfaction over the full set of possible uncertainties. As the
accuracy of these predictive statistical models is inherently coupled to the
quality of the training data, an active learning algorithm selects additional
sample points in order to maximize the expected change in the data-driven model
and thus, indirectly, minimize the prediction error. Various case studies
demonstrate the closed-loop verification procedure and highlight improvements
in prediction error over both existing analytical and statistical verification
techniques.
Comment: 23 pages
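A rough sketch of that loop follows, assuming a simulate_robustness oracle that runs the closed-loop simulation at a given perturbation and returns a temporal-logic robustness value (positive means the requirement is satisfied). Uncertainty sampling near the predicted safe/unsafe boundary stands in here for the paper's expected-model-change criterion, and the Gaussian process classifier is one plausible choice of statistical model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def active_verify(simulate_robustness, candidates, n_init=10, n_rounds=40, seed=0):
    """Actively learn a safe/unsafe predictor over a (N, d) array of
    candidate perturbations. Assumes the initial random samples contain
    both safe and unsafe outcomes."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(candidates), size=n_init, replace=False))
    y = [simulate_robustness(candidates[i]) > 0 for i in labeled]  # True = safe
    clf = GaussianProcessClassifier()
    for _ in range(n_rounds):
        clf.fit(candidates[labeled], y)
        p_safe = clf.predict_proba(candidates)[:, 1]
        uncertainty = np.abs(p_safe - 0.5)  # 0 = right on the boundary
        uncertainty[labeled] = np.inf       # never re-query labeled points
        nxt = int(np.argmin(uncertainty))
        labeled.append(nxt)
        y.append(simulate_robustness(candidates[nxt]) > 0)
    clf.fit(candidates[labeled], y)
    return clf  # predicts safe/unsafe over the full perturbation set
```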