Neural State Classification for Hybrid Systems
We introduce the State Classification Problem (SCP) for hybrid systems, and
present Neural State Classification (NSC) as an efficient solution technique.
SCP generalizes the model checking problem as it entails classifying each state
of a hybrid automaton as either positive or negative, depending on whether
or not it satisfies a given time-bounded reachability specification. This is
an interesting problem in its own right, which NSC solves using
machine-learning techniques, Deep Neural Networks in particular. State
classifiers produced by NSC tend to be very efficient (run in constant time and
space), but may be subject to classification errors. To quantify and mitigate
such errors, our approach comprises: i) techniques for certifying, with
statistical guarantees, that an NSC classifier meets given accuracy levels; ii)
tuning techniques, including a novel technique based on adversarial sampling,
that can virtually eliminate false negatives (positive states classified as
negative), thereby making the classifier more conservative. We have applied NSC
to six nonlinear hybrid system benchmarks, achieving an accuracy of 99.25% to
99.98%, and a false-negative rate of 0.0033 to 0, which we further reduced to
0.0015 to 0 after tuning the classifier. We believe that this level of accuracy
is acceptable in many practical applications, and that these results
demonstrate the promise of the NSC approach. Comment: ATVA 2018 extended version.
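The false-negative tuning idea can be illustrated independently of any trained DNN. The sketch below is a toy, not the paper's pipeline: a state space where ground-truth reachability is known by construction, a noisy stand-in for a classifier's score, and a conservative decision threshold chosen on a calibration set so that no positive state is classified negative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: a state x in R^2 is "positive" (reaches the unsafe set
# within the time bound) iff it lies inside the unit disc. Hypothetical
# stand-in for an actual hybrid-automaton reachability check.
def is_positive(x):
    return np.linalg.norm(x, axis=-1) < 1.0

# Stand-in for a trained classifier's score: a noisy estimate of the signed
# distance to the decision boundary (a real NSC classifier would be a DNN).
def score(x):
    return 1.0 - np.linalg.norm(x, axis=-1) + rng.normal(0, 0.05, size=len(x))

# Calibration set drawn from the state space.
X = rng.uniform(-2, 2, size=(5000, 2))
y = is_positive(X)
s = score(X)

# Conservative threshold tuning: pick the largest threshold t such that no
# positive calibration state is classified negative (score < t).
t = s[y].min()
pred = s >= t

fn_rate = np.mean(y & ~pred)   # false negatives: positive -> negative
fp_rate = np.mean(~y & pred)   # false positives: the cost of conservatism
print(fn_rate, fp_rate)        # fn_rate is 0 on the calibration set
```

By construction the tuned threshold eliminates false negatives on the calibration data, at the price of extra false positives; the paper's adversarial-sampling technique pursues the same trade-off against states the classifier has not seen.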
Learning Local Control Barrier Functions for Safety Control of Hybrid Systems
Hybrid dynamical systems are ubiquitous as practical robotic applications
often involve both continuous states and discrete switchings. Safety is a
primary concern for hybrid robotic systems. Existing safety-critical control
approaches for hybrid systems are either computationally inefficient,
detrimental to system performance, or limited to small-scale systems. To amend
these drawbacks, in this paper, we propose a learning-enabled approach to
construct local Control Barrier Functions (CBFs) to guarantee the safety of a
wide class of nonlinear hybrid dynamical systems. The end result is a safe
neural CBF-based switching controller. Our approach is computationally
efficient, minimally invasive to any reference controller, and applicable to
large-scale systems. We empirically evaluate our framework and demonstrate its
efficacy and flexibility through two robotic examples including a
high-dimensional autonomous racing case, against other CBF-based approaches and
model predictive control.
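To make the CBF filtering idea concrete, here is a minimal sketch for a scalar system; the hand-written barrier function and all constants are illustrative, whereas the paper learns local neural CBFs for hybrid dynamics. For a single affine constraint in one control dimension, the usual safety QP reduces to a clamp on the reference control.

```python
# Minimal CBF safety filter for the scalar system x' = u, with safe set
# {x <= X_MAX} encoded by the barrier function h(x) = X_MAX - x.
X_MAX, ALPHA, DT = 1.0, 5.0, 0.01

def h(x):
    return X_MAX - x

def safe_control(x, u_ref):
    # CBF condition for x' = u:  h_dot + ALPHA * h >= 0  <=>  u <= ALPHA * h(x).
    # The admissible control closest to u_ref (the 1-D QP solution) is a clamp,
    # so the filter is minimally invasive to the reference controller.
    return min(u_ref, ALPHA * h(x))

# Reference controller that recklessly pushes toward the boundary and beyond.
x = 0.0
for _ in range(1000):
    u = safe_control(x, u_ref=2.0)
    x += DT * u

print(round(x, 3))  # the state settles at the boundary without crossing it
```

The filter leaves the reference control untouched deep inside the safe set and only intervenes near the boundary, which is the "minimally invasive" property the abstract refers to.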
Conformal Quantitative Predictive Monitoring of STL Requirements for Stochastic Processes
We consider the problem of predictive monitoring (PM), i.e., predicting at
runtime the satisfaction of a desired property from the current system's state.
Due to its relevance for runtime safety assurance and online control, PM
methods need to be efficient to enable timely interventions against predicted
violations, while providing correctness guarantees. We introduce
quantitative predictive monitoring (QPM), the first PM method to
support stochastic processes and rich specifications given in Signal Temporal
Logic (STL). Unlike most of the existing PM techniques that predict whether or
not some property is satisfied, QPM provides a quantitative measure of
satisfaction by predicting the quantitative (a.k.a. robust) STL semantics of
the property. QPM derives prediction intervals that are highly efficient to
compute and come with probabilistic guarantees, in that the intervals cover with arbitrary
probability the STL robustness values relative to the stochastic evolution of
the system. To do so, we take a machine-learning approach and leverage recent
advances in conformal inference for quantile regression, thereby avoiding
expensive Monte-Carlo simulations at runtime to estimate the intervals. We also
show how our monitors can be combined in a compositional manner to handle
composite formulas, without retraining the predictors nor sacrificing the
guarantees. We demonstrate the effectiveness and scalability of QPM over a
benchmark of four discrete-time stochastic processes with varying degrees of
complexity.
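The conformalization step behind such guarantees can be sketched independently of any particular learned model. In the toy below, q_lo and q_hi are hypothetical, deliberately miscalibrated quantile predictors standing in for the learned quantile regressors; split-conformal calibration widens their intervals by an empirical quantile of nonconformity scores, recovering marginal coverage without Monte-Carlo simulation at runtime.

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHA = 0.1  # target miscoverage: intervals should cover with prob >= 90%

# Synthetic "STL robustness" target: rho = sin(x) + heteroscedastic noise.
def sample(n):
    x = rng.uniform(0, 6, n)
    rho = np.sin(x) + rng.normal(0, 0.1 + 0.1 * x, n)
    return x, rho

# Hypothetical pre-trained quantile predictors: deliberately too narrow.
q_lo = lambda x: np.sin(x) - 0.1
q_hi = lambda x: np.sin(x) + 0.1

# Conformalized quantile regression (CQR): nonconformity scores on a held-out
# calibration set, then widen the raw intervals by their empirical quantile.
x_cal, rho_cal = sample(2000)
scores = np.maximum(q_lo(x_cal) - rho_cal, rho_cal - q_hi(x_cal))
k = int(np.ceil((len(scores) + 1) * (1 - ALPHA)))
q_hat = np.sort(scores)[k - 1]

# The marginal coverage guarantee then holds on fresh test points.
x_test, rho_test = sample(5000)
covered = (rho_test >= q_lo(x_test) - q_hat) & (rho_test <= q_hi(x_test) + q_hat)
print(covered.mean())  # approximately 0.9 or above
```

At deployment time the monitor only evaluates the two quantile predictors and adds or subtracts the precomputed q_hat, which is why the resulting intervals are cheap enough for runtime use.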
Behavioral validation in Cyber-physical systems: Safety violations and beyond
The advances in software and hardware technologies over the last two decades have paved the way for the development of the complex systems we observe around us. Avionics, automotive, power-grid, medical-device, and robotic systems are a few examples; such systems are usually termed Cyber-physical systems (CPS), as they often involve both physical and software components. Deploying a CPS in a safety-critical application mandates that the system operate reliably even in adverse scenarios. While effective in improving confidence in system functionality, testing cannot ascertain the absence of failures; formal verification, in contrast, can be exhaustive, but it may not scale well as system complexity grows. Simulation-driven analysis bridges this gap by extracting key system properties from simulations. Despite their differences, all of these analyses can be pivotal in providing system behaviors as evidence for the satisfaction or violation of a given performance specification. However, less attention has been paid to algorithmically validating and characterizing the different behaviors of a CPS.

The focus of this thesis is the behavioral validation of Cyber-physical systems, which can supplement an existing CPS analysis framework. The thesis develops algorithmic tools for validating verification artifacts by generating a variety of counterexamples for a safety violation in a linear hybrid system; these counterexamples can serve as performance metrics to evaluate different controllers during the design and testing phases. The thesis introduces the notion of a complete characterization of a safety violation in a linear system with bounded inputs, and it proposes a sound technique to compute and efficiently represent these characterizations. It further presents neural-network-based frameworks for systematic state-space exploration guided by sensitivity or its gradient approximation in learning-enabled control (LEC) systems.
The presented technique is accompanied by convergence guarantees and yields considerable performance gain over a widely used falsification platform for a class of signal temporal logic (STL) specifications. Doctor of Philosophy.
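The gradient-guided exploration idea can be illustrated with a deliberately simple falsification loop; the dynamics, safety bound, and step sizes below are illustrative, not the thesis's setup. The search descends a finite-difference approximation of the robustness gradient over initial states until some trajectory violates the bound (negative robustness).

```python
import numpy as np

# Toy unstable linear system x' = A x, simulated by forward Euler, with the
# safety property "always |x_0(t)| <= BOUND" (all constants illustrative).
A = np.array([[0.0, 1.0], [-1.0, 0.2]])
DT, STEPS, BOUND = 0.05, 100, 1.8

def robustness(x0):
    # Robustness of the property: min over time of BOUND - |x_0(t)|.
    x = np.array(x0, dtype=float)
    worst = BOUND - abs(x[0])
    for _ in range(STEPS):
        x = x + DT * (A @ x)
        worst = min(worst, BOUND - abs(x[0]))
    return worst

def falsify(x0, lr=0.1, eps=1e-4, iters=200):
    x0 = np.array(x0, dtype=float)
    for _ in range(iters):
        r = robustness(x0)
        if r < 0:                        # negative robustness = violation found
            return x0, r
        # Finite-difference approximation of the robustness gradient.
        g = np.array([(robustness(x0 + eps * e) - r) / eps
                      for e in np.eye(2)])
        x0 = x0 - lr * g                 # descend toward a violation
    return x0, robustness(x0)

x0, r = falsify([0.5, 0.5])
print(r < 0)  # a violating initial state is found
```

A single gradient step here plays the role that sensitivity (or its learned approximation) plays in the thesis's framework: it directs the state-space search toward initial states whose trajectories come closer to violating the specification.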