BDD for Complete Characterization of a Safety Violation in Linear Systems with Inputs
Control design tools for linear systems typically involve pole placement and the
computation of Lyapunov functions, which are useful for ensuring stability. Given
more demanding design requirements, however, a designer is expected to satisfy
other specifications, such as safety or temporal logic specifications, and a
naive control design might not meet them. A control designer can employ model
checking as a tool for checking safety and obtain a counterexample in case of a
safety violation. While several scalable techniques have been developed for
safety verification of linear dynamical systems, such tools merely act as
decision procedures that evaluate system safety and, consequently, yield a
single counterexample as evidence of a safety violation. However, these model
checking methods are not geared towards discovering corner cases or re-using
verification artifacts for another, suboptimal safety specification. In this
paper, we describe a technique for obtaining a complete characterization of
counterexamples for a safety violation
in linear systems. The proposed technique uses the reachable set computed
during safety verification for a given temporal logic formula, performs
constraint propagation, and represents all modalities of counterexamples using
a binary decision diagram (BDD). We introduce an approach that dynamically
determines isomorphic nodes to obtain a considerably smaller decision diagram. A
thorough experimental evaluation on various benchmarks shows that the reduction
technique achieves significant reductions in both the number of nodes and the
width of the decision diagram.
Comment: 16 pages, 5 figures, 2 tables
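The node-reduction idea above, merging isomorphic nodes so that structurally identical sub-diagrams are stored once, can be illustrated with a minimal hash-consed BDD in Python. This is a generic sketch of reduced ordered BDDs, not the paper's implementation:

```python
class BDD:
    """Minimal reduced ordered BDD with a unique table (hash-consing)."""
    def __init__(self):
        self.unique = {}                 # (var, lo, hi) -> node id
        self.nodes = {0: None, 1: None}  # terminal nodes 0 and 1
        self.next_id = 2

    def mk(self, var, lo, hi):
        if lo == hi:                     # redundant test: skip the node
            return lo
        key = (var, lo, hi)
        if key not in self.unique:       # isomorphic node exists? reuse it
            self.unique[key] = self.next_id
            self.nodes[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

bdd = BDD()
# Encode (x1 AND x2) twice; sharing keeps the diagram at two decision nodes.
x2 = bdd.mk(2, 0, 1)
f  = bdd.mk(1, 0, x2)
g  = bdd.mk(1, 0, bdd.mk(2, 0, 1))       # second build reuses both nodes
assert f == g
print(len(bdd.unique))                   # prints 2
```

The paper's contribution is detecting such isomorphic nodes dynamically while the counterexample diagram is being built; the unique table above is the standard static version of that idea.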
Behavioral validation in Cyber-physical systems: Safety violations and beyond
The advances in software and hardware technologies in the last two decades have paved the way for the development of the complex systems we observe around us. Avionics, automotive, power grid, medical devices, and robotics are a few examples of such systems, which are usually termed Cyber-physical systems (CPS) as they often involve both physical and software components. Deployment of a CPS in a safety-critical application mandates that the system operate reliably even in adverse scenarios. While effective in improving confidence in system functionality, testing cannot ascertain the absence of failures; formal verification, on the other hand, can be exhaustive but may not scale well as system complexity grows. Simulation-driven analysis tends to bridge this gap by tapping key system properties from the simulations. Despite their differences, all these analyses can be pivotal in providing system behaviors as evidence of the satisfaction or violation of a given performance specification. However, less attention has been paid to algorithmically validating and characterizing different behaviors of a CPS. The focus of this thesis is on behavioral validation of Cyber-physical systems, which can supplement an existing CPS analysis framework. This thesis develops algorithmic tools for validating verification artifacts by generating a variety of counterexamples for a safety violation in a linear hybrid system. These counterexamples can serve as performance metrics to evaluate different controllers during the design and testing phases. This thesis introduces the notion of complete characterization of a safety violation in a linear system with bounded inputs, and it proposes a sound technique to compute and efficiently represent these characterizations. This thesis further presents neural network based frameworks to perform systematic state-space exploration guided by sensitivity or its gradient approximation in learning-enabled control (LEC) systems.
The presented technique is accompanied by convergence guarantees and yields considerable performance gains over a widely used falsification platform for a class of signal temporal logic (STL) specifications.
Doctor of Philosophy
Safe Robot Learning in Assistive Devices through Neural Network Repair
Assistive robotic devices are a particularly promising field of application
for neural networks (NN) due to the need for personalization and hard-to-model
human-machine interaction dynamics. However, NN-based estimators and
controllers may produce potentially unsafe outputs over previously unseen data
points. In this paper, we introduce an algorithm for updating NN control
policies to satisfy a given set of formal safety constraints, while also
optimizing the original loss function. Given a set of mixed-integer linear
constraints, we define the NN repair problem as a Mixed Integer Quadratic
Program (MIQP). In extensive experiments, we demonstrate the efficacy of our
repair method in generating safe policies for a lower-leg prosthesis.
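The paper casts the full repair problem, with ReLU activations, as an MIQP. As a much simpler illustrative stand-in for the core idea, minimally perturbing weights until a set of linear safety constraints holds, the purely linear case can be approximated by cyclic projection onto the constraint halfspaces. All names and values below are assumed for the example and are not the paper's formulation:

```python
# Toy stand-in for NN repair: minimally perturb a linear policy's weights w
# so that dot(w, x) <= bound holds on every flagged unsafe input x, via
# cyclic projection onto the halfspaces {w : dot(w, x) <= bound}.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def repair(w, unsafe_inputs, bound, sweeps=200):
    w = list(w)
    for _ in range(sweeps):
        for x in unsafe_inputs:
            v = dot(w, x) - bound
            if v > 1e-12:                # constraint violated: project onto it
                s = v / dot(x, x)
                w = [wi - s * xi for wi, xi in zip(w, x)]
    return w

w0 = [2.0, 1.0]                          # original (unsafe) policy weights
xs = [[1.0, 0.0], [1.0, 1.0]]            # inputs where the policy was unsafe
w1 = repair(w0, xs, bound=1.0)
assert all(dot(w1, x) <= 1.0 + 1e-6 for x in xs)
```

The MIQP in the paper additionally encodes the network's activation pattern with integer variables and minimizes a quadratic loss; the projection above only conveys the "least-change subject to linear safety constraints" intuition.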
Oracle-Guided Design and Analysis of Learning-Based Cyber-Physical Systems
We are in a world where autonomous systems, such as self-driving cars, surgical robots, and robotic manipulators, are becoming a reality. Such systems are considered safety-critical since they interact with humans on a regular basis. Hence, before such systems can be integrated into our day-to-day life, we need to guarantee their safety. Recent success in machine learning (ML) and artificial intelligence (AI) has led to an increase in their use in real-world robotic systems, for example, complex perception modules in self-driving cars and deep reinforcement learning controllers in robotic manipulators. Although powerful, these components introduce an additional level of complexity when it comes to the formal analysis of autonomous systems. In this thesis, such systems are designated as Learning-Based Cyber-Physical Systems (LB-CPS). We take inspiration from the Oracle-Guided Inductive Synthesis (OGIS) paradigm to develop frameworks which can aid in achieving formal guarantees in different stages of an autonomous system design and analysis pipeline. Furthermore, we show that to guarantee the safety of an LB-CPS, the design (synthesis) and analysis (verification) must each consider feedback from the other. We consider five important parts of the design and analysis process and show a strong coupling among them, namely (i) robust control synthesis from high-level safety specifications; (ii) diagnosis and repair of safety requirements for control synthesis; (iii) counterexample-guided data augmentation for training high-accuracy ML models; (iv) simulation-guided falsification and verification against adversarial environments; and (v) bridging the model and real-world gap. Finally, we introduce a software toolkit, VerifAI, for the design and analysis of AI-based systems, which was developed to provide a common formal platform to implement design and analysis frameworks for LB-CPS.
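The OGIS loop described above, a learner proposing candidates and an oracle answering with counterexamples, can be sketched in a few lines. The scenario below (a threshold controller against a hypothetical safety spec) is invented purely for illustration:

```python
# Toy oracle-guided inductive synthesis (CEGIS-style) loop: a learner
# proposes a controller "brake when speed >= T", and a verifier oracle
# returns a counterexample state whenever the candidate disagrees with
# the (hypothetical) safety spec "brake exactly when speed >= 7".

def oracle(T, domain=range(11)):
    """Return a counterexample speed, or None if the candidate is correct."""
    for s in domain:
        if (s >= T) != (s >= 7):
            return s
    return None

def synthesize():
    examples = []                  # accumulated (speed, should_brake) pairs
    for T in range(12):            # learner: enumerate candidate thresholds
        if all((s >= T) == lab for s, lab in examples):
            cex = oracle(T)
            if cex is None:
                return T           # oracle found no counterexample: done
            examples.append((cex, cex >= 7))
    return None

print(synthesize())                # prints 7
```

The thesis instantiates this learner/oracle interaction with far richer artifacts (controllers, ML models, simulators), but the feedback structure between synthesis and verification is the same.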
Model-based compositional verification approaches and tools development for cyber-physical systems
Model-based design for embedded real-time systems utilizes verifiable, reusable components and proper architectures to deal with the verification scalability problem caused by state explosion. In this thesis, we address verification approaches for both low-level individual component correctness and high-level system correctness, which are equally important under this scheme. Three prototype tools are developed, implementing our approaches and algorithms accordingly.
For component-level design-time verification, we developed a symbolic verifier, LhaVrf, for the reachability verification of concurrent linear hybrid automata (LHA). It is unique in translating a hybrid automaton into a transition system that preserves the discrete transition structure, possesses no continuous dynamics, and preserves reachability of discrete states. Afterward, model checking is interleaved within a counterexample-fragment-based specification relaxation framework. We next present a simulation-based, bounded-horizon reachability analysis approach for the reachability verification of systems modeled by hybrid automata (HA) on a run-time basis. This framework applies a dynamic, on-the-fly, repartition-based error propagation control method with the mild requirement of Lipschitz continuity on the continuous dynamics. The novel features allow state-triggered discrete jumps and provide an eventually constant over-approximation error bound for incrementally stable dynamics. The above approaches are implemented in our prototype verifier called HS3V.

Once the component properties are established, the next step is to establish system-level properties through compositional verification. We present our work on the role and integration of quantifier elimination (QE) for property composition and verification. In our approach, we derive, in a single step, the strongest system property from the given component properties for both time-independent and time-dependent scenarios. The system initial condition can also be composed, which, together with the strongest system property, is used to verify a postulated system property through induction. The above approaches are implemented in our prototype tool called ReLIC.
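For linear real arithmetic, one classic quantifier-elimination procedure of the kind the composition step relies on is Fourier-Motzkin elimination. The sketch below is a generic illustration of that procedure, not the ReLIC implementation, and the two "component properties" are invented for the example:

```python
# Fourier-Motzkin elimination: a classic QE procedure for conjunctions of
# linear inequalities. A constraint (a, b) encodes
#   a[0]*x0 + ... + a[n-1]*x_{n-1} <= b.

def fm_eliminate(constraints, j):
    """Eliminate variable x_j: project the polyhedron onto the rest."""
    pos = [(a, b) for a, b in constraints if a[j] > 0]   # upper bounds on x_j
    neg = [(a, b) for a, b in constraints if a[j] < 0]   # lower bounds on x_j
    out = [(a, b) for a, b in constraints if a[j] == 0]
    for al, bl in neg:
        for au, bu in pos:
            # Combine so the x_j terms cancel: au[j]*lower + (-al[j])*upper.
            a = [au[j] * l + (-al[j]) * u for l, u in zip(al, au)]
            b = au[j] * bl + (-al[j]) * bu
            out.append((a, b))
    return out

# Compose two hypothetical component properties over variables (x, y):
system = [([1, -1], 0),   # component A guarantees x <= y
          ([0, 1], 3)]    # component B guarantees y <= 3
print(fm_eliminate(system, 1))   # [([1, 0], 3)], i.e. the system fact x <= 3
```

Eliminating the shared internal variable y yields the strongest fact about the externally visible variable x, which is exactly the flavor of "strongest system property" derivation described above.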
IST Austria Thesis
Hybrid automata combine finite automata and dynamical systems, and model the interaction of digital with physical systems. Formal analysis that can guarantee the safety of all behaviors or rigorously witness failures, while unsolvable in general, has been tackled algorithmically using, e.g., abstraction, bounded model checking, and assisted theorem proving.
Nevertheless, very few methods have addressed the time-unbounded reachability analysis of hybrid automata and, for current sound and automatic tools, scalability remains critical. We develop methods for the polyhedral abstraction of hybrid automata, which construct coarse overapproximations and tighten them incrementally, in a CEGAR fashion. We use template polyhedra, i.e., polyhedra whose facets are normal to a given set of directions.
While, previously, directions were given by the user, we introduce (1) the first method
for computing template directions from spurious counterexamples, so as to generalize and
eliminate them. The method applies naturally to convex hybrid automata, i.e., hybrid
automata with (possibly non-linear) convex constraints on derivatives only, while for linear
ODEs it requires further abstraction. Specifically, we introduce (2) the conic abstractions,
which, partitioning the state space into appropriate (possibly non-uniform) cones, divide
curvy trajectories into relatively straight sections, suitable for polyhedral abstractions.
Finally, we introduce (3) space-time interpolation, which, combining interval arithmetic
and template refinement, computes appropriate (possibly non-uniform) time partitioning
and template directions along spurious trajectories, so as to eliminate them.
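The template-polyhedron idea underlying all three contributions can be made concrete: given template directions d, the tightest bounds d . x <= c on a set are its support-function values in those directions. The sketch below is a generic illustration (bounding the image of a box under a linear map), not the thesis' algorithms:

```python
# Template polyhedron overapproximation of A*X for a box X, using the
# support-function identity rho_{A X}(d) = rho_X(A^T d).

def support_box(lo, hi, d):
    """Support function of the box [lo, hi] in direction d."""
    return sum(di * (h if di > 0 else l) for di, l, h in zip(d, lo, hi))

def template_bounds(A, lo, hi, directions):
    n = len(lo)
    bounds = []
    for d in directions:
        # Compute A^T d component-wise.
        atd = [sum(A[i][k] * d[i] for i in range(n)) for k in range(n)]
        bounds.append(support_box(lo, hi, atd))
    return bounds   # the template polyhedron is {x : d_j . x <= bounds[j]}

# Rotate the unit box [0,1]^2 by 90 degrees; bound the image with box directions.
A = [[0, -1], [1, 0]]
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(template_bounds(A, [0, 0], [1, 1], dirs))   # prints [0, 1, 1, 0]
```

Here the bounds recover exactly x <= 0, -x <= 1, y <= 1, -y <= 0, i.e., the rotated box [-1, 0] x [0, 1]; the refinement methods above are about choosing the directions so that such bounds stay tight along curved trajectories.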
We obtain sound and automatic methods for the reachability analysis over dense
and unbounded time of convex hybrid automata and hybrid automata with linear ODEs.
We build prototype tools and compare our methods favorably against the respective
state-of-the-art tools on several benchmarks.
Computer Aided Verification
This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.