A Framework for Worst-Case and Stochastic Safety Verification Using Barrier Certificates
This paper presents a methodology for safety verification of continuous and hybrid systems in the worst-case and stochastic settings. In the worst-case setting, a function of state, termed a barrier certificate, is used to certify that all trajectories of the system starting from a given initial set do not enter an unsafe region. No explicit computation of reachable sets is required in the construction of barrier certificates, which makes it possible to handle nonlinearity, uncertainty, and constraints directly within this framework. In the stochastic setting, our method computes an upper bound on the probability that a trajectory of the system reaches the unsafe set, a bound whose validity is proven by the existence of a barrier certificate. For polynomial systems, barrier certificates can be constructed using convex optimization, and hence the method is computationally tractable. Some examples are provided to illustrate the use of the method.
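The worst-case conditions on a barrier certificate B can be illustrated with a small sketch. The system, initial set, unsafe set, and candidate B below are hypothetical choices (not from the paper), picked only to show the three conditions being checked on sampled points; in practice one would search for B with sum-of-squares/convex optimization rather than verify by sampling.

```python
import numpy as np

# Hypothetical 1-D polynomial system: x' = -x**3 (illustration only).
f = lambda x: -x**3

# Candidate barrier certificate B(x) = x**2 - 0.9 (an assumption).
B = lambda x: x**2 - 0.9
dB = lambda x: 2 * x                     # dB/dx

X0 = np.linspace(-0.5, 0.5, 101)         # initial set [-0.5, 0.5]
Xu = np.concatenate([np.linspace(-2.0, -1.0, 51),
                     np.linspace(1.0, 2.0, 51)])   # unsafe set |x| >= 1
X = np.linspace(-2.0, 2.0, 201)          # domain of interest

# The three barrier-certificate conditions, checked on sample points:
cond_init = np.all(B(X0) <= 0)           # B <= 0 on the initial set
cond_unsafe = np.all(B(Xu) > 0)          # B > 0 on the unsafe set
cond_flow = np.all(dB(X) * f(X) <= 0)    # Lie derivative of B is non-positive

print(cond_init, cond_unsafe, cond_flow)
```

If all three conditions hold (here they do: B is at most -0.65 on the initial set, at least 0.1 on the unsafe set, and its Lie derivative is -2x^4 <= 0), no trajectory from the initial set can cross the zero level set of B into the unsafe region.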
Reach Set Approximation through Decomposition with Low-dimensional Sets and High-dimensional Matrices
Approximating the set of reachable states of a dynamical system is an algorithmic yet mathematically rigorous way to reason about its safety. Although progress has been made in the development of efficient algorithms for affine dynamical systems, available algorithms still lack the scalability needed for wide adoption in industrial settings. While modern linear algebra packages are efficient for matrices with tens of thousands of dimensions, set-based image computations are limited to a few hundred. We propose to decompose reach set computations such that set operations are performed in low dimensions, while matrix operations like exponentiation are carried out in the full dimension. Our method is applicable in both dense- and discrete-time settings. For a set of standard benchmarks, it shows a speed-up of up to two orders of magnitude compared to the respective state-of-the-art tools, with only modest losses in accuracy. For the dense-time case, we show an experiment with more than 10,000 variables, roughly two orders of magnitude higher than possible with previous approaches.
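The split the abstract describes can be sketched in a few lines: the matrix exponential is formed once in the full dimension, while the reach set is kept as a Cartesian product of 1-D intervals so the per-step set operation stays cheap. This is a simplified caricature of the decomposition idea, not the paper's actual algorithm; the dynamics, step count, and box representation below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                 # full state dimension
A = rng.standard_normal((n, n)) / n      # hypothetical affine dynamics matrix
dt = 0.01

# Full-dimensional matrix operation: Phi ~ exp(A*dt) via truncated Taylor series.
Phi = np.eye(n)
term = np.eye(n)
for k in range(1, 8):
    term = term @ A * (dt / k)
    Phi = Phi + term

# Reach set kept as a box: per-coordinate center c and radius r.
c = np.zeros(n)
r = np.full(n, 0.1)                      # initial set: the box [-0.1, 0.1]^n

# Low-dimensional set operation: interval enclosure of the linear map x -> Phi x.
for _ in range(50):                      # 50 discrete time steps
    c = Phi @ c
    r = np.abs(Phi) @ r                  # componentwise radius bound

print(float(r.max()))                    # width of the enclosure after 50 steps
```

The point of the design is that the expensive object (Phi) is dense but computed once with an optimized linear algebra kernel, while each step manipulates only n scalar intervals instead of an n-dimensional set representation.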
- …