Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming
There has been an increasing interest in using neural networks in closed-loop
control systems to improve performance and reduce computational costs for
on-line implementation. However, providing safety and stability guarantees for
these systems is challenging due to the nonlinear and compositional structure
of neural networks. In this paper, we propose a novel forward reachability
analysis method for the safety verification of linear time-varying systems with
neural networks in feedback interconnection. Our technical approach relies on
abstracting the nonlinear activation functions by quadratic constraints, which
leads to an outer-approximation of forward reachable sets of the closed-loop
system. We show that we can compute these approximate reachable sets using
semidefinite programming. We illustrate our method in a quadrotor example, in
which we first approximate a nonlinear model predictive controller via a deep
neural network and then apply our analysis tool to certify finite-time
reachability and constraint satisfaction of the closed-loop system.
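To make the idea of outer-approximating forward reachable sets of a neural-network-in-the-loop system concrete, here is a minimal sketch using plain interval arithmetic, which is much coarser than the paper's quadratic-constraint/SDP abstraction. The 1-D linear system, the two-layer ReLU controller, and all weights are illustrative inventions, not taken from the paper.

```python
import numpy as np

# Hypothetical closed loop: x_{k+1} = a*x_k + b*u_k with u_k = NN(x_k).
# The system and the 2-layer ReLU controller weights are illustrative.
a, b = 0.9, 0.1
W1 = np.array([[1.0], [-1.0]])   # first layer weights (2x1)
b1 = np.array([0.0, 0.0])
W2 = np.array([[0.5, -0.5]])     # second layer weights (1x2)
b2 = np.array([0.0])

def interval_affine(lo, hi, W, bias):
    """Propagate an interval [lo, hi] through x -> W @ x + bias exactly."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + bias, Wp @ hi + Wn @ lo + bias

def reach_step(lo, hi):
    """One step of an interval outer-approximation of the closed loop."""
    h_lo, h_hi = interval_affine(lo, hi, W1, b1)
    h_lo, h_hi = np.maximum(h_lo, 0), np.maximum(h_hi, 0)  # ReLU is monotone
    u_lo, u_hi = interval_affine(h_lo, h_hi, W2, b2)
    return a * lo + b * u_lo, a * hi + b * u_hi  # valid since a, b >= 0 here

# Start from the initial set [-1, 1] and iterate the reachability map.
lo, hi = np.array([-1.0]), np.array([1.0])
for _ in range(10):
    lo, hi = reach_step(lo, hi)
print(lo[0], hi[0])
```

For these toy weights the controller reduces to u = 0.5x, so the closed loop contracts by 0.95 per step and the computed intervals shrink accordingly; on realistic networks, interval bounds blow up quickly, which is precisely the conservatism the paper's semidefinite relaxation is designed to reduce.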
LSTM Neural Networks: Input to State Stability and Probabilistic Safety Verification
The goal of this paper is to analyze Long Short-Term Memory (LSTM) neural
networks from a dynamical system perspective. The classical recursive equations
describing the evolution of LSTM can be recast in state space form, resulting
in a time-invariant nonlinear dynamical system. A sufficient condition
guaranteeing the Input-to-State Stability (ISS) property of this class of
systems is provided. The ISS property entails the boundedness of the output
reachable set of the LSTM. In light of this result, a novel approach for the
safety verification of the network, based on the Scenario Approach, is devised.
The proposed method is tested on a pH neutralization process. Comment: Accepted for Learning for Dynamics & Control (L4DC) 202
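The state-space view of an LSTM and a scenario-style sampling check can be sketched as follows. The cell below treats (c, h) as the state of a time-invariant nonlinear system and empirically records the worst-case output norm over sampled bounded input sequences; the weights, sizes, and sample counts are illustrative, and sampling gives only an empirical estimate, not the paper's ISS guarantee or its Scenario Approach confidence bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-cell LSTM with small illustrative random weights; the state
# (c, h) turns the classical recursion into a dynamical system.
n, m = 4, 2  # hidden size, input size
Wf, Wi, Wo, Wg = (0.1 * rng.standard_normal((n, n + m)) for _ in range(4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(c, h, u):
    z = np.concatenate([h, u])
    f, i, o = sigmoid(Wf @ z), sigmoid(Wi @ z), sigmoid(Wo @ z)
    g = np.tanh(Wg @ z)
    c = f * c + i * g          # cell-state update
    h = o * np.tanh(c)         # output/hidden-state update
    return c, h

# Scenario-style sampling: draw bounded input sequences and record the
# worst observed output norm (an empirical check, not a certificate).
worst = 0.0
for _ in range(200):                      # 200 sampled scenarios
    c, h = np.zeros(n), np.zeros(n)
    for _ in range(50):                   # horizon of 50 steps
        u = rng.uniform(-1, 1, size=m)    # bounded inputs
        c, h = lstm_step(c, h, u)
    worst = max(worst, np.linalg.norm(h, np.inf))
print(worst)
```

Note that |h| < 1 holds here structurally, since the output gate and tanh are both bounded by 1; the ISS analysis in the paper gives sharper, input-dependent bounds on the reachable set.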
Encoding inductive invariants as barrier certificates: synthesis via difference-of-convex programming
A barrier certificate often serves as an inductive invariant that isolates an
unsafe region from the reachable set of states, and hence is widely used in
proving safety of hybrid systems possibly over an infinite time horizon. We
present a novel condition on barrier certificates, termed the invariant
barrier-certificate condition, that witnesses unbounded-time safety of
differential dynamical systems. The proposed condition is the weakest possible
one to attain inductive invariance. We show that discharging the invariant
barrier-certificate condition -- thereby synthesizing invariant barrier
certificates -- can be encoded as solving an optimization problem subject to
bilinear matrix inequalities (BMIs). We further propose a synthesis algorithm
based on difference-of-convex programming, which approaches a local optimum of
the BMI problem via solving a series of convex optimization problems. This
algorithm is incorporated in a branch-and-bound framework that searches for the
global optimum in a divide-and-conquer fashion. We present a weak completeness
result of our method, namely, a barrier certificate is guaranteed to be found
(under some mild assumptions) whenever there exists an inductive invariant (in
the form of a given template) that suffices to certify safety of the system.
Experimental results on benchmarks demonstrate the effectiveness and efficiency
of our approach. Comment: To be published in Inf. Comput. arXiv admin note: substantial text overlap with arXiv:2105.1431
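The three conditions a barrier certificate must satisfy can be illustrated on a toy system. The sketch below hand-picks the certificate B(x) = ||x||² − 1 for the stable flow dx/dt = −x and checks the initial-set, unsafe-set, and Lie-derivative conditions on random samples; the sets and the certificate are invented for the example, and sampling can only falsify a candidate, whereas the paper discharges these conditions symbolically via BMIs and difference-of-convex programming.

```python
import math
import random

random.seed(0)

# Toy system dx/dt = f(x) with f(x) = -x, and illustrative sets:
#   initial set: ||x|| <= 0.5,   unsafe set: ||x|| = 2.5 (outside ||x|| >= 2).
# Hand-picked candidate barrier certificate: B(x) = ||x||^2 - 1.
def f(x):
    return [-xi for xi in x]

def B(x):
    return sum(xi * xi for xi in x) - 1.0

def lie_derivative(x):
    # dB/dt along the flow: grad B(x) . f(x) = 2 x . (-x)
    return sum(2 * xi * fi for xi, fi in zip(x, f(x)))

def sample_ball(r):
    """Rejection-sample a point from the 2-D disc of radius r."""
    while True:
        x = [random.uniform(-r, r) for _ in range(2)]
        if math.hypot(x[0], x[1]) <= r:
            return x

ok = True
for _ in range(1000):
    x0 = sample_ball(0.5)                     # initial states: B(x0) <= 0
    ok &= B(x0) <= 0.0
    t = random.uniform(0, 2 * math.pi)
    xu = [2.5 * math.cos(t), 2.5 * math.sin(t)]  # unsafe states: B(xu) > 0
    ok &= B(xu) > 0.0
    x = sample_ball(3.0)                      # invariance: dB/dt <= 0
    ok &= lie_derivative(x) <= 0.0
print(ok)
```

For this certificate all three conditions in fact hold globally (dB/dt = −2||x||² ≤ 0 everywhere), so the zero sublevel set of B is an inductive invariant separating the initial set from the unsafe set.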
Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty
When using deep neural networks to operate safety-critical systems, assessing
the sensitivity of the network outputs when subject to uncertain inputs is of
paramount importance. Such assessment is commonly done using reachability
analysis or robustness certification. However, certification techniques
typically ignore localization information, while reachable set methods can fail
to issue robustness guarantees. Furthermore, many advanced methods are either
computationally intractable in practice or restricted to very specific models.
In this paper, we develop a data-driven optimization-based method capable of
simultaneously certifying the safety of network outputs and localizing them.
The proposed method provides a unified assessment framework, as it subsumes
state-of-the-art reachability analysis and robustness certification. The method
applies to deep neural networks of all sizes and structures, and to random
input uncertainty with a general distribution. We develop sufficient conditions
for the convexity of the underlying optimization, and for the number of data
samples to certify and localize the outputs with overwhelming probability. We
experimentally demonstrate the efficacy and tractability of the method on a
deep ReLU network.
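The data-driven flavor of this assessment can be sketched with a few lines of sampling. The snippet below draws inputs from a random distribution, propagates them through an illustrative two-layer ReLU network, localizes the outputs in an empirical bounding box, and estimates the probability that outputs satisfy an arbitrary safety threshold; all weights, distributions, and thresholds are made up, and the empirical frequencies carry none of the paper's sample-complexity guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-layer ReLU network (3 inputs, 2 outputs).
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

N = 5000                                   # number of sampled scenarios
X = rng.normal(0.0, 0.3, size=(N, 3))      # random input uncertainty
Y = np.array([net(x) for x in X])

# Localization: the tightest axis-aligned box containing all samples.
box_lo, box_hi = Y.min(axis=0), Y.max(axis=0)

# Empirical safety frequency for an arbitrary threshold |y_i| < 10.
emp = float(np.mean(np.all(np.abs(Y) < 10.0, axis=1)))
print(box_lo, box_hi, emp)
```

The appeal of such sample-based assessments, which the paper makes rigorous, is that they are agnostic to network size and structure: only forward evaluations of the network are required.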
Reachability Analysis and Safety Verification of Neural Feedback Systems via Hybrid Zonotopes
Hybrid zonotopes generalize constrained zonotopes by introducing additional
binary variables and possess some unique properties that make them convenient
to represent nonconvex sets. This paper presents novel hybrid zonotope-based
methods for the reachability analysis and safety verification of neural
feedback systems. Algorithms are proposed to compute the input-output
relationship of each layer of a feedforward neural network, as well as the
exact reachable sets of neural feedback systems. In addition, a necessary and
sufficient condition is formulated as a mixed-integer linear program to certify
whether the trajectories of a neural feedback system can avoid unsafe regions.
The proposed approach is shown to yield a formulation that provides the
tightest convex relaxation for the reachable sets of the neural feedback
system. Complexity reduction techniques for the reachable sets are developed to
balance the computation efficiency and approximation accuracy. Two numerical
examples demonstrate the superior performance of the proposed approach compared
to other existing methods. Comment: 8 pages, 4 figures
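A plain (convex) zonotope makes the set representations behind this line of work tangible. The sketch below propagates a zonotope {c + Gα : α ∈ [−1,1]^m} exactly through an affine layer and then over-approximates ReLU by its interval bounds; this is far coarser than hybrid zonotopes, which retain nonconvexity through additional binary generators, and all matrices here are invented for illustration.

```python
import numpy as np

# A zonotope is the set {c + G @ alpha : alpha in [-1, 1]^m}.
def affine(c, G, W, bias):
    """Affine maps act exactly on zonotopes."""
    return W @ c + bias, W @ G

def interval_of(c, G):
    """Axis-aligned interval hull of the zonotope."""
    r = np.abs(G).sum(axis=1)     # radius per dimension
    return c - r, c + r

def relu_overapprox(c, G):
    """Coarse ReLU over-approximation: clip the interval hull and
    return it as a box-shaped zonotope (fresh diagonal generators)."""
    lo, hi = interval_of(c, G)
    lo2, hi2 = np.maximum(lo, 0), np.maximum(hi, 0)
    return (lo2 + hi2) / 2, np.diag((hi2 - lo2) / 2)

# Illustrative input set and one affine + ReLU layer.
c = np.array([0.0, 0.0])
G = np.array([[1.0, 0.5], [0.0, 1.0]])
W = np.array([[1.0, -1.0], [2.0, 0.0]])
b = np.array([0.5, -0.5])

c, G = affine(c, G, W, b)
c, G = relu_overapprox(c, G)
lo, hi = interval_of(c, G)
print(lo, hi)
```

Each ReLU over-approximation here discards all generator correlations, which is exactly the conservatism that hybrid zonotopes avoid by encoding the activation pattern with binary variables, at the cost of the mixed-integer complexity the paper's reduction techniques then manage.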