73 research outputs found
Counterexample-Preserving Reduction for Symbolic Model Checking
The cost of LTL model checking is highly sensitive to the length of the
formula under verification. We observe that, under some specific conditions,
the input LTL formula can be reduced to an easier-to-handle one before model
checking. In our reduction, the two formulae need not be logically
equivalent, but they share the same counterexample set w.r.t. the model. In the
case that the model is symbolically represented, the condition enabling such a
reduction can be detected with lightweight effort (e.g., via SAT solving).
In this paper, we tentatively name this technique "Counterexample-Preserving
Reduction" (CePRe for short), and the proposed technique is
experimentally evaluated by adapting NuSMV.
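The abstract's key idea, detecting a reduction-enabling side condition with a SAT query, can be sketched in miniature. The Z3 snippet below is our illustration, not CePRe itself (which operates on symbolically represented models inside NuSMV): if the model's invariant forces a -> b on all reachable states, then G(a & b) and the shorter G(a) share the same counterexamples, and the soundness of that swap is a single unsatisfiability check.

```python
# Illustrative sketch only: a propositional stand-in for CePRe's side
# condition, checked with Z3. The formulas here are hypothetical examples.
from z3 import Bools, Solver, Implies, And, Not, unsat

a, b = Bools("a b")

# Suppose the symbolic model guarantees the invariant a -> b on all
# reachable states. Then the LTL formula G(a & b) has the same
# counterexamples as the shorter G(a), even though the two formulae
# are not logically equivalent in general.
invariant = Implies(a, b)

s = Solver()
# The reduction is sound iff no state allowed by the invariant
# distinguishes (a & b) from a, i.e. this query is UNSAT.
s.add(invariant, Not(And(a, b) == a))
print("reduction sound:", s.check() == unsat)
```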
On the Properties of Kullback-Leibler Divergence Between Multivariate Gaussian Distributions
Kullback-Leibler (KL) divergence is one of the most important divergence
measures between probability distributions. In this paper, we prove several
properties of KL divergence between multivariate Gaussian distributions. First,
for any two $n$-dimensional Gaussian distributions $\mathcal{N}_1$ and
$\mathcal{N}_2$, we give the supremum of $KL(\mathcal{N}_1\|\mathcal{N}_2)$
when $KL(\mathcal{N}_2\|\mathcal{N}_1)\leq\varepsilon$ for $\varepsilon>0$. For
small $\varepsilon$, we show that the supremum is
$\varepsilon+2\varepsilon^{1.5}+O(\varepsilon^2)$. This quantifies the approximate
symmetry of small KL divergence between Gaussians. We also find the infimum of
$KL(\mathcal{N}_1\|\mathcal{N}_2)$ when $KL(\mathcal{N}_2\|\mathcal{N}_1)\geq M$
for $M>0$. We give the conditions under which the supremum and infimum can be
attained. Second, for any three $n$-dimensional Gaussians $\mathcal{N}_1$,
$\mathcal{N}_2$, and $\mathcal{N}_3$, we find an upper bound of
$KL(\mathcal{N}_1\|\mathcal{N}_3)$ if $KL(\mathcal{N}_1\|\mathcal{N}_2)\leq\varepsilon_1$
and $KL(\mathcal{N}_2\|\mathcal{N}_3)\leq\varepsilon_2$ for
$\varepsilon_1,\varepsilon_2\geq 0$. For small $\varepsilon_1$ and $\varepsilon_2$,
we show the upper bound is
$3\varepsilon_1+3\varepsilon_2+2\sqrt{\varepsilon_1\varepsilon_2}+o(\varepsilon_1)+o(\varepsilon_2)$.
This reveals that KL divergence between Gaussians follows a relaxed triangle
inequality. Importantly, all the bounds in the theorems presented in this paper
are independent of the dimension $n$. Finally, we discuss the applications of
our theorems in explaining the counterintuitive phenomenon of flow-based models,
deriving a deep anomaly detection algorithm, and extending the one-step
robustness guarantee to multiple steps in safe reinforcement learning.
Comment: arXiv admin note: text overlap with arXiv:2002.0332
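As a quick numerical companion (ours, not the paper's), the NumPy snippet below evaluates the standard closed-form KL divergence between two multivariate Gaussians and illustrates the approximate symmetry: when one direction is small, the reverse direction nearly coincides with it.

```python
# Minimal numerical sketch (not from the paper): closed-form KL divergence
# between multivariate Gaussians, used to probe the approximate symmetry
# of KL(N1||N2) and KL(N2||N1) when either direction is small.
import numpy as np

def gaussian_kl(mu1, cov1, mu2, cov2):
    """KL(N(mu1, cov1) || N(mu2, cov2)) for n-dimensional Gaussians."""
    n = mu1.shape[0]
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (
        np.trace(cov2_inv @ cov1)
        + diff @ cov2_inv @ diff
        - n
        + np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
    )

rng = np.random.default_rng(0)
n = 5
mu1 = rng.normal(size=n)
mu2 = mu1 + 1e-2 * rng.normal(size=n)   # a small perturbation of the mean
A = rng.normal(size=(n, n))
cov1 = A @ A.T + n * np.eye(n)          # a well-conditioned covariance
cov2 = cov1 + 1e-2 * np.eye(n)          # a small perturbation of cov1

fwd = gaussian_kl(mu1, cov1, mu2, cov2)
bwd = gaussian_kl(mu2, cov2, mu1, cov1)
print(f"KL(N1||N2) = {fwd:.6f}, KL(N2||N1) = {bwd:.6f}")  # nearly equal
```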
Verifying Safety of Neural Networks from Topological Perspectives
Neural networks (NNs) are increasingly applied in safety-critical systems
such as autonomous vehicles. However, they are fragile and are often
ill-behaved. Consequently, their behaviors should undergo rigorous guarantees
before deployment in practice. In this paper, we propose a set-boundary
reachability method to investigate the safety verification problem of NNs from
a topological perspective. Given an NN with an input set and a safe set, the
safety verification problem is to determine whether all outputs of the NN
resulting from the input set fall within the safe set. Our method mainly
exploits the homeomorphism property and the open-map property of NNs, which
establish rigorous relationships between the boundary of the input set and the
boundary of the output set. Exploiting these two properties allows reachability
computations to be performed on extracted subsets of the input set rather than
the entire input set, thus controlling the wrapping effect in reachability
analysis and reducing the computational burden of
safety verification. The homeomorphism property exists in some widely used NNs
such as invertible residual networks (i-ResNets) and neural ordinary
differential equations (Neural ODEs); the open-map property is less strict and
easier to satisfy than the homeomorphism property. For NNs satisfying either of
these properties, our set-boundary reachability method only needs to perform
reachability analysis on the boundary of the input set. Moreover, for NNs that
do not feature these properties with respect to the input set, we explore
subsets of the input set on which the local homeomorphism property holds and
then exclude these subsets from reachability
computations. Finally, some examples demonstrate the performance of the
proposed method.
Comment: 25 pages, 11 figures. arXiv admin note: substantial text overlap with arXiv:2210.0417
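To make the boundary idea concrete, here is a toy NumPy sketch (ours, not the paper's algorithm; the safe-set specification is hypothetical): for an invertible continuous map on a compact input set, the boundary of the output set is contained in the image of the input set's boundary, so sampling only the boundary, plus one interior point, suffices for an approximate safety check.

```python
# Toy illustration (not the paper's algorithm): under the homeomorphism
# property, safety of the boundary image plus one interior point certifies
# (up to sampling density) that the whole output set is safe.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 2))
W = 0.5 * W / np.linalg.norm(W, 2)   # spectral norm 0.5 => Lip(g) < 1
b = 0.1 * rng.normal(size=2)

def residual_layer(x, W, b):
    """i-ResNet-style block x + g(x); invertible when Lip(g) < 1."""
    return x + np.tanh(x @ W + b)

# Input set: the square [-1, 1]^2; sample only its boundary.
t = np.linspace(-1.0, 1.0, 200)
ones = np.ones_like(t)
boundary = np.concatenate([
    np.stack([t, ones], 1), np.stack([t, -ones], 1),
    np.stack([ones, t], 1), np.stack([-ones, t], 1),
])

# Hypothetical safe set: the ball of radius 3 around the origin.
outputs = residual_layer(boundary, W, b)
interior_out = residual_layer(np.zeros((1, 2)), W, b)
safe = np.all(np.linalg.norm(outputs, axis=1) <= 3.0) and \
       np.linalg.norm(interior_out) <= 3.0
print("boundary-based safety check passed:", safe)
```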
Safety Verification for Neural Networks Based on Set-boundary Analysis
Neural networks (NNs) are increasingly applied in safety-critical systems
such as autonomous vehicles. However, they are fragile and are often
ill-behaved. Consequently, their behaviors should undergo rigorous guarantees
before deployment in practice. In this paper, we propose a set-boundary
reachability method to investigate the safety verification problem of NNs from
a topological perspective. Given an NN with an input set and a safe set, the
safety verification problem is to determine whether all outputs of the NN
resulting from the input set fall within the safe set. Our method mainly
exploits the homeomorphism property of NNs, which guarantees that boundaries
map to boundaries. Exploiting this property allows reachability computations
to be performed on extracted subsets of the input set rather than the entire
input set, thus controlling the wrapping effect in reachability analysis and
reducing the computational burden of safety verification. The homeomorphism
property exists in some widely used NNs, notably invertible NNs such as
invertible residual networks (i-ResNets) and neural ordinary differential
equations (Neural ODEs). For these NNs, our set-boundary reachability method
only needs to perform reachability analysis on the boundary of the input set.
For NNs which do not feature this property with respect to the input set, we
explore subsets of the input set on which the local homeomorphism property
holds, and then exclude these subsets from reachability computations. Finally,
some examples demonstrate the performance of the proposed method.
Comment: 19 pages, 7 figures
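The homeomorphism claim for i-ResNets can be sanity-checked directly. Below is a small sketch (ours, under the standard Lip(g) < 1 assumption for the residual branch) that inverts a residual block y = x + g(x) with the usual Banach fixed-point iteration, confirming the block is a bijection.

```python
# Side-note sketch (ours, not the paper's): why i-ResNet blocks are
# homeomorphisms. A block y = x + g(x) with Lip(g) < 1 is invertible,
# and its inverse is computable by the iteration x <- y - g(x).
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 3))
W = 0.5 * W / np.linalg.norm(W, 2)   # spectral norm 0.5, so Lip(g) < 1

def g(x):
    return np.tanh(W @ x)

def forward(x):
    return x + g(x)

def inverse(y, iters=50):
    x = y.copy()                     # any starting point converges
    for _ in range(iters):           # Banach fixed-point iteration
        x = y - g(x)
    return x

x = rng.normal(size=3)
y = forward(x)
print(np.allclose(inverse(y), x))    # True: the block is a bijection
```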