Branch and Bound for Piecewise Linear Neural Network Verification
The success of Deep Learning and its potential use in many safety-critical
applications has motivated research on formal verification of Neural Network
(NN) models. In this context, verification involves proving or disproving that
an NN model satisfies certain input-output properties. Despite the reputation
of learned NN models as black boxes, and the theoretical hardness of proving
useful properties about them, researchers have been successful in verifying
some classes of models by exploiting their piecewise linear structure and
taking insights from formal methods such as Satisfiability Modulo Theory.
However, these methods are still far from scaling to realistic neural networks.
To facilitate progress in this crucial area, we exploit the Mixed Integer
Linear Programming (MIP) formulation of verification to propose a family of
algorithms based on Branch-and-Bound (BaB). We show that our family contains
previous verification methods as special cases. With the help of the BaB
framework, we make three key contributions. Firstly, we identify new methods
that combine the strengths of multiple existing approaches, accomplishing
significant performance improvements over the previous state of the art. Secondly,
we introduce an effective branching strategy on ReLU non-linearities. This
branching strategy allows us to efficiently and successfully handle problems
with high-dimensional inputs and convolutional network architectures, on which
previous methods frequently fail. Finally, we propose comprehensive test data
sets and benchmarks, which include a collection of previously released test
cases. We use the data sets to conduct a thorough experimental comparison of
existing and new algorithms and to provide an inclusive analysis of the
factors impacting the hardness of verification problems.
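As a hedged illustration (not code from the paper), the following Python sketch implements the simplest member of such a BaB family: branching on the input domain and bounding with interval arithmetic. The toy one-hidden-layer network and the "output > 0" property are assumptions made for the example.

```python
import numpy as np

def interval_bounds(W1, b1, W2, b2, lo, hi):
    """Bound the scalar output over the input box [lo, hi] with interval
    arithmetic pushed through one ReLU layer."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid, rad = W1 @ c + b1, np.abs(W1) @ r           # pre-activation bounds
    h_lo, h_hi = np.maximum(mid - rad, 0.0), np.maximum(mid + rad, 0.0)
    pos, neg = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    return pos @ h_lo + neg @ h_hi + b2, pos @ h_hi + neg @ h_lo + b2

def bab_verify(W1, b1, W2, b2, lo, hi, eps=1e-3):
    """Prove output > 0 over the box, or return a candidate violation point."""
    stack = [(lo, hi)]
    while stack:
        l, h = stack.pop()
        out_lo, out_hi = interval_bounds(W1, b1, W2, b2, l, h)
        if out_lo > 0:                       # bound certifies this subdomain
            continue
        if out_hi <= 0 or np.max(h - l) < eps:
            return False, (l + h) / 2.0      # violated (or unresolved at eps)
        d = int(np.argmax(h - l))            # branch: split the widest input
        m = 0.5 * (l[d] + h[d])
        h_left, l_right = h.copy(), l.copy()
        h_left[d], l_right[d] = m, m
        stack.extend([(l, h_left), (l_right, h)])
    return True, None

# Toy usage: a 2-2-1 ReLU network; verify that its output is > 0 on [-1, 1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 1.0])
W2, b2 = np.array([1.0, 1.0]), 0.25
print(bab_verify(W1, b1, W2, b2, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

Stronger bounding procedures (for example, LP relaxations) and ReLU-based branching slot into the same prune/branch loop, which is what makes the framework a family of algorithms rather than a single method.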
A Unified View of Piecewise Linear Neural Network Verification
The success of Deep Learning and its potential use in many safety-critical
applications has motivated research on formal verification of Neural Network
(NN) models. Despite the reputation of learned NN models as black boxes and
the theoretical hardness of proving their properties, researchers
have been successful in verifying some classes of models by exploiting their
piecewise linear structure and taking insights from formal methods such as
Satisfiability Modulo Theory. However, these methods are still far from
scaling to realistic neural networks. To facilitate progress in this crucial
area, we
make two key contributions. First, we present a unified framework that
encompasses previous methods. This analysis results in the identification of
new methods that combine the strengths of multiple existing approaches,
accomplishing a speedup of two orders of magnitude compared to the previous
state of the art. Second, we propose a new data set of benchmarks which
includes a collection of previously released test cases. We use the benchmark
to provide the first experimental comparison of existing algorithms and to
identify the factors impacting the hardness of verification problems.
Comment: Updated version of "Piecewise Linear Neural Network verification: A
comparative study".
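For intuition, here is a minimal Python sketch of one bounding relaxation that such a unified treatment compares: the well-known "triangle" LP relaxation of an ambiguous ReLU (used by Planet-style methods). The numbers are illustrative, and none of this is notation taken from the abstract itself.

```python
def relu_upper_plane(l, u):
    """For y = relu(x) with pre-activation bounds l < 0 < u, return (a, b)
    such that y <= a*x + b is the tightest linear upper bound; together with
    y >= 0 and y >= x it gives the convex hull of the ReLU graph on [l, u]."""
    a = u / (u - l)
    return a, -a * l

# Example: bounds l = -1, u = 3 give the upper plane y <= 0.75*x + 0.75,
# which touches relu(x) exactly at both endpoints.
a, b = relu_upper_plane(-1.0, 3.0)
print(a, b)
```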
Empirical Bounds on Linear Regions of Deep Rectifier Networks
We can compare the expressiveness of neural networks that use rectified
linear units (ReLUs) by the number of linear regions, which reflect the number
of pieces of the piecewise linear functions modeled by such networks. However,
enumerating these regions is prohibitive and the known analytical bounds are
identical for networks with the same dimensions. In this work, we approximate the
number of linear regions through empirical bounds based on features of the
trained network and probabilistic inference. Our first contribution is a method
to sample the activation patterns defined by ReLUs using universal hash
functions. This method is based on a Mixed-Integer Linear Programming (MILP)
formulation of the network and an algorithm for probabilistic lower bounds of
MILP solution sets that we call MIPBound, which is considerably faster than
exact counting and reaches values in similar orders of magnitude. Our second
contribution is a tighter activation-based bound for the maximum number of
linear regions, which is particularly stronger in networks with narrow layers.
Combined, these bounds yield a fast proxy for the number of linear regions of
a deep neural network.
Comment: AAAI 2020.
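For intuition about why exact counting is prohibitive, the following hedged Python sketch shows the naive baseline such work improves on: estimating the number of distinct activation patterns by random sampling, which can only undercount. The tiny random network is an assumption made for the example; the paper's MIPBound replaces this with MILP-based probabilistic bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small random 2-8-8 ReLU network, purely for illustration.
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)

patterns = set()
for _ in range(100_000):
    x = rng.uniform(-1.0, 1.0, size=2)
    z1 = W1 @ x + b1                       # first-layer pre-activations
    z2 = W2 @ np.maximum(z1, 0.0) + b2     # second-layer pre-activations
    patterns.add((tuple(z1 > 0), tuple(z2 > 0)))

# A lower bound on the true count: sampling can only miss regions, and the
# number of candidate patterns grows exponentially with network width.
print(len(patterns), "distinct activation patterns found by sampling")
```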
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Deep neural networks have emerged as a widely used and effective means for
tackling complex, real-world problems. However, a major obstacle in applying
them to safety-critical systems is the great difficulty in providing formal
guarantees about their behavior. We present a novel, scalable, and efficient
technique for verifying properties of deep neural networks (or providing
counter-examples). The technique is based on the simplex method, extended to
handle the non-convex Rectified Linear Unit (ReLU) activation function, which
is a crucial ingredient in many modern neural networks. The verification
procedure tackles neural networks as a whole, without making any simplifying
assumptions. We evaluated our technique on a prototype deep neural network
implementation of the next-generation airborne collision avoidance system for
unmanned aircraft (ACAS Xu). Results show that our technique can successfully
prove properties of networks that are an order of magnitude larger than the
largest networks verified using existing methods.
Comment: This is the extended version of a paper with the same title that
appeared at CAV 2017.
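To convey the core idea of handling the non-convex ReLU by case splitting, here is a hedged Python sketch (scipy is an assumed dependency) that makes a one-hidden-layer network linear by fixing each ReLU's phase and solves one LP per phase assignment. Reluplex itself avoids this exponential enumeration by splitting lazily inside a modified simplex procedure, so this is only the naive version of the case split, with an illustrative toy network.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def min_output(W1, b1, W2, b2, lo, hi):
    """Exact minimum of W2 @ relu(W1 @ x + b1) + b2 over the box [lo, hi],
    by exhaustively fixing each ReLU active/inactive (2^m cases)."""
    m, best = W1.shape[0], np.inf
    for phase in itertools.product([0, 1], repeat=m):
        act = np.array(phase, dtype=float)
        c = (W2 * act) @ W1                  # objective is linear in x now
        # Phase constraints: z_i >= 0 if active, z_i <= 0 if inactive,
        # written in linprog form A_ub @ x <= b_ub with z = W1 @ x + b1.
        sign = np.where(act > 0, -1.0, 1.0)
        A_ub, b_ub = sign[:, None] * W1, -sign * b1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=list(zip(lo, hi)))
        if res.success:
            best = min(best, res.fun + (W2 * act) @ b1 + b2)
    return best

# Toy usage: the minimum is 0.25 > 0, so "output > 0" holds on [-1, 1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 1.0])
W2, b2 = np.array([1.0, 1.0]), 0.25
print(min_output(W1, b1, W2, b2, [-1.0, -1.0], [1.0, 1.0]))
```

The branch-and-bound methods described in the first abstract can be read as organizing exactly these case splits into a search tree, pruning subtrees whose bounds already certify the property.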