95 research outputs found
Robustness Verification of Support Vector Machines
We study the problem of formally verifying the robustness to adversarial
examples of support vector machines (SVMs), a major machine learning model for
classification and regression tasks. Following a recent stream of works on
formal robustness verification of (deep) neural networks, our approach relies
on a sound abstract version of a given SVM classifier to be used for checking
its robustness. This methodology is parametric on a given numerical abstraction
of real values and, analogously to the case of neural networks, needs neither
abstract least upper bounds nor widening operators on this abstraction. The
standard interval domain provides a simple instantiation of our abstraction
technique, which is enhanced with the domain of reduced affine forms, an
efficient abstraction of the zonotope abstract domain. This robustness
verification technique has been fully implemented and experimentally evaluated
on SVMs based on linear and nonlinear (polynomial and radial basis function)
kernels, which have been trained on the popular MNIST dataset of images and on
the recent and more challenging Fashion-MNIST dataset. The experimental results
of our prototype SVM robustness verifier are encouraging: the automated
verification is fast and scalable, and it proves robustness for a significantly
high percentage of the MNIST test set, in particular when compared with the
analogous provable robustness of neural networks.
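For intuition, the interval-domain instantiation is easiest to see on a linear SVM: propagating an ℓ∞-ball of perturbations through the score w·x + b with interval arithmetic gives sound bounds, and the input is provably robust when the whole score interval keeps the sign of the original classification. A minimal sketch under those assumptions (illustrative names, not the paper's implementation):

```python
import numpy as np

def verify_linear_svm(w, b, x, eps):
    """Soundly check that a linear SVM score w.x + b keeps its sign on the
    L-infinity ball of radius eps around x, via interval arithmetic.

    Each term w_i * x_i' over x_i' in [x_i - eps, x_i + eps] attains its
    extremes at the interval ends (which end depends on sign(w_i));
    summing those extremes bounds the score soundly.
    """
    w, x = np.asarray(w, float), np.asarray(x, float)
    lo = b + np.sum(np.where(w >= 0, w * (x - eps), w * (x + eps)))
    hi = b + np.sum(np.where(w >= 0, w * (x + eps), w * (x - eps)))
    score = float(w @ x + b)
    # Robust iff the entire score interval [lo, hi] lies on one side of 0.
    return bool(lo > 0) if score >= 0 else bool(hi < 0)
```

Because the interval bounds are sound but not exact, the check may answer "not robust" for a robust input, but never the converse; that one-sided guarantee is what makes the verification sound.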
Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems
Neural networks (NNs) are now routinely implemented on systems that must
operate in uncertain environments, but the tools for formally analyzing how
this uncertainty propagates to NN outputs are not yet commonplace. Computing
tight bounds on NN output sets (given an input set) provides a measure of
confidence associated with the NN decisions and is essential to deploy NNs on
safety-critical systems. Recent works approximate the propagation of sets
through nonlinear activations or partition the uncertainty set to provide a
guaranteed outer bound on the set of possible NN outputs. However, the bound
looseness causes excessive conservatism and/or the computation is too slow for
online analysis. This paper unifies propagation and partition approaches to
provide a family of robustness analysis algorithms that give tighter bounds
than existing works for the same amount of computation time (or reduced
computational effort for a desired accuracy level). Moreover, we provide new
partitioning techniques that are aware of their current bound estimates and
desired boundary shape (e.g., lower bounds, weighted ℓ∞-ball, convex
hull), leading to further improvements in the computation-tightness tradeoff.
The paper demonstrates the tighter bounds and reduced conservatism of the
proposed robustness analysis framework with examples from model-free RL and
forward kinematics learning.
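As a toy illustration of why partitioning tightens bounds, one can compare one-shot interval bound propagation through a small ReLU network against the union of per-cell bounds over a uniform grid of input sub-boxes. This is a hypothetical sketch with a fixed grid, not the paper's algorithms, which partition adaptively:

```python
import itertools
import numpy as np

def interval_bound(W1, b1, W2, b2, lo, hi):
    """Interval bound propagation through a one-hidden-layer ReLU net."""
    # Affine layer: split weights by sign for sound interval arithmetic.
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    l1, u1 = Wp @ lo + Wn @ hi + b1, Wp @ hi + Wn @ lo + b1
    # ReLU is monotone, so it maps interval ends to interval ends.
    l1, u1 = np.maximum(l1, 0), np.maximum(u1, 0)
    Wp, Wn = np.maximum(W2, 0), np.minimum(W2, 0)
    return Wp @ l1 + Wn @ u1 + b2, Wp @ u1 + Wn @ l1 + b2

def partitioned_bound(W1, b1, W2, b2, lo, hi, k):
    """Split the input box into k^d sub-boxes, bound each, take the union.
    The union of per-cell bounds is never looser than the one-shot bound,
    because each cell's interval dependencies are smaller."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    edges = [np.linspace(l, h, k + 1) for l, h in zip(lo, hi)]
    out_lo, out_hi = np.inf, -np.inf
    for idx in itertools.product(range(k), repeat=len(lo)):
        cl = np.array([edges[d][i] for d, i in enumerate(idx)])
        ch = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        l, u = interval_bound(W1, b1, W2, b2, cl, ch)
        out_lo, out_hi = min(out_lo, l[0]), max(out_hi, u[0])
    return out_lo, out_hi
```

On f(x) = ReLU(x₁+x₂) + ReLU(x₁−x₂) over [−1,1]², one-shot propagation yields the output interval [0, 4], while a 2×2 partition already tightens the upper bound to 3, illustrating the computation-tightness tradeoff the paper optimizes.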
NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems
This paper presents the Neural Network Verification (NNV) software tool, a
set-based verification framework for deep neural networks (DNNs) and
learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection
of reachability algorithms that make use of a variety of set representations,
such as polyhedra, star sets, zonotopes, and abstract-domain representations.
NNV supports both exact (sound and complete) and over-approximate (sound)
reachability algorithms for verifying safety and robustness properties of
feed-forward neural networks (FFNNs) with various activation functions. For
learning-enabled CPS, such as closed-loop control systems incorporating neural
networks, NNV provides exact and over-approximate reachability analysis schemes
for linear plant models and FFNN controllers with piecewise-linear activation
functions, such as ReLUs. For similar neural network control systems (NNCS)
that instead have nonlinear plant models, NNV supports over-approximate
analysis by combining the star set analysis used for FFNN controllers with
zonotope-based analysis for nonlinear plant dynamics building on CORA. We
evaluate NNV using two real-world case studies: the first is safety
verification of ACAS Xu networks and the second deals with the safety
verification of a deep learning-based adaptive cruise control system.
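The closed-loop scheme NNV implements can be caricatured with plain boxes in place of star sets or zonotopes: over-approximate the controller's output set with interval propagation through the FFNN, push the resulting input-plus-control box through the linear plant, and iterate to obtain a sequence of reachable sets. A minimal box-reachability sketch under those simplifying assumptions (illustrative names, not NNV's API):

```python
import numpy as np

def relu_net_interval(layers, lo, hi):
    """Interval over-approximation of a ReLU FFNN's output set.
    `layers` is a list of (W, b) pairs; ReLU on hidden layers only."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def closed_loop_reach(A, B, layers, lo, hi, steps):
    """Box reachable sets of x+ = A x + B u with u = NN(x).
    Returns the list of state boxes [(lo_0, hi_0), ..., (lo_T, hi_T)]."""
    boxes = [(lo, hi)]
    Ap, An = np.maximum(A, 0), np.minimum(A, 0)
    Bp, Bn = np.maximum(B, 0), np.minimum(B, 0)
    for _ in range(steps):
        ul, uh = relu_net_interval(layers, lo, hi)
        lo, hi = (Ap @ lo + An @ hi + Bp @ ul + Bn @ uh,
                  Ap @ hi + An @ lo + Bp @ uh + Bn @ ul)
        boxes.append((lo, hi))
    return boxes
```

Safety checking then reduces to testing each box against the unsafe region. Boxes ignore the correlation between x and u = NN(x), which is exactly the dependency that star sets and zonotopes retain, so the sets above are much looser than NNV's; the control flow of the analysis is the same.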
- …