Adversarial Training and Provable Robustness: A Tale of Two Objectives
We propose a principled framework that combines adversarial training and
provable robustness verification for training certifiably robust neural
networks. We formulate training as a joint optimization problem with both
empirical and provable robustness objectives and develop a novel
gradient-descent technique that eliminates bias in stochastic
multi-gradients. We provide both a theoretical analysis of the convergence of
the proposed technique and an experimental comparison with state-of-the-art
methods. Results on MNIST and CIFAR-10 show that our method consistently
matches or outperforms prior approaches for provable ℓ∞ robustness. Notably,
we achieve 6.60% verified test error on MNIST at epsilon = 0.3, and 66.57% on
CIFAR-10 at epsilon = 8/255. Comment: Accepted at AAAI 202
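The joint-optimization idea can be made concrete for the two-objective case. Below is a minimal sketch of the standard min-norm combination of two objective gradients (the building block of multiple-gradient descent); it is not the authors' debiased stochastic technique, and the names `g_emp`/`g_prov` are illustrative, assuming full gradients of the empirical and provable robustness losses are available:

```python
import numpy as np

def combine_two_gradients(g_emp, g_prov):
    """Min-norm convex combination d = w*g_emp + (1-w)*g_prov of two
    objective gradients; d is a common descent direction for both
    objectives whenever one exists (two-objective MGDA)."""
    diff = g_prov - g_emp
    denom = float(diff @ diff)
    if denom == 0.0:
        return g_emp  # identical gradients: nothing to trade off
    # Closed-form minimiser of ||w*g_emp + (1-w)*g_prov||^2 over w in [0, 1]
    w = float(np.clip((g_prov @ diff) / denom, 0.0, 1.0))
    return w * g_emp + (1.0 - w) * g_prov
```

The paper's contribution lies in removing the bias this combination incurs when the two gradients are only observed through noisy mini-batch estimates; that correction is beyond this sketch.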
Efficient Neural Network Verification Using Branch and Bound
Neural networks have demonstrated great success in modern machine learning systems. However, they remain susceptible to incorrect corner-case behaviors, often behaving unpredictably and producing surprisingly wrong results. It is therefore desirable to formally guarantee their trustworthiness with respect to certain robustness properties when they are applied in safety- or security-sensitive systems like autonomous vehicles and aircraft. Unfortunately, the task is extremely challenging due to the complexity of neural networks, and traditional formal methods are not efficient enough to verify practical properties. Recently, the Branch and Bound (BaB) framework has been extended to neural network verification and has shown great success in accelerating verification.
This dissertation focuses on state-of-the-art neural network verifiers using BaB. We first introduce two efficient neural network verifiers, ReluVal and Neurify, built on basic BaB approaches with two main steps: (1) they recursively split the original verification problem into easier, independent subproblems by splitting input ranges or hidden neurons; (2) for each subproblem, an efficient and tight bound propagation method called symbolic interval analysis produces sound output bounds using convex linear relaxations. Both ReluVal and Neurify are three orders of magnitude faster than previous state-of-the-art formal analysis systems on standard verification benchmarks.
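The core of any such bound propagation is pushing an input box through the network layer by layer. The sketch below shows the plain interval-arithmetic version only; symbolic interval analysis additionally keeps symbolic linear expressions in the inputs to cancel the dependency losses that plain intervals suffer:

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    # Each output lower bound pairs positive weights with input lower
    # bounds and negative weights with input upper bounds (and vice
    # versa for the upper bound) -- standard interval arithmetic.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_interval(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny net f(x) = relu(x) + relu(-x) = |x| on x in [-1, 1]:
lo, hi = np.array([-1.0]), np.array([1.0])
lo, hi = affine_interval(lo, hi, np.array([[1.0], [-1.0]]), np.zeros(2))
lo, hi = relu_interval(lo, hi)
lo, hi = affine_interval(lo, hi, np.array([[1.0, 1.0]]), np.zeros(1))
# Plain intervals give [0, 2] although the true range is [0, 1]:
# both hidden neurons depend on the same x, which intervals ignore.
```

Symbolic bounds, and the BaB splits described above, exist precisely to recover the tightness that this concrete example loses.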
However, basic BaB approaches like Neurify have to encode each subproblem as a Linear Programming (LP) problem and solve it with expensive LP solvers, significantly limiting overall efficiency. This is because each BaB step introduces neuron split constraints (e.g., a ReLU neuron's input constrained to be larger or smaller than 0), which are hard to handle with existing efficient bound propagation methods. We propose novel bound propagation methods, α-CROWN and its improved variant β-CROWN, which solve the verification problem by optimizing Lagrangian multipliers with gradient ascent, without calling any expensive LP solvers. They build on the previous work CROWN, a generalized, efficient bound propagation method using linear relaxation. BaB verification using α-CROWN and β-CROWN can not only provide tighter output estimations than most bound propagation methods but also fully leverage GPU acceleration with massive parallelization.
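The Lagrangian idea can be illustrated on a toy problem. The sketch below is not the β-CROWN derivation itself: it handles a single generic split constraint a·x ≤ 0 over an input box, but it shows why gradient ascent on a multiplier yields sound bounds without an LP solver (by weak duality, every value of the dual function is a valid lower bound):

```python
import numpy as np

def box_min(coef, lo, hi):
    # argmin of coef . x over the box [lo, hi] (closed form).
    return np.where(coef >= 0, lo, hi)

def dual_lower_bound(c, a, lo, hi, steps=200, lr=0.1):
    """Lower-bound  min c.x  s.t.  x in [lo, hi] and a.x <= 0  via
    projected gradient ascent on the dual variable beta >= 0:
        g(beta) = min_{x in box} (c + beta*a) . x
    Every g(beta) is sound, so the best one seen is returned."""
    beta, best = 0.0, -np.inf
    for _ in range(steps):
        coef = c + beta * a
        x = box_min(coef, lo, hi)           # inner min: closed form
        best = max(best, float(coef @ x))   # g(beta), always a valid bound
        beta = max(0.0, beta + lr * float(a @ x))  # supergradient ascent
    return best
```

In β-CROWN the multipliers attach to per-neuron ReLU split constraints inside the CROWN backward pass, and all subproblems are batched on GPU; the one-constraint toy above only conveys the duality argument.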
Combining our methods with BaB yields the state-of-the-art verifier α,β-CROWN (alpha-beta-CROWN), the winning tool in the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score. α,β-CROWN can be three orders of magnitude faster than LP-solver-based BaB verifiers and is notably faster than all existing approaches on GPUs. Recently, we further generalized β-CROWN and proposed an efficient iterative approach that tightens all intermediate-layer bounds under neuron split constraints, strengthening bound tightness without LP solvers. This new approach to BaB can greatly improve the efficiency of α,β-CROWN, especially on several challenging benchmarks.
Lastly, we study verifiable training, which incorporates verification properties into the training procedure to enhance the verifiable robustness of trained models and to scale verification to larger models and datasets. We propose two general verifiable training frameworks: (1) MixTrain, which significantly improves verifiable training efficiency and scalability, and (2) adaptive verifiable training, which improves verifiable robustness by accounting for label similarity. The combination of verifiable training and BaB-based verifiers opens promising directions for more efficient and scalable neural network verification.
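A common core of verifiable training is minimizing a sound upper bound on the loss over the whole perturbation region, typically built from worst-case logits, mixed with the natural loss. This is a generic sketch of that construction, not MixTrain's exact scheme (MixTrain additionally samples which examples receive the expensive verifiable term; that scheduling is omitted, and the mixing weight `k` here is illustrative):

```python
import numpy as np

def softmax_xent(z, label):
    # Numerically stable cross-entropy on logits z.
    z = z - z.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def worst_case_logits(logit_lo, logit_hi, label):
    """Pessimistic logits: the certified lower bound for the true class
    and upper bounds for every other class, so the loss on them
    upper-bounds the loss anywhere in the certified input region."""
    z = logit_hi.copy()
    z[label] = logit_lo[label]
    return z

def mixed_loss(nat_logits, logit_lo, logit_hi, label, k=0.5):
    # Convex mix of natural and verifiable (worst-case) loss.
    robust = softmax_xent(worst_case_logits(logit_lo, logit_hi, label), label)
    return (1.0 - k) * softmax_xent(nat_logits, label) + k * robust
```

The logit bounds would come from a bound propagation method such as the interval analysis above, which is what ties verifiable training back to the verifiers it is meant to help.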
Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems
Neural networks (NNs) are now routinely implemented on systems that must
operate in uncertain environments, but the tools for formally analyzing how
this uncertainty propagates to NN outputs are not yet commonplace. Computing
tight bounds on NN output sets (given an input set) provides a measure of
confidence associated with the NN decisions and is essential to deploy NNs on
safety-critical systems. Recent works approximate the propagation of sets
through nonlinear activations or partition the uncertainty set to provide a
guaranteed outer bound on the set of possible NN outputs. However, the bound
looseness causes excessive conservatism and/or the computation is too slow for
online analysis. This paper unifies propagation and partition approaches to
provide a family of robustness analysis algorithms that give tighter bounds
than existing works for the same amount of computation time (or reduced
computational effort for a desired accuracy level). Moreover, we provide new
partitioning techniques that are aware of their current bound estimates and
desired boundary shape (e.g., lower bounds, weighted ℓ∞-ball, convex
hull), leading to further improvements in the computation-tightness tradeoff.
The paper demonstrates the tighter bounds and reduced conservatism of the
proposed robustness analysis framework with examples from model-free RL and
forward kinematics learning.
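The effect of partitioning is easy to demonstrate: propagate the whole input box once, then propagate a uniform grid of sub-cells and take the union of their output boxes. A minimal sketch (uniform partitioning only, assuming a toy fully connected ReLU net; the paper's adaptive, boundary-shape-aware partitioners refine this):

```python
import numpy as np

def interval_forward(lo, hi, layers):
    # Plain interval propagation through affine layers with ReLU between
    # (ignores dependencies between neurons, hence loose on large boxes).
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def partitioned_bounds(lo, hi, layers, splits):
    # Uniformly split each input dimension, propagate every cell, and
    # union the per-cell output boxes; the union is never looser than
    # propagating the whole box at once.
    edges = [np.linspace(l, h, splits + 1) for l, h in zip(lo, hi)]
    out_lo = out_hi = None
    for idx in np.ndindex(*([splits] * len(lo))):
        c_lo = np.array([edges[d][i] for d, i in enumerate(idx)])
        c_hi = np.array([edges[d][i + 1] for d, i in enumerate(idx)])
        o_lo, o_hi = interval_forward(c_lo, c_hi, layers)
        out_lo = o_lo if out_lo is None else np.minimum(out_lo, o_lo)
        out_hi = o_hi if out_hi is None else np.maximum(out_hi, o_hi)
    return out_lo, out_hi
```

On the net f(x) = relu(x) - relu(x) = 0 over x in [-1, 1], whole-box propagation returns [-1, 1] while four cells already shrink the union to [-0.5, 0.5], illustrating the computation-tightness tradeoff the paper optimizes.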
Certified Training: Small Boxes are All You Need
We propose SABR, a novel certified training method that outperforms
existing methods across perturbation magnitudes on MNIST, CIFAR-10, and
TinyImageNet, in terms of both standard and certifiable accuracy. The key
insight behind SABR is that propagating interval bounds for a small but
carefully selected subset of the adversarial input region is sufficient to
approximate the worst-case loss over the whole region while significantly
reducing approximation errors. SABR not only establishes a new
state of the art on all commonly used benchmarks but, more importantly, points
to a new class of certified training methods that promise to overcome the
robustness-accuracy trade-off.
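The box-selection step described above can be sketched as follows. This is my reading of the abstract, not SABR's exact procedure: a box of radius `tau` (assumed `tau <= eps`) is centered at an adversarially found point `x_adv`, shifted so it stays inside the ε-ball around the clean input `x`, and clipped to the valid pixel range [0, 1]:

```python
import numpy as np

def small_box(x, x_adv, eps, tau):
    """Select a small box of radius tau inside the eps-ball around x,
    centered as close to the adversarial point x_adv as possible.
    Interval bounds are then propagated through this box only,
    instead of the full eps-ball, to compute the training loss."""
    # Shift the centre so [center - tau, center + tau] fits in the eps-ball.
    center = np.clip(x_adv, x - eps + tau, x + eps - tau)
    lo = np.clip(center - tau, 0.0, 1.0)  # keep valid pixel range
    hi = np.clip(center + tau, 0.0, 1.0)
    return lo, hi
```

Because the propagated region is much smaller, the interval approximation error shrinks, which is the mechanism the abstract credits for the improved accuracy-robustness balance.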