Efficient Neural Network Robustness Certification with General Activation Functions
Finding minimum distortion of adversarial examples and thus certifying
robustness in neural network classifiers for given data points is known to be a
challenging problem. Nevertheless, recently it has been shown to be possible to
give a non-trivial certified lower bound of minimum adversarial distortion, and
some recent progress has been made towards this direction by exploiting the
piece-wise linear nature of ReLU activations. However, a generic robustness
certification for general activation functions still remains largely
unexplored. To address this issue, in this paper we introduce CROWN, a general
framework to certify robustness of neural networks with general activation
functions for given input data points. The novelty in our algorithm consists of
bounding a given activation function with linear and quadratic functions, hence
allowing it to tackle general activation functions including but not limited to
four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we
facilitate the search for a tighter certified lower bound by adaptively
selecting appropriate surrogates for each neuron activation. Experimental
results show that CROWN on ReLU networks can notably improve the certified
lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while
having comparable computational efficiency. Furthermore, CROWN also
demonstrates its effectiveness and flexibility on networks with general
activation functions, including tanh, sigmoid and arctan.
Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed equally.
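The central technique in the abstract — sandwiching each activation between a linear lower and upper bound on its pre-activation interval — can be illustrated for the sigmoid on a region where it is concave. This is a minimal sketch, not the paper's CROWN implementation; the function name `sigmoid_linear_bounds` and the choice of the midpoint tangent as the upper bound are illustrative assumptions (CROWN selects surrogates adaptively per neuron):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Linear bounds for sigmoid on [l, u], assuming 0 <= l < u.

    On [0, inf) the sigmoid is concave, so the chord through the
    endpoints is a sound lower bound and any tangent line (here, at
    the midpoint) is a sound upper bound.
    Returns ((a_lo, b_lo), (a_up, b_up)) with
    a_lo*x + b_lo <= sigmoid(x) <= a_up*x + b_up for all x in [l, u].
    """
    assert 0 <= l < u
    # Lower bound: chord through (l, sigmoid(l)) and (u, sigmoid(u)).
    a_lo = (sigmoid(u) - sigmoid(l)) / (u - l)
    b_lo = sigmoid(l) - a_lo * l
    # Upper bound: tangent at the midpoint m; sigmoid'(m) = s(m)(1 - s(m)).
    m = 0.5 * (l + u)
    a_up = sigmoid(m) * (1.0 - sigmoid(m))
    b_up = sigmoid(m) - a_up * m
    return (a_lo, b_lo), (a_up, b_up)
```

Once every activation is bracketed this way, the bounds compose layer by layer into a certified lower bound on the minimum adversarial distortion.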
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Verifying robustness of neural network classifiers has attracted great
interest and attention due to the success of deep neural networks and their
unexpected vulnerability to adversarial perturbations. Although finding minimum
adversarial distortion of neural networks (with ReLU activations) has been
shown to be an NP-complete problem, obtaining a non-trivial lower bound of
minimum distortion as a provable robustness guarantee is possible. However,
most previous works only focused on simple fully-connected layers (multilayer
perceptrons) and were limited to ReLU activations. This motivates us to propose
a general and efficient framework, CNN-Cert, that is capable of certifying
robustness on general convolutional neural networks. Our framework is general
-- we can handle various architectures including convolutional layers,
max-pooling layers, batch normalization layers, residual blocks, as well as
general activation functions; our approach is efficient -- by exploiting the
special structure of convolutional layers, we achieve up to 17 and 11 times
speed-up compared to the state-of-the-art certification algorithms (e.g.
Fast-Lin, CROWN) and up to 366 times speed-up compared to the dual-LP approach
while our algorithm obtains similar or even better verification bounds. In
addition, CNN-Cert generalizes state-of-the-art algorithms such as Fast-Lin
and CROWN. We demonstrate by extensive experiments that our method outperforms
state-of-the-art lower-bound-based certification algorithms in terms of both
bound quality and speed.
Comment: Accepted by AAAI 2019.
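The structural ingredient the abstract exploits — each convolutional output is linear in its inputs, so bounds can be propagated by splitting weights by sign rather than enumerating inputs — can be shown with simple interval bounds through a 1-D convolution. This is a minimal sketch, not CNN-Cert itself (which propagates linear rather than interval bounds); `conv1d_interval` and its signature are illustrative assumptions:

```python
def conv1d_interval(lo, hi, w, b):
    """Propagate elementwise bounds lo[i] <= x[i] <= hi[i] through a
    1-D convolution y[j] = b + sum_k w[k] * x[j + k] (valid positions).

    Each output is linear in the inputs, so the extremes are attained
    by picking, per weight, the input endpoint matching the weight's
    sign; the resulting output bounds are exact.
    """
    n, k = len(lo), len(w)
    out_lo, out_hi = [], []
    for j in range(n - k + 1):
        y_lo = y_hi = b
        for i, wi in enumerate(w):
            if wi >= 0:
                y_lo += wi * lo[j + i]   # positive weight: low input -> low output
                y_hi += wi * hi[j + i]
            else:
                y_lo += wi * hi[j + i]   # negative weight: high input -> low output
                y_hi += wi * lo[j + i]
        out_lo.append(y_lo)
        out_hi.append(y_hi)
    return out_lo, out_hi
```

Because the kernel slides over the input, each output touches only `k` inputs, which is the kind of sparsity a convolution-aware certifier exploits for its speed-up over generic dense-layer bound propagation.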