CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Verifying robustness of neural network classifiers has attracted great
interest and attention due to the success of deep neural networks and their
unexpected vulnerability to adversarial perturbations. Although finding minimum
adversarial distortion of neural networks (with ReLU activations) has been
shown to be an NP-complete problem, obtaining a non-trivial lower bound of
minimum distortion as a provable robustness guarantee is possible. However,
most previous works only focused on simple fully-connected layers (multilayer
perceptrons) and were limited to ReLU activations. This motivates us to propose
a general and efficient framework, CNN-Cert, that is capable of certifying
robustness on general convolutional neural networks. Our framework is general
-- we can handle various architectures including convolutional layers,
max-pooling layers, batch normalization layers, and residual blocks, as well as
general activation functions; our approach is efficient -- by exploiting the
special structure of convolutional layers, we achieve up to 17 and 11 times
speed-up compared to state-of-the-art certification algorithms (e.g. Fast-Lin
and CROWN) and up to 366 times speed-up compared to the dual-LP approach,
while our algorithm obtains similar or even better verification bounds. In
addition, CNN-Cert generalizes state-of-the-art algorithms such as Fast-Lin and
CROWN. We demonstrate by extensive experiments that our method outperforms
state-of-the-art lower-bound-based certification algorithms in terms of both
bound quality and speed.
Comment: Accepted by AAAI 2019
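The certificates described above rest on propagating bounds on the input perturbation through the layers of the network. As a rough illustration of the underlying idea (not CNN-Cert's actual algorithm, which uses tighter linear relaxations), the sketch below propagates interval bounds through one linear layer; a convolution is itself a linear map, so the same sign-split applies to its unrolled weight matrix. All names here are hypothetical.

```python
import numpy as np

def interval_bounds_linear(lo, hi, W, b):
    """Propagate elementwise input bounds lo <= x <= hi through y = W @ x + b.

    Positive weights pair with the bound of the same direction, negative
    weights with the opposite one; a convolution, being linear, admits the
    same treatment on its unrolled weight matrix.
    """
    Wp = np.maximum(W, 0.0)   # positive part of the weights
    Wn = np.minimum(W, 0.0)   # negative part of the weights
    y_lo = Wp @ lo + Wn @ hi + b
    y_hi = Wp @ hi + Wn @ lo + b
    return y_lo, y_hi
```

Every input inside the box maps into [y_lo, y_hi]; if, after propagating through all layers, the lower bound of the margin between the true class and every competitor stays positive, the point is certified robust at that perturbation radius.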
Efficient Neural Network Robustness Certification with General Activation Functions
Finding minimum distortion of adversarial examples and thus certifying
robustness in neural network classifiers for given data points is known to be a
challenging problem. Nevertheless, recently it has been shown to be possible to
give a non-trivial certified lower bound of minimum adversarial distortion, and
some recent progress has been made towards this direction by exploiting the
piece-wise linear nature of ReLU activations. However, a generic robustness
certification for general activation functions still remains largely
unexplored. To address this issue, in this paper we introduce CROWN, a general
framework to certify robustness of neural networks with general activation
functions for given input data points. The novelty of our algorithm lies in
bounding a given activation function with linear and quadratic functions, hence
allowing it to tackle general activation functions including but not limited to
four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we
facilitate the search for a tighter certified lower bound by adaptively
selecting appropriate surrogates for each neuron activation. Experimental
results show that CROWN on ReLU networks can notably improve the certified
lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while
having comparable computational efficiency. Furthermore, CROWN also
demonstrates its effectiveness and flexibility on networks with general
activation functions, including tanh, sigmoid and arctan.
Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed
equally.
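The core step described above is sandwiching each activation between two linear functions on its pre-activation interval. A minimal sketch for one case, the sigmoid on an interval where it is convex (u <= 0), so the chord through the endpoints bounds it from above and a tangent from below; the function names are illustrative, and CROWN's actual surrogate selection is adaptive and covers all interval positions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_linear_bounds(l, u):
    """Linear bounds a*z + b for sigmoid(z) on [l, u], assuming l < u <= 0.

    On z <= 0 the sigmoid is convex, so the chord through the endpoints is
    an upper bound and the tangent at the midpoint is a lower bound.
    """
    assert l < u <= 0
    # Upper bound: chord through (l, sigmoid(l)) and (u, sigmoid(u)).
    a_u = (sigmoid(u) - sigmoid(l)) / (u - l)
    b_u = sigmoid(l) - a_u * l
    # Lower bound: tangent at the midpoint m.
    m = 0.5 * (l + u)
    s = sigmoid(m)
    a_l = s * (1.0 - s)       # derivative of sigmoid at m
    b_l = s - a_l * m
    return (a_l, b_l), (a_u, b_u)
```

Replacing each neuron's activation by such a pair of linear functions makes the whole network's output bounded by two linear functions of the input, which is what yields a certified lower bound on the adversarial distortion.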
Quantifying the Cost of Learning in Queueing Systems
Queueing systems are widely applicable stochastic models with use cases in
communication networks, healthcare, service systems, etc. Although their
optimal control has been extensively studied, most existing approaches assume
perfect knowledge of system parameters. Of course, this assumption rarely holds
in practice where there is parameter uncertainty, thus motivating a recent line
of work on bandit learning for queueing systems. This nascent stream of
research focuses on the asymptotic performance of the proposed algorithms.
In this paper, we argue that an asymptotic metric, which focuses on
late-stage performance, is insufficient to capture the intrinsic statistical
complexity of learning in queueing systems which typically occurs in the early
stage. Instead, we propose the Cost of Learning in Queueing (CLQ), a new metric
that quantifies the maximum increase in time-averaged queue length caused by
parameter uncertainty. We characterize the CLQ of a single-queue multi-server
system, and then extend these results to multi-queue multi-server systems and
networks of queues. In establishing our results, we propose a unified analysis
framework for CLQ that bridges Lyapunov and bandit analysis, which could be of
independent interest.
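To make the flavor of the metric concrete, here is a small simulation sketch (illustrative only, not the paper's model or algorithm): a discrete-time queue where a controller picks one of two servers each slot, comparing the time-averaged queue length of an oracle that knows the service rates against a learner that estimates them from observed successes. The gap between the two averages is a crude empirical analogue of the cost of learning; all parameters and names are hypothetical.

```python
import numpy as np

def avg_queue_length(T, lam, mus, choose, rng):
    """Simulate T slots: one arrival w.p. lam; the chosen server's service
    succeeds w.p. mus[k] and removes one job if the queue is non-empty.
    Returns the time-averaged queue length."""
    q, area = 0, 0.0
    pulls = np.zeros(2)
    wins = np.zeros(2)
    for t in range(T):
        k = choose(t, pulls, wins)
        if rng.random() < lam:
            q += 1
        served = rng.random() < mus[k]
        if served and q > 0:
            q -= 1
        pulls[k] += 1
        wins[k] += served
        area += q
    return area / T

lam, mus, T = 0.5, np.array([0.7, 0.2]), 20_000

def oracle(t, pulls, wins):
    return int(np.argmax(mus))            # knows the true service rates

def explore_then_commit(t, pulls, wins):
    if t < 2_000:                         # exploration phase: alternate servers
        return t % 2
    return int(np.argmax(wins / np.maximum(pulls, 1)))

oracle_avg = avg_queue_length(T, lam, mus, oracle, np.random.default_rng(1))
learner_avg = avg_queue_length(T, lam, mus, explore_then_commit,
                               np.random.default_rng(1))
clq_estimate = learner_avg - oracle_avg   # empirical cost of learning
```

The queue backlog built up during exploration is exactly the early-stage effect the abstract argues an asymptotic metric misses: it dominates this gap even though both policies eventually behave identically.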
Investigating the Discoloration of Leaves of Dioscorea polystachya Using Developed Atomic Absorption Spectrometry Methods for Manganese and Molybdenum
The Chinese yam (Dioscorea polystachya, DP) is promising for the food and pharmaceutical industries due to its nutritional value and pharmaceutical potential. Its proper cultivation is therefore of interest. An insufficient supply of minerals necessary for plant growth can manifest as discoloration of the leaves. In our earlier study, magnesium deficiency was excluded as a cause. As a follow-up, this work focused on manganese and molybdenum. To quantify both minerals in leaf extracts of DP, analytical methods based on atomic absorption spectrometry (AAS) using the graphite furnace sub-technique were devised. The method development revealed that the quantification of manganese works best without any of the investigated modifiers. The optimized pyrolysis and atomization temperatures were 1300 °C and 1800 °C, respectively. For the analysis of molybdenum, calcium proved advantageous as a modifier; the optimum temperatures were 1900 °C and 2800 °C, respectively. Both methods showed satisfactory linearity for analysis. They were therefore applied to quantify manganese and molybdenum in extracts from normal and discolored leaves of DP. Discolored leaves were found to have higher manganese levels and a lower molybdenum content. These results offer a potential explanation for the discoloration.
Development and Application of an Atomic Absorption Spectrometry-Based Method to Quantify Magnesium in Leaves of Dioscorea polystachya
The Chinese yam (Dioscorea polystachya, DP) is known for the nutritional value of its tuber. Nevertheless, DP also has promising pharmacological properties. Compared with the tuber, the leaves of DP are still little studied. However, it may be possible to draw conclusions about plant quality from the coloration of the leaves. Magnesium, as a component of chlorophyll, appears to play a role. Therefore, the aim of this work was to develop an atomic absorption spectrometry-based method for the analysis of magnesium (285.2125 nm) in leaf extracts of DP using the graphite furnace sub-technique. Optimization of the pyrolysis and atomization temperatures resulted in 1500 °C and 1800 °C, respectively. The general presence of flavonoids in the extracts was detected and could explain the high pyrolysis temperature due to the potential complexation of magnesium. The elaborated method was linear in the range of 1–10 µg L⁻¹ (R² = 0.9975). The limits of detection and quantification amounted to 0.23 µg L⁻¹ and 2.00 µg L⁻¹, respectively. The characteristic mass was 0.027 pg, and the recovery was 96.7–102.0%. Finally, the method was applied to extracts prepared from differently colored leaves of DP. Similar magnesium contents were obtained for extracts made of dried and fresh leaves. It is often assumed that yellowing of the leaves is associated with reduced magnesium content; however, the results indicated that yellow leaves are not due to lower magnesium levels. This motivates future analysis of DP leaves for other essential minerals such as molybdenum or manganese.
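The figures of merit quoted above (linearity as R², limits of detection and quantification) follow from an ordinary least-squares fit of the calibration line. A generic sketch with made-up calibration data (not the study's measurements), using the common ICH-style estimates LOD = 3.3·σ/slope and LOQ = 10·σ/slope with σ taken as the residual standard deviation of the fit:

```python
import numpy as np

# Hypothetical calibration data: concentration (µg/L) vs. absorbance.
# These values are illustrative, not the measurements from the study.
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absb = np.array([0.021, 0.040, 0.083, 0.121, 0.162, 0.199])

slope, intercept = np.polyfit(conc, absb, 1)   # least-squares calibration line
pred = slope * conc + intercept
ss_res = float(np.sum((absb - pred) ** 2))
ss_tot = float(np.sum((absb - absb.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination

# Residual standard deviation (n - 2 degrees of freedom for a straight line)
sigma = np.sqrt(ss_res / (len(conc) - 2))
lod = 3.3 * sigma / slope                      # limit of detection
loq = 10.0 * sigma / slope                     # limit of quantification
```

Other σ estimates (e.g. the standard deviation of blank signals) are also in common use; which one a method validation reports should always be stated alongside the LOD/LOQ values.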
Towards Fast Computation of Certified Robustness for ReLU Networks
Verifying the robustness property of a general Rectified Linear Unit (ReLU)
network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer
CAV17]. Although finding the exact minimum adversarial distortion is hard,
giving a certified lower bound of the minimum distortion is possible. Currently
available methods for computing such a bound are either time-consuming or
deliver low-quality bounds that are too loose to be useful. In this paper,
we exploit the special structure of ReLU networks and provide two
computationally efficient algorithms, Fast-Lin and Fast-Lip, that are able to
certify non-trivial lower bounds on the minimum distortion by bounding the ReLU
units with appropriate linear functions (Fast-Lin) or by bounding the local
Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed methods
deliver bounds close to the exact minimum distortion (the gap is 2-3X) found by
Reluplex in small MNIST networks while our algorithms are more than 10,000
times faster; (2) our methods deliver similar quality of bounds (the gap is
within 35% and usually around 10%; sometimes our bounds are even better) for
larger networks compared to the methods based on solving linear programming
problems but our algorithms are 33-14,000 times faster; (3) our method is
capable of solving large MNIST and CIFAR networks up to 7 layers with more than
10,000 neurons within tens of seconds on a single CPU core.
In addition, we show that, in fact, there is no polynomial time algorithm
that can approximately find the minimum adversarial distortion of a
ReLU network with a 0.99 ln n approximation ratio unless NP = P, where n is the
number of neurons in the network.
Comment: Tsui-Wei Weng and Huan Zhang contributed equally.
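The linear bounding of ReLU units mentioned above can be written down per neuron. A minimal sketch (the function name is illustrative; the full algorithm then propagates these bounds through all layers): given pre-activation bounds l <= z <= u, an unstable neuron (l < 0 < u) is sandwiched between two parallel lines of slope u/(u-l), one standard choice of linear relaxation.

```python
def relu_linear_bounds(l, u):
    """Linear bounds for ReLU(z) = max(z, 0) on the interval [l, u].

    Returns ((a_lo, b_lo), (a_up, b_up)) such that
    a_lo*z + b_lo <= max(z, 0) <= a_up*z + b_up for all z in [l, u].
    """
    if l >= 0:                 # always active: ReLU is the identity here
        return (1.0, 0.0), (1.0, 0.0)
    if u <= 0:                 # always inactive: ReLU is zero here
        return (0.0, 0.0), (0.0, 0.0)
    s = u / (u - l)            # slope of the chord from (l, 0) to (u, u)
    # Upper bound: the chord; lower bound: the parallel line through the origin.
    return (s, 0.0), (s, -s * l)
```

Because the two lines are parallel, the relaxation error per neuron is controlled by the width of [l, u], which is why tighter pre-activation bounds directly translate into tighter certified distortion bounds.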