Efficient Neural Network Robustness Certification with General Activation Functions
Finding minimum distortion of adversarial examples and thus certifying
robustness in neural network classifiers for given data points is known to be a
challenging problem. Nevertheless, recently it has been shown to be possible to
give a non-trivial certified lower bound of minimum adversarial distortion, and
some recent progress has been made in this direction by exploiting the
piece-wise linear nature of ReLU activations. However, a generic robustness
certification for general activation functions still remains largely
unexplored. To address this issue, in this paper we introduce CROWN, a general
framework to certify robustness of neural networks with general activation
functions for given input data points. The novelty in our algorithm consists of
bounding a given activation function with linear and quadratic functions, hence
allowing it to tackle general activation functions including but not limited to
four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we
facilitate the search for a tighter certified lower bound by adaptively
selecting appropriate surrogates for each neuron activation. Experimental
results show that CROWN on ReLU networks can notably improve the certified
lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while
having comparable computational efficiency. Furthermore, CROWN also
demonstrates its effectiveness and flexibility on networks with general
activation functions, including tanh, sigmoid and arctan.

Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed equally.
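The bounding idea in the abstract above can be illustrated with a minimal, self-contained sketch (not the CROWN algorithm itself, which propagates such bounds layer by layer through the whole network): on an input interval where the sigmoid is convex, the chord gives a linear upper bound and any tangent gives a linear lower bound. All function names here are ours.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def linear_bounds_convex(f, df, l, u):
    """For f convex on [l, u]: the chord through the endpoints is an
    upper bound, and the tangent at the midpoint is a lower bound."""
    # chord through (l, f(l)) and (u, f(u))
    a_up = (f(u) - f(l)) / (u - l)
    b_up = f(l) - a_up * l
    # tangent at the midpoint
    m = 0.5 * (l + u)
    a_lo = df(m)
    b_lo = f(m) - a_lo * m
    return (a_lo, b_lo), (a_up, b_up)

dsigmoid = lambda x: sigmoid(x) * (1.0 - sigmoid(x))

# sigmoid is convex for x < 0, so [-3, -1] is a valid interval
(aL, bL), (aU, bU) = linear_bounds_convex(sigmoid, dsigmoid, -3.0, -1.0)

# sanity check: the two lines sandwich sigmoid on the whole interval
for i in range(101):
    x = -3.0 + 2.0 * i / 100
    assert aL * x + bL <= sigmoid(x) + 1e-9 <= aU * x + bU + 2e-9
```

On intervals where the activation changes curvature, CROWN-style methods split cases or use looser lines; the quadratic bounds mentioned in the abstract are a further refinement not shown here.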
Capacity Control of ReLU Neural Networks by Basis-path Norm
Recently, path norm was proposed as a new capacity measure for neural
networks with Rectified Linear Unit (ReLU) activation function, which takes the
rescaling-invariant property of ReLU into account. It has been shown that the
generalization error bound in terms of the path norm explains the empirical
generalization behaviors of the ReLU neural networks better than that of other
capacity measures. Moreover, optimization algorithms which take path norm as
the regularization term to the loss function, like Path-SGD, have been shown to
achieve better generalization performance. However, the path norm counts the
values of all paths, and hence the capacity measure based on path norm could be
improperly influenced by the dependency among different paths. It is also known
that each path of a ReLU network can be represented by a small group of
linearly independent basis paths via multiplication and division operations,
which indicates that the generalization behavior of the network depends on
only a few basis paths. Motivated by this, we propose a new norm
\emph{Basis-path Norm} based on a group of linearly independent paths to
measure the capacity of neural networks more accurately. We establish a
generalization error bound based on this basis path norm, and show it explains
the generalization behaviors of ReLU networks more accurately than previous
capacity measures via extensive experiments. In addition, we develop
optimization algorithms which minimize the empirical risk regularized by the
basis-path norm. Our experiments on benchmark datasets demonstrate that the
proposed regularization method achieves clearly better performance on the test
set than previous regularization approaches.
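For reference, the plain path norm that this abstract builds on can be computed directly for a tiny one-hidden-layer ReLU network. The sketch below (not the paper's basis-path norm, which restricts attention to a set of linearly independent basis paths) also demonstrates the ReLU rescaling invariance the abstract mentions; the weights are arbitrary toy values.

```python
def path_norm(W1, W2):
    """Path norm of a one-hidden-layer ReLU network y = W2 @ relu(W1 @ x):
    the sum over all input->hidden->output paths of |w1 * w2|."""
    total = 0.0
    for k in range(len(W2)):             # output unit
        for j in range(len(W1)):         # hidden unit
            for i in range(len(W1[0])):  # input unit
                total += abs(W1[j][i] * W2[k][j])
    return total

# Rescaling invariance under ReLU: scale a hidden unit's incoming
# weights by c and its outgoing weights by 1/c; the network function
# and the path norm are both unchanged.
W1 = [[1.0, -2.0], [0.5, 3.0]]
W2 = [[2.0, -1.0]]
c = 4.0
W1s = [[c * w for w in W1[0]], W1[1]]
W2s = [[W2[0][0] / c, W2[0][1]]]
assert abs(path_norm(W1, W2) - path_norm(W1s, W2s)) < 1e-9
```

The dependency problem the abstract raises is visible even here: the four path values share weights, so they cannot vary independently, which is what motivates restricting the measure to basis paths.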
Brain image clustering by wavelet energy and CBSSO optimization algorithm
Early diagnosis of brain abnormality is important for saving social and hospital resources. Wavelet energy is known as an effective feature-extraction method that has shown great efficiency across different applications. This paper proposes a new wavelet-energy-based method to automatically classify magnetic resonance imaging (MRI) brain images into two groups (normal and abnormal), using a support vector machine (SVM) classifier whose weights are optimized by chaotic binary shark smell optimization (CBSSO).
The results of the suggested CBSSO-based KSVM compare favorably to several other methods in terms of sensitivity and authenticity. The proposed CAD system can additionally be used to categorize images with various pathological conditions, types, and illness modes.
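The wavelet-energy features mentioned above can be sketched with a plain Haar transform; this toy version (the paper's actual wavelet family, the 2-D image transform, and the CBSSO-tuned SVM are not reproduced here) computes per-level detail energies of a 1-D signal that could serve as classifier inputs.

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise scaled sums
    (approximation) and differences (detail coefficients)."""
    s = 0.7071067811865476  # 1/sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_energies(signal, levels=2):
    """Energy (sum of squared detail coefficients) per decomposition
    level -- a compact feature vector for a classifier such as an SVM."""
    feats = []
    for _ in range(levels):
        signal, detail = haar_step(signal)
        feats.append(sum(c * c for c in detail))
    return feats

x = [1.0, 3.0, 2.0, 2.0, 4.0, 0.0, 1.0, 1.0]
feats = wavelet_energies(x)
```

For a real MRI pipeline one would apply a 2-D transform to the image and feed the resulting energy vector to the (kernel) SVM; only the feature-extraction idea is shown.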
Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Verifying robustness of neural network classifiers has attracted great
interest and attention due to the success of deep neural networks and their
unexpected vulnerability to adversarial perturbations. Although finding minimum
adversarial distortion of neural networks (with ReLU activations) has been
shown to be an NP-complete problem, obtaining a non-trivial lower bound of
minimum distortion as a provable robustness guarantee is possible. However,
most previous works only focused on simple fully-connected layers (multilayer
perceptrons) and were limited to ReLU activations. This motivates us to propose
a general and efficient framework, CNN-Cert, that is capable of certifying
robustness on general convolutional neural networks. Our framework is general
-- we can handle various architectures including convolutional layers,
max-pooling layers, batch normalization layers, residual blocks, as well as
general activation functions; our approach is efficient -- by exploiting the
special structure of convolutional layers, we achieve up to 17 and 11 times
speed-up compared to the state-of-the-art certification algorithms Fast-Lin
and CROWN, respectively, and 366 times speed-up compared to the dual-LP approach
while our algorithm obtains similar or even better verification bounds. In
addition, CNN-Cert generalizes state-of-the-art algorithms e.g. Fast-Lin and
CROWN. We demonstrate by extensive experiments that our method outperforms
state-of-the-art lower-bound-based certification algorithms in terms of both
bound quality and speed.

Comment: Accepted by AAAI 2019.
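A heavily simplified illustration of propagating perturbation bounds through a convolutional layer is the interval arithmetic below. CNN-Cert uses tighter linear relaxations that exploit the convolution structure, so treat this only as a sketch of the general idea; all names are ours.

```python
def conv1d_interval(lo, hi, kernel):
    """Propagate elementwise interval bounds [lo, hi] through a 1-D
    convolution (valid padding, no bias): positive kernel weights pair
    lower with lower, negative weights flip the pairing."""
    n, k = len(lo), len(kernel)
    out_lo, out_hi = [], []
    for t in range(n - k + 1):
        lo_t = hi_t = 0.0
        for j, w in enumerate(kernel):
            if w >= 0:
                lo_t += w * lo[t + j]
                hi_t += w * hi[t + j]
            else:
                lo_t += w * hi[t + j]
                hi_t += w * lo[t + j]
        out_lo.append(lo_t)
        out_hi.append(hi_t)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps bounds elementwise."""
    return [max(0.0, a) for a in lo], [max(0.0, b) for b in hi]

# toy input under an L-infinity perturbation of radius eps
x, eps = [1.0, -0.5, 0.25, 2.0], 0.1
lo = [v - eps for v in x]
hi = [v + eps for v in x]
lo, hi = conv1d_interval(lo, hi, [0.5, -1.0])
lo, hi = relu_interval(lo, hi)
assert all(a <= b for a, b in zip(lo, hi))
```

Pure interval propagation like this is sound but loose; the linear-bound frameworks the abstract compares against (Fast-Lin, CROWN, CNN-Cert) keep the dependence on the input variable to obtain much tighter certificates.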
A novel ANN fault diagnosis system for power systems using dual GA loops in ANN training
Fault diagnosis is of great importance to the rapid restoration of power systems. Many techniques have been employed to solve this problem. In this paper, a novel Genetic Algorithm (GA) based neural network for fault diagnosis in power systems is suggested, which adopts a three-layer feed-forward neural network. Dual GA loops are applied in order to optimize the neural network topology and the connection weights. The first GA loop is for structure optimization and the second one for connection weight optimization; jointly they search for the globally optimal neural network for fault diagnosis. The formulation and the corresponding computer flow chart are presented in detail in the paper. Computer test results on a test power system indicate that the proposed GA-based neural network fault diagnosis system works well and is superior to the conventional Back-Propagation (BP) neural network.
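The inner connection-weight loop of such a dual-GA scheme can be sketched on a toy regression problem; the topology-optimization loop, the actual fault-diagnosis data, and all names below are omitted or hypothetical.

```python
import random

random.seed(0)

# toy data: targets follow y = 2*x1 - x2 (a stand-in for fault patterns)
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0),
        ((1.0, 1.0), 1.0), ((2.0, 1.0), 3.0)]

def loss(w):
    """Mean squared error of a fixed-topology linear unit with weights w."""
    return sum((w[0] * x1 + w[1] * x2 - y) ** 2
               for (x1, x2), y in data) / len(data)

def evolve(pop, generations=50, elite=4, sigma=0.2):
    """Inner GA loop: truncation selection keeps the elite unchanged,
    and the rest of the population is refilled with Gaussian mutants."""
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[:elite]
        pop = parents + [
            [w + random.gauss(0.0, sigma) for w in random.choice(parents)]
            for _ in range(len(pop) - elite)
        ]
    return min(pop, key=loss)

pop = [[random.uniform(-3, 3), random.uniform(-3, 3)] for _ in range(20)]
initial = min(loss(ind) for ind in pop)
best = evolve(pop)
assert loss(best) <= initial  # elitism never discards the best individual
```

In the paper's scheme an outer GA would additionally mutate the network topology and rerun this inner loop per candidate structure; here the topology is fixed to keep the sketch short.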