Empirical Bounds on Linear Regions of Deep Rectifier Networks
We can compare the expressiveness of neural networks that use rectified
linear units (ReLUs) by the number of linear regions, which reflects the number
of pieces of the piecewise linear functions modeled by such networks. However,
enumerating these regions is prohibitive and the known analytical bounds are
identical for networks with the same dimensions. In this work, we approximate the
number of linear regions through empirical bounds based on features of the
trained network and probabilistic inference. Our first contribution is a method
to sample the activation patterns defined by ReLUs using universal hash
functions. This method is based on a Mixed-Integer Linear Programming (MILP)
formulation of the network and an algorithm for probabilistic lower bounds of
MILP solution sets that we call MIPBound, which is considerably faster than
exact counting and yields values of a similar order of magnitude. Our second
contribution is a tighter activation-based bound for the maximum number of
linear regions, which is particularly strong in networks with narrow layers.
Combined, these bounds yield a fast proxy for the number of linear regions of a
deep neural network.
Comment: AAAI 202
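As an illustration of the activation-pattern view this abstract relies on (not the paper's MIPBound or its MILP formulation), the sketch below counts the distinct ReLU activation patterns reached by random input samples of a small, randomly initialized network; each distinct pattern corresponds to one linear region, so the count is an empirical lower bound. The layer sizes, sampling range, and sample count are illustrative assumptions.

```python
# A minimal sketch (not the paper's MIPBound): estimate a lower bound on the
# number of linear regions of a small ReLU network by counting the distinct
# activation patterns reached from random input samples. The architecture,
# sampling range, and sample count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected ReLU network: 2 -> 8 -> 8 -> 1, random weights.
sizes = [2, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def activation_pattern(x):
    """Return the on/off pattern of every hidden ReLU for input x."""
    pattern = []
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):   # output layer has no ReLU
        pre = W @ h + b
        pattern.append(tuple(pre > 0))
        h = np.maximum(pre, 0.0)
    return tuple(pattern)

# Each distinct pattern corresponds to one linear region hit by the samples,
# so the count below is an empirical lower bound on the true number of regions.
samples = rng.uniform(-1.0, 1.0, size=(50_000, sizes[0]))
patterns = {activation_pattern(x) for x in samples}
print("empirical lower bound on linear regions:", len(patterns))
```

Exhaustive enumeration or a MILP-based method would be needed for exact values; a Monte Carlo count of this kind only certifies a lower bound.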
Inherent Weight Normalization in Stochastic Neural Networks
Multiplicative stochasticity such as Dropout improves the robustness and
generalizability of deep neural networks. Here, we further demonstrate that
always-on multiplicative stochasticity combined with simple threshold neurons
provides a sufficient set of operations for deep neural networks. We call such models Neural
Sampling Machines (NSM). We find that the probability of activation of the NSM
exhibits a self-normalizing property that mirrors Weight Normalization, a
previously studied mechanism that fulfills many of the features of Batch
Normalization in an online fashion. The normalization of activities during
training speeds up convergence by preventing internal covariate shift caused by
changes in the input distribution. The always-on stochasticity of the NSM
confers the following advantages: the network is identical in the inference and
learning phases, making the NSM suitable for online learning; it can exploit
stochasticity inherent to a physical substrate, such as analog non-volatile
memories for in-memory computing; and it is suitable for Monte Carlo sampling,
while requiring almost exclusively addition and comparison operations. We
demonstrate NSMs on standard classification benchmarks (MNIST and CIFAR) and
event-based classification benchmarks (N-MNIST and DVS Gestures). Our results
show that NSMs perform comparably to or better than conventional artificial
neural networks with the same architecture.
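As a hedged sketch of the ingredients the abstract names (always-on multiplicative stochasticity plus threshold neurons), the snippet below applies independent Bernoulli noise to each synapse and a hard threshold to the result, then estimates activation probabilities by Monte Carlo sampling. The noise probability, layer sizes, and learning rule (omitted) are assumptions for illustration, not the paper's exact NSM construction.

```python
# Illustrative sketch of an NSM-style layer: always-on multiplicative Bernoulli
# noise on the synapses followed by a simple threshold neuron. The noise
# probability p and layer sizes are assumptions; the paper's exact noise model
# and learning rule are not shown.
import numpy as np

rng = np.random.default_rng(0)

def nsm_layer(x, W, b, p=0.5):
    """One stochastic forward pass: multiplicative noise, then a hard threshold."""
    xi = rng.binomial(1, p, size=W.shape)    # independent multiplicative noise per synapse
    pre = (W * xi) @ x + b                   # noisy pre-activation
    return (pre > 0).astype(float)           # threshold (step) neuron

# Monte Carlo estimate of the firing probability of each unit for a fixed input.
W = rng.standard_normal((16, 8))
b = np.zeros(16)
x = rng.standard_normal(8)
samples = np.stack([nsm_layer(x, W, b) for _ in range(2000)])
print("estimated activation probabilities:", samples.mean(axis=0).round(2))
```

Averaging many such stochastic passes is the Monte Carlo sampling mode the abstract refers to; only additions and comparisons appear inside the layer itself.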
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide
range of tasks, with the best results obtained with large training sets and
large models. In the past, GPUs enabled these breakthroughs because of their
greater computational speed. In the future, faster computation at both training
and test time is likely to be crucial for further progress and for consumer
applications on low-power devices. As a result, there is much interest in
research and development of dedicated hardware for Deep Learning (DL). Binary
weights, i.e., weights which are constrained to only two possible values (e.g.
-1 or 1), would bring great benefits to specialized DL hardware by replacing
many multiply-accumulate operations with simple accumulations, as multipliers are
the most space and power-hungry components of the digital implementation of
neural networks. We introduce BinaryConnect, a method which consists of
training a DNN with binary weights during the forward and backward
propagations, while retaining the precision of the stored weights in which
gradients are accumulated. We show that, like other dropout schemes,
BinaryConnect acts as a regularizer, and we obtain near state-of-the-art
results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.
Comment: Accepted at NIPS 2015, 9 pages, 3 figure
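The training loop described above can be sketched in a few lines: keep real-valued weights, binarize them for the forward and backward propagations, and accumulate gradients back into the real-valued copy. The toy regression task, learning rate, and network size below are placeholders; only the binarize-propagate-accumulate pattern follows the abstract.

```python
# Minimal sketch of the BinaryConnect update described above: the stored weights
# stay real-valued, but the forward and backward passes use their binarized
# (sign) version. Network shape, data, and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)

W_real = 0.01 * rng.standard_normal((4, 3))   # full-precision weights (where gradients accumulate)
lr = 0.1

def binarize(W):
    """Deterministic binarization to {-1, +1}."""
    return np.where(W >= 0, 1.0, -1.0)

for step in range(100):
    x = rng.standard_normal(3)
    y = rng.standard_normal(4)                # toy regression target

    Wb = binarize(W_real)                     # binary weights used during propagation
    pred = Wb @ x
    err = pred - y                            # gradient of 0.5*||pred - y||^2 w.r.t. pred

    grad_Wb = np.outer(err, x)                # gradient w.r.t. the binary weights...
    W_real -= lr * grad_Wb                    # ...accumulated into the real-valued weights
    W_real = np.clip(W_real, -1.0, 1.0)       # keep the stored weights bounded

print("real-valued weights:\n", W_real.round(2))
print("binary weights used during propagation:\n", binarize(W_real))
```

At deployment only the binary weights are needed, which is what makes the multiply-free hardware implementations discussed above attractive.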
A survey on modern trainable activation functions
In the neural network literature, there is a strong interest in identifying and
defining activation functions which can improve neural network performance. In
recent years, there has been a renewed interest of the scientific community in
investigating activation functions which can be trained during the learning
process, usually referred to as "trainable", "learnable" or "adaptable"
activation functions. They appear to lead to better network performance.
Diverse and heterogeneous models of trainable activation functions have been
proposed in the literature. In this paper, we present a survey of these models.
Starting from a discussion on the use of the term "activation function" in the
literature, we propose a taxonomy of trainable activation functions, highlight
common and distinctive properties of recent and past models, and discuss the main
advantages and limitations of this type of approach. We show that many of the
proposed approaches are equivalent to adding neuron layers which use fixed
(non-trainable) activation functions and some simple local rule that
constrains the corresponding weight layers.
Comment: Published in "Neural Networks" journal (Elsevier
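A minimal example of a trainable activation function, and of the equivalence the survey points to, is a PReLU-style unit with a learnable negative slope a: it can be rewritten as a combination of fixed ReLU units whose output weights are tied to a, i.e. a small extra layer with a constrained weight rule. The specific activation chosen here is an illustrative instance, not a summary of the survey's taxonomy.

```python
# Sketch of a simple trainable activation function (a PReLU-style learnable
# negative slope `a`), together with the rewriting alluded to above:
# prelu_a(x) = relu(x) - a * relu(-x), i.e. fixed ReLU units whose output
# weights are tied to the trainable parameter. Values are illustrative.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def prelu(x, a):
    """Trainable activation: identity for x >= 0, slope `a` for x < 0."""
    return np.where(x >= 0, x, a * x)

def prelu_as_fixed_relus(x, a):
    """Equivalent form using only fixed activations plus constrained weights."""
    return relu(x) - a * relu(-x)

x = np.linspace(-2, 2, 9)
a = 0.25
print(np.allclose(prelu(x, a), prelu_as_fixed_relus(x, a)))  # True
```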
Training Neural Networks with Stochastic Hessian-Free Optimization
Hessian-free (HF) optimization has been successfully used for training deep
autoencoders and recurrent networks. HF uses the conjugate gradient algorithm
to construct update directions through curvature-vector products that can be
computed in roughly the same time as gradients. In this paper we exploit this
property and study stochastic HF with gradient and curvature mini-batches
independent of the dataset size. We modify Martens' HF for these settings and
integrate dropout, a method for preventing co-adaptation of feature detectors,
to guard against overfitting. Stochastic Hessian-free optimization provides an
intermediate between SGD and HF that achieves competitive performance on both
classification and deep autoencoder experiments.
Comment: 11 pages, ICLR 201
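A stripped-down sketch of the Hessian-free step described above: solve H p = -g with conjugate gradient using only curvature-vector products. Here the loss is a toy least-squares problem and the Hessian-vector product is taken by finite differences of the gradient; mini-batched gradients and curvature, damping, and the dropout integration are omitted, so this is only the skeleton, not Martens' full algorithm or the paper's stochastic variant.

```python
# Minimal sketch of a Hessian-free step: solve H p = -g with conjugate gradient,
# using only Hessian-vector products (obtained here by finite differences of the
# gradient of a toy least-squares loss). Damping, mini-batching, and dropout
# from the paper are omitted.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)

def grad(w):
    """Gradient of the toy loss 0.5*||A w - y||^2."""
    return A.T @ (A @ w - y)

def hvp(w, v, eps=1e-6):
    """Hessian-vector product via finite differences of the gradient."""
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

def cg(w, g, iters=10):
    """Conjugate gradient for H p = -g, started from p = 0."""
    p = np.zeros_like(g)
    r = -g - hvp(w, p)            # residual
    d = r.copy()
    for _ in range(iters):
        if r @ r < 1e-12:         # already converged
            break
        Hd = hvp(w, d)
        alpha = (r @ r) / (d @ Hd)
        p += alpha * d
        r_new = r - alpha * Hd
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return p

w = np.zeros(5)
for step in range(5):
    w = w + cg(w, grad(w))        # Newton-like update from the CG solve
    print(f"step {step}: loss = {0.5 * np.sum((A @ w - y) ** 2):.4f}")
```

Because the loss here is quadratic, one CG solve already reaches the minimum; on a real network the products would come from curvature mini-batches, as described above.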
Lightweight Probabilistic Deep Networks
Even though probabilistic treatments of neural networks have a long history,
they have not found widespread use in practice. Sampling approaches are often
too slow even for simple networks. The size of the inputs and the depth of
typical CNN architectures in computer vision only compound this problem.
Uncertainty in neural networks has thus been largely ignored in practice,
despite the fact that it may provide important information about the
reliability of predictions and the inner workings of the network. In this
paper, we introduce two lightweight approaches to making supervised learning
with probabilistic deep networks practical: First, we suggest probabilistic
output layers for classification and regression that require only minimal
changes to existing networks. Second, we employ assumed density filtering and
show that activation uncertainties can be propagated in a practical fashion
through the entire network, again with minor changes. Both probabilistic
networks retain the predictive power of the deterministic counterpart, but
yield uncertainties that correlate well with the empirical error induced by
their predictions. Moreover, the robustness to adversarial examples is
significantly increased.
Comment: To appear at CVPR 201
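The assumed density filtering idea mentioned above can be sketched by summarizing every activation with a mean and a variance: a linear layer transforms the two moments exactly (under an independence assumption), and a ReLU is handled by moment matching with the standard rectified-Gaussian formulas. The layer sizes and weights below are illustrative, and the paper's probabilistic output layers are not shown.

```python
# Sketch of propagating activation uncertainty with assumed density filtering:
# each activation carries a mean and a variance, pushed through a linear layer
# exactly and through a ReLU by moment matching (rectified-Gaussian moments).
# Layer sizes and weights are illustrative, not the paper's exact formulation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def linear_adf(mean, var, W, b):
    """Mean/variance of W x + b, assuming independent input components."""
    return W @ mean + b, (W ** 2) @ var

def relu_adf(mean, var):
    """Moment-matched mean/variance of max(x, 0) for x ~ N(mean, var)."""
    std = np.sqrt(var) + 1e-12
    z = mean / std
    new_mean = mean * norm.cdf(z) + std * norm.pdf(z)
    second_moment = (mean ** 2 + var) * norm.cdf(z) + mean * std * norm.pdf(z)
    return new_mean, np.maximum(second_moment - new_mean ** 2, 0.0)

# Push an uncertain input through a small two-layer ReLU network.
mean, var = rng.standard_normal(4), 0.1 * np.ones(4)
for n_out in (8, 2):
    W = rng.standard_normal((n_out, mean.shape[0]))
    b = np.zeros(n_out)
    mean, var = linear_adf(mean, var, W, b)
    mean, var = relu_adf(mean, var)

print("output mean:", mean.round(3))
print("output variance:", var.round(3))
```

The output variances play the role of the activation uncertainties the abstract refers to; in the paper they are carried all the way to a probabilistic output layer.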