Deep Learning with S-shaped Rectified Linear Activation Units
Rectified linear activation units are important components of state-of-the-art deep convolutional networks. In this paper, we propose a novel S-shaped rectified linear activation unit (SReLU) that learns both convex and non-convex functions, imitating the multiple function forms given by two fundamental laws, namely the Weber-Fechner law and the Stevens law, in
psychophysics and neural sciences. Specifically, SReLU consists of three
piecewise linear functions, which are formulated by four learnable parameters.
The SReLU is learned jointly with the training of the whole deep network
through backpropagation. During the training phase, to initialize SReLU in different layers, we propose a "freezing" method that degenerates SReLU into a predefined leaky rectified linear unit for the first several training epochs and then adaptively learns good initial values. SReLU can be universally
used in the existing deep networks with negligible additional parameters and
computation cost. Experiments with two popular CNN architectures, Network in Network and GoogLeNet, on benchmarks of various scales, including CIFAR10, CIFAR100, MNIST and ImageNet, demonstrate that SReLU achieves remarkable improvements over other activation functions.
Comment: Accepted by AAAI-16
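As a concrete illustration of the three-piece formulation above, here is a minimal PyTorch sketch of SReLU as a per-channel module. The initial parameter values (thresholds 0 and 1, left slope 0.2) are illustrative assumptions chosen so the unit starts out as a leaky ReLU, in the spirit of the "freezing" initialization; they are not necessarily the paper's published settings.

```python
import torch
import torch.nn as nn

class SReLU(nn.Module):
    """S-shaped ReLU: three linear pieces controlled by four learnable
    parameters per channel (t_left, a_left, t_right, a_right)."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Illustrative initialization: with t_right=1, a_right=1 the right
        # piece is the identity, so the unit starts as a leaky ReLU with
        # slope 0.2 below t_left=0 (the "frozen" starting point).
        self.t_left = nn.Parameter(torch.zeros(num_channels))
        self.a_left = nn.Parameter(torch.full((num_channels,), 0.2))
        self.t_right = nn.Parameter(torch.ones(num_channels))
        self.a_right = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast per-channel parameters over (N, C, ...) input.
        shape = (1, -1) + (1,) * (x.dim() - 2)
        tl, al = self.t_left.view(shape), self.a_left.view(shape)
        tr, ar = self.t_right.view(shape), self.a_right.view(shape)
        # Middle piece: identity; left/right pieces: affine segments.
        y = torch.where(x < tl, tl + al * (x - tl), x)
        return torch.where(x > tr, tr + ar * (x - tr), y)
```

During the "freezing" epochs one would additionally keep the four parameters fixed, for example by setting their requires_grad to False for the first few epochs before resuming joint training.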
A survey on modern trainable activation functions
In the neural network literature, there is strong interest in identifying and defining activation functions that can improve neural network performance. In recent years, there has been renewed interest in the scientific community in investigating activation functions that can be trained during the learning process, usually referred to as "trainable", "learnable" or "adaptable" activation functions. They appear to lead to better network performance. Diverse and heterogeneous models of trainable activation functions have been
proposed in the literature. In this paper, we present a survey of these models.
Starting from a discussion of the use of the term "activation function" in the literature, we propose a taxonomy of trainable activation functions, highlight common and distinctive properties of recent and past models, and discuss the main
advantages and limitations of this type of approach. We show that many of the
proposed approaches are equivalent to adding neuron layers that use fixed (non-trainable) activation functions together with a simple local rule that constrains the corresponding weight layers.
Comment: Published in the journal "Neural Networks" (Elsevier)
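One concrete instance of this equivalence is the adaptive piecewise linear (APL) unit, a well-known trainable activation that amounts to a ReLU plus a learnable weighted sum of fixed ReLU "hinges", i.e., extra neurons with a non-trainable activation. A minimal PyTorch sketch follows; the per-channel parameterization and the choice of two segments are illustrative assumptions.

```python
import torch
import torch.nn as nn

class APLActivation(nn.Module):
    """Adaptive piecewise-linear activation: a ReLU plus S learnable
    'hinge' terms, each a fixed ReLU of a shifted input. The trainable
    part reduces to ordinary weights feeding fixed (non-trainable)
    activations, illustrating the survey's equivalence claim."""

    def __init__(self, num_channels: int, segments: int = 2):
        super().__init__()
        # One slope a_s and one breakpoint b_s per segment and channel.
        self.a = nn.Parameter(torch.zeros(segments, num_channels))
        self.b = nn.Parameter(torch.zeros(segments, num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (N, C); each channel gets its own hinge set.
        y = torch.relu(x)
        for a_s, b_s in zip(self.a, self.b):
            # a_s, b_s have shape (C,) and broadcast over the batch.
            y = y + a_s * torch.relu(-x + b_s)
        return y
```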
Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
In this paper we propose and investigate a novel nonlinear unit, called the $L_p$ unit, for deep neural networks. The proposed unit receives signals from several projections of a subset of units in the layer below and computes a normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$ unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators such as average, root-mean-square and
max pooling widely used in, for instance, convolutional neural networks (CNN),
HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013)
which achieved the state-of-the-art object recognition results on a number of
benchmark datasets. Second, we provide a geometrical interpretation of the activation function, based on which we argue that the $L_p$ unit is more efficient at representing complex, nonlinear separating boundaries. Each $L_p$ unit defines a superelliptic boundary, with its exact shape defined by the order $p$. We claim that this makes it possible to model arbitrarily shaped, curved boundaries more efficiently by combining a few $L_p$ units of different orders. This insight justifies the need for learning a different order $p$ for each $L_p$ unit in the model. We empirically evaluate the proposed $L_p$ units on a number of datasets and show that multilayer perceptrons (MLPs) consisting of $L_p$ units achieve state-of-the-art results on a number of benchmark datasets. Furthermore, we evaluate the proposed $L_p$ unit on recently proposed deep recurrent neural networks (RNNs).
Comment: ECML/PKDD 2014
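As a rough sketch of the core computation, the following PyTorch module computes a normalized $L_p$ norm over a group of linear projections, with a learnable order $p$ per unit. The group size, the softplus parameterization that keeps $p \ge 1$, and the omission of the paper's centering term are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LpUnit(nn.Module):
    """Each output unit computes a normalized L_p norm over its own
    group of linear projections of the input, with a learned order p.
    p=1 recovers average pooling of magnitudes, p=2 root-mean-square
    pooling, and p -> infinity max-magnitude pooling."""

    def __init__(self, in_features: int, num_units: int, group_size: int = 4):
        super().__init__()
        self.group_size = group_size
        self.proj = nn.Linear(in_features, num_units * group_size)
        # Parameterize p = 1 + softplus(rho) so that p >= 1 always holds.
        self.rho = nn.Parameter(torch.zeros(num_units))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = 1.0 + F.softplus(self.rho).view(1, -1, 1)       # (1, U, 1)
        # Group the projections per unit: (N, U, group_size).
        z = self.proj(x).view(x.size(0), -1, self.group_size)
        # Normalized L_p norm: ( mean_i |z_i|^p )^(1/p) per group.
        mean_pow = (z.abs() ** p).mean(dim=-1, keepdim=True)
        return (mean_pow ** (1.0 / p)).squeeze(-1)          # (N, U)
```

Because each unit learns its own order, a few such units can combine superelliptic boundaries of different shapes, which is the geometric argument sketched in the abstract.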