Learning Combinations of Activation Functions
In the last decade, an active area of research has been devoted to designing
novel activation functions that help deep neural networks converge and achieve
better performance. Training these architectures usually involves optimizing
only the weights of their layers, while the non-linearities are pre-specified
and any parameters they have are treated as hyper-parameters to be tuned
manually. In this paper, we introduce two approaches to automatically learn
different combinations of base activation functions (such as the identity
function, ReLU, and tanh) during the training phase. We present a thorough
comparison of our novel approaches with well-known architectures (such as
LeNet-5, AlexNet, and ResNet-56) on three standard datasets (Fashion-MNIST,
CIFAR-10, and ILSVRC-2012), showing substantial improvements in overall
performance, such as an increase of 3.01 percentage points in top-1 accuracy
for AlexNet on ILSVRC-2012.
Comment: 6 pages, 3 figures. Published as a conference paper at ICPR 2018.
Code:
https://bitbucket.org/francux/learning_combinations_of_activation_function
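The abstract does not specify the exact combination scheme, so the following is a minimal sketch of the general idea: a drop-in module whose mixing coefficients over identity, ReLU, and tanh are learned jointly with the network weights. The softmax normalization (a convex combination) and the name `LearnedActivation` are illustrative assumptions, not the paper's definitive formulation.

```python
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """Trainable combination of base activation functions (sketch).

    The logits `alpha` are learned during training; a softmax keeps the
    combination convex. Illustrative only; the paper may also consider
    other (e.g. affine) combinations.
    """
    def __init__(self):
        super().__init__()
        # One trainable logit per base activation: identity, ReLU, tanh.
        self.alpha = nn.Parameter(torch.zeros(3))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return w[0] * x + w[1] * torch.relu(x) + w[2] * torch.tanh(x)

# Usage: drop in wherever a fixed non-linearity would normally go.
model = nn.Sequential(nn.Linear(784, 128), LearnedActivation(), nn.Linear(128, 10))
```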
DANTE: Deep AlterNations for Training nEural networks
We present DANTE, a novel method for training neural networks using the
alternating minimization principle. DANTE provides an alternative perspective
on the traditional gradient-based backpropagation techniques commonly used to
train deep networks. It uses an adaptation of quasi-convexity to cast training
a neural network as a bi-quasi-convex optimization problem. We show that the
alternations can be performed effectively in this formulation for neural
network configurations with both differentiable (e.g. sigmoid) and
non-differentiable (e.g. ReLU) activation functions. DANTE can also be
extended to networks with multiple hidden layers. In experiments on standard
datasets, neural networks trained with the proposed method were found to be
promising and competitive with traditional backpropagation, in terms of both
solution quality and training speed.
Comment: 19 pages
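To make the alternation structure concrete, here is a generic alternating-minimization sketch for a one-hidden-layer network: freeze one layer while fitting the other, then swap. DANTE's actual inner solver exploits quasi-convexity; plain SGD is used below purely to illustrate the outer loop, and all names and hyper-parameters are assumptions for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randn(256, 1)  # toy regression data

w1 = nn.Linear(20, 32)   # hidden layer
w2 = nn.Linear(32, 1)    # output layer
loss_fn = nn.MSELoss()

def inner_steps(params, steps=50, lr=1e-2):
    # Optimize only `params`; the other layer stays fixed because
    # the optimizer never updates its weights.
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(w2(torch.sigmoid(w1(X))), y)
        loss.backward()
        opt.step()
    return loss.item()

for it in range(10):
    inner_steps(w2.parameters())          # fix hidden layer, fit output layer
    loss = inner_steps(w1.parameters())   # fix output layer, fit hidden layer
    print(f"outer iteration {it}: loss {loss:.4f}")
```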
A survey on modern trainable activation functions
In the neural network literature, there is strong interest in identifying and
defining activation functions that can improve neural network performance. In
recent years, the scientific community has shown renewed interest in
activation functions that can be trained during the learning process, usually
referred to as "trainable", "learnable", or "adaptable" activation functions;
they appear to lead to better network performance. Diverse and heterogeneous
models of trainable activation functions have been proposed in the literature.
In this paper, we present a survey of these models. Starting from a discussion
of the use of the term "activation function" in the literature, we propose a
taxonomy of trainable activation functions, highlight common and distinctive
properties of recent and past models, and discuss the main advantages and
limitations of this type of approach. We show that many of the proposed
approaches are equivalent to adding neuron layers that use fixed
(non-trainable) activation functions together with some simple local rule that
constrains the corresponding weight layers.
Comment: Published in "Neural Networks" journal (Elsevier)
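The survey's equivalence claim can be illustrated with PReLU, a well-known trainable activation: PReLU(x) = relu(x) - a * relu(-x), i.e. two units with a fixed ReLU activation whose outputs are combined by a linear layer with constrained weights (1, -a). Training the slope a is then equivalent to training that weight. This is an illustrative check, not an example taken from the survey itself.

```python
import torch
import torch.nn.functional as F

a = 0.25                          # PReLU negative-slope parameter
x = torch.linspace(-2.0, 2.0, 9)  # sample inputs

prelu = F.prelu(x, torch.tensor(a))
# Same function expressed as two fixed-ReLU units plus a constrained
# linear combination with weights (1, -a).
as_fixed_layers = torch.relu(x) - a * torch.relu(-x)

assert torch.allclose(prelu, as_fixed_layers)
```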