A survey on modern trainable activation functions
In the neural networks literature, there is strong interest in identifying and
defining activation functions that can improve neural network performance. In
recent years there has been renewed interest from the scientific community in
investigating activation functions that can be trained during the learning
process, usually referred to as "trainable", "learnable" or "adaptable"
activation functions. They appear to lead to better network performance.
Diverse and heterogeneous models of trainable activation function have been
proposed in the literature. In this paper, we present a survey of these models.
Starting from a discussion on the use of the term "activation function" in
literature, we propose a taxonomy of trainable activation functions, highlight
common and distinctive properties of recent and past models, and discuss the
main advantages and limitations of this type of approach. We show that many of
the proposed approaches are equivalent to adding neuron layers which use fixed
(non-trainable) activation functions together with some simple local rule that
constrains the corresponding weight layers.
Comment: Published in "Neural Networks" journal (Elsevier).
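The equivalence claimed above can be illustrated with a concrete case not taken from the paper: PReLU, a common trainable activation with a learnable negative slope `a`, can be rewritten as a weighted combination of two fixed (non-trainable) ReLU units whose weights are constrained (one tied to 1, the other to -a). A minimal NumPy sketch of this view:

```python
import numpy as np

def relu(x):
    """Fixed, non-trainable activation."""
    return np.maximum(0.0, x)

def prelu(x, a):
    """Trainable activation: the negative slope `a` is a learned parameter."""
    return np.where(x > 0, x, a * x)

def prelu_as_fixed_units(x, a):
    """Same function expressed as a layer of two fixed ReLU units
    with constrained combination weights [1, -a]."""
    return 1.0 * relu(x) + (-a) * relu(-x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
assert np.allclose(prelu(x, 0.25), prelu_as_fixed_units(x, 0.25))
```

The two forms are identical pointwise; the "trainable activation" view keeps `a` inside the activation, while the equivalent view moves it into a constrained weight layer over fixed activations.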
Bayesian optimization for sparse neural networks with trainable activation functions
In the literature on deep neural networks, there is considerable interest in
developing activation functions that can enhance neural network performance. In
recent years, there has been renewed scientific interest in proposing
activation functions that can be trained throughout the learning process, as
they appear to improve network performance, especially by reducing overfitting.
In this paper, we propose a trainable activation function whose parameters need
to be estimated. A fully Bayesian model is developed to automatically estimate
from the learning data both the model weights and activation function
parameters. An MCMC-based optimization scheme is developed to build the
inference. The proposed method aims to address these problems and to improve
convergence time by using an efficient sampling scheme that guarantees
convergence to the global maximum. The proposed scheme is tested on three
datasets with three different CNNs. Promising results demonstrate the
usefulness of our proposed approach in improving model accuracy, owing to the
proposed activation function and the Bayesian estimation of its parameters.
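The core idea of MCMC-based estimation of an activation-function parameter can be sketched on a toy problem. The sketch below is an assumption-laden illustration, not the paper's model: it uses 1-D data, a PReLU-like activation with a single slope parameter `a`, a Gaussian likelihood with a flat prior, and plain random-walk Metropolis in place of the paper's sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated with a "true" activation slope of 0.3 (hypothetical setup).
true_a = 0.3
x = rng.normal(size=200)
y = np.where(x > 0, x, true_a * x) + rng.normal(scale=0.1, size=200)

def log_posterior(a):
    """Gaussian likelihood around the PReLU output; flat prior on `a`."""
    pred = np.where(x > 0, x, a * x)
    return -0.5 * np.sum((y - pred) ** 2) / 0.1 ** 2

# Random-walk Metropolis: sample the activation parameter `a`.
a = 1.0
lp = log_posterior(a)
samples = []
for _ in range(3000):
    prop = a + rng.normal(scale=0.05)       # symmetric proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        a, lp = prop, lp_prop
    samples.append(a)

estimate = np.mean(samples[1000:])  # posterior mean after burn-in
```

In the full method both the network weights and the activation parameters would be sampled jointly; this sketch isolates the activation parameter to show the mechanics of the inference step.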