2 research outputs found
Stabilizing Spiking Neuron Training
Stability arguments are often used to prevent learning algorithms from developing
ever-increasing activity and weights that hinder generalization. However,
stability conditions can clash with the sparsity required to improve the energy
efficiency of spiking neurons; nonetheless, they can also provide solutions. In
fact, spiking Neuromorphic Computing uses binary activity to improve the energy
efficiency of Artificial Intelligence. However, its non-smoothness requires
approximate gradients, known as Surrogate Gradients (SG), to close the
performance gap with Deep Learning. Several SG have been proposed in the
literature, but it remains unclear how to determine the best SG for a given
task and network. Thus, we aim to define the best SG theoretically, through
stability arguments, to reduce the need for grid search. In fact, we show that
more complex tasks and networks require a more careful choice of SG, even though
overall the derivative of the fast sigmoid tends to outperform the others for a
wide range of learning rates. We therefore design a stability-based theoretical
method to choose the initialization and SG shape before training for the most
common spiking neuron, the Leaky Integrate-and-Fire (LIF). Since our stability
method suggests the use of high firing rates at initialization, which is
non-standard in the neuromorphic literature, we show that high initial firing
rates, combined with a sparsity-encouraging loss term introduced gradually, can
lead to better generalization, depending on the SG shape. Our stability-based
theoretical solution finds an SG and initialization that experimentally result
in improved accuracy. We show how it can be used to reduce the need for an
extensive grid search over the dampening, sharpness, and tail-fatness of the SG.
We also show that our stability concepts can be extended to apply to different
LIF variants, such as DECOLLE and fluctuations-driven initializations.
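As a minimal sketch of the surrogate-gradient idea described above (not the paper's implementation): the LIF spike is a non-differentiable Heaviside step, so the backward pass replaces its derivative with the derivative of the fast sigmoid. The `beta` (sharpness) value, threshold, and leak factor below are illustrative assumptions.

```python
import numpy as np

def fast_sigmoid_surrogate(v, theta=1.0, beta=10.0):
    """Derivative of the fast sigmoid, used as a surrogate for the
    Heaviside spike derivative at membrane potential v.
    beta plays the role of the SG sharpness (illustrative value)."""
    return 1.0 / (beta * np.abs(v - theta) + 1.0) ** 2

def lif_step(v, x, w, alpha=0.9, theta=1.0):
    """One discrete-time LIF update: leak, integrate input, spike, reset."""
    v = alpha * v + w * x            # leaky integration
    s = (v >= theta).astype(float)   # Heaviside spike (non-differentiable)
    v = v * (1.0 - s)                # reset the membrane after a spike
    return v, s

# Forward pass is binary; a backward pass would use the surrogate instead
# of the (zero almost everywhere) true derivative of the step function.
v, s = lif_step(v=np.array([0.5]), x=np.array([1.0]), w=0.8)
grad = fast_sigmoid_surrogate(np.array([0.5 * 0.9 + 0.8]))
```

In an autodiff framework this surrogate would be registered as the custom backward of the spike function; here it is evaluated explicitly at the pre-reset membrane potential.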
On the initialization of long short-term memory networks
Weight initialization is important for the convergence speed and stability of
deep neural network training. In this paper, a robust initialization method is
developed to address the training instability in long short-term memory (LSTM)
networks. It is based on a normalized random initialization of the network
weights that aims at preserving the variance of the network input and output in
the same range. The method is applied to standard LSTMs for univariate time
series regression and to LSTMs robust to missing values for multivariate
disease progression modeling. The results show that in all cases, the proposed
initialization method outperforms the state-of-the-art initialization
techniques in terms of training convergence and generalization performance of
the obtained solution.
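A minimal sketch of the variance-preserving idea behind such a normalized initialization (the 1/fan_in scaling rule is an illustrative assumption, not the paper's exact formula):

```python
import numpy as np

def normalized_init(fan_in, fan_out, rng=None):
    """Draw weights from N(0, 1/fan_in) so that, for roughly unit-variance
    inputs, the pre-activation variance stays in the same range as the
    input variance (illustrative scaling, not the paper's exact scheme)."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))

# Variance check: the linear map keeps the output variance near the
# input variance (~1), instead of letting it grow or shrink with width.
W = normalized_init(256, 256)
x = np.random.default_rng(1).normal(size=(1000, 256))
y = x @ W
out_var = float(y.var())  # close to 1.0
```

For an LSTM, the same rule would be applied to each of the input and recurrent weight matrices of the four gates.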