Sensitivity - Local Index to Control Chaoticity or Gradient Globally -
Here, we introduce a fully local index named "sensitivity" for each neuron to
control chaoticity or gradient globally in a neural network (NN). We also
propose a learning method, named "sensitivity adjustment learning" (SAL), to
adjust it. The index is the gradient magnitude of a neuron's output with
respect to its inputs. By adjusting its time average to 1.0 in each neuron,
information transmission through the neuron becomes moderate, neither
shrinking nor expanding, in both the forward and backward computations. This
results in moderate information transmission through a layer of neurons when
the weights and inputs are random. Therefore, SAL can control the chaoticity
of the network dynamics in a recurrent NN (RNN). It can also solve the
vanishing gradient problem in error backpropagation (BP) learning in a deep
feedforward NN or an RNN.
We demonstrate that when SAL is applied to an RNN with small random initial
weights, the log-sensitivity, defined as the logarithm of the RMS (root mean
square) sensitivity over all neurons, is equivalent to the maximum Lyapunov
exponent until it reaches 0.0. We also show that SAL works with BP or BPTT (BP
through time) to avoid the vanishing gradient problem in a 300-layer NN or in
an RNN that learns a problem with a lag of 300 steps between the first input
and the output. Compared with manually fine-tuning the spectral radius of the
weight matrix before learning, SAL's continuous, nonlinear learning nature
prevents the loss of sensitivities during learning, resulting in a significant
improvement in learning performance.

Comment: 26 pages, 20 figures