Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization
Training RNNs to learn long-term dependencies is difficult due to vanishing
gradients. We explore an alternative solution based on explicit memorization
using linear autoencoders for sequences, which makes it possible to maximize
short-term memory and which admits a closed-form solution that requires no
backpropagation. We introduce an initialization scheme that pretrains the
weights of a recurrent neural network to approximate the linear autoencoder of
the input sequences, and we show how such pretraining can better support solving
hard classification tasks with long sequences. We test our approach on
sequential and permuted MNIST. We show that the proposed approach achieves a
much lower reconstruction error for long sequences and better gradient
propagation during the fine-tuning phase.

Comment: Accepted at the NeurIPS 2020 workshop "Beyond Backpropagation: Novel
Ideas for Training Neural Architectures".
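As a concrete illustration of the closed-form solution the abstract alludes to, here is a minimal NumPy sketch, assuming the standard SVD-based linear autoencoder for sequences from the pre-training literature (a data matrix of reversed, zero-padded prefixes, with encoder weights read off the right singular vectors). The function and variable names are illustrative, not the authors' code.

import numpy as np

def linear_autoencoder_init(X, p):
    """Closed-form linear autoencoder for one sequence X of shape (T, d).

    Row t of the data matrix Xi stacks the reversed prefix
    x_t, x_{t-1}, ..., x_1 (zero-padded to length T*d). A rank-p
    truncated SVD Xi = V S U^T then yields encoder weights
      A = U^T P  (input-to-hidden),  B = U^T R U  (hidden-to-hidden),
    so that y_t = A x_t + B y_{t-1} approximately encodes the prefix.
    """
    T, d = X.shape
    Xi = np.zeros((T, T * d))
    for t in range(T):
        Xi[t, : (t + 1) * d] = X[t::-1].reshape(-1)  # x_t, ..., x_1
    _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
    U = Vt[:p].T                                # (T*d, p), orthonormal cols
    P = np.zeros((T * d, d))
    P[:d] = np.eye(d)                           # picks out x_t
    R = np.zeros((T * d, T * d))
    R[d:, :-d] = np.eye((T - 1) * d)            # shifts the stack one step
    return U.T @ P, U.T @ R @ U                 # A: (p, d), B: (p, p)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))                # one sequence: T=20, d=3
A, B = linear_autoencoder_init(X, p=8)          # p <= T hidden units
y = np.zeros(8)
for x in X:                                     # the linear recurrence
    y = A @ x + B @ y                           # y_t = A x_t + B y_{t-1}

The resulting A and B would then serve as the initialization of the RNN's input-to-hidden and recurrent weights before fine-tuning on the downstream task.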
Complex Unitary Recurrent Neural Networks using Scaled Cayley Transform
Recurrent neural networks (RNNs) have been successfully used on a wide range
of sequential data problems. A well-known difficulty in using RNNs is the
vanishing or exploding gradient problem. Recently, there have been
several different RNN architectures that try to mitigate this issue by
maintaining an orthogonal or unitary recurrent weight matrix. One such
architecture is the scaled Cayley orthogonal recurrent neural network (scoRNN)
which parameterizes the orthogonal recurrent weight matrix through a scaled
Cayley transform. This parametrization contains a diagonal scaling matrix
consisting of ±1 entries that cannot be optimized by
gradient descent. Thus, the scaling matrix is fixed before training, and a
hyperparameter is introduced to tune the matrix for each particular task. In
this paper, we develop a unitary RNN architecture based on a complex scaled
Cayley transform. Unlike the real orthogonal case, the transformation uses a
diagonal scaling matrix whose entries lie on the complex unit circle; these
entries can be optimized by gradient descent, so the parametrization no longer
requires tuning a hyperparameter. We also provide an analysis of a potential
issue with the modReLU activation function, which is used in our work and in
several other unitary RNNs.
In the experiments conducted, the scaled Cayley unitary recurrent neural
network (scuRNN) achieves comparable or better results than scoRNN and other
unitary RNNs without fixing the scaling matrix.
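For reference, a minimal NumPy sketch of the two ingredients the abstract names: the complex scaled Cayley transform, assuming its usual form W = (I + A)^{-1}(I - A)D with A skew-Hermitian and D diagonal with unit-modulus entries, and the modReLU activation. The eps guard at z = 0 is one common workaround for the singularity the abstract hints at, not necessarily the paper's own analysis.

import numpy as np

def scaled_cayley_unitary(A, theta):
    """Scaled Cayley transform W = (I + A)^{-1} (I - A) D.

    With A skew-Hermitian (A^H = -A) and D = diag(exp(i*theta)) on the
    complex unit circle, W is unitary. In scuRNN the free entries of A
    and the angles theta are both trained by gradient descent.
    """
    n = A.shape[0]
    I = np.eye(n)
    D = np.diag(np.exp(1j * theta))
    return np.linalg.solve(I + A, I - A) @ D

def modrelu(z, b, eps=1e-8):
    """modReLU: pass |z| through ReLU(|z| + b) while keeping the phase z/|z|.

    The phase is undefined at z = 0; the eps guard below is one common
    workaround for that singularity.
    """
    mag = np.abs(z)
    return np.maximum(mag + b, 0.0) * z / (mag + eps)

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M - M.conj().T) / 2                    # skew-Hermitian part of M
theta = rng.uniform(0.0, 2 * np.pi, n)      # trainable scaling angles
W = scaled_cayley_unitary(A, theta)
assert np.allclose(W.conj().T @ W, np.eye(n))   # W is unitary
h = modrelu(W @ rng.standard_normal(n).astype(complex), b=-0.1)

Because theta parameterizes D smoothly, gradient descent can adjust the scaling matrix directly, which is the stated advantage over the fixed ±1 diagonal in scoRNN.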