Where to Apply Dropout in Recurrent Neural Networks for Handwriting Recognition?
Abstract-The dropout technique is a data-driven regularization method for neural networks. It consists of randomly setting some activations of a given hidden layer to zero during training. Repeating the procedure for each training example is equivalent to sampling a network from an exponential number of architectures that share weights. The goal of dropout is to prevent feature detectors from relying on each other. Dropout has been applied successfully to deep MLPs and to convolutional neural networks for various speech recognition and computer vision tasks. We recently proposed a way to use dropout in MDLSTM-RNNs for handwritten word and line recognition. In this paper, we show that further improvement can be achieved by implementing dropout differently, more specifically by applying it at better positions relative to the LSTM units.
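As a concrete illustration of the procedure described in this abstract, here is a minimal NumPy sketch of standard (inverted) dropout, in which a fresh binary mask is sampled for every training example. Function and parameter names are illustrative assumptions, and the sketch does not show the paper's actual contribution of where to place dropout relative to the LSTM units.

```python
import numpy as np

def dropout(h, p_drop=0.5, training=True, rng=None):
    """Inverted dropout: each unit is zeroed with probability p_drop and the
    survivors are rescaled so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return h
    rng = rng or np.random.default_rng()
    mask = rng.random(h.shape) >= p_drop   # a fresh mask for every call
    return h * mask / (1.0 - p_drop)

# Applying it to a batch of hidden-layer activations:
h = np.random.randn(4, 128)                       # (batch, hidden units)
h_train = dropout(h, p_drop=0.5, training=True)   # stochastic during training
h_eval  = dropout(h, p_drop=0.5, training=False)  # identity at test time
```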
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Recurrent neural networks (RNNs) stand at the forefront of many recent
developments in deep learning. Yet a major difficulty with these models is
their tendency to overfit, with dropout shown to fail when applied to recurrent
layers. Recent results at the intersection of Bayesian modelling and deep
learning offer a Bayesian interpretation of common deep learning techniques
such as dropout. This grounding of dropout in approximate Bayesian inference
suggests an extension of the theoretical results, offering insights into the
use of dropout with RNN models. We apply this new variational inference based
dropout technique in LSTM and GRU models, assessing it on language modelling
and sentiment analysis tasks. The new approach outperforms existing techniques,
and to the best of our knowledge improves on the single model state-of-the-art
in language modelling with the Penn Treebank (73.4 test perplexity). This
extends our arsenal of variational tools in deep learning.
Comment: Added clarifications; Published in NIPS 2016.
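The following is a minimal NumPy sketch of the key idea behind this variational dropout scheme: a single dropout mask is sampled once per sequence and reused at every time step, instead of being resampled per step. The tanh step function, weights, and parameter names are placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def rnn_with_variational_dropout(xs, h0, step_fn, p_drop=0.3, rng=None):
    """Run a recurrent step function over a sequence, multiplying the hidden
    state by one dropout mask sampled per sequence and reused at every step."""
    rng = rng or np.random.default_rng()
    keep = (rng.random(h0.shape) >= p_drop) / (1.0 - p_drop)  # one mask per sequence
    h, outputs = h0, []
    for x_t in xs:                     # xs: (time, batch, features)
        h = step_fn(x_t, h * keep)     # same mask at every time step
        outputs.append(h)
    return np.stack(outputs), h

# A minimal tanh RNN cell stands in for an LSTM/GRU; weights are random placeholders.
rng = np.random.default_rng(0)
W_x, W_h = rng.normal(size=(16, 32)), rng.normal(size=(32, 32))
step = lambda x, h: np.tanh(x @ W_x + h @ W_h)

xs = rng.normal(size=(10, 4, 16))      # (time, batch, features)
out, h_T = rnn_with_variational_dropout(xs, np.zeros((4, 32)), step, p_drop=0.3, rng=rng)
```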
Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization
An efficient algorithm for recurrent neural network training is presented. The approach increases training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and on data parallelization across multiple graphics processing units. The baseline training performance without sequence bucketing is compared with the proposed solution for different numbers of buckets. An example is given for the online handwriting recognition task using an LSTM recurrent neural network. The evaluation is performed in terms of wall clock time, number of epochs, and validation loss value.
Comment: 4 pages, 5 figures; 2016 IEEE First International Conference on Data Stream Mining & Processing (DSMP), Lviv, 2016.
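Below is a minimal Python sketch of the bucketing idea only: sequences of similar length are grouped together so that padding inside each mini-batch stays small. The bucket width, helper names, and toy data are assumptions for illustration; the paper's optimal bucket selection and multi-GPU data parallelization are not shown.

```python
from collections import defaultdict

def bucket_by_length(sequences, bucket_width=10):
    """Group variable-length sequences into buckets of similar length."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq) // bucket_width].append(seq)
    return [buckets[k] for k in sorted(buckets)]

def padded_batches(bucket, batch_size, pad_value=0):
    """Yield mini-batches from one bucket, padded to the batch's longest sequence."""
    for i in range(0, len(bucket), batch_size):
        chunk = bucket[i:i + batch_size]
        max_len = max(len(s) for s in chunk)
        yield [s + [pad_value] * (max_len - len(s)) for s in chunk]

# Toy sequences of varying length (e.g. pen-stroke feature sequences):
data = [[1] * n for n in (3, 5, 12, 14, 27, 30)]
for bucket in bucket_by_length(data, bucket_width=10):
    for batch in padded_batches(bucket, batch_size=2):
        print([len(s) for s in batch])   # padded lengths per batch
```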