A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Recurrent neural networks (RNNs) stand at the forefront of many recent
developments in deep learning. Yet a major difficulty with these models is
their tendency to overfit, with dropout shown to fail when applied to recurrent
layers. Recent results at the intersection of Bayesian modelling and deep
learning offer a Bayesian interpretation of common deep learning techniques
such as dropout. This grounding of dropout in approximate Bayesian inference
suggests an extension of the theoretical results, offering insights into the
use of dropout with RNN models. We apply this new variational-inference-based
dropout technique in LSTM and GRU models, assessing it on language modelling
and sentiment analysis tasks. The new approach outperforms existing techniques,
and to the best of our knowledge improves on the single model state-of-the-art
in language modelling with the Penn Treebank (73.4 test perplexity). This
extends our arsenal of variational tools in deep learning.

Comment: Added clarifications; Published in NIPS 2016
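To make the core idea concrete, below is a minimal PyTorch sketch of variational (per-sequence) dropout on an LSTM: a single dropout mask is sampled once per sequence and reused at every time step, on both the inputs and the recurrent state, rather than resampled per step as in standard dropout. The class name and hyperparameters are illustrative only; the paper's full method additionally drops out word embeddings and applies masks per gate, which this sketch omits.

```python
import torch
import torch.nn as nn

class VariationalDropoutLSTM(nn.Module):
    """Illustrative sketch: one dropout mask per sequence, reused at
    every time step for both inputs and the recurrent hidden state."""

    def __init__(self, input_size, hidden_size, dropout=0.5):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.hidden_size = hidden_size
        self.dropout = dropout

    def forward(self, x):
        # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        h = x.new_zeros(batch, self.hidden_size)
        c = x.new_zeros(batch, self.hidden_size)
        if self.training:
            # Sample each mask once per sequence; reusing it across
            # time steps is what distinguishes variational dropout
            # from naive per-step dropout, which fails on RNNs.
            keep = 1.0 - self.dropout
            mask_x = x.new_empty(batch, x.size(2)).bernoulli_(keep) / keep
            mask_h = x.new_empty(batch, self.hidden_size).bernoulli_(keep) / keep
        else:
            mask_x = mask_h = 1.0
        outputs = []
        for t in range(seq_len):
            h, c = self.cell(x[t] * mask_x, (h * mask_h, c))
            outputs.append(h)
        return torch.stack(outputs), (h, c)
```

In this reading of the abstract, the Bayesian grounding is what justifies tying the masks across time: each sampled mask corresponds to one draw of the approximate posterior over weights, so it must stay fixed for the whole sequence.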