Neural Network Parameterizations of Electromagnetic Nucleon Form Factors
The electromagnetic nucleon form-factor data are studied with artificial feed-forward neural networks. As a result, unbiased, model-independent form-factor parametrizations are obtained together with their uncertainties. The Bayesian approach to neural networks is adapted to a chi2-like error function and applied to the data analysis. A sequence of feed-forward neural networks with one hidden layer of units is considered, where each network represents a particular form-factor parametrization. The so-called evidence (a measure of how much the data favor a given statistical model) is computed within the Bayesian framework and used to determine the best form-factor parametrization.
Comment: The revised version is divided into 4 sections. A discussion of the prior assumptions is added. The manuscript contains 4 new figures and 2 new tables (32 pages, 15 figures, 2 tables).
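To make the model-selection step concrete, below is a minimal sketch of the idea: one-hidden-layer networks of increasing size are fit to synthetic form-factor-like data by minimizing a chi2-like error, and a crude BIC-style approximation stands in for the log-evidence used to rank them. The synthetic dipole data, the network sizes, and the BIC stand-in are illustrative assumptions; the paper computes the full Bayesian evidence rather than this approximation.

```python
# Hedged sketch: fit one-hidden-layer feed-forward nets of increasing size to
# synthetic form-factor-like data by minimizing a chi2-like error, then rank
# them with a BIC-style approximation to the log-evidence. This is NOT the
# paper's exact Bayesian evidence computation; data and sizes are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic "measurements": a dipole-like form factor with Gaussian noise (assumption).
q2 = np.linspace(0.1, 10.0, 40)
sigma = 0.02 * np.ones_like(q2)
g_true = 1.0 / (1.0 + q2 / 0.71) ** 2
g_obs = g_true + rng.normal(0.0, sigma)

def unpack(params, n_hidden):
    """Split a flat parameter vector into the weights of a one-hidden-layer net."""
    w1 = params[:n_hidden]                      # input -> hidden weights
    b1 = params[n_hidden:2 * n_hidden]          # hidden biases
    w2 = params[2 * n_hidden:3 * n_hidden]      # hidden -> output weights
    b2 = params[3 * n_hidden]                   # output bias
    return w1, b1, w2, b2

def net(q2, params, n_hidden):
    """One-hidden-layer feed-forward net: the form-factor parametrization."""
    w1, b1, w2, b2 = unpack(params, n_hidden)
    hidden = np.tanh(np.outer(q2, w1) + b1)     # shape (n_points, n_hidden)
    return hidden @ w2 + b2

def chi2(params, n_hidden):
    """Chi2-like error function comparing the net to the data."""
    resid = (net(q2, params, n_hidden) - g_obs) / sigma
    return np.sum(resid ** 2)

for n_hidden in (1, 2, 3, 4):
    n_params = 3 * n_hidden + 1
    best = min(
        (minimize(chi2, rng.normal(0.0, 1.0, n_params), args=(n_hidden,))
         for _ in range(5)),                    # a few random restarts
        key=lambda r: r.fun,
    )
    # Crude BIC-style stand-in for the log-evidence: penalize extra parameters.
    log_ev_approx = -0.5 * best.fun - 0.5 * n_params * np.log(len(q2))
    print(f"hidden units: {n_hidden}  chi2: {best.fun:7.2f}  "
          f"approx log-evidence: {log_ev_approx:7.2f}")
```

The parametrization with the highest (approximate) log-evidence would be preferred, balancing fit quality against model complexity in the same spirit as the evidence criterion described in the abstract.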
An Exploration of Dropout with RNNs for Natural Language Inference
Dropout is a crucial regularization technique for Recurrent Neural Network (RNN) models of Natural Language Inference (NLI). However, the effectiveness of dropout at different layers and at different dropout rates has not been evaluated for NLI models. In this paper, we propose a novel RNN model for NLI and empirically evaluate the effect of applying dropout at different layers in the model. We also investigate the impact of varying the dropout rate at these layers. Our empirical evaluation on a large dataset (Stanford Natural Language Inference, SNLI) and a small one (SciTail) suggests that dropout at each feed-forward connection severely degrades model accuracy as the dropout rate increases. We also show that regularizing the embedding layer is effective for SNLI, whereas regularizing the recurrent layer improves accuracy for SciTail. Our model achieves an accuracy of 86.14% on the SNLI dataset and 77.05% on SciTail.
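As an illustration of where the three dropout sites sit in such a model, here is a minimal PyTorch sketch of a sentence-encoder NLI classifier with separate dropout modules at the embedding layer, on the recurrent encoder's output, and between the feed-forward connections, so each rate can be varied independently. The BiLSTM-with-max-pooling encoder, the hidden sizes, and the dropout rates are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch (PyTorch): a common sentence-encoder NLI design with separate
# dropout at the embedding, recurrent, and feed-forward layers, so each rate
# can be ablated independently as in the abstract. Encoder choice, sizes, and
# rates below are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class NLIModel(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, hidden=300, n_classes=3,
                 p_emb=0.1, p_rnn=0.1, p_ff=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.drop_emb = nn.Dropout(p_emb)        # dropout on the embedding layer
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.drop_rnn = nn.Dropout(p_rnn)        # dropout on the recurrent output
        self.drop_ff = nn.Dropout(p_ff)          # dropout on feed-forward connections
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden, hidden), nn.ReLU(), self.drop_ff,
            nn.Linear(hidden, n_classes),
        )

    def encode(self, tokens):
        x = self.drop_emb(self.embed(tokens))
        out, _ = self.encoder(x)                 # (batch, seq, 2*hidden)
        return self.drop_rnn(out.max(dim=1).values)  # max pooling over time

    def forward(self, premise, hypothesis):
        u, v = self.encode(premise), self.encode(hypothesis)
        features = torch.cat([u, v, (u - v).abs(), u * v], dim=-1)
        return self.classifier(features)

# Tiny smoke test with random token ids.
model = NLIModel()
logits = model(torch.randint(1, 20000, (4, 12)), torch.randint(1, 20000, (4, 10)))
print(logits.shape)  # torch.Size([4, 3])
```

Varying p_emb, p_rnn, and p_ff separately reproduces the kind of layer-wise dropout ablation the abstract describes.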