Sequential Attention: A Context-Aware Alignment Function for Machine Reading
In this paper we propose a neural network model with a novel Sequential
Attention layer that extends soft attention by assigning weights to words in an
input sequence in a way that takes into account not just how well that word
matches a query, but how well surrounding words match. We evaluate this
approach on the task of reading comprehension (on the Who did What and CNN
datasets) and show that it dramatically improves a strong baseline--the
Stanford Reader--and is competitive with the state of the art.
Comment: To appear in ACL 2017 2nd Workshop on Representation Learning for NLP. Contains additional experiments in section 4 and a revised Figure.
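The mechanism described in the abstract can be illustrated with a minimal sketch: standard soft attention weights each word only by its own query-match score, while sequential attention also folds in the scores of neighbouring words. The fixed-window averaging below is an illustrative stand-in for the paper's learned context layer, not the actual model.

```python
import numpy as np

def soft_attention(scores):
    # Standard soft attention: weights depend only on each word's own match score.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def sequential_attention(scores, window=1):
    # Illustrative sketch: mix each word's score with its neighbours' scores
    # before the softmax, so a word surrounded by strong matches is weighted
    # up. (The paper learns this context mixing; a fixed moving average is
    # used here only for illustration.)
    n = len(scores)
    smoothed = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        smoothed[i] = scores[lo:hi].mean()
    return soft_attention(smoothed)

# Per-word query-match scores: indices 1-2 form a cluster of strong matches,
# index 5 is an equally strong but isolated match.
scores = np.array([0.1, 2.0, 2.0, 0.1, 0.1, 2.0, 0.1])
print(sequential_attention(scores))
```

Under plain soft attention the scores at indices 1 and 5 get identical weights; the sequential variant gives more weight to the match whose neighbours also match.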
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation.
Comment: 10 pages, 1 figure, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL201
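The cross-genre evaluation setting the abstract mentions amounts to grouping examples by genre so that a model can be trained on some genres and tested on held-out ones. A minimal sketch, assuming a MultiNLI-style JSONL layout (the field names sentence1, sentence2, gold_label, and genre are assumptions based on common NLI corpus conventions):

```python
import json

# Hypothetical MultiNLI-style JSONL records; field names are assumptions,
# not taken from the corpus's actual distribution format.
lines = [
    '{"sentence1": "A man is playing a guitar.", "sentence2": "A person plays music.", "gold_label": "entailment", "genre": "fiction"}',
    '{"sentence1": "Two dogs run in a park.", "sentence2": "The animals are asleep.", "gold_label": "contradiction", "genre": "travel"}',
]

def split_by_genre(jsonl_lines):
    # Group examples by genre to set up a cross-genre (domain adaptation)
    # evaluation: train on a subset of genres, test on the held-out ones.
    by_genre = {}
    for line in jsonl_lines:
        ex = json.loads(line)
        by_genre.setdefault(ex["genre"], []).append(ex)
    return by_genre

groups = split_by_genre(lines)
print(sorted(groups))  # prints the distinct genres in the sample
```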
- …