Understanding Roles and Entities: Datasets and Models for Natural Language Inference
We present two new datasets and a novel attention mechanism for Natural
Language Inference (NLI). Existing neural NLI models, even when trained
on large existing datasets, do not capture the notions of entity and role well
and often make mistakes such as inferring "Peter signed a deal"
from "John signed a deal". The two datasets have been developed to mitigate
such issues and make systems better at understanding the notions of
"entities" and "roles". After training the existing architectures on the new
datasets, we observe that they do not perform well on one
of the new benchmarks. We then propose a modification to the "word-to-word"
attention function which has been uniformly reused across several popular NLI
architectures. The resulting architectures perform as well as their unmodified
counterparts on the existing benchmarks and perform significantly better on the
new benchmark for "roles" and "entities".
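The abstract does not spell out the proposed modification, but the "word-to-word" attention function it refers to is the soft-alignment step shared by popular NLI models (e.g., decomposable attention, ESIM). A minimal sketch of that baseline function, in PyTorch, with illustrative names (premise, hypothesis); this is the unmodified version the paper builds on, not the paper's own variant:

import torch
import torch.nn.functional as F

def word_to_word_attention(premise, hypothesis):
    """Standard word-to-word attention reused across NLI architectures.

    premise:    (batch, len_p, dim) encoded premise tokens
    hypothesis: (batch, len_h, dim) encoded hypothesis tokens
    """
    # Similarity score between every premise token and every hypothesis token.
    scores = torch.bmm(premise, hypothesis.transpose(1, 2))  # (batch, len_p, len_h)
    # Soft-align each premise token to the hypothesis ...
    attended_hypothesis = torch.bmm(F.softmax(scores, dim=2), hypothesis)
    # ... and each hypothesis token to the premise.
    attended_premise = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), premise)
    return attended_hypothesis, attended_premise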
Learned in Translation: Contextualized Word Vectors
Computer vision has benefited from initializing multiple deep layers with
weights pretrained on large supervised training sets like ImageNet. Natural
language processing (NLP) typically sees initialization of only the lowest
layer of deep models with pretrained word vectors. In this paper, we use a deep
LSTM encoder from an attentional sequence-to-sequence model trained for machine
translation (MT) to contextualize word vectors. We show that adding these
context vectors (CoVe) improves performance over using only unsupervised word
and character vectors on a wide variety of common NLP tasks: sentiment analysis
(SST, IMDb), question classification (TREC), entailment (SNLI), and question
answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe
improves performance of our baseline models to the state of the art.
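The core idea is that CoVe vectors are the outputs of the pretrained MT encoder applied to GloVe embeddings, and downstream task models consume the concatenation of the two. A minimal sketch in PyTorch, assuming a two-layer bidirectional LSTM encoder as in the paper; the encoder here is randomly initialized as a stand-in, whereas in practice its weights would be loaded from the pretrained MT model:

import torch
import torch.nn as nn

# Stand-in for the pretrained encoder: in the paper this is the two-layer
# bidirectional LSTM encoder of an attentional seq2seq translation model.
mt_encoder = nn.LSTM(input_size=300, hidden_size=300,
                     num_layers=2, bidirectional=True, batch_first=True)

def contextualize(glove_embeddings):
    """Compute CoVe(w) = MT-LSTM(GloVe(w)) and concatenate with GloVe.

    glove_embeddings: (batch, seq_len, 300) pretrained GloVe vectors
    returns:          (batch, seq_len, 900) [GloVe; CoVe] per token
    """
    cove, _ = mt_encoder(glove_embeddings)  # (batch, seq_len, 600)
    # Task-specific models take the static and contextualized vectors together.
    return torch.cat([glove_embeddings, cove], dim=2)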