Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives
This paper tackles the problem of reading comprehension over long narratives
where documents easily span over thousands of tokens. We propose a curriculum
learning (CL) based Pointer-Generator framework for reading/sampling over large
documents, enabling diverse training of the neural model based on the notion of
alternating contextual difficulty. This can be interpreted as a form of
domain randomization and/or generative pretraining. To this end, the
usage of the Pointer-Generator softens the requirement of having the answer
within the context, enabling us to construct diverse training samples for
learning. Additionally, we propose a new Introspective Alignment Layer (IAL),
which reasons over decomposed alignments using block-based self-attention. We
evaluate our proposed method on the NarrativeQA reading comprehension
benchmark, achieving state-of-the-art performance and improving over existing
baselines by a relative margin on both BLEU-4 and Rouge-L. Extensive ablations
confirm the effectiveness of our proposed IAL and CL components.
Comment: Accepted to ACL 2019
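The abstract describes the IAL only at a high level. As a rough illustration of its block-based self-attention idea, the following PyTorch sketch aligns the context against the query and then self-attends within fixed-size blocks; the class name, dimensions, and the alignment-fusion step are all assumptions made for illustration, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class IntrospectiveAlignmentSketch(nn.Module):
    # Hypothetical sketch: align the context with the query, then run
    # self-attention within fixed-size blocks of the aligned sequence.
    def __init__(self, d_model: int = 128, block_size: int = 64):
        super().__init__()
        self.block_size = block_size
        # Fuses the comparison features back down to d_model.
        self.proj = nn.Linear(4 * d_model, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, ctx, qry):
        # ctx: (B, Lc, d) context tokens; qry: (B, Lq, d) query tokens.
        scores = torch.matmul(ctx, qry.transpose(1, 2))         # (B, Lc, Lq)
        aligned = torch.matmul(F.softmax(scores, dim=-1), qry)  # query as seen from ctx
        # "Decomposed alignments": compare ctx against its aligned view.
        fused = self.proj(torch.cat(
            [ctx, aligned, ctx - aligned, ctx * aligned], dim=-1))
        # Block-based self-attention: attend only within local blocks so the
        # cost stays roughly linear in document length for long narratives.
        B, L, d = fused.shape
        pad = (-L) % self.block_size
        x = F.pad(fused, (0, 0, 0, pad))
        nb = x.shape[1] // self.block_size
        x = x.reshape(B * nb, self.block_size, d)
        out, _ = self.attn(x, x, x)
        return out.reshape(B, nb * self.block_size, d)[:, :L]

Restricting attention to local blocks is what makes self-attention affordable over documents spanning thousands of tokens, which is the regime this paper targets.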
A Fully Attention-Based Information Retriever
Recurrent neural networks are now the state-of-the-art in natural language
processing because they can build rich contextual representations and process
texts of arbitrary length. However, recent developments in attention mechanisms
have equipped feedforward networks with similar capabilities while enabling
faster computation, since more of their operations can be parallelized. We
explore this new type of architecture in the domain of
question-answering and propose a novel approach that we call Fully Attention
Based Information Retriever (FABIR). We show that FABIR achieves competitive
results in the Stanford Question Answering Dataset (SQuAD) while having fewer
parameters and being faster at both learning and inference than rival methods.
Comment: Accepted for presentation at the International Joint Conference on
Neural Networks (IJCNN) 2018
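To make the contrast with recurrence concrete, here is a minimal self-attention encoder with a SQuAD-style span head. It is a generic Transformer-style stand-in assumed for illustration, not FABIR's actual architecture; all names and hyperparameters are hypothetical.

import torch
import torch.nn as nn

class AttentionOnlyReader(nn.Module):
    # Generic RNN-free reader: stacked self-attention blocks followed by
    # start/end span heads for extractive QA (assumed, not FABIR's design).
    def __init__(self, d_model: int = 128, n_layers: int = 2, n_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.start = nn.Linear(d_model, 1)
        self.end = nn.Linear(d_model, 1)

    def forward(self, passage):
        # passage: (B, L, d) token embeddings; all positions are processed in
        # parallel, unlike an RNN's sequential recurrence over the text.
        h = self.encoder(passage)
        return self.start(h).squeeze(-1), self.end(h).squeeze(-1)

Because every token attends to every other token in a single parallel step, no hidden state must be threaded through the sequence; this is the parallelism advantage over RNNs that the abstract describes.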