14 research outputs found
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Generative models for open domain question answering have proven to be competitive without resorting to external knowledge. While promising, this approach requires models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages that potentially contain evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method significantly improves as the number of retrieved passages increases. This is evidence that generative models are good at aggregating and combining evidence from multiple passages.
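One plausible reading of this aggregation step is a fusion-in-decoder arrangement: each retrieved passage is encoded together with the question independently, the encoder outputs are concatenated, and a single decoder attends over all of them. The sketch below illustrates that idea with a toy PyTorch model; it is not the authors' implementation, and the vocabulary size, model dimensions, passage count, and omission of positional encodings are simplifying assumptions.

```python
import torch
import torch.nn as nn

class FusionInDecoderSketch(nn.Module):
    """Toy encoder-decoder that fuses several retrieved passages (a sketch, not the paper's code)."""

    def __init__(self, vocab_size=32000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, passage_ids, answer_ids):
        # passage_ids: (n_passages, passage_len) token ids of "question + passage" pairs
        # answer_ids:  (1, answer_len) token ids of the answer generated so far
        memories = [self.encoder(self.embed(p.unsqueeze(0))) for p in passage_ids]
        memory = torch.cat(memories, dim=1)  # fuse all passages along the sequence dimension
        seq_len = answer_ids.size(1)
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(answer_ids), memory, tgt_mask=causal)
        return self.lm_head(hidden)  # next-token logits over the vocabulary

model = FusionInDecoderSketch()
passages = torch.randint(0, 32000, (10, 64))  # e.g. 10 retrieved passages of 64 tokens each
answer = torch.randint(0, 32000, (1, 8))
print(model(passages, answer).shape)  # torch.Size([1, 8, 32000])
```

Because passages are encoded independently and only fused in the decoder's cross-attention, adding passages grows the encoder side roughly linearly, which is consistent with the observation that accuracy keeps improving as more passages are retrieved.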
Distilling Knowledge from Reader to Retriever for Question Answering
The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks have recently obtained competitive results. A challenge of using such methods is obtaining supervised data to train the retriever model, in the form of pairs of queries and support documents. In this paper, we propose a technique, inspired by knowledge distillation, to learn retriever models for downstream tasks that does not require annotated pairs of queries and documents. Our approach leverages the attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
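A rough sketch of this distillation signal, under the assumption that the reader's cross-attention mass on each retrieved passage (aggregated over layers, heads, and tokens) acts as a soft relevance label for the retriever, could look as follows; the function below is illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(retriever_scores, reader_attention):
    """
    retriever_scores: (n_passages,) query-passage similarity scores from the retriever
    reader_attention: (n_passages,) non-negative cross-attention mass the reader put
                      on each passage (aggregated over layers, heads, and tokens)
    """
    target = reader_attention / reader_attention.sum()   # soft labels derived from the reader
    log_pred = F.log_softmax(retriever_scores, dim=-1)   # retriever's distribution over passages
    return F.kl_div(log_pred, target, reduction="sum")   # KL(target || prediction)

# Toy example: the reader attended mostly to passage 2, so the gradient pushes
# the retriever to rank passage 2 highest for this query.
scores = torch.tensor([1.2, 0.3, 2.0, -0.5], requires_grad=True)
attn = torch.tensor([0.10, 0.05, 0.80, 0.05])
loss = distillation_loss(scores, attn)
loss.backward()
print(loss.item())
```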
Autoregressive Entity Retrieval
Entities are at the center of how we represent and aggregate knowledge. For
instance, encyclopedias such as Wikipedia are structured by entities (e.g., one
per Wikipedia article). The ability to retrieve such entities given a query is
fundamental for knowledge-intensive tasks such as entity linking and
open-domain question answering. Current approaches can be understood as
classifiers among atomic labels, one for each entity. Their weight vectors are
dense entity representations produced by encoding entity meta information such
as their descriptions. This approach has several shortcomings: (i) context and
entity affinity is mainly captured through a vector dot product, potentially
missing fine-grained interactions; (ii) a large memory footprint is needed to
store dense representations when considering large entity sets; (iii) an
appropriately hard set of negative data has to be subsampled at training time.
In this work, we propose GENRE, the first system that retrieves entities by
generating their unique names, left to right, token-by-token in an
autoregressive fashion. This mitigates the aforementioned technical issues
since: (i) the autoregressive formulation directly captures relations between
context and entity name, effectively cross encoding both; (ii) the memory
footprint is greatly reduced because the parameters of our encoder-decoder
architecture scale with vocabulary size, not entity count; (iii) the softmax
loss is computed without subsampling negative data. We experiment with more
than 20 datasets on entity disambiguation, end-to-end entity linking and
document retrieval tasks, achieving new state-of-the-art or very competitive
results while using a tiny fraction of the memory footprint of competing
systems. Finally, we demonstrate that new entities can be added by simply
specifying their names. Code and pre-trained models are available at
https://github.com/facebookresearch/GENRE.
Comment: Accepted (spotlight) at International Conference on Learning Representations (ICLR) 2021. 20 pages, 9 figures, 8 tables.
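The constrained decoding that turns name generation into retrieval can be pictured as a prefix trie over all valid entity names: at every step, only tokens that can still be completed into a real name are allowed. The standalone sketch below is a simplified assumption of that idea (word-level tokens for readability, whereas the actual system works on subword tokens); the real implementation lives in the linked repository.

```python
class Trie:
    """Prefix trie over entity-name token sequences (a simplified sketch)."""

    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})
            node[None] = {}  # None marks that the prefix so far is a complete entity name

    def allowed(self, prefix):
        # Return the tokens that may follow `prefix`; None means ending here is allowed.
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return list(node.keys())

entities = [("Barack", "Obama"), ("Barack", "Obama", "Sr."), ("Michelle", "Obama")]
trie = Trie(entities)

print(trie.allowed(()))                   # ['Barack', 'Michelle']
print(trie.allowed(("Barack", "Obama")))  # [None, 'Sr.']: stop here or keep generating
```

In practice such a trie is plugged into beam search as an allowed-tokens callback (for instance via the prefix_allowed_tokens_fn hook of Hugging Face generate), so the decoder only ever scores continuations of valid entity names.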
NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned
We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing retrieval corpora and storing the parameters of learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.