Context Generation Improves Open Domain Question Answering
Closed-book question answering (QA) requires a model to directly answer an
open-domain question without access to any external knowledge. Prior work on
closed-book QA either directly finetunes or prompts a pretrained language model
(LM) to leverage the stored knowledge. However, these approaches do not fully
exploit the parameterized knowledge. To address this, we propose a two-stage,
closed-book QA framework which employs a coarse-to-fine approach to extract
relevant knowledge and answer a question. Our approach first generates a
related context for a given question by prompting a pretrained LM. We then
prompt the same LM for answer prediction using the generated context and the
question. Additionally, to eliminate failures caused by context uncertainty, we
marginalize over generated contexts. Experimental results on three QA
benchmarks show that our method significantly outperforms previous closed-book
QA methods (e.g., exact match 68.6% vs. 55.3%), and is on par with open-book
methods that exploit external knowledge sources (e.g., 68.6% vs. 68.0%). Our
method is able to better exploit the stored knowledge in pretrained LMs without
adding extra learnable parameters or needing finetuning, and paves the way for
hybrid models that integrate pretrained LMs with external knowledge.
Comment: 8 pages; Accepted at EACL 2023
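To make the two-stage, coarse-to-fine procedure concrete, here is a minimal Python sketch. The `lm` interface, the prompt templates, and the majority vote over sampled answers are illustrative assumptions; the paper marginalizes over generated contexts in probability rather than by voting.

```python
import random
from collections import Counter
from typing import Callable, List

# Hypothetical LM interface: (prompt, n) -> n sampled completions.
# Swap in any real generation API; this signature is an assumption.
LM = Callable[[str, int], List[str]]

def closed_book_answer(question: str, lm: LM, n_contexts: int = 8) -> str:
    # Stage 1 (coarse): prompt the LM to generate passages relevant to the question.
    contexts = lm(
        f"Generate a background passage for the question.\n"
        f"Question: {question}\nPassage:",
        n_contexts,
    )
    # Stage 2 (fine): prompt the same LM for an answer conditioned on each context.
    answers = [
        lm(f"Context: {ctx}\nQuestion: {question}\nAnswer:", 1)[0].strip()
        for ctx in contexts
    ]
    # Marginalize over generated contexts: majority voting over sampled
    # answers is a cheap proxy for summing answer probabilities across contexts.
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in LM so the sketch runs end to end.
def dummy_lm(prompt: str, n: int) -> List[str]:
    return [f"sample-{random.randint(0, 2)}" for _ in range(n)]

print(closed_book_answer("Who wrote Hamlet?", dummy_lm))
```

Note that neither stage updates any parameters: both are inference-time prompts to the same frozen LM, which is what keeps the framework free of extra learnable parameters and finetuning.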
Detrimental Contexts in Open-Domain Question Answering
For knowledge-intensive NLP tasks, it is widely accepted that access to more
information improves a model's end-to-end performance. However,
counter-intuitively, too much context can have
a negative impact on the model when evaluated on common question answering (QA)
datasets. In this paper, we analyze how passages can have a detrimental effect
on retrieve-then-read architectures used in question answering. Our empirical
evidence indicates that current reader architectures do not fully leverage
the retrieved passages: performance degrades significantly when the reader
consumes the full set of retrieved passages rather than a subset of them. Our findings
demonstrate that model accuracy can be improved by 10% on two popular QA
datasets by filtering out detrimental passages. Additionally, these outcomes
are attained by utilizing existing retrieval methods without further training
or data. We further highlight the challenges associated with identifying the
detrimental passages. First, even with the correct context, the model can make
an incorrect prediction, posing a challenge in determining which passages are
most influential. Second, evaluation typically relies on lexical matching,
which is not robust to surface variations of correct answers. Despite these
limitations, our experimental results underscore the pivotal role of
identifying and removing these detrimental passages for the context-efficient
retrieve-then-read pipeline. Code and data are available at
https://github.com/xfactlab/emnlp2023-damaging-retrieval
Comment: Findings of EMNLP 2023
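As one way to make the filtering idea concrete, the sketch below uses a greedy leave-one-out heuristic: a passage is dropped if removing it does not lower the reader's confidence. The `Reader` interface and the toy reader are hypothetical stand-ins, and this reader-in-the-loop heuristic differs from the paper's approach, which filters using existing retrieval methods without further training or data.

```python
from typing import Callable, List, Tuple

# Hypothetical reader interface: (question, passages) -> (answer, confidence).
Reader = Callable[[str, List[str]], Tuple[str, float]]

def drop_detrimental(question: str, passages: List[str], reader: Reader) -> List[str]:
    """Greedily drop any passage whose removal does not hurt (or even
    raises) the reader's confidence, i.e. a detrimental passage."""
    kept = list(passages)
    _, best = reader(question, kept)
    for passage in list(kept):
        trial = [p for p in kept if p is not passage]
        if not trial:
            break
        _, score = reader(question, trial)
        if score >= best:  # the passage contributed nothing, or hurt
            kept, best = trial, score
    return kept

# Toy reader: confidence = fraction of passages mentioning the gold token.
def toy_reader(question: str, passages: List[str]) -> Tuple[str, float]:
    hits = sum("paris" in p.lower() for p in passages)
    return "Paris", hits / max(len(passages), 1)

docs = ["Paris is the capital of France.", "Lyon is a city in France."]
print(drop_detrimental("What is the capital of France?", docs, toy_reader))
# -> ['Paris is the capital of France.']
```

This probe costs one extra reader call per passage; the paper instead attains its gains with existing retrieval methods, avoiding such additional reader computation.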