Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs
Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex questions directly from textual sources on-the-fly, by computing similarity joins over partial results from different documents. Our method is completely unsupervised, avoiding training-data bottlenecks and coping with rapidly evolving ad hoc topics and formulation styles in user questions. QUEST builds a noisy quasi KG with node and edge weights, consisting of dynamically retrieved entity names and relational phrases. It augments this graph with types and semantic alignments, and computes the best answers by an algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex questions, and show that it substantially outperforms state-of-the-art baselines.
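To make the graph-based answering step concrete, here is a minimal sketch that connects question terminals in a small weighted evidence graph with a Steiner tree. Everything below (the toy triples, the weights, the use of networkx) is an illustrative assumption; networkx only ships an ordinary Steiner-tree approximation, whereas QUEST solves the more general Group Steiner Tree problem over its quasi KG.

```python
# Illustrative sketch, not QUEST's implementation: connect question
# terminals in a small weighted "quasi KG" with a Steiner-tree
# approximation; non-terminal nodes on the tree are answer candidates.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
# Nodes are entity names and relational phrases extracted from documents;
# lower edge weights stand in for stronger retrieval evidence (assumed).
G.add_edge("Stanley Kubrick", "directed", weight=0.2)
G.add_edge("directed", "A Clockwork Orange", weight=0.3)
G.add_edge("Stanley Kubrick", "born in", weight=0.4)
G.add_edge("born in", "New York", weight=0.5)

# Terminals: graph nodes matched to the question's entities and relations.
terminals = ["A Clockwork Orange", "born in"]
tree = steiner_tree(G, terminals, weight="weight")
print(sorted(tree.nodes()))  # nodes joining the terminals, incl. candidates
```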
Dublin City University at QA@CLEF 2008
We describe our participation in Multilingual Question Answering at CLEF 2008, using German and English as our source and target languages, respectively. The system was built using UIMA (Unstructured Information Management Architecture) as the underlying framework.
Cross-lingual Question Answering with QED
We present improvements and modifications of the QED open-domain question answering system developed for TREC-2003 to make it cross-lingual for participation in the Cross-Language Evaluation Forum (CLEF) Question Answering Track 2004, with French and German as source languages and English as the target language. We use rule-based question translation, extended with surface pattern-oriented pre- and post-processing rules for question reformulation, to create an English query from its French or German original. Our system uses deep processing for the question and answers, which requires efficient and radical prior search-space pruning. For answering factoid questions, we report an accuracy of 16% (German to English) and 20% (French to English), respectively.
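As a toy illustration of what surface-pattern question reformulation can look like, here is a small sketch that rewrites German questions into English queries with regular expressions. The two patterns and the fallback behaviour are invented for the example; QED's actual rule set and its pre- and post-processing are far richer.

```python
# Toy sketch of rule-based question translation via surface patterns.
# The patterns below are illustrative assumptions, not QED's rules.
import re

PATTERNS = [
    (re.compile(r"^Wer ist (.+)\?$"), r"Who is \1?"),
    (re.compile(r"^Wann wurde (.+) geboren\?$"), r"When was \1 born?"),
]

def translate(question):
    for pattern, template in PATTERNS:
        if pattern.match(question):
            return pattern.sub(template, question)
    return question  # fallback: pass unmatched questions through unchanged

print(translate("Wann wurde Goethe geboren?"))  # -> "When was Goethe born?"
```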
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering
We propose an unsupervised strategy for the selection of justification sentences for multi-hop question answering (QA) that (a) maximizes the relevance of the selected sentences, (b) minimizes the overlap between the selected facts, and (c) maximizes the coverage of both question and answer. This unsupervised sentence selection method can be coupled with any supervised QA approach. We show that the sentences selected by our method improve the performance of a state-of-the-art supervised QA model on two multi-hop QA datasets: AI2's Reasoning Challenge (ARC) and Multi-Sentence Reading Comprehension (MultiRC). We obtain new state-of-the-art performance on both datasets among approaches that do not use external resources for training the QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1% EM0 on MultiRC. Our justification sentences have higher quality than the justifications selected by a strong information retrieval baseline, e.g., by 5.4% F1 in MultiRC. We also show that our unsupervised selection of justification sentences is more stable across domains than a state-of-the-art supervised sentence selection method.
Comment: Published at EMNLP-IJCNLP 2019 as a long conference paper. Corrected the name reference for Speer et al., 2017.
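As a rough illustration of the three selection criteria above, the following sketch implements a greedy selector that trades off relevance, redundancy, and coverage using simple word-overlap counts. The scoring function, its equal weighting of the three terms, and the helper names are assumptions made for illustration; the paper's actual method may compute these quantities differently.

```python
# Hedged sketch of criteria (a)-(c) from the abstract: greedy selection of
# justification sentences by word overlap. Not the paper's exact scoring.
def tokens(text):
    """Lowercased bag of words; the paper's lexical units may differ."""
    return set(text.lower().split())

def select_justifications(question, answer, sentences, k=3):
    qa_terms = tokens(question) | tokens(answer)
    selected, covered = [], set()
    while len(selected) < min(k, len(sentences)):
        def score(sent):
            t = tokens(sent)
            relevance = len(t & qa_terms)                           # (a)
            redundancy = sum(len(t & tokens(s)) for s in selected)  # (b)
            new_coverage = len((t & qa_terms) - covered)            # (c)
            return relevance + new_coverage - redundancy
        remaining = [s for s in sentences if s not in selected]
        best = max(remaining, key=score)
        selected.append(best)
        covered |= tokens(best) & qa_terms
    return selected

sents = [
    "Bats are mammals that can fly.",
    "Bats use echolocation to navigate at night.",
    "Mammals are warm-blooded animals.",
]
print(select_justifications("How do bats navigate?", "echolocation", sents, k=2))
```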
Simple and Effective Multi-Paragraph Reading Comprehension
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well-calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement over the 56.7 F1 of the previous best system.
Comment: 11 pages, updated a reference.
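The shared-normalization objective can be made concrete with a short sketch: candidate-span scores from all sampled paragraphs of one document share a single softmax, so confidences remain comparable across paragraphs. The function and tensor names below are assumptions, and this is a minimal PyTorch sketch rather than the paper's training pipeline.

```python
# Minimal sketch of a shared-normalization objective (assumed names):
# span scores from several sampled paragraphs share a single softmax,
# so the model is pushed toward globally calibrated confidences.
import torch
import torch.nn.functional as F

def shared_norm_loss(paragraph_scores, gold_index):
    # paragraph_scores: list of 1-D tensors, one per sampled paragraph,
    # each holding that paragraph's candidate answer-span scores.
    all_scores = torch.cat(paragraph_scores)       # pool across paragraphs
    log_probs = F.log_softmax(all_scores, dim=0)   # one shared normalization
    return -log_probs[gold_index]                  # NLL of the gold span

# Two sampled paragraphs with three candidate spans each; the gold span
# is the second candidate of the second paragraph (index 4 after pooling).
p1 = torch.tensor([0.2, 1.5, -0.3])
p2 = torch.tensor([0.1, 2.0, 0.4])
print(shared_norm_loss([p1, p2], gold_index=4).item())
```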