Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering
We propose an unsupervised strategy for the selection of justification
sentences for multi-hop question answering (QA) that (a) maximizes the
relevance of the selected sentences, (b) minimizes the overlap between the
selected facts, and (c) maximizes the coverage of both question and answer.
This unsupervised sentence selection method can be coupled with any supervised
QA approach. We show that the sentences selected by our method improve the
performance of a state-of-the-art supervised QA model on two multi-hop QA
datasets: AI2's Reasoning Challenge (ARC) and Multi-Sentence Reading
Comprehension (MultiRC). We obtain new state-of-the-art performance on both
datasets among approaches that do not use external resources for training the
QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1%
EM0 on MultiRC. Our justification sentences have higher quality than the
justifications selected by a strong information retrieval baseline, e.g., by
5.4% F1 in MultiRC. We also show that our unsupervised selection of
justification sentences is more stable across domains than a state-of-the-art
supervised sentence selection method.
Comment: Published at EMNLP-IJCNLP 2019 as a long conference paper. Corrected
the name reference for Speer et al., 201
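The three criteria above can be illustrated with a minimal greedy sketch: score each candidate sentence by its token overlap with the question and answer (relevance and coverage) minus its overlap with already-selected sentences (redundancy). The function names and the exact scoring formula are illustrative assumptions, not the paper's actual selection criterion.

```python
# Greedy sketch of unsupervised justification selection.
# Scoring is a toy approximation: reward overlap with question+answer,
# reward newly covered target tokens, penalize overlap with prior picks.

def tokens(text):
    return set(text.lower().split())

def select_justifications(question, answer, sentences, k=2):
    target = tokens(question) | tokens(answer)
    selected, covered = [], set()
    for _ in range(k):
        best, best_score = None, float("-inf")
        for s in sentences:
            if s in selected:
                continue
            t = tokens(s)
            relevance = len(t & target)                  # (a) relevance to Q+A
            redundancy = len(t & covered)                # (b) overlap with picks
            new_coverage = len((t & target) - covered)   # (c) coverage gain
            score = relevance + new_coverage - redundancy
            if score > best_score:
                best, best_score = s, score
        if best is None:
            break
        selected.append(best)
        covered |= tokens(best) & target
    return selected
```

Stacking the redundancy penalty on top of plain relevance is what pushes the picks toward complementary sentences rather than near-duplicates of the top retrieval hit.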
Knowledge-enhanced Iterative Instruction Generation and Reasoning for Knowledge Base Question Answering
Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer
entity in a knowledge base that is several hops away from the topic entity
mentioned in the question. Existing Retrieval-based approaches first generate
instructions from the question and then use them to guide the multi-hop
reasoning on the knowledge graph. As the instructions are fixed during the
whole reasoning procedure and the knowledge graph is not considered in
instruction generation, the model cannot revise its mistake once it predicts an
intermediate entity incorrectly. To address this, we propose KBIGER (Knowledge
Base Iterative Instruction GEnerating and Reasoning), a novel and efficient
approach that generates instructions dynamically with the help of the reasoning
graph. Instead of generating all the instructions before reasoning, we take the
(k-1)-th reasoning graph into account when building the k-th instruction. In
this way, the model can check the prediction against the graph and generate new
instructions to revise incorrect predictions of intermediate entities. We
conduct experiments on two multi-hop KBQA benchmarks and outperform the
existing approaches, establishing a new state of the art. Further experiments
show that our method does detect incorrect predictions of intermediate
entities and is able to revise such errors.
Comment: Accepted by NLPCC 2022 (oral
NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation
In this paper, we propose a Chinese multi-turn topic-driven conversation
dataset, NaturalConv, which allows participants to chat about anything they
want as long as some element of the topic is mentioned and the topic shift is
smooth. Our corpus contains 19.9K conversations from six domains, and 400K
utterances with an average turn number of 20.1. These conversations contain
in-depth discussions of related topics or natural transitions between
multiple topics. We believe either way is normal for human conversation. To
facilitate the research on this corpus, we provide results of several benchmark
models. Comparative results show that, on this dataset, our current models are
not able to achieve significant improvements by incorporating background
knowledge or topic information. Therefore, the proposed dataset should be a
good benchmark for
further research to evaluate the validity and naturalness of multi-turn
conversation systems. Our dataset is available at
https://ai.tencent.com/ailab/nlp/dialogue/#datasets.
Comment: Accepted as a main track paper at AAAI 202
QAGCN: A Graph Convolutional Network-based Multi-Relation Question Answering System
Answering multi-relation questions over knowledge graphs is a challenging task, as it requires multi-step reasoning over a huge number of possible paths. Reasoning-based methods with complex reasoning mechanisms, such as reinforcement learning-based sequential decision making, have been regarded as the default pathway for this task. However, these mechanisms are difficult to implement and train, which hampers their reproducibility and transferability to new domains. In this paper, we propose QAGCN - a simple yet effective novel model that leverages attentional graph convolutional networks to perform multi-step reasoning during the encoding of knowledge graphs. As a consequence, complex reasoning mechanisms are avoided. In addition, to improve efficiency, we retrieve answers using highly efficient embedding computations and, for better interpretability, we extract interpretable paths for returned answers. On widely adopted benchmark datasets, the proposed model is demonstrated to be competitive with state-of-the-art methods that rely on complex reasoning mechanisms. We also conduct extensive experiments to scrutinize the efficiency and the contribution of each component of our model.
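A minimal sketch of one question-conditioned attentional message-passing step, the kind of layer such a model could stack so that L layers approximate L-hop reasoning. The propagation rule, names, and shapes below are assumptions for illustration, not QAGCN's exact layer.

```python
import numpy as np

def attentional_gcn_layer(H, edges, rel_emb, q, W):
    """One message-passing step over a KG: each incoming edge's message
    is weighted by a softmax over how well its relation embedding
    matches the question vector q. Illustrative sketch only.

    H: (n, d) node states; edges: list of (src, relation, dst);
    rel_emb: relation name -> (d,) vector; q: (d,) question vector;
    W: (d, d) weight matrix."""
    n, d = H.shape
    out = np.zeros_like(H)
    for v in range(n):
        incoming = [(u, r) for (u, r, t) in edges if t == v]
        if not incoming:
            out[v] = H[v]  # no messages: keep the node state as-is
            continue
        # question-conditioned attention over incoming edges
        scores = np.array([rel_emb[r] @ q for (u, r) in incoming])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()
        msg = sum(a * H[u] for a, (u, r) in zip(alpha, incoming))
        out[v] = np.tanh(W @ (H[v] + msg))
    return out
```

Because the attention weights depend on q, each stacked layer propagates most strongly along relations relevant to the question, which is how message passing alone can stand in for an explicit sequential reasoner.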