Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
In this paper, we present a two-stage model for multi-hop question answering.
The first stage is a hierarchical graph network, which is used to reason over
the multi-hop question and can capture different levels of granularity using
the natural structure of documents (i.e., paragraphs, questions, sentences,
and entities). The reasoning process is cast as a node classification task
(i.e., over paragraph nodes and sentence nodes). The second stage is a
language-model fine-tuning task. In short, stage one uses a graph neural
network to select and concatenate supporting sentences into one paragraph,
and stage two finds the answer span in the language-model fine-tuning
paradigm.
Comment: the experimental results are not as good as I expected
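
A minimal sketch of the two-stage pipeline described above, under illustrative
assumptions: the module names, dimensions, and the single mean-aggregation
message-passing layer are placeholders, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class SentenceSelector(nn.Module):
        # Stage one (illustrative): classify sentence nodes as supporting or not.
        def __init__(self, dim=128):
            super().__init__()
            self.message = nn.Linear(dim, dim)   # one round of message passing
            self.classify = nn.Linear(dim, 2)    # node-classification head

        def forward(self, node_feats, adj):
            # node_feats: (num_nodes, dim); adj: (num_nodes, num_nodes) float 0/1 matrix
            neighbours = adj @ node_feats / adj.sum(-1, keepdim=True).clamp(min=1)
            h = torch.relu(node_feats + self.message(neighbours))
            return self.classify(h)              # per-node logits

    def build_stage_two_input(sentences, logits):
        # Concatenate the sentences predicted as supporting into one paragraph.
        keep = logits.argmax(-1).bool().tolist()
        return " ".join(s for s, k in zip(sentences, keep) if k)

Stage two would then feed the concatenated paragraph and the question to a
span-extraction reader (e.g., a BERT-style model) fine-tuned to predict the
answer's start and end positions.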
A Span-Extraction Dataset for Chinese Machine Reading Comprehension
Machine Reading Comprehension (MRC) has become enormously popular recently
and has attracted a lot of attention. However, the existing reading
comprehension datasets are mostly in English. In this paper, we introduce a
Span-Extraction dataset for Chinese machine reading comprehension to add
linguistic diversity to this area. The dataset is composed of nearly 20,000
real questions annotated on Wikipedia paragraphs by human experts. We also
annotated a challenge set containing questions that require comprehensive
understanding and multi-sentence inference over the entire context. We present
several baseline systems, as well as anonymous submissions, to demonstrate the
difficulty of this dataset. With the release of the dataset, we hosted the
Second Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC
2018). We hope the release of the dataset will further accelerate Chinese
machine reading comprehension research. Resources are available at:
https://github.com/ymcui/cmrc2018
Comment: 6 pages, accepted as a conference paper at EMNLP-IJCNLP 2019 (short
paper)
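
A hedged sketch of reading CMRC 2018 examples, assuming the released JSON
follows a SQuAD-style layout (data -> paragraphs -> qas); the file name and
exact field names should be checked against the repository above.

    import json

    def iter_examples(path):
        with open(path, encoding="utf-8") as f:
            dataset = json.load(f)
        for article in dataset["data"]:
            for para in article["paragraphs"]:
                context = para["context"]
                for qa in para["qas"]:
                    answer = qa["answers"][0]  # gold span: text plus character offset
                    yield {
                        "question": qa["question"],
                        "context": context,
                        "answer_text": answer["text"],
                        "answer_start": answer["answer_start"],
                    }

    # for example in iter_examples("cmrc2018_train.json"):  # hypothetical file name
    #     ...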
XTQA: Span-Level Explanations of the Textbook Question Answering
Textbook Question Answering (TQA) is the task of answering a diagram or
non-diagram question given a large multi-modal context consisting of abundant
essays and diagrams. We argue that explainability in this task should treat
students as a key consideration. To address this issue, we devise a novel
architecture for span-level eXplanations of TQA (XTQA) based on our proposed
coarse-to-fine-grained algorithm, which provides students not only the answers
but also the span-level evidence for choosing them. The algorithm first
coarsely chooses the top paragraphs relevant to a question using the TF-IDF
method, and then finely chooses the top evidence spans from all candidate
spans within these paragraphs by computing the information gain of each span
with respect to the question. Experimental results show that XTQA
significantly improves state-of-the-art performance compared with the
baselines. The source code is available at
https://github.com/keep-smile-001/opentqa
Comment: 10 pages
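
A hedged sketch of the coarse-to-fine evidence selection described above. The
TF-IDF paragraph ranking follows the abstract; the span-scoring step uses
cosine similarity to the question as a stand-in for the paper's
information-gain criterion, whose exact form the abstract does not give, and
the top_p/span_len/top_s parameters are illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def select_evidence(question, paragraphs, top_p=3, span_len=20, top_s=2):
        vec = TfidfVectorizer().fit(paragraphs + [question])
        q = vec.transform([question])

        # Coarse step: rank paragraphs by TF-IDF similarity to the question.
        p_scores = cosine_similarity(vec.transform(paragraphs), q).ravel()
        top_paras = [paragraphs[i] for i in p_scores.argsort()[::-1][:top_p]]

        # Fine step: enumerate fixed-length candidate spans inside the chosen
        # paragraphs and keep the highest-scoring ones.
        spans = []
        for para in top_paras:
            words = para.split()
            for i in range(max(len(words) - span_len + 1, 1)):
                spans.append(" ".join(words[i:i + span_len]))
        s_scores = cosine_similarity(vec.transform(spans), q).ravel()
        return [spans[i] for i in s_scores.argsort()[::-1][:top_s]]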
Tri-Attention: Explicit Context-Aware Attention Mechanism for Natural Language Processing
In natural language processing (NLP), the context of a word or sentence plays
an essential role. Contextual information, such as the semantic representation
of a passage or the dialogue history, is integral to a conversation and to a
precise understanding of the present phrase or sentence. However, despite
their great success in modeling sequence alignment, standard attention
mechanisms typically generate weights using only the query and key while
ignoring context, forming a Bi-Attention framework. This Bi-Attention
mechanism does not explicitly model the interactions between the contexts,
queries, and keys of target sequences, missing important contextual
information and resulting in poor attention performance. Accordingly, a novel
and general triple-attention (Tri-Attention) framework expands the standard
Bi-Attention mechanism and explicitly interacts query, key, and context by
incorporating context as a third dimension in calculating relevance scores.
Four variants of Tri-Attention are generated by expanding the two-dimensional
vector-based additive, dot-product, scaled dot-product, and bilinear
operations in Bi-Attention to tensor operations for Tri-Attention. Extensive
experiments on three NLP tasks demonstrate that Tri-Attention outperforms
about 30 state-of-the-art non-attention, standard Bi-Attention, and contextual
Bi-Attention approaches, as well as pretrained neural language models.
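
A hedged sketch of one possible scaled dot-product Tri-Attention variant,
where a context vector modulates the query-key interaction element-wise before
the dot product; this is an illustrative reading of "context as a third
dimension", not necessarily the paper's exact formulation.

    import math
    import torch

    def tri_attention(query, key, value, context):
        # query: (B, Lq, D); key, value: (B, Lk, D); context: (B, D)
        d = query.size(-1)
        # score[b, i, j] = sum_d query[b, i, d] * context[b, d] * key[b, j, d] / sqrt(D)
        scores = torch.einsum("bid,bd,bjd->bij", query, context, key) / math.sqrt(d)
        weights = torch.softmax(scores, dim=-1)        # attend over keys
        return torch.einsum("bij,bjd->bid", weights, value)

    # q, k, v = torch.randn(2, 5, 64), torch.randn(2, 7, 64), torch.randn(2, 7, 64)
    # c = torch.randn(2, 64)
    # out = tri_attention(q, k, v, c)                  # shape (2, 5, 64)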