Incorporating Relation Knowledge into Commonsense Reading Comprehension with Multi-task Learning
This paper focuses on how to take advantage of external relational knowledge
to improve machine reading comprehension (MRC) with multi-task learning. Most
traditional MRC methods assume that the knowledge needed to derive the correct
answer is contained in the given documents. In real-world tasks, however, part
of that knowledge may not be mentioned, and machines should be equipped with
the ability to leverage external knowledge. In this paper, we integrate
relational knowledge into an MRC model for commonsense reasoning.
Specifically, on top of a pre-trained language model (LM), we design two
auxiliary relation-aware tasks that predict whether a commonsense relation
exists between two words and, if so, what its type is, in order to better
model the interactions between the document and the candidate answer options.
We conduct
experiments on two multi-choice benchmark datasets: the SemEval-2018 Task 11
and the Story Cloze Test. The experimental results demonstrate the
effectiveness of the proposed method, which outperforms comparable baselines
on both datasets.
Comment: Accepted at CIKM'19, 4 pages.
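
To make the multi-task setup concrete, the following is a minimal PyTorch sketch of the idea: a shared pretrained encoder with a main answer-scoring head and two auxiliary heads that predict whether a commonsense relation holds between a document word and an option word, and which relation type it is. All class names, dimensions, and the token-pairing scheme are illustrative assumptions, not the authors' implementation.

    # Minimal PyTorch sketch of the multi-task idea (all names and the
    # token-pairing scheme are illustrative assumptions, not the paper's code).
    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class RelationAwareMRC(nn.Module):
        def __init__(self, model_name="bert-base-uncased", num_relation_types=34):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)  # shared pretrained LM
            hidden = self.encoder.config.hidden_size
            self.answer_head = nn.Linear(hidden, 1)           # main task: score one option
            self.rel_exist_head = nn.Linear(2 * hidden, 2)    # aux task 1: relation exists?
            self.rel_type_head = nn.Linear(2 * hidden, num_relation_types)  # aux task 2: type

        def forward(self, input_ids, attention_mask, doc_idx, opt_idx):
            # Encode the concatenated (document, question, option) sequence.
            h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            option_score = self.answer_head(h[:, 0])          # [CLS] token scores the option
            # Pair one document token with one option token for the auxiliary tasks.
            batch = torch.arange(h.size(0))
            pair = torch.cat([h[batch, doc_idx], h[batch, opt_idx]], dim=-1)
            return option_score, self.rel_exist_head(pair), self.rel_type_head(pair)

    # Multi-task training sums the main loss with weighted auxiliary losses, e.g.
    # loss = ce(scores, y) + w1 * ce(exist_logits, y_exist) + w2 * ce(type_logits, y_type)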
Multi-Perspective Fusion Network for Commonsense Reading Comprehension
Commonsense Reading Comprehension (CRC) is a particularly challenging task:
it aims at choosing the right answer to a question about a narrative passage,
which may require commonsense knowledge inference. Most existing approaches
fuse the interaction information of the choice, passage, and question in a
simple combination manner, from a \emph{union} perspective only, and therefore
lack deeper comparison information. Instead, we propose a Multi-Perspective
Fusion Network (MPFN), extending this single fusion method with multiple
perspectives by introducing \emph{difference} and \emph{similarity} fusion.
More
comprehensive and accurate information can be captured through the three types
of fusion. We design several groups of experiments on the MCScript dataset
\cite{Ostermann:LREC18:MCScript} to evaluate the effectiveness of each of the
three types of fusion. From the experimental results, we conclude that
difference fusion is comparable with union fusion, and that similarity fusion
needs to be activated by union fusion. The results also show that our MPFN
model achieves state-of-the-art performance with an accuracy of 83.52\% on
the official test set.
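
As a rough illustration of the three perspectives, the sketch below treats union fusion as concatenation, difference fusion as element-wise subtraction, and similarity fusion as element-wise product over two aligned representation vectors; these concrete operators are a common convention and an assumption on our part, not a quotation of the MPFN architecture.

    # Sketch of the three fusion perspectives over two aligned vectors u and v
    # (choosing concat / subtraction / product for union / difference / similarity
    # is a common convention and an assumption here, not the exact MPFN design).
    import torch
    import torch.nn as nn

    class MultiPerspectiveFusion(nn.Module):
        def __init__(self, hidden):
            super().__init__()
            self.union_proj = nn.Linear(2 * hidden, hidden)  # union view: [u; v]
            self.diff_proj = nn.Linear(hidden, hidden)       # difference view: u - v
            self.sim_proj = nn.Linear(hidden, hidden)        # similarity view: u * v

        def forward(self, u, v):
            union = torch.tanh(self.union_proj(torch.cat([u, v], dim=-1)))
            diff = torch.tanh(self.diff_proj(u - v))
            sim = torch.tanh(self.sim_proj(u * v))
            # All three perspectives are concatenated before the answer scorer.
            return torch.cat([union, diff, sim], dim=-1)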
Teaching Pretrained Models with Commonsense Reasoning: A Preliminary KB-Based Approach
Recently, pretrained language models (e.g., BERT) have achieved great success
on many downstream natural language understanding tasks and exhibit a certain
level of commonsense reasoning ability. However, their performance on
commonsense tasks is still far from that of humans. As a preliminary attempt,
we propose a simple yet effective method to teach pretrained models with
commonsense reasoning by leveraging the structured knowledge in ConceptNet, the
largest commonsense knowledge base (KB). Specifically, the structured knowledge
in the KB allows us to construct various logical forms and then generate
multiple-choice questions requiring commonsense logical reasoning. Experimental
results demonstrate that, when fine-tuned on these training examples, the
pretrained models consistently improve their performance on tasks that require
commonsense reasoning, especially in the few-shot learning setting. In
addition, we perform an analysis to understand which logical relations are
most relevant to commonsense reasoning.
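
The generation pipeline can be pictured as turning KB triples into natural-language questions with sampled distractors. The sketch below is a heavily simplified assumption of that pipeline: the templates, triples, and distractor-sampling strategy are invented for illustration, while the paper derives richer logical forms from ConceptNet.

    # Simplified sketch of turning KB triples into multiple-choice examples.
    # Templates, triples, and the distractor-sampling strategy are invented for
    # illustration; the paper builds richer logical forms from ConceptNet.
    import random

    TEMPLATES = {
        "UsedFor": "What is a {head} used for?",
        "AtLocation": "Where would you find a {head}?",
    }

    def make_mcq(triple, candidate_tails, num_distractors=3):
        head, relation, tail = triple
        question = TEMPLATES[relation].format(head=head)
        # Sample wrong answers from the tails of other triples.
        distractors = random.sample(
            [t for t in candidate_tails if t != tail], num_distractors)
        options = distractors + [tail]
        random.shuffle(options)
        return question, options, options.index(tail)

    tails = ["a library", "sleeping", "a garage", "flying"]
    q, options, answer_idx = make_mcq(("knife", "UsedFor", "cutting"), tails)
    # e.g. q == "What is a knife used for?", and options contains "cutting" once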