Story Ending Generation with Incremental Encoding and Commonsense Knowledge
Generating a reasonable ending for a given story context, i.e., story ending
generation, is a strong indication of story comprehension. This task requires
not only understanding the context clues, which play an important role in
planning the plot, but also handling implicit knowledge to produce a reasonable,
coherent story.
In this paper, we devise a novel model for story ending generation. The model
adopts an incremental encoding scheme to represent the context clues that span
the story context. In addition, commonsense knowledge is applied through
multi-source attention to facilitate story comprehension, and thus to
help generate coherent and reasonable endings. By building context clues
and using implicit knowledge, the model is able to produce reasonable story
endings.
Automatic and manual evaluation shows that our model can generate more
reasonable story endings than state-of-the-art baselines.
Comment: Accepted in AAAI201
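The multi-source attention the abstract describes can be illustrated with a minimal, hypothetical sketch: a decoder state attends separately to context-clue representations and to retrieved commonsense knowledge vectors, and the two attention summaries are combined. The shapes, the dot-product scoring, and the concatenation rule below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attend(query, keys):
    """Dot-product attention: softmax-weighted sum of keys given a query."""
    scores = keys @ query                    # one score per source vector, shape (n,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ keys                    # weighted summary, shape (d,)

rng = np.random.default_rng(0)
d = 8
decoder_state = rng.normal(size=d)
context_states = rng.normal(size=(5, d))     # stand-ins for incremental encoder outputs
knowledge_vecs = rng.normal(size=(3, d))     # stand-ins for retrieved commonsense entries

c_ctx = attend(decoder_state, context_states)
c_kg = attend(decoder_state, knowledge_vecs)
combined = np.concatenate([c_ctx, c_kg])     # combined context fed to the next decoder step
print(combined.shape)  # (16,)
```

The key idea is that context clues and external knowledge are attended to as separate sources, so each can influence the generated ending independently.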
A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics, and Benchmark Datasets
Machine Reading Comprehension (MRC) is a challenging NLP research field with
wide real-world applications. The great progress of this field in recent years
is mainly due to the emergence of large-scale datasets and deep learning. At
present, many MRC models have already surpassed human performance on many
datasets, despite the obvious gap between existing MRC models and genuine
human-level reading comprehension. This shows the need to improve existing
datasets, evaluation metrics, and models to move MRC models toward 'real'
understanding. To address the lack of a comprehensive survey of existing MRC
tasks, evaluation metrics, and datasets, in this paper (1) we analyze 57 MRC
tasks and datasets and propose a more precise classification method for MRC
tasks based on 4 different attributes; (2) we summarize 9 evaluation metrics of
MRC tasks; (3) we describe 7 attributes and 10 characteristics of MRC datasets;
and (4) we discuss some open issues in MRC research and highlight some future
research directions. In addition, to help the community, we have collected,
organized, and published our data on a companion website
(https://mrc-datasets.github.io/) where MRC researchers can directly access
each MRC dataset, related papers, and baseline projects, and browse the
leaderboard.
Comment: 59 page
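The survey summarizes 9 evaluation metrics for MRC tasks. As a concrete sketch, two metrics commonly used for span-extraction MRC (SQuAD-style exact match and token-level F1) can be implemented as below; the function names and normalization choices are illustrative, not taken from the survey itself.

```python
def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the case- and whitespace-normalized strings are equal, else 0.0."""
    norm = lambda s: " ".join(s.lower().split())
    return float(norm(prediction) == norm(reference))

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    ref_counts: dict[str, int] = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred_tokens:
        if ref_counts.get(t, 0) > 0:   # count overlapping tokens with multiplicity
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the cat", "The cat"))              # 1.0
print(round(token_f1("the black cat", "the cat"), 2)) # 0.8
```

Exact match rewards only a perfect answer string, while token F1 gives partial credit for overlapping answer spans, which is why the two are usually reported together.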
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
We present ATOMIC, an atlas of everyday commonsense reasoning, organized
through 877k textual descriptions of inferential knowledge. Compared to
existing resources that center around taxonomic knowledge, ATOMIC focuses on
inferential knowledge organized as typed if-then relations with variables
(e.g., "if X pays Y a compliment, then Y will likely return the compliment").
We propose nine if-then relation types to distinguish causes vs. effects,
agents vs. themes, voluntary vs. involuntary events, and actions vs. mental
states. By generatively training on the rich inferential knowledge described in
ATOMIC, we show that neural models can acquire simple commonsense capabilities
and reason about previously unseen events. Experimental results demonstrate
that multitask models that incorporate the hierarchical structure of if-then
relation types lead to more accurate inference compared to models trained in
isolation, as measured by both automatic and human evaluation.
Comment: AAAI 2019 C
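The typed if-then relations with variables that ATOMIC organizes can be sketched as a simple data structure. The relation names below (xIntent, oEffect) do appear among ATOMIC's nine relation types, but the stored triples and the lookup helper are invented here purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IfThenTriple:
    event: str      # base event with variables, e.g. "PersonX pays PersonY a compliment"
    relation: str   # one of the typed if-then relations
    inference: str  # free-text inferential knowledge

# Illustrative entries modeled on the abstract's example
atlas = [
    IfThenTriple("PersonX pays PersonY a compliment", "xIntent",
                 "to be nice"),
    IfThenTriple("PersonX pays PersonY a compliment", "oEffect",
                 "PersonY will likely return the compliment"),
]

def inferences_for(event: str, relation: str, triples):
    """Return all inferences stored for a given event and relation type."""
    return [t.inference for t in triples
            if t.event == event and t.relation == relation]

print(inferences_for("PersonX pays PersonY a compliment", "oEffect", atlas))
# ['PersonY will likely return the compliment']
```

Typing the relations is what lets a model distinguish, say, an agent's intent (xIntent) from an effect on another participant (oEffect) for the same base event.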