Incorporating Structured Commonsense Knowledge in Story Completion
The ability to select an appropriate story ending is the first step towards
perfect narrative comprehension. Story ending prediction requires not only the
explicit clues within the context, but also the implicit knowledge (such as
commonsense) to construct a reasonable and consistent story. However, most
previous approaches do not explicitly use background commonsense knowledge. We
present a neural story ending selection model that integrates three types of
information: narrative sequence, sentiment evolution and commonsense knowledge.
Experiments show that our model outperforms state-of-the-art approaches on a
public dataset, the ROCStory Cloze Task, and that the performance gain from
adding commonsense knowledge is significant.
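As a rough illustration only (not the architecture from this paper), the
sketch below shows one way a selection model could combine narrative,
sentiment, and commonsense features into a single plausibility score for each
candidate ending; every module name, dimension, and feature here is a
hypothetical placeholder.

```python
import torch
import torch.nn as nn

class EndingSelector(nn.Module):
    """Toy sketch: score a candidate ending by fusing three feature views
    (narrative sequence, sentiment evolution, commonsense knowledge)."""

    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        # Hypothetical per-view projections; the paper's encoders differ.
        self.narrative_enc = nn.Linear(hidden_dim, hidden_dim)
        self.sentiment_enc = nn.Linear(hidden_dim, hidden_dim)
        self.commonsense_enc = nn.Linear(hidden_dim, hidden_dim)
        self.scorer = nn.Linear(3 * hidden_dim, 1)

    def forward(self, narrative_feats, sentiment_feats, commonsense_feats):
        # Concatenate the three views and map them to a scalar score.
        fused = torch.cat([
            torch.relu(self.narrative_enc(narrative_feats)),
            torch.relu(self.sentiment_enc(sentiment_feats)),
            torch.relu(self.commonsense_enc(commonsense_feats)),
        ], dim=-1)
        return self.scorer(fused).squeeze(-1)

# Usage: score two candidate endings and select the more plausible one.
model = EndingSelector()
feats = [torch.randn(2, 128) for _ in range(3)]  # features for 2 candidates
scores = model(*feats)
best_ending = scores.argmax().item()
```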
CARE: Commonsense-Aware Emotional Response Generation with Latent Concepts
Rationality and emotion are two fundamental elements of humans. Endowing
agents with rationality and emotion has been one of the major milestones in AI.
However, in the field of conversational AI, most existing models only
specialize in one aspect and neglect the other, which often leads to dull or
unrelated responses. In this paper, we hypothesize that combining rationality
and emotion into conversational agents can improve response quality. To test
the hypothesis, we focus on one fundamental aspect of rationality, i.e.,
commonsense, and propose CARE, a novel model for commonsense-aware emotional
response generation. Specifically, we first propose a framework to learn and
construct commonsense-aware emotional latent concepts of the response given an
input message and a desired emotion. We then propose three methods to
collaboratively incorporate the latent concepts into response generation.
Experimental results on two large-scale datasets support our hypothesis and
show that our model can produce more accurate and commonsense-aware emotional
responses and achieve better human ratings than state-of-the-art models that
only specialize in one aspect.
Comment: AAAI-202
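A minimal sketch of the general idea (not the CARE framework itself): take
commonsense-graph neighbours of the message keywords, keep those matching the
desired emotion as latent concepts, and bias generation toward them. The toy
graph, lexicon, and function names below are all assumptions for illustration.

```python
# Toy sketch, not the CARE model: derive emotion-filtered commonsense concepts
# for a response and use them to bias generation. All data below is made up.
commonsense_graph = {            # hypothetical CKG: word -> related concepts
    "exam": ["study", "stress", "pass"],
    "pass": ["celebrate", "relief"],
}
emotion_lexicon = {"joy": {"celebrate", "relief", "pass"},
                   "sadness": {"stress", "fail"}}

def latent_concepts(message_keywords, emotion):
    """Graph neighbours of the message keywords that match the desired emotion."""
    candidates = set()
    for word in message_keywords:
        candidates.update(commonsense_graph.get(word, []))
    return candidates & emotion_lexicon.get(emotion, set())

def bias_logits(logits, vocab, concepts, bonus=2.0):
    """One simple way to incorporate concepts: boost their generation scores."""
    return {token: score + (bonus if token in concepts else 0.0)
            for token, score in zip(vocab, logits)}

print(latent_concepts({"exam", "pass"}, "joy"))  # e.g. {'pass', 'celebrate', 'relief'}
```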
Story Ending Generation with Incremental Encoding and Commonsense Knowledge
Generating a reasonable ending for a given story context, i.e., story ending
generation, is a strong indication of story comprehension. This task requires
not only understanding the context clues, which play an important role in
planning the plot, but also handling implicit knowledge to produce a
reasonable, coherent story.
In this paper, we devise a novel model for story ending generation. The model
adopts an incremental encoding scheme to represent the context clues that span
the story context. In addition, commonsense knowledge is applied through
multi-source attention to facilitate story comprehension and thus help
generate coherent and reasonable endings. By building context clues and using
implicit knowledge, the model is able to produce reasonable story endings.
Automatic and manual evaluation shows that our model can generate more
reasonable story endings than state-of-the-art baselines.
Comment: Accepted in AAAI201
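As a hedged sketch of the two ideas named above (incremental encoding plus
multi-source attention over context clues and commonsense vectors), the toy
code below encodes the story one sentence at a time, attending to the previous
sentence's states and to knowledge embeddings. It is not the paper's
implementation; all shapes, names, and the random inputs are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(query, memory):
    """Simple dot-product attention: summarize `memory` with respect to `query`."""
    weights = F.softmax(memory @ query, dim=0)   # memory: (len, hidden), query: (hidden,)
    return weights @ memory                      # -> (hidden,)

def incremental_encode(sentence_embs, knowledge_embs, hidden_dim=64):
    """Toy incremental encoding: each sentence is encoded while attending to the
    previous sentence's states (context clues) and to knowledge vectors
    (multi-source attention). Not the paper's exact architecture."""
    gru = nn.GRU(input_size=hidden_dim * 3, hidden_size=hidden_dim, batch_first=True)
    prev_states = torch.zeros(1, hidden_dim)
    h = None
    for sent in sentence_embs:                   # sent: (len, hidden)
        steps = []
        for tok in sent:                         # tok: (hidden,)
            ctx = attend(tok, prev_states)       # state context from previous sentence
            kg = attend(tok, knowledge_embs)     # knowledge context from the graph
            steps.append(torch.cat([tok, ctx, kg]))
        out, h = gru(torch.stack(steps).unsqueeze(0), h)  # encode sentence incrementally
        prev_states = out.squeeze(0)             # becomes context for the next sentence
    return prev_states                           # states of the last context sentence

# Usage with random embeddings: a 4-sentence context and 5 knowledge vectors.
sents = [torch.randn(6, 64) for _ in range(4)]
kg = torch.randn(5, 64)
final_states = incremental_encode(sents, kg)
```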
Keyword-Guided Neural Conversational Model
We study the problem of imposing conversational goals/keywords on open-domain
conversational agents, where the agent is required to lead the conversation to
a target keyword smoothly and quickly. Solving this problem enables the
application of conversational agents in many real-world scenarios, e.g.,
recommendation and psychotherapy. The dominant paradigm for tackling this
problem is to 1) train a next-turn keyword classifier, and 2) train a
keyword-augmented response retrieval model. However, existing approaches in
this paradigm have two limitations: 1) the training and evaluation datasets for
next-turn keyword classification are directly extracted from conversations
without human annotations, thus, they are noisy and have low correlation with
human judgements, and 2) during keyword transition, the agents solely rely on
the similarities between word embeddings to move closer to the target keyword,
which may not reflect how humans converse. In this paper, we assume that human
conversations are grounded on commonsense and propose a keyword-guided neural
conversational model that can leverage external commonsense knowledge graphs
(CKG) for both keyword transition and response retrieval. Automatic evaluations
suggest that commonsense improves the performance of both next-turn keyword
prediction and keyword-augmented response retrieval. In addition, both
self-play and human evaluations show that our model produces responses with
smoother keyword transition and reaches the target keyword faster than
competitive baselines.
Comment: AAAI-202
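A minimal sketch (with a made-up toy graph, not the paper's model or data) of
CKG-based keyword transition: pick the neighbour of the current keyword that
lies closest to the target keyword in the commonsense graph, rather than
relying on word-embedding similarity alone.

```python
# Toy sketch of commonsense-graph keyword transition; the graph is illustrative.
from collections import deque

ckg = {
    "coffee": {"morning", "cafe", "energy"},
    "cafe": {"coffee", "friend", "music"},
    "friend": {"cafe", "party"},
    "party": {"friend", "music", "dance"},
    "music": {"cafe", "party"},
    "morning": {"coffee"},
    "energy": {"coffee"},
    "dance": {"party"},
}

def graph_distance(src, dst):
    """Breadth-first-search distance between two concepts in the toy CKG."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in ckg.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")

def next_keyword(current, target):
    """Pick the neighbour of `current` that moves closest to `target`."""
    neighbours = ckg.get(current, set())
    return min(neighbours, key=lambda n: graph_distance(n, target), default=None)

print(next_keyword("coffee", "party"))  # -> 'cafe', one hop closer to the target
```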