A Knowledge-Grounded Multimodal Search-Based Conversational Agent
Multimodal search-based dialogue is a challenging new task: It extends
visually grounded question answering systems into multi-turn conversations with
access to an external database. We address this new challenge by learning a
neural response generation system from the recently released Multimodal
Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded
multimodal conversational model where an encoded knowledge base (KB)
representation is appended to the decoder input. Our model substantially
outperforms strong baselines in terms of text-based similarity measures (over 9
BLEU points, 3 of which are solely due to the use of additional information
from the KB).
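The core architectural move here, appending an encoded knowledge-base representation to the decoder input, is easy to sketch. The snippet below is a minimal illustration rather than the authors' released code: the class name `KBGroundedDecoder`, the GRU backbone, and the dimensions are all assumptions; the only point it demonstrates is that a fixed KB vector is concatenated to every decoder input embedding.

```python
# Minimal sketch (not the paper's code): a decoder whose input at every
# step is the token embedding concatenated with an encoded KB vector.
import torch
import torch.nn as nn

class KBGroundedDecoder(nn.Module):  # name and sizes are illustrative
    def __init__(self, vocab_size, emb_dim=256, kb_dim=128, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # GRU input size = token embedding + appended KB representation
        self.gru = nn.GRU(emb_dim + kb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, kb_vec, hidden=None):
        # tokens: (batch, seq_len) token ids; kb_vec: (batch, kb_dim)
        emb = self.embed(tokens)                              # (B, T, emb_dim)
        kb = kb_vec.unsqueeze(1).expand(-1, emb.size(1), -1)  # broadcast over T
        rnn_in = torch.cat([emb, kb], dim=-1)  # append KB to each decoder input
        output, hidden = self.gru(rnn_in, hidden)
        return self.out(output), hidden        # logits over the vocabulary
```

Because the KB encoding is visible at every decoding step, the output distribution is conditioned on the external database without any other change to the seq2seq pipeline.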
NEXUS Network: Connecting the Preceding and the Following in Dialogue Generation
Sequence-to-Sequence (seq2seq) models have become overwhelmingly popular in
building end-to-end trainable dialogue systems. Though highly efficient in
learning the backbone of human-computer communications, they suffer from the
problem of strongly favoring short generic responses. In this paper, we argue
that a good response should smoothly connect both the preceding dialogue
history and the following conversations. We strengthen this connection through
mutual information maximization. To sidestep the non-differentiability of
discrete natural language tokens, we introduce an auxiliary continuous code
space and map this code space to a learnable prior distribution for generation
purposes. Experiments on two dialogue datasets validate the effectiveness of our
model, where the generated responses are closely related to the dialogue
context and lead to more interactive conversations.
Comment: Accepted by EMNLP 2018
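The auxiliary code space can be pictured as two Gaussian heads: a posterior that reads both the preceding history and the following response, and a learnable prior that reads the history alone. The sketch below is illustrative only; the Gaussian parameterization, the linear heads, and the name `ContinuousCodeSpace` are assumptions, not the paper's implementation. It shows how a reparameterized continuous code plus a KL term let gradients flow where discrete tokens would block them.

```python
# Minimal sketch (assumptions: Gaussian codes, linear heads) of an
# auxiliary continuous code space with a learnable prior.
import torch
import torch.nn as nn

class ContinuousCodeSpace(nn.Module):  # illustrative name
    def __init__(self, ctx_dim=512, code_dim=64):
        super().__init__()
        self.posterior = nn.Linear(ctx_dim * 2, code_dim * 2)  # past + future
        self.prior = nn.Linear(ctx_dim, code_dim * 2)          # past only

    def forward(self, past_vec, future_vec):
        # Each head emits a mean and a log-variance for the code.
        mu_q, logvar_q = self.posterior(
            torch.cat([past_vec, future_vec], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(past_vec).chunk(2, dim=-1)
        # Reparameterization keeps sampling differentiable.
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        # KL(q || p) pulls the learnable prior toward the posterior that
        # has seen the following response, tying past and future together.
        kl = 0.5 * (logvar_p - logvar_q - 1
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp())
        return z, kl.sum(dim=-1).mean()
```

At generation time only the prior head is available, so training it to match the future-aware posterior is what encourages responses that connect to both sides of the dialogue.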
Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings
We study a symmetric collaborative dialogue setting in which two agents, each
with private knowledge, must strategically communicate to achieve a common
goal. The open-ended dialogue state in this setting poses new challenges for
existing dialogue systems. We collected a dataset of 11K human-human dialogues,
which exhibits interesting lexical, semantic, and strategic elements. To model
both structured knowledge and unstructured language, we propose a neural model
with dynamic knowledge graph embeddings that evolve as the dialogue progresses.
Automatic and human evaluations show that our model is both more effective at
achieving the goal and more human-like than baseline neural and rule-based
models.
Comment: ACL 2017
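The dynamic embedding idea can be sketched as node vectors that are re-computed after every utterance, so mentioned entities pass information to their graph neighbors. The code below is a toy version under stated assumptions (a dense 0/1 adjacency matrix, mean-pooled neighbor messages, a GRU cell as the update rule, and the invented names `DynamicKGEmbeddings` and `mentioned`); it is not the authors' model.

```python
# Minimal sketch (not the paper's architecture): knowledge-graph node
# embeddings that evolve one message-passing step per utterance.
import torch
import torch.nn as nn

class DynamicKGEmbeddings(nn.Module):  # illustrative name
    def __init__(self, num_nodes, dim=128):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)  # initial node vectors
        self.update = nn.GRUCell(dim, dim)            # folds messages into state

    def step(self, adjacency, mentioned, state=None):
        # adjacency: (N, N) 0/1 matrix; mentioned: (N, dim) features derived
        # from entities referenced in the latest utterance (zeros otherwise).
        if state is None:
            state = self.node_emb.weight
        deg = adjacency.sum(dim=-1, keepdim=True).clamp(min=1)
        messages = adjacency @ state / deg   # mean over graph neighbors
        # One evolution step: utterance evidence plus neighbor messages.
        return self.update(messages + mentioned, state)
```

Calling `step` once per turn lets the same private knowledge graph yield different embeddings as the dialogue progresses, which is the property the abstract highlights.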