Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in
modeling open-domain conversations, they often generate dull and generic
responses. Unlike past work that has focused on diversifying the output of the
decoder at word-level to alleviate this problem, we present a novel framework
based on conditional variational autoencoders that captures the discourse-level
diversity in the encoder. Our model uses latent variables to learn a
distribution over potential conversational intents and generates diverse
responses using only greedy decoders. We have further developed a novel variant
that is integrated with linguistic prior knowledge for better performance.
Finally, the training procedure is improved by introducing a bag-of-word loss.
Our proposed models have been validated to generate significantly more diverse
responses than baseline approaches and exhibit competence in discourse-level
decision-making.
Comment: Appeared in ACL 2017 proceedings as a long paper. Corrects a calculation mistake in Table 1 (E-bow & A-bow), resulting in higher scores.
Diversifying Question Generation over Knowledge Base via External Natural Questions
Previous methods on knowledge base question generation (KBQG) primarily focus
on enhancing the quality of a single generated question. Recognizing the
remarkable paraphrasing ability of humans, we contend that diverse texts should
convey the same semantics through varied expressions. The above insights make
diversifying question generation an intriguing task, where the first challenge
is evaluation metrics for diversity. Current metrics assess this diversity inadequately: they calculate the ratio of unique n-grams within a single generated question, which measures repetition inside one question rather than true diversity across questions. Accordingly, we devise a new diversity evaluation
metric, which measures the diversity among top-k generated questions for each
instance while ensuring their relevance to the ground truth. Clearly, the
second challenge is how to enhance diversity in question generation. To address it, we introduce a dual-model framework interwoven with two selection strategies that generates diverse questions by leveraging external natural questions.
The main idea of our dual framework is to extract more diverse expressions and
integrate them into the generation model to enhance the diversity of generated questions. Extensive experiments on widely used benchmarks for KBQG
demonstrate that our proposed approach generates highly diverse questions and
improves the performance of question answering tasks.
Comment: 12 pages, 2 figures