Improving Variational Encoder-Decoders in Dialogue Generation
Variational encoder-decoders (VEDs) have shown promising results in dialogue
generation. However, the latent variable distributions are usually approximated
by a much simpler model than the powerful RNN structure used for encoding and
decoding, yielding the KL-vanishing problem and an inconsistent training
objective. In this paper, we separate training into two phases: the
first phase learns to autoencode discrete texts into continuous embeddings,
from which the second phase learns to generalize latent representations by
reconstructing the encoded embedding. In this case, latent variables are
sampled by transforming Gaussian noise through multi-layer perceptrons and are
trained with a separate VED model, which has the potential to realize a much
more flexible distribution. We compare our model with current popular models,
and experiments demonstrate substantial improvements in both metric-based
and human evaluations.
Comment: Accepted by AAAI 2018
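To make the sampling mechanism concrete, here is a minimal PyTorch sketch (module and variable names are hypothetical, not from the paper) of drawing latent variables by transforming Gaussian noise through a multi-layer perceptron:

```python
import torch
import torch.nn as nn

class NoiseTransformSampler(nn.Module):
    """Pushes z0 ~ N(0, I) through an MLP to obtain a latent sample,
    yielding an implicit distribution far more flexible than a
    diagonal Gaussian."""
    def __init__(self, noise_dim: int = 64, latent_dim: int = 128,
                 hidden_dim: int = 256):
        super().__init__()
        self.noise_dim = noise_dim
        self.mlp = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        eps = torch.randn(batch_size, self.noise_dim)  # standard Gaussian noise
        return self.mlp(eps)                           # flexible latent sample

sampler = NoiseTransformSampler()
z = sampler(batch_size=32)  # latent variables fed to the decoder
```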
Partially Randomizing Transformer Weights for Dialogue Response Diversity
Despite recent progress in generative open-domain dialogue, the issue of low
response diversity persists. Prior works have addressed this issue via novel
objective functions, alternative learning approaches such as variational
frameworks, or architectural extensions such as the Randomized Link (RL)
Transformer. However, these approaches typically entail either additional
difficulties during training/inference, or a significant increase in model size
and complexity. Hence, we propose the Partially Randomized transFormer
(PaRaFormer), a simple extension of the transformer that freezes the weights
of selected layers after random initialization. Experimental results reveal
that the performance of the PaRaFormer is comparable to that of the
aforementioned approaches, despite entailing no additional training difficulty
or increase in model complexity.
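The freezing trick itself is simple; a hedged PyTorch sketch (the layer count and the choice of frozen layers below are illustrative assumptions) might look like:

```python
import torch.nn as nn

# A stack of standard transformer encoder layers.
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=512, nhead=8) for _ in range(6)]
)

# Freeze selected layers immediately after random initialization, so they
# keep their random weights throughout training; {1, 3} is arbitrary here.
for i, layer in enumerate(layers):
    if i in {1, 3}:
        for p in layer.parameters():
            p.requires_grad = False

# Only the remaining (trainable) parameters are handed to the optimizer.
trainable = [p for l in layers for p in l.parameters() if p.requires_grad]
```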
Data Augmentation for Spoken Language Understanding via Joint Variational Generation
Data scarcity is one of the main obstacles to domain adaptation in spoken
language understanding (SLU) due to the high cost of creating manually tagged
SLU datasets. Recent work on neural text generative models, particularly
latent variable models such as the variational autoencoder (VAE), has shown
promising results at generating plausible and natural sentences. In
this paper, we propose a novel generative architecture which leverages the
generative power of latent variable models to jointly synthesize fully
annotated utterances. Our experiments show that existing SLU models trained on
the additional synthetic examples achieve performance gains. Our approach not
only alleviates the data scarcity issue in the SLU task for many datasets but
also improves language understanding performance across various SLU models,
as supported by extensive experiments and rigorous statistical
testing.
Comment: 8 pages, 3 figures, 4 tables. Accepted at AAAI 2019
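One way to read "jointly synthesize fully annotated utterances" is a decoder that predicts a word and its slot tag from the same hidden state at every step; the following PyTorch sketch (names and dimensions are hypothetical, not the paper's architecture) illustrates that idea:

```python
import torch
import torch.nn as nn

class JointStep(nn.Module):
    """Predicts a word and its slot label from one shared decoder state,
    so generated utterances come out already annotated."""
    def __init__(self, hidden_dim: int, vocab_size: int, num_slots: int):
        super().__init__()
        self.word_head = nn.Linear(hidden_dim, vocab_size)
        self.slot_head = nn.Linear(hidden_dim, num_slots)

    def forward(self, h: torch.Tensor):
        return self.word_head(h), self.slot_head(h)  # two aligned logit sets

step = JointStep(hidden_dim=256, vocab_size=10_000, num_slots=20)
h = torch.randn(32, 256)            # decoder states for a batch
word_logits, slot_logits = step(h)  # word and slot predicted together
```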
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in
modeling open-domain conversations, they often generate dull and generic
responses. Unlike past work that focused on diversifying the output of the
decoder at the word level to alleviate this problem, we present a novel framework
based on conditional variational autoencoders that captures the discourse-level
diversity in the encoder. Our model uses latent variables to learn a
distribution over potential conversational intents and generates diverse
responses using only greedy decoders. We have further developed a novel variant
that is integrated with linguistic prior knowledge for better performance.
Finally, the training procedure is improved by introducing a bag-of-words loss.
Our proposed models have been validated to generate significantly more diverse
responses than baseline approaches and exhibit competence in discourse-level
decision-making.
Comment: Appeared in the ACL 2017 proceedings as a long paper. Corrects a
calculation mistake in Table 1 (E-bow & A-bow), resulting in higher scores.
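A hedged sketch of the bag-of-words idea follows (simplified here to a standard-normal prior for illustration; the paper conditions both the prior and the recognition network on the dialogue context):

```python
import torch
import torch.nn.functional as F

def cvae_bow_loss(rec_loss, mu, logvar, bow_logits, response_ids, pad_id=0):
    """Reconstruction + KL + bag-of-words auxiliary loss. The BOW term
    forces the latent variable z to predict which words occur in the
    response, discouraging KL vanishing."""
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    # Order-invariant log-likelihood of every response token under a single
    # prediction made from z (bow_logits has shape [batch, vocab]).
    log_probs = F.log_softmax(bow_logits, dim=-1)
    token_ll = log_probs.gather(1, response_ids)  # [batch, resp_len]
    mask = (response_ids != pad_id).float()
    bow = -(token_ll * mask).sum(dim=1).mean()
    return rec_loss + kl + bow
```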
Unsupervised Abstractive Dialogue Summarization for Tete-a-Tetes
High-quality dialogue-summary paired data is expensive to produce and
domain-sensitive, making abstractive dialogue summarization a challenging task.
In this work, we propose the first unsupervised abstractive dialogue
summarization model for tete-a-tetes (SuTaT). Unlike standard text
summarization, a dialogue summarization method should consider the
multi-speaker scenario where the speakers have different roles, goals, and
language styles. In a tete-a-tete, such as a customer-agent conversation, SuTaT
aims to summarize for each speaker by modeling the customer utterances and the
agent utterances separately while retaining their correlations. SuTaT consists
of a conditional generative module and two unsupervised summarization modules.
The conditional generative module contains two encoders and two decoders in a
variational autoencoder framework where the dependencies between two latent
spaces are captured. With the same encoders and decoders, two unsupervised
summarization modules equipped with sentence-level self-attention mechanisms
generate summaries without using any annotations. Experimental results show
that SuTaT is superior at unsupervised dialogue summarization under both
automatic and human evaluations, and is also capable of dialogue
classification and single-turn conversation generation.
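The coupling between the two latent spaces can be pictured with a small PyTorch sketch (hypothetical names and dimensions, not the paper's exact architecture): the agent-side latent is sampled conditioned on the customer-side latent.

```python
import torch
import torch.nn as nn

class TwoStreamLatents(nn.Module):
    """Two utterance encoders whose latent spaces are linked: the agent
    latent is conditioned on the customer latent."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.enc_cust = nn.GRU(dim, dim, batch_first=True)
        self.enc_agent = nn.GRU(dim, dim, batch_first=True)
        self.cust_stats = nn.Linear(dim, 2 * dim)       # -> (mu, logvar)
        self.agent_stats = nn.Linear(2 * dim, 2 * dim)  # sees z_cust too

    @staticmethod
    def sample(stats: torch.Tensor) -> torch.Tensor:
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, cust_emb, agent_emb):
        _, h_c = self.enc_cust(cust_emb)    # final hidden state, customer
        _, h_a = self.enc_agent(agent_emb)  # final hidden state, agent
        z_cust = self.sample(self.cust_stats(h_c[-1]))
        z_agent = self.sample(
            self.agent_stats(torch.cat([h_a[-1], z_cust], dim=-1)))
        return z_cust, z_agent  # decoders reconstruct each side from these

model = TwoStreamLatents()
z_c, z_a = model(torch.randn(4, 10, 128), torch.randn(4, 12, 128))
```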