Improving Variational Encoder-Decoders in Dialogue Generation
Variational encoder-decoders (VEDs) have shown promising results in dialogue
generation. However, the latent variable distributions are usually approximated
by a much simpler model than the powerful RNN structures used for encoding and
decoding, leading to the KL-vanishing problem and an inconsistent training
objective. In this paper, we separate training into two phases: the first phase
learns to autoencode discrete texts into continuous embeddings, and the second
phase learns to generalize latent representations by reconstructing the encoded
embeddings. In this setup, latent variables are sampled by transforming
Gaussian noise through multi-layer perceptrons and are trained with a separate
VED model, which has the potential to realize a much more flexible
distribution. We compare our model with current popular models, and the
experiments demonstrate substantial improvements in both metric-based and
human evaluations.
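The core sampling mechanism, pushing standard Gaussian noise through a
multi-layer perceptron so that the resulting latent distribution can be far
more flexible than a diagonal Gaussian, can be sketched in a few lines. The
following is a minimal illustrative sketch assuming a PyTorch implementation;
the module name NoiseToLatent, the layer sizes, and the Tanh activation are
assumptions for illustration, not the authors' actual architecture.

# Minimal sketch (assumed PyTorch implementation): sample latent variables by
# transforming standard Gaussian noise through an MLP. Names, sizes, and the
# activation are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class NoiseToLatent(nn.Module):
    """Maps standard Gaussian noise to a latent sample via an MLP."""

    def __init__(self, noise_dim: int = 64, hidden_dim: int = 256,
                 latent_dim: int = 64):
        super().__init__()
        self.noise_dim = noise_dim
        self.mlp = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        eps = torch.randn(batch_size, self.noise_dim)  # eps ~ N(0, I)
        # The MLP warps the Gaussian into a more flexible latent distribution.
        return self.mlp(eps)

# Phase-2 flavor of training (illustrative): the sampled latents would be fit
# to reconstruct the fixed sentence embeddings from the phase-1 autoencoder.
sampler = NoiseToLatent()
z = sampler(batch_size=32)
print(z.shape)  # torch.Size([32, 64])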
Comment: Accepted by AAAI 2018