12 research outputs found
Insufficient Data Can Also Rock! Learning to Converse Using Smaller Data with Augmentation
Recent successes in open-domain dialogue generation rely mainly on advances in deep neural networks, whose effectiveness depends on the amount of training data. Since acquiring large amounts of data is laborious and expensive in most scenarios, effectively utilizing existing data is crucial. In this paper, we use data augmentation techniques to improve the performance of neural dialogue models when training data is insufficient. Specifically, we propose a novel generative model to augment existing data, employing a conditional variational autoencoder (CVAE) as the generator to produce additional training data with diversified expressions. To improve the correlation within each augmented training pair, we design a discriminator with adversarial training to supervise the augmentation process. Moreover, we thoroughly investigate various data augmentation schemes for neural dialogue systems with generative models, both GAN- and CVAE-based. Experimental results on two open corpora, Weibo and Twitter, demonstrate the superiority of our proposed data augmentation model.
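The overall augmentation scheme described above (a generator proposes diversified responses, a discriminator filters out pairs with weak query-response correlation) can be sketched in plain Python. This is a minimal illustration, not the paper's method: `cvae_generate` and `discriminator_score` are hypothetical placeholders standing in for the trained CVAE generator and adversarial discriminator, using word shuffling and word overlap respectively.

```python
import random

def cvae_generate(query, response, n_samples, rng):
    """Placeholder for the CVAE generator: the real model samples latent
    codes to produce diversified paraphrases of the response; here we
    merely shuffle word order to fake diversity."""
    words = response.split()
    candidates = []
    for _ in range(n_samples):
        cand = words[:]
        rng.shuffle(cand)
        candidates.append(" ".join(cand))
    return candidates

def discriminator_score(query, response):
    """Placeholder for the adversarially trained discriminator that rates
    query-response correlation; here a crude word-overlap (Jaccard) score."""
    q = set(query.lower().split())
    r = set(response.lower().split())
    return len(q & r) / max(len(q | r), 1)

def augment(pairs, n_samples=3, threshold=0.1, seed=0):
    """Keep the original pairs and add generated pairs whose
    discriminator score passes the threshold."""
    rng = random.Random(seed)
    augmented = list(pairs)
    for query, response in pairs:
        for cand in cvae_generate(query, response, n_samples, rng):
            if discriminator_score(query, cand) >= threshold:
                augmented.append((query, cand))
    return augmented
```

The key design point the sketch preserves is the division of labor: the generator is responsible only for diversity, while the discriminator alone decides whether an augmented pair is correlated enough to keep.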
Controllable Paraphrase Generation with a Syntactic Exemplar
Prior work on controllable text generation usually assumes that the
controlled attribute can take on one of a small set of values known a priori.
In this work, we propose a novel task in which the syntax of a generated
sentence is instead controlled by a sentential exemplar. To evaluate
quantitatively with
standard metrics, we create a novel dataset with human annotations. We also
develop a variational model with a neural module specifically designed for
capturing syntactic knowledge and several multitask training objectives to
promote disentangled representation learning. Empirically, the proposed model
is observed to achieve improvements over baselines and learn to capture
desirable characteristics.
Comment: ACL 2019 Lon
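The disentanglement this abstract describes, separate representations for what a sentence says (semantics) and how it is shaped (syntax, supplied by the exemplar), can be illustrated with a toy sketch. These encoders are hypothetical stand-ins I introduce for illustration, not the paper's neural modules: semantics is reduced to a word set and syntax to a crude structural signature.

```python
def encode_semantics(sentence):
    """Stand-in for the semantic encoder: an order-invariant
    content representation (here, the sorted set of words)."""
    return tuple(sorted(set(sentence.lower().split())))

def encode_syntax(exemplar):
    """Stand-in for the syntactic neural module: a crude structural
    signature (token count plus final punctuation mark)."""
    tokens = exemplar.split()
    final = exemplar[-1] if exemplar and exemplar[-1] in ".?!" else ""
    return (len(tokens), final)
```

The property the paper's multitask objectives encourage is visible even here: the semantic code is invariant to reordering, while the syntactic code changes with sentence structure, so swapping the exemplar changes only the syntax side of the representation.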