Adversarial Learning for Neural Dialogue Generation
In this paper, drawing intuition from the Turing test, we propose using
adversarial training for open-domain dialogue generation: the system is trained
to produce sequences that are indistinguishable from human-generated dialogue
utterances. We cast the task as a reinforcement learning (RL) problem where we
jointly train two systems, a generative model to produce response sequences,
and a discriminator---analogous to the human evaluator in the Turing test---to
distinguish between the human-generated dialogues and the machine-generated
ones. The outputs from the discriminator are then used as rewards for the
generative model, pushing the system to generate dialogues that closely resemble
human dialogues.
In addition to adversarial training, we describe a model for adversarial {\em
evaluation} that uses success in fooling an adversary as a dialogue evaluation
metric, while avoiding a number of potential pitfalls. Experimental results on
several metrics, including adversarial evaluation, demonstrate that the
adversarially trained system generates higher-quality responses than previous
baselines.
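The reward loop described above can be sketched with a toy REINFORCE policy. Everything here is a hypothetical stand-in: the per-position softmax policy replaces the paper's seq2seq generator, and the hand-written scoring function replaces the learned discriminator. The structure is the point: sample a sequence, score it, and use the score minus a running baseline as the policy-gradient reward.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN = 5, 4

# Toy "generator": independent softmax policies, one per position.
logits = np.zeros((SEQ_LEN, VOCAB))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sample_sequence():
    probs = softmax(logits)
    return np.array([rng.choice(VOCAB, p=probs[t]) for t in range(SEQ_LEN)])

# Hypothetical stand-in discriminator: treats sequences containing
# token 0 as "human-like" and returns P(human | sequence).
def discriminator(seq):
    return 0.9 if (seq == 0).any() else 0.1

# REINFORCE update: push up log pi(sampled token) at every position,
# scaled by (reward - baseline); the baseline reduces gradient variance.
baseline, lr = 0.5, 0.5
for step in range(200):
    seq = sample_sequence()
    reward = discriminator(seq)
    probs = softmax(logits)
    for t, tok in enumerate(seq):
        grad = -probs[t]
        grad[tok] += 1.0            # d log pi(tok) / d logits[t]
        logits[t] += lr * (reward - baseline) * grad
    baseline = 0.9 * baseline + 0.1 * reward

# After training, sampled sequences should earn high discriminator scores.
avg_reward = np.mean([discriminator(sample_sequence()) for _ in range(100)])
```

Note the design choice the abstract relies on: the discriminator's output enters only as a scalar reward, so the generator never needs the discriminator to be differentiable with respect to its discrete token choices.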
Semi-supervised Text Regression with Conditional Generative Adversarial Networks
The enormous volume of online textual information offers intriguing
opportunities for understanding social and economic semantics. In this paper,
we propose a
novel text regression model based on a conditional generative adversarial
network (GAN), with an attempt to associate textual data and social outcomes in
a semi-supervised manner. Beyond its promising predictive capability, our model
has two further strengths: (i) it works with unbalanced datasets of limited
labelled data, which aligns with real-world scenarios; and (ii) predictions are
obtained by an end-to-end framework,
without explicitly selecting high-level representations. Finally, we point out
related datasets for experiments and future research directions.
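A rough sketch of the conditional, semi-supervised setup the abstract describes, with all dimensions and the one-layer networks as hypothetical stand-ins for the paper's text encoders: the generator predicts an outcome conditioned on text features, the discriminator scores (text, outcome) pairs, so labelled pairs supply the "real" examples while unlabelled texts enter only through the generator.

```python
import numpy as np

rng = np.random.default_rng(1)
TEXT_DIM, NOISE_DIM = 16, 8   # hypothetical feature and noise sizes

# One-layer stand-ins for the paper's deep networks.
Wg = rng.normal(0, 0.1, (NOISE_DIM + TEXT_DIM, 1))  # G: (noise, text) -> outcome
Wd = rng.normal(0, 0.1, (TEXT_DIM + 1, 1))          # D: (text, outcome) -> realness

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator(text_feat):
    # Conditioning: noise and text features are concatenated before the map.
    z = rng.normal(size=NOISE_DIM)
    return np.concatenate([z, text_feat]) @ Wg      # predicted outcome, shape (1,)

def discriminator(text_feat, outcome):
    score = np.concatenate([text_feat, outcome]) @ Wd
    return sigmoid(score[0])                        # scalar P(real pair)

# Labelled example: a true (text, outcome) pair is a "real" sample for D.
labelled_text = rng.normal(size=TEXT_DIM)
true_outcome = np.array([1.5])                      # hypothetical observed outcome
d_real = discriminator(labelled_text, true_outcome)

# Unlabelled example: only the text is observed; G fills in the outcome.
unlabelled_text = rng.normal(size=TEXT_DIM)
fake_outcome = generator(unlabelled_text)
d_fake = discriminator(unlabelled_text, fake_outcome)

# Standard GAN losses over this toy two-example batch.
d_loss = -np.log(d_real) - np.log(1.0 - d_fake)
g_loss = -np.log(d_fake)
```

This shows why the approach tolerates limited labels: unlabelled texts still contribute training signal through `g_loss` and the fake half of `d_loss`, even though they never appear as real (text, outcome) pairs.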
Text Generation Based on Generative Adversarial Nets with Latent Variable
In this paper, we propose a model that uses a generative adversarial net (GAN)
to generate realistic text. Instead of a standard GAN, we combine a variational
autoencoder (VAE) with the generative adversarial net. The use of high-level
latent random variables helps the model learn the data distribution and
alleviates the tendency of GANs to emit similar data (mode collapse). We
propose the VGAN model, in which the generative model is composed of a
recurrent neural network and a VAE. The discriminative model is a convolutional
neural
network. We train the model via policy gradient. We apply the proposed model to
the task of text generation and compare it to other recent neural-network-based
models, such as the recurrent neural network language model and SeqGAN. We
evaluate the model's performance using negative log-likelihood and the
BLEU score. We conduct experiments on three benchmark datasets, and the results
show that our model outperforms the previous models.
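The two pieces named above, a VAE latent variable feeding generation and a policy-gradient loss driven by a discriminator score, can be sketched as follows. All parameters are toy stand-ins: a single tanh cell replaces the paper's RNN, and the discriminator's score is a hypothetical constant rather than a trained CNN's output.

```python
import numpy as np

rng = np.random.default_rng(2)
VOCAB, LATENT, HID = 6, 3, 8   # hypothetical toy sizes

# Stand-in encoder outputs and generator weights.
enc_mu = rng.normal(size=LATENT)
enc_logvar = rng.normal(size=LATENT)
W_hz = rng.normal(0, 0.1, (LATENT, HID))
W_out = rng.normal(0, 0.1, (HID, VOCAB))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# VAE reparameterization: z = mu + sigma * eps keeps the latent sample
# differentiable with respect to the encoder parameters.
eps = rng.normal(size=LATENT)
z = enc_mu + np.exp(0.5 * enc_logvar) * eps

# Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder;
# this is the VAE regularizer in the training objective.
kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu**2 - 1.0 - enc_logvar)

# Latent-conditioned generation step (one tanh cell stands in for the RNN).
h = np.tanh(z @ W_hz)
probs = softmax(h @ W_out)
token = rng.choice(VOCAB, p=probs)

# Policy-gradient surrogate: the discriminator's score on the generated
# sequence acts as the reward (a hypothetical constant here), weighting
# the negative log-probability of the sampled token.
reward = 0.8
pg_loss = -reward * np.log(probs[token])
```

As in SeqGAN, the policy gradient sidesteps the non-differentiability of discrete token sampling, while the latent `z` gives the generator a global source of variation that plain GAN text generators lack.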
- …