Adding A Filter Based on The Discriminator to Improve Unconditional Text Generation
The autoregressive language model (ALM) trained with maximum likelihood
estimation (MLE) is widely used in unconditional text generation. Due to
exposure bias, the generated texts still suffer from low quality and diversity.
This manifests statistically as a discrepancy between real and generated text.
Prior research shows that a discriminator can detect this discrepancy. Because
the discriminator can encode more information than the generator, it has the
potential to improve the generator. To
alleviate exposure bias, generative adversarial networks (GANs) use the
discriminator to update the generator's parameters directly, but they fail
under precise evaluation. A critical reason for this failure is the difference
between the discriminator's input and the ALM's input. We propose a novel
mechanism that adds a filter with the same input as the discriminator. First,
the discriminator detects discrepancy signals and passes them to the filter,
either directly or through learning. Then, the filter rejects some generated
samples with a sampling-based method, revising the original generative
distribution to reduce the discrepancy. We experiment with two ALMs, one
RNN-based and one Transformer-based. Evaluated precisely by three metrics, our
mechanism consistently outperforms the ALMs and all kinds of GANs across two
benchmark datasets.
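The rejection step described above can be sketched as generic rejection sampling, where the filter maps each generated sample to an acceptance probability derived from the discriminator's realness score. The names and interfaces below are illustrative, not the authors' implementation:

```python
import random

def filtered_sample(generate, accept_prob, num_samples,
                    max_tries=10000, rng=random):
    """Rejection-sampling sketch of a discriminator-based filter.

    generate:    callable returning one generated sample
    accept_prob: callable mapping a sample to an acceptance
                 probability in [0, 1] (e.g. from a discriminator
                 or filter network's realness score)

    Keeping each candidate x with probability accept_prob(x) revises
    the generator's distribution p(x) toward p(x) * accept_prob(x)
    (up to normalization), reducing the real/generated discrepancy.
    """
    kept, tries = [], 0
    while len(kept) < num_samples and tries < max_tries:
        tries += 1
        x = generate()                    # draw a candidate sample
        if rng.random() < accept_prob(x): # stochastic accept/reject
            kept.append(x)
    return kept
```

For example, with a toy generator drawing uniform floats and a filter that only accepts values below 0.5, every returned sample lies below 0.5, illustrating how the filter reshapes the sampling distribution without touching the generator's parameters.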