Generating paraphrases from given sentences involves decoding words step by
step from a large vocabulary. To learn the decoder, supervised learning that
maximizes token likelihood suffers from exposure bias.
Although both reinforcement learning (RL) and imitation learning (IL) have been
widely used to alleviate this bias, the lack of direct comparison between them
offers only a partial picture of their benefits. In this work, we present an empirical study
on how RL and IL can help boost the performance of generating paraphrases, with
the pointer-generator as a base model. Experiments on the benchmark datasets
show that (1) imitation learning is consistently better than reinforcement
learning; and (2) pointer-generator models trained with imitation learning
outperform the state-of-the-art methods by a large margin.

Comment: 9 pages, 2 figures, EMNLP201