34th International Conference on Machine Learning, ICML 2017
Abstract
This paper proposes a general method for improving the structure and quality
of sequences generated by a recurrent neural network (RNN), while maintaining
information originally learned from data, as well as sample diversity. An RNN
is first pre-trained on data using maximum likelihood estimation (MLE), and the
probability distribution over the next token in the sequence learned by this
model is treated as a prior policy. Another RNN is then trained using
reinforcement learning (RL) to generate higher-quality outputs that account for
domain-specific incentives while retaining proximity to the prior policy of the
MLE RNN. To formalize this objective, we derive novel off-policy RL methods for
RNNs from KL-control. The effectiveness of the approach is demonstrated on two
applications: 1) generating novel musical melodies, and 2) computational
molecular generation. For both problems, we show that the proposed method
improves the desired properties and structure of the generated sequences, while
maintaining information learned from data.
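
As a rough sketch, the KL-control objective referenced above can be written in its standard form (the exact weighting and the off-policy estimators derived in the paper are not given in this abstract, so the constant c and the notation below are illustrative assumptions): the fine-tuned policy \pi is trained to maximize

\mathbb{E}_{\pi}\!\left[\sum_{t} r_T(s_t, a_t) \;-\; \frac{1}{c}\,\log\frac{\pi(a_t \mid s_t)}{p(a_t \mid s_t)}\right],

where r_T is the domain-specific task reward, p(a_t \mid s_t) is the next-token distribution of the pre-trained MLE RNN treated as the prior policy, and c trades off task reward against staying close to the prior; in expectation, the log-ratio term is the KL divergence D_{\mathrm{KL}}\!\left[\pi \,\|\, p\right].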