Curriculum learning has shown promising improvements in multiple domains by
training machine learning models from easy samples to hard ones. Previous
works, which either design rules or train models to score sample difficulty,
rely heavily on task-specific expertise and cannot generalize. Inspired by the
``easy-to-hard'' intuition, we propose in-sample curriculum learning for
natural language generation tasks. Our learning strategy first trains the
model to generate only the last few words of the output, i.e., to perform
sequence completion, and gradually extends the target until the model
generates the whole output sequence. Comprehensive
experiments show that it generalizes well across tasks and achieves
significant improvements over strong baselines.
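
In practice, this strategy amounts to masking the training loss so that,
early in training, gradients flow only through the last few target tokens,
with the unmasked span growing until it covers the whole sequence. Below is a
minimal PyTorch sketch of this idea; the function name `completion_loss`, the
scheduling parameter `frac`, and the padding convention are illustrative
assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def completion_loss(logits, targets, frac, pad_id=0):
    """Cross-entropy restricted to the tail of each target sequence.

    `frac` in (0, 1] is the portion of each sequence that receives loss;
    annealing it from near 0 to 1 over training realizes the easy-to-hard
    curriculum (sequence completion -> full generation). Assumes
    right-padded targets; schedule and names are illustrative.
    """
    batch, seq_len, vocab = logits.shape
    real = targets.ne(pad_id)                     # non-pad positions
    lengths = real.sum(dim=1)                     # true length per example
    k = (frac * lengths.float()).ceil().clamp(min=1).long()
    pos = torch.arange(seq_len, device=targets.device).unsqueeze(0)
    mask = (pos >= (lengths - k).unsqueeze(1)) & real  # last k real tokens
    loss = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
    ).reshape(batch, seq_len)
    return (loss * mask.float()).sum() / mask.float().sum().clamp(min=1.0)
```

During training one might set, e.g., `frac = min(1.0, step / ramp_steps)` so
the supervised span grows linearly from a few tokens to the full output; the
abstract does not specify the actual schedule, so this is only one plausible
choice.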