Missing sentence generation (also known as sentence infilling) enables a wide range of
applications in natural language generation, such as document auto-completion
and meeting note expansion. The task requires a model to generate the intermediate
missing sentences that syntactically and semantically bridge the
surrounding context. Solving sentence infilling calls for techniques spanning
natural language understanding, discourse-level planning, and generation.
In this paper, we propose a framework that decouples the task into these three
aspects and addresses each of them, leveraging the power of
existing large-scale pre-trained models such as BERT and GPT-2. We empirically
demonstrate that our model learns sentence representations well suited for
generation and produces missing sentences that fit the context.

Comment: Y.H. and Y.Z. contributed equally to this work. v2: published version with updated results and references.
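To make the decoupled understanding / planning / generation pipeline concrete, below is a minimal sketch assuming the HuggingFace transformers library. The stage boundaries, the interpolation-based planner, and the prompt-based decoding are illustrative assumptions for this sketch, not the paper's actual architecture.

```python
# Illustrative three-stage sentence-infilling pipeline (assumption: HuggingFace transformers).
import torch
from transformers import BertTokenizer, BertModel, GPT2Tokenizer, GPT2LMHeadModel

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")


def encode_sentence(sentence: str) -> torch.Tensor:
    """Understanding: map a sentence to a fixed-size BERT [CLS] vector."""
    inputs = bert_tok(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # [CLS] embedding


def plan_missing(left_vec: torch.Tensor, right_vec: torch.Tensor) -> torch.Tensor:
    """Planning: placeholder that interpolates the surrounding sentence
    representations; the paper instead plans in a learned latent space."""
    return 0.5 * (left_vec + right_vec)


def generate_missing(left: str, max_new_tokens: int = 30) -> str:
    """Generation: prompt GPT-2 with the left context as a crude stand-in for
    decoding from the planned sentence representation."""
    input_ids = gpt2_tok(left, return_tensors="pt").input_ids
    with torch.no_grad():
        out = gpt2.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            top_p=0.9,
            pad_token_id=gpt2_tok.eos_token_id,
        )
    return gpt2_tok.decode(out[0, input_ids.shape[1]:], skip_special_tokens=True)


left = "The committee reviewed the quarterly results."
right = "As a result, the budget was approved without objection."
planned = plan_missing(encode_sentence(left), encode_sentence(right))
print(generate_missing(left))
```

The sketch only illustrates how the three stages can be kept separate; in practice the planned representation would condition the decoder rather than being discarded as it is here.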