Despite the prevalence of pretrained language models in natural language
understanding tasks, understanding lengthy text such as documents remains
challenging due to data sparseness. Inspired by the fact that humans develop
their ability to understand lengthy text by reading shorter text, we propose
a simple yet effective summarization-based data augmentation method, SUMMaug,
for document classification. We first obtain easy-to-learn examples for the
target document classification task by summarizing the inputs of the original
training examples, optionally merging the original labels so that they conform
to the summarized inputs. We then use the generated pseudo examples for
curriculum learning. Experimental results on two datasets confirm the
advantage of our method over existing baseline methods in terms of
robustness and accuracy. We release our code and data at
https://github.com/etsurin/summaug.

Comment: The 4th New Frontiers in Summarization (with LLMs) Workshop
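To make the procedure concrete, here is a minimal Python sketch of the augmentation-plus-curriculum pipeline described in the abstract: summarize each training document to create an easier pseudo example (optionally coarsening its label), train on the pseudo examples first, then on the original documents. The choice of summarization model, the label_map, and the train_fn interface are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of summarization-based augmentation with a two-stage curriculum.
# Assumptions: a generic Hugging Face summarizer and a caller-supplied
# training loop (train_fn); these are placeholders, not the paper's setup.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def build_pseudo_examples(train_set, label_map=None, max_len=128):
    """train_set: iterable of (document, label) pairs."""
    pseudo = []
    for doc, label in train_set:
        summary = summarizer(doc, max_length=max_len, truncation=True)[0]["summary_text"]
        # Optionally merge labels (e.g. map fine-grained ratings to coarser ones)
        # so the label still fits the shorter, less nuanced summarized input.
        pseudo.append((summary, label_map[label] if label_map else label))
    return pseudo

def curriculum_train(train_fn, classifier, train_set, label_map=None):
    """train_fn(classifier, examples) is the caller's usual training loop."""
    pseudo = build_pseudo_examples(train_set, label_map)
    train_fn(classifier, pseudo)     # stage 1: shorter, easier summarized inputs
    train_fn(classifier, train_set)  # stage 2: original full-length documents
    return classifier
```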