Recent pre-trained language models (PLMs) achieve promising results on
existing abstractive summarization datasets. However, these benchmarks
overlap in time with the standard pre-training corpora and fine-tuning
datasets. Hence, the strong performance of PLMs may rely on parametric
knowledge memorized during pre-training and fine-tuning. Moreover, the
knowledge memorized by PLMs may quickly become outdated, which degrades the
generalization performance of PLMs on future data. In this work, we
propose TempoSum, a novel benchmark that contains data samples from 2010 to
2022, to understand the temporal generalization ability of abstractive
summarization models. Through extensive human evaluation, we show that
parametric knowledge stored in summarization models significantly affects the
faithfulness of the generated summaries on future data. Moreover, existing
faithfulness enhancement methods cannot reliably improve the faithfulness of
summarization models on future data. Finally, we offer several
recommendations to the research community on how to evaluate and improve the
temporal generalization capability of text summarization models.