The issue of factual consistency in abstractive summarization has received
extensive attention in recent years, and evaluating the factual consistency
between a summary and its source document has become an important and urgent
task. Most current evaluation metrics are adapted from the question answering
(QA) or natural language inference (NLI) tasks. However, QA-based metrics are
extremely time-consuming in practice, while NLI-based metrics lack
interpretability. In this paper, we propose a cloze-based evaluation framework
called ClozE and show the great potential of cloze-based metrics.
It inherits the strong interpretability of QA while maintaining NLI-level
inference speed. Through experiments on six human-annotated datasets and the
meta-evaluation benchmark GO FIGURE (Gabriel et al., 2021), we demonstrate
that ClozE reduces evaluation time by nearly 96% relative to QA-based metrics
while retaining their interpretability and performance.
Finally, we discuss three important facets of using ClozE in practice, which
further demonstrate its better overall performance compared to other metrics.