Current captioning approaches tend to generate correct but "generic"
descriptions that lack real-world knowledge, e.g., named entities and
contextual information. Since Vision-Language Pre-Training (VLP) models
acquire a vast amount of such knowledge from large-scale web-harvested data, it
is promising to leverage their generalizability to incorporate knowledge into
image descriptions. However, using VLP models faces two challenges: zero-shot
inference suffers from knowledge hallucination, which leads to low-quality
descriptions, while the generic bias introduced by downstream fine-tuning
hinders the VLP model from expressing knowledge. To address these concerns, we
propose a simple yet effective method called Knowledge-guided Replay
(K-Replay), which enables the retention of pre-training knowledge during
fine-tuning. Our approach consists of two parts: (1) a knowledge prediction
task on automatically collected replay exemplars to continuously awaken the VLP
model's memory about knowledge, thus preventing the model from collapsing into
the generic pattern; (2) a knowledge distillation constraint to improve the
faithfulness of generated descriptions, thereby alleviating knowledge
hallucination. To evaluate knowledge-enhanced descriptions, we construct a
novel captioning benchmark KnowCap, containing knowledge of landmarks, famous
brands, special foods and movie characters. Experimental results show that our
approach effectively incorporates knowledge into descriptions, outperforming
a strong VLP baseline by 20.9 points (78.7->99.6) in CIDEr score and 20.5
percentage points (34.0%->54.5%) in knowledge recognition accuracy. Our code
and data are available at https://github.com/njucckevin/KnowCap.
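
As a rough illustration of the two-part objective described above, the sketch
below combines a standard captioning loss with a knowledge prediction loss on
replay exemplars and a knowledge distillation constraint. This is a minimal,
hypothetical PyTorch rendering, not the authors' released implementation: the
function name, tensor shapes, loss weights, and the exact form of the knowledge
prediction term are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def k_replay_objective(caption_logits, caption_targets,
                           replay_logits, replay_keyword_mask,
                           student_logits, teacher_logits,
                           lambda_know=1.0, lambda_kd=0.5, temperature=2.0):
        # (0) Standard cross-entropy captioning loss on the downstream batch.
        #     caption_logits: (B, T, V); caption_targets: (B, T) token ids.
        ce = F.cross_entropy(caption_logits.flatten(0, 1),
                             caption_targets.flatten())

        # (1) Hypothetical knowledge prediction loss on replay exemplars:
        #     push probability mass onto each exemplar's knowledge keywords.
        #     replay_logits: (R, T, V); replay_keyword_mask: (R, V) multi-hot.
        keyword_prob = replay_logits.softmax(dim=-1).max(dim=1).values  # (R, V)
        know = -(replay_keyword_mask * (keyword_prob + 1e-8).log()).sum(-1).mean()

        # (2) Knowledge distillation constraint: keep the fine-tuned (student)
        #     distribution close to the frozen pre-trained (teacher) model's
        #     to curb hallucination.
        kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                      F.softmax(teacher_logits / temperature, dim=-1),
                      reduction="batchmean") * temperature ** 2

        return ce + lambda_know * know + lambda_kd * kd

In such a setup, replay exemplars would be interleaved with the downstream
captioning batches, so the knowledge prediction term keeps refreshing the
pre-training knowledge throughout fine-tuning rather than only at the start.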