Despite the rapid development of Chinese vision-language models (VLMs), most
existing Chinese vision-language (VL) datasets are built on Western-centric images drawn from English VL datasets. The cultural bias in these images makes them unsuitable for evaluating VLMs on Chinese culture. To remedy this issue, we present a new Chinese Vision-Language
Understanding Evaluation (CVLUE) benchmark dataset, where the selection of
object categories and images is entirely driven by Chinese native speakers,
ensuring that the source images are representative of Chinese culture. The
benchmark contains four distinct VL tasks: image-text retrieval, visual question answering, visual grounding, and visual dialogue. We present a
detailed statistical analysis of CVLUE and provide a baseline performance
analysis with several open-source multilingual VLMs on CVLUE and its English
counterparts to reveal the gap between their English and Chinese performance. Our
in-depth category-level analysis reveals a lack of Chinese cultural knowledge
in existing VLMs. We also find that fine-tuning on Chinese culture-related VL
datasets effectively enhances VLMs' understanding of Chinese culture.