Multi-modal information retrieval (MMIR) is a rapidly evolving field in which
significant progress, particularly in image-text pairing, has been made through
advanced representation learning and cross-modality alignment research.
However, current benchmarks for evaluating MMIR performance on image-text
pairing within the scientific domain exhibit a notable gap: chart and table
images described in scholarly language are largely under-represented.
To bridge this gap, we develop a specialised scientific MMIR (SciMMIR)
benchmark by leveraging open-access paper collections to extract data relevant
to the scientific domain. This benchmark comprises 530K meticulously curated
image-text pairs, extracted from figures and tables with detailed captions in
scientific documents. We further annotate the image-text pairs with a two-level
subset-subcategory hierarchy to facilitate a more comprehensive evaluation of
the baselines. We conduct zero-shot and fine-tuning evaluations
on prominent multi-modal image-captioning and visual language models, such as
CLIP and BLIP. Our analysis offers critical insights for MMIR in the scientific
domain, including the impact of pre-training and fine-tuning settings and the
influence of the visual and textual encoders. All our data and checkpoints are
publicly available at https://github.com/Wusiwei0410/SciMMIR.
Comment: Camera-ready version for ACL 2024 Findings.
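To make the zero-shot retrieval setting concrete, the sketch below scores a small pool of figure/table images against their captions with an off-the-shelf CLIP model. It is a minimal illustration only: the checkpoint name, the placeholder file paths and captions, and the Recall@1 metric are assumptions for demonstration, not the paper's exact evaluation pipeline.

```python
# Minimal zero-shot image->text retrieval sketch with CLIP (Hugging Face Transformers).
# All names below (checkpoint, files, captions) are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-base-patch16"  # assumed checkpoint, not the paper's
model = CLIPModel.from_pretrained(model_name).eval()
processor = CLIPProcessor.from_pretrained(model_name)

# Hypothetical candidate pool: scientific figure/table images and their captions.
image_paths = ["fig1.png", "table2.png"]            # placeholder files
captions = ["Figure 1: accuracy versus model size.",  # placeholder captions
            "Table 2: ablation on encoder choice."]

images = [Image.open(p).convert("RGB") for p in image_paths]
inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)
    sim = out.logits_per_image  # (num_images, num_captions) similarity logits

# Rank captions for each image; the matching caption shares the image's index.
ranks = sim.argsort(dim=-1, descending=True)
recall_at_1 = (ranks[:, 0] == torch.arange(len(images))).float().mean()
print(f"Recall@1 (image->text): {recall_at_1:.2f}")
```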