MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models
We introduce MultiMedEval, an open-source toolkit for the fair and reproducible
evaluation of large medical vision-language models (VLMs). MultiMedEval
comprehensively assesses model performance on a broad array of six
multi-modal tasks, conducted over 23 datasets and spanning over 11 medical
domains. The tasks and performance metrics were chosen for their widespread
adoption in the community and their diversity, ensuring a thorough evaluation
of a model's overall generalizability. We open-source a Python toolkit
(github.com/corentin-ryr/MultiMedEval) with a simple interface and setup
process, enabling the evaluation of any VLM in just a few lines of code. Our
goal is to simplify the intricate landscape of VLM evaluation and thus promote
fair and uniform benchmarking of future models.

Comment: Under review at MIDL 202
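To make the "few lines of code" claim concrete, here is a minimal usage sketch. It assumes the toolkit follows a batcher-based evaluation pattern in which the user wraps their own VLM inference in a callable and hands it to an evaluation engine; the engine class and method names (`MultiMedEval`, `setup`, `eval`), the task names, and the batcher signature shown here are illustrative assumptions, not the confirmed API — consult the repository above for the actual interface.

```python
# Hypothetical sketch of a MultiMedEval-style evaluation loop.
# Only the user-side batcher is concrete; the engine calls below are
# assumptions about the toolkit's interface and are left commented out.
from typing import Any, Dict, List


def batcher(prompts: List[Dict[str, Any]]) -> List[str]:
    """User-supplied callable: given a batch of multi-modal prompts
    (text plus associated images), return one generated answer string
    per prompt. Replace the placeholder below with your own VLM call."""
    return ["placeholder model answer" for _ in prompts]


# Assumed workflow (names are illustrative, not the confirmed API):
# engine = MultiMedEval()                 # create the evaluation engine
# engine.setup()                          # download and prepare the datasets
# results = engine.eval(["VQA", "Report Generation"], batcher)
# print(results)                          # per-task metrics for the wrapped VLM
```

The design point this illustrates is that the benchmark stays model-agnostic: all dataset handling and metric computation live in the toolkit, and the only integration work is wrapping an existing VLM's inference behind the batcher callable.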