Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime
This paper explores training medical vision-language models (VLMs) -- where
the visual and language inputs are embedded into a common space -- with a
particular focus on scenarios where training data is limited, as is often the
case in clinical datasets. We explore several candidate methods to improve
low-data performance, including: (i) adapting generic pre-trained models to
novel image and text domains (i.e. medical imaging and reports) via unimodal
self-supervision; (ii) using local (e.g. GLoRIA) and global (e.g. InfoNCE)
contrastive loss functions as well as a combination of the two; (iii) extra
supervision during VLM training, via: (a) image- and text-only
self-supervision, and (b) creating additional positive image-text pairs for
training through augmentation and nearest-neighbour search.
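The global contrastive objective referenced in (ii) can be illustrated concretely. Below is a minimal NumPy sketch of a symmetric InfoNCE loss over a batch of paired image and text embeddings; it is an illustration of the standard formulation, not the authors' implementation, and the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    Matched pairs (row i of each matrix) are positives; all other
    pairings within the batch act as negatives.
    """
    # L2-normalise so the dot product is a cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(img))         # positives lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_softmax = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_softmax[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels) +
                  cross_entropy(logits.T, labels))
```

Local objectives such as GLoRIA follow the same contrastive pattern but match report tokens against image sub-regions rather than whole-input embeddings.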
Using text-to-image retrieval as a benchmark, we evaluate the performance of
these methods with variable sized training datasets of paired chest X-rays and
radiological reports. Combined, they significantly improve retrieval compared
to fine-tuning CLIP, roughly equivalent to training with the data. A similar
pattern is found in the downstream task of classifying CXR-related
conditions, where our method outperforms CLIP as well as BioVIL, a strong CXR
VLM benchmark, in both the zero-shot and linear probing settings. We conclude with a set
of recommendations for researchers aiming to train vision-language models on
other medical imaging modalities when training data is scarce. To facilitate
further research, we will make our code and models publicly available.
Comment: Accepted to MIDL 202
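The text-to-image retrieval benchmark used above is typically scored with Recall@K: for each report, count whether its paired image appears among the K nearest images in the shared embedding space. A minimal NumPy sketch follows; the function name and signature are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def recall_at_k(txt_emb, img_emb, k=5):
    """Text-to-image Recall@K.

    Row i of txt_emb and row i of img_emb are assumed to be a matched
    report/image pair; a query counts as a hit if its paired image is
    among the k most cosine-similar images.
    """
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    sims = txt @ img.T                        # (N, N): row i = query report i
    topk = np.argsort(-sims, axis=1)[:, :k]   # indices of k best images
    hits = (topk == np.arange(len(txt))[:, None]).any(axis=1)
    return hits.mean()
```

Reporting Recall@K at several values of K (e.g. 1, 5, 10) is the usual convention for retrieval benchmarks of this kind.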
External validation of SpineNet, an open-source deep learning model for grading lumbar disk degeneration MRI features, using the Northern Finland Birth Cohort 1966
Abstract
Study Design. This is a retrospective observational study to externally validate a deep learning image classification model.
Objective. Deep learning models such as SpineNet offer the possibility of automating the process of disk degeneration (DD) classification from magnetic resonance imaging (MRI). External validation is an essential step to their development. The aim of this study was to externally validate SpineNet predictions for DD using Pfirrmann classification and Modic changes (MCs) on data from the Northern Finland Birth Cohort 1966 (NFBC1966).
Summary of Data. We validated SpineNet using data from 1331 NFBC1966 participants for whom both lumbar spine MRI data and consensus DD gradings were available.
Materials and Methods. SpineNet returned Pfirrmann grade and MC presence from T2-weighted sagittal lumbar MRI sequences from NFBC1966, a data set geographically and temporally separated from its training data set. A range of agreement and reliability metrics were used to compare predictions with expert radiologists. Subsets of data that match SpineNet training data more closely were also tested.
Results. Balanced accuracy for DD was 78% (77%–79%) and for MC 86% (85%–86%). Interrater reliability for Pfirrmann grading was Lin concordance correlation coefficient=0.86 (0.85–0.87) and Cohen κ=0.68 (0.67–0.69). In a low back pain subset, these reliability metrics remained largely unchanged. In total, 20.83% of disks were rated differently by SpineNet compared with the human raters, but only 0.85% of disks had a grade difference >1. Interrater reliability for MC detection was κ=0.74 (0.72–0.75). In the low back pain subset, this metric was almost unchanged at κ=0.76 (0.73–0.79).
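The agreement statistics reported above have standard definitions. The following NumPy sketch shows how Lin's concordance correlation coefficient and Cohen's κ are computed from two raters' gradings; it is an illustration of the formulas, not the study's actual analysis code.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters'
    ordinal grades (e.g. Pfirrmann 1-5)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()      # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def cohens_kappa(x, y):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    x, y = np.asarray(x), np.asarray(y)
    cats = np.union1d(x, y)
    po = (x == y).mean()                    # observed agreement
    pe = sum((x == c).mean() * (y == c).mean() for c in cats)  # by chance
    return (po - pe) / (1 - pe)
```

Unlike raw percent agreement, both statistics penalise systematic disagreement: κ corrects for chance-level agreement, while Lin's CCC additionally penalises any shift in location or scale between the two raters.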
Conclusions. In this study, SpineNet was benchmarked against expert human raters in the research setting. It matched human reliability and demonstrated robust performance despite the multiple challenges facing model generalizability.