LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
Obtaining large pre-trained models that can be fine-tuned to new tasks with
limited annotated samples has remained an open challenge for medical imaging
data. While pre-trained deep networks on ImageNet and vision-language
foundation models trained on web-scale data are prevailing approaches, their
effectiveness on medical tasks is limited due to the significant domain shift
between natural and medical images. To bridge this gap, we introduce LVM-Med,
the first family of deep networks trained on large-scale medical datasets. We
have collected approximately 1.3 million medical images from 55 publicly
available datasets, covering a large number of organs and modalities such as
CT, MRI, X-ray, and Ultrasound. We benchmark several state-of-the-art
self-supervised algorithms on this dataset and propose a novel self-supervised
contrastive learning algorithm using a graph-matching formulation. The proposed
approach makes three contributions: (i) it integrates prior pair-wise image
similarity metrics based on local and global information; (ii) it captures the
structural constraints of feature embeddings through a loss function
constructed via a combinatorial graph-matching objective; and (iii) it can be
trained efficiently end-to-end using modern gradient-estimation techniques for
black-box solvers. We thoroughly evaluate the proposed LVM-Med on 15 downstream
medical tasks ranging from segmentation and classification to object detection,
in both in-distribution and out-of-distribution settings. LVM-Med empirically
outperforms a number of state-of-the-art supervised, self-supervised, and
foundation models. For challenging tasks such as Brain Tumor Classification or
Diabetic Retinopathy Grading, LVM-Med improves previous vision-language models
trained on 1 billion masks by 6-7% while using only a ResNet-50.
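The graph-matching objective described above is combinatorial and therefore non-differentiable, so end-to-end training hinges on gradient estimation through the black-box solver. Below is a minimal, hypothetical sketch of one standard estimator for this setting (the piecewise-linear interpolation of Vlastelica et al., ICLR 2020); the cost construction, the `lam` value, and the Hamming-style loss are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: contrastive learning through a black-box matching solver,
# in the spirit of LVM-Med's combinatorial objective. Assumes the
# gradient estimator of Vlastelica et al. (ICLR 2020); the cost
# construction, `lam`, and the Hamming-style loss are illustrative.
import torch
from scipy.optimize import linear_sum_assignment


def solve_matching(cost):
    """Black-box solver: returns the 0/1 assignment matrix minimizing total cost."""
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    match = torch.zeros_like(cost)
    match[rows, cols] = 1.0
    return match


class BlackboxMatching(torch.autograd.Function):
    """Hard matching in the forward pass; interpolated gradient in the backward pass."""

    @staticmethod
    def forward(ctx, cost, lam):
        match = solve_matching(cost)
        ctx.lam = lam
        ctx.save_for_backward(cost, match)
        return match

    @staticmethod
    def backward(ctx, grad_match):
        cost, match = ctx.saved_tensors
        # Re-solve at a cost perturbed in the direction of the incoming
        # gradient; the difference of solver outputs approximates dL/dcost.
        match_prime = solve_matching(cost + ctx.lam * grad_match)
        return -(match - match_prime) / ctx.lam, None


def graph_matching_loss(z_a, z_b, lam=15.0):
    """Hamming-style loss between the solver's matching and the true pairing
    of two augmented views (row i of z_a corresponds to row i of z_b)."""
    cost = -z_a @ z_b.t()                          # high similarity -> low cost
    match = BlackboxMatching.apply(cost, lam)
    target = torch.eye(len(z_a), device=z_a.device)
    return (match * (1 - target) + (1 - match) * target).sum()
```

Note that the solver stays exact and discrete at train time; only its sensitivity to the cost matrix is approximated, which is what allows gradients to reach the encoder producing the embeddings.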
On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation
Constructing a robust model that can effectively generalize to test samples
under distribution shifts remains a significant challenge in the field of
medical imaging. Foundation models for vision and language, pre-trained on
extensive natural image and text data, have emerged as a promising approach,
showcasing impressive learning abilities across different tasks while requiring
only a limited number of annotated samples. While numerous
techniques have focused on developing better fine-tuning strategies to adapt
these models for specific domains, we instead examine their robustness to
domain shifts in the medical image segmentation task. To this end, we compare
the generalization performance to unseen domains of various pre-trained models
after being fine-tuned on the same in-distribution dataset and show that
foundation-based models enjoy better robustness than other architectures. From
here, we further develop a new Bayesian uncertainty estimation method for frozen
models and use it as an indicator to characterize the model's performance on
out-of-distribution (OOD) data, which proves particularly beneficial for real-world
applications. Our experiments not only reveal the limitations of current
indicators such as "accuracy on the line" or "agreement on the line" commonly used in
natural image applications but also emphasize the promise of the introduced
Bayesian uncertainty. Specifically, lower predictive uncertainty generally
corresponds to higher OOD performance.
Comment: Advances in Neural Information Processing Systems (NeurIPS) 2023,
Workshop on robustness of zero/few-shot learning in foundation models
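The abstract does not spell out the estimator, so the following is only a rough illustration of the general idea of scoring a frozen model's OOD behavior with a Bayesian-style uncertainty: Monte Carlo dropout over a lightweight trainable head, with mean predictive entropy as the score. `encoder`, `head`, and `n_samples` are hypothetical names, not the paper's API.

```python
# Sketch: an uncertainty score for a *frozen* model via Monte Carlo
# dropout over a small head, with mean predictive entropy as the OOD
# indicator. A generic stand-in, not the paper's exact Bayesian
# estimator; `encoder` and `head` are hypothetical modules.
import torch
import torch.nn.functional as F


@torch.no_grad()
def mc_dropout_uncertainty(encoder, head, x, n_samples=20):
    """Lower returned score should correlate with higher OOD performance."""
    encoder.eval()            # frozen backbone stays deterministic
    head.train()              # .train() keeps the head's dropout layers active
    feats = encoder(x)
    probs = torch.stack(
        [F.softmax(head(feats), dim=1) for _ in range(n_samples)]
    ).mean(dim=0)             # Monte Carlo average of predictive distributions
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy.mean().item()
```

Under this reading, a practitioner would rank candidate deployments (or flag individual inputs) by the returned score and expect the low-uncertainty ones to generalize better, matching the correlation the paper reports.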
An Outbreak of Severe Infections with Community-Acquired MRSA Carrying the Panton-Valentine Leukocidin Following Vaccination
Background: Infections with community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA) are emerging
worldwide. We investigated an outbreak of severe CA-MRSA infections in children following outpatient vaccination.
Methods and Findings: We carried out a field investigation after adverse events following immunization (AEFI) were reported. We reviewed the clinical data from all cases. S. aureus isolates recovered from skin infections and from nasal and throat swabs were analyzed by pulsed-field gel electrophoresis, multilocus sequence typing, PCR, and microarray. In May 2006, nine children presented with AEFI, ranging from fatal toxic shock syndrome, necrotizing soft tissue infection, and purulent abscesses to fever
with rash. All had received a vaccination injection in different health centres in one district of Ho Chi Minh City. Eight children had been vaccinated by the same health care worker (HCW). Deficiencies in vaccine quality, storage practices, or preparation and delivery were not found. Infection control practices were insufficient. CA-MRSA was cultured from four children and from nasal and throat swabs from the HCW. Strains from the children and the HCW were indistinguishable. All carried the Panton-Valentine leukocidin (PVL) genes, the staphylococcal enterotoxin B gene, and the gene complex for staphylococcal cassette chromosome mec (SCCmec) type V, and were sequence type 59 (ST59). Strain HCM3A is epidemiologically unrelated to a strain of ST59 prevalent in the USA, although they belong to the same lineage.
Conclusions: We describe an outbreak of infections with CA-MRSA in children, transmitted by an asymptomatically colonized HCW during immunization injections. Consistent adherence to injection practice guidelines is needed to prevent CA-MRSA transmission in both in- and outpatient settings.