Segment Anything Model for Medical Images?
The Segment Anything Model (SAM) is the first foundation model for general
image segmentation. It introduced a novel promptable segmentation task,
enabling zero-shot image segmentation with the pre-trained model via two main
modes: automatic (everything) mode and manual prompt mode. SAM has achieved impressive
results on various natural image segmentation tasks. However, medical image
segmentation (MIS) is more challenging due to the complex modalities, fine
anatomical structures, uncertain and complex object boundaries, and wide-range
object scales. Meanwhile, zero-shot and efficient MIS can greatly reduce the
annotation time and boost the development of medical image analysis. Hence, SAM
appears to be a promising tool, but its performance on large-scale medical
datasets requires further validation. We collected and sorted 52 open-source
datasets and built a large medical segmentation dataset with 16 modalities, 68 objects,
and 553K slices. We conducted a comprehensive analysis of different SAM testing
strategies on the so-called COSMOS 553K dataset. Extensive experiments show
that SAM perceives objects in medical images more reliably when given manual
hints such as points and boxes, so prompt mode outperforms everything mode.
Additionally, SAM shows remarkable performance on
some specific objects and modalities, but is imperfect or even totally fails in
other situations. Finally, we analyze the influence of different factors (e.g.,
the Fourier-based boundary complexity and size of the segmented objects) on
SAM's segmentation performance. Overall, the experiments validate that SAM's
zero-shot segmentation capability is insufficient to ensure its direct
application to MIS.

Comment: 23 pages, 14 figures, 12 tables
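The Fourier-based boundary complexity factor mentioned above is not fully specified in this abstract. As a minimal sketch of one plausible formulation, the following assumes an ordered boundary contour is available and scores complexity as the fraction of spectral energy outside the lowest few Fourier harmonics of the contour; the function name, the `n_low` cutoff, and the energy-fraction definition are all illustrative choices, not the paper's exact metric.

```python
import numpy as np

def fourier_boundary_complexity(contour: np.ndarray, n_low: int = 8) -> float:
    """Score the complexity of a closed, ordered boundary contour.

    `contour` is an (N, 2) array of (x, y) points in traversal order.
    The boundary is treated as a complex signal x + iy; after removing
    translation, the fraction of FFT energy outside the lowest `n_low`
    harmonics serves as a complexity score: near 0 for smooth blobs,
    larger for jagged boundaries. (Illustrative metric only; the
    paper's exact definition may differ.)
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    z = z - z.mean()                      # remove translation
    energy = np.abs(np.fft.fft(z)) ** 2
    total = energy[1:].sum()              # skip DC (zero after centering)
    if total == 0.0:
        return 0.0                        # degenerate (single-point) contour
    # lowest n_low positive and negative frequencies
    low = energy[1 : n_low + 1].sum() + energy[-n_low:].sum()
    return float(1.0 - low / total)

# A smooth circle versus a jagged 15-lobed star:
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
star_r = 1.0 + 0.5 * np.cos(15 * theta)
star = np.stack([star_r * np.cos(theta), star_r * np.sin(theta)], axis=1)

print(fourier_boundary_complexity(circle))  # close to 0
print(fourier_boundary_complexity(star))    # noticeably higher
```

A circle concentrates all of its energy in a single harmonic and scores near zero, while the star's high-frequency lobes push its score up, matching the intuition that fine, irregular anatomical boundaries are harder to segment.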