Learning to In-paint: Domain Adaptive Shape Completion for 3D Organ Segmentation
We aim to incorporate explicit shape information into current 3D organ
segmentation models. Unlike previous works, we formulate shape learning as an
in-painting task, which we name Masked Label Mask Modeling (MLM). In MLM,
learnable mask tokens are fed into transformer blocks to complete the label
mask of the organ. To transfer MLM shape knowledge to the target domain, we
further propose a novel shape-aware self-distillation with both an in-painting
reconstruction loss and a pseudo loss. Extensive experiments on five public
organ segmentation datasets show consistent improvements over prior art, with
a gain of at least 1.2 points in Dice score, demonstrating the effectiveness
of our method in challenging unsupervised domain adaptation scenarios,
including: (1) in-domain organ segmentation; (2) unseen domain segmentation;
and (3) unseen organ segmentation. We hope this work will advance shape
analysis and geometric learning in medical imaging.
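The masked in-painting setup described above can be illustrated with a minimal sketch: patches of a 2D label mask are replaced by a placeholder token value, and a reconstruction loss is scored only on the masked positions. The function names, the patch-masking scheme, and the squared-error loss are illustrative assumptions, not the paper's actual implementation (which uses learnable mask tokens inside transformer blocks).

```python
import numpy as np

def mask_label_patches(label, patch=4, ratio=0.5, mask_value=-1.0, seed=0):
    """Replace a random fraction of patches in a 2D label mask with a
    placeholder token value (a stand-in for a learnable mask token)."""
    rng = np.random.default_rng(seed)
    h, w = label.shape
    masked = label.astype(float)
    is_masked = np.zeros((h, w), dtype=bool)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            if rng.random() < ratio:
                masked[i:i + patch, j:j + patch] = mask_value
                is_masked[i:i + patch, j:j + patch] = True
    return masked, is_masked

def inpaint_loss(pred, target, is_masked):
    """Squared-error reconstruction loss scored only on masked positions."""
    return float(np.mean((pred[is_masked] - target[is_masked]) ** 2))
```

A model trained this way sees only the corrupted mask and is penalized for its predictions at the masked positions, which is what forces it to learn organ shape priors.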
A New Ensemble Learning Framework for 3D Biomedical Image Segmentation
3D image segmentation plays an important role in biomedical image analysis.
Many 2D and 3D deep learning models have achieved state-of-the-art segmentation
performance on 3D biomedical image datasets. Yet, 2D and 3D models each have
their own strengths and weaknesses, and unifying them may yield more accurate
results. In this paper, we propose a new ensemble
learning framework for 3D biomedical image segmentation that combines the
merits of 2D and 3D models. First, we develop a fully convolutional network
based meta-learner to learn how to improve the results from 2D and 3D models
(base-learners). Then, to minimize over-fitting for our sophisticated
meta-learner, we devise a new training method that uses the results of the
base-learners as multiple versions of "ground truths". Furthermore, since our
new meta-learner training scheme does not depend on manual annotation, it can
utilize abundant unlabeled 3D image data to further improve the model.
Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset
and the mouse piriform cortex dataset) show that our approach is effective
under fully-supervised, semi-supervised, and transductive settings, and attains
superior performance over state-of-the-art image segmentation methods.
Comment: To appear in AAAI-2019. The first three authors contributed equally
to the paper.
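The training scheme above, treating base-learner outputs as multiple versions of "ground truth" for the meta-learner, can be sketched as a simple averaged loss. The function name and the squared-error choice are illustrative assumptions; the paper's meta-learner is a fully convolutional network with its own training objective.

```python
import numpy as np

def multi_target_loss(meta_pred, base_preds):
    """Average the meta-learner's squared error against each base-learner
    output, using the base-learner results as multiple 'ground truths'.
    Because no manual annotation enters this loss, it can also be computed
    on unlabeled volumes."""
    losses = [np.mean((meta_pred - bp) ** 2) for bp in base_preds]
    return float(np.mean(losses))
```

Since the targets are model outputs rather than manual labels, exactly the same loss applies to unlabeled 3D data, which is what enables the semi-supervised and transductive settings mentioned in the abstract.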
Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation
The Segment Anything Model (SAM) has rapidly been adopted for segmenting a
wide range of natural images. However, recent studies have indicated that SAM
exhibits subpar performance on 3D medical image segmentation tasks. Beyond
the domain gap between natural and medical images, disparities in the spatial
arrangement of 2D and 3D images, the substantial computational burden (which
demands powerful GPU servers), and time-consuming manual prompt generation
all impede the extension of SAM to a broader spectrum of medical image
segmentation applications. To address these challenges, we
introduce a novel method, AutoSAM Adapter, designed specifically for 3D
multi-organ CT-based segmentation. We employ parameter-efficient adaptation
techniques to develop an automatic prompt learning paradigm that transfers
the SAM model's capabilities to 3D medical image segmentation, eliminating
the need for manually generated prompts. Furthermore,
we effectively transfer the acquired knowledge of the AutoSAM Adapter to other
lightweight models specifically tailored for 3D medical image analysis,
achieving state-of-the-art (SOTA) performance on medical image segmentation
tasks. Through extensive experimental evaluation, we demonstrate the AutoSAM
Adapter as a critical foundation for effectively leveraging the emerging
ability of foundation models in 2D natural image segmentation for 3D medical
image segmentation.
Comment: 9 pages, 4 figures, 4 tables
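The automatic prompt learning described above can be caricatured as a tiny trainable head that maps pooled image features to a fixed set of prompt embeddings, so no manual clicks or boxes are required. All names, shapes, and the linear form are hypothetical stand-ins, not the AutoSAM Adapter's actual architecture; the point is only that prompts become learned outputs of a small adapter rather than user input.

```python
import numpy as np

class AutoPromptHead:
    """Tiny learned head mapping pooled image features to k prompt
    embeddings, replacing manual point/box prompts (hypothetical design)."""

    def __init__(self, feat_dim, prompt_dim, num_prompts, seed=0):
        rng = np.random.default_rng(seed)
        # Only these few parameters would be trained; the backbone stays
        # frozen, in the spirit of parameter-efficient adaptation.
        self.W = 0.01 * rng.standard_normal((feat_dim, num_prompts * prompt_dim))
        self.b = np.zeros(num_prompts * prompt_dim)
        self.num_prompts = num_prompts
        self.prompt_dim = prompt_dim

    def __call__(self, feats):
        # feats: (feat_dim,) pooled image features
        # returns: (num_prompts, prompt_dim) prompt embeddings
        out = feats @ self.W + self.b
        return out.reshape(self.num_prompts, self.prompt_dim)
```

Keeping the trainable head this small is also what makes distilling its behavior into lightweight 3D models, as the abstract describes, practical.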
APAUNet: Axis Projection Attention UNet for Small Target in 3D Medical Segmentation
In 3D medical image segmentation, segmenting small targets is crucial for
diagnosis but remains challenging. In this paper, we propose the Axis
Projection Attention UNet, named APAUNet, for 3D medical image segmentation,
especially for small targets. Considering the large proportion of the
background in the 3D feature space, we introduce a projection strategy to
project the 3D features into three orthogonal 2D planes to capture the
contextual attention from different views. In this way, we can filter out the
redundant feature information and mitigate the loss of critical information for
small lesions in 3D scans. Then we utilize a dimension hybridization strategy
to fuse the 3D features with attention from different axes and merge them by a
weighted summation to adaptively learn the importance of different
perspectives. Finally, in the APA Decoder, we concatenate both high and low
resolution features in the 2D projection process, thereby obtaining more
precise multi-scale information, which is vital for small lesion segmentation.
Quantitative and qualitative experimental results on two public datasets (BTCV
and MSD) demonstrate that our proposed APAUNet outperforms the other methods.
Concretely, our APAUNet achieves average Dice scores of 87.84 on BTCV, 84.48
on MSD-Liver, and 69.13 on MSD-Pancreas, and significantly surpasses previous
SOTA methods on small targets.
Comment: Accepted by ACCV202
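The axis-projection-and-fusion idea above can be sketched as follows: a 3D feature volume is pooled onto its three orthogonal 2D planes, each plane is broadcast back to 3D, and the three views are merged by a softmax-weighted summation. The mean-pooling projection and the scalar fusion weights are simplifying assumptions; APAUNet computes learned attention in each projected view and learns the fusion weights.

```python
import numpy as np

def axis_projections(vol):
    """Project a 3D feature volume onto its three orthogonal planes by
    mean-pooling along each axis (one simple choice of projection)."""
    return [vol.mean(axis=a) for a in range(3)]

def fuse_projections(vol, logits):
    """Broadcast each 2D projection back to 3D and merge the three views
    with a softmax-weighted summation (scalar weights stand in for the
    adaptively learned per-view importance)."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    planes = axis_projections(vol)
    # Re-insert the reduced axis so broadcasting restores the 3D shape.
    back = [np.expand_dims(p, axis=a) for a, p in enumerate(planes)]
    return sum(wi * np.broadcast_to(b, vol.shape) for wi, b in zip(w, back))
```

Because each 2D plane aggregates the volume along one axis, background voxels are averaged away per view, which is the intuition behind filtering redundant context around small lesions.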