OneSeg: Self-learning and One-shot Learning based Single-slice Annotation for 3D Medical Image Segmentation
As deep learning methods continue to improve medical image segmentation
performance, data annotation is still a big bottleneck due to the
labor-intensive and time-consuming burden on medical experts, especially for 3D
images. To significantly reduce annotation efforts while attaining competitive
segmentation accuracy, we propose a self-learning and one-shot learning based
framework for 3D medical image segmentation by annotating only one slice of
each 3D image. Our approach takes two steps: (1) self-learning of a
reconstruction network to learn semantic correspondence among 2D slices within
3D images, and (2) representative selection of single slices for one-shot
manual annotation and propagating the annotated data with the well-trained
reconstruction network. Extensive experiments verify that our new framework
achieves performance comparable to that of fully supervised methods with less
than 1% of the data annotated, and that it generalizes well on several
out-of-distribution testing sets.
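The two-step pipeline above (learn slice-to-slice correspondence, then propagate one annotated slice through the volume) can be sketched in a toy form. The `best_shift` search below is a deliberately simple stand-in for the learned reconstruction network: it estimates a rigid integer shift between adjacent slices and warps the mask accordingly, whereas the actual method learns dense semantic correspondence. All names here are illustrative, not the paper's API.

```python
import numpy as np

def best_shift(src, dst, max_shift=3):
    # Exhaustive search over small integer shifts: a toy stand-in for
    # the learned correspondence between two adjacent 2D slices.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(src, (dy, dx), axis=(0, 1)) - dst) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def propagate_mask(volume, seed_idx, seed_mask):
    """Propagate a single annotated slice to the whole 3D volume."""
    masks = {seed_idx: seed_mask}
    # Walk outward from the annotated slice, warping the previous
    # slice's mask by the estimated inter-slice shift.
    for idx in range(seed_idx + 1, volume.shape[0]):
        dy, dx = best_shift(volume[idx - 1], volume[idx])
        masks[idx] = np.roll(masks[idx - 1], (dy, dx), axis=(0, 1))
    for idx in range(seed_idx - 1, -1, -1):
        dy, dx = best_shift(volume[idx + 1], volume[idx])
        masks[idx] = np.roll(masks[idx + 1], (dy, dx), axis=(0, 1))
    return np.stack([masks[i] for i in range(volume.shape[0])])
```

On a synthetic volume where a structure drifts by one pixel per slice, annotating only the middle slice and calling `propagate_mask` recovers the masks for every other slice.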
Semi-Supervised Medical Image Segmentation with Co-Distribution Alignment
Medical image segmentation has made significant progress when large amounts
of labeled data are available. However, annotating medical image segmentation
datasets is expensive because it requires professional expertise.
Additionally, classes are often unevenly distributed in medical images, which
severely affects the classification performance on minority classes. To address
these problems, this paper proposes Co-Distribution Alignment (Co-DA) for
semi-supervised medical image segmentation. Specifically, Co-DA aligns marginal
predictions on unlabeled data to marginal predictions on labeled data in a
class-wise manner with two differently initialized models before using the
pseudo-labels generated by one model to supervise the other. In addition, we design
an over-expectation cross-entropy loss for filtering the unlabeled pixels to
reduce noise in their pseudo-labels. Quantitative and qualitative experiments
on three public datasets demonstrate that the proposed approach outperforms
existing state-of-the-art semi-supervised medical image segmentation methods on
both the 2D CaDIS dataset and the 3D LGE-MRI and ACDC datasets, achieving an
mIoU of 0.8515 with only 24% labeled data on CaDIS, and Dice scores of 0.8824
and 0.8773 with only 20% labeled data on LGE-MRI and ACDC, respectively.
Comment: Paper appears in Bioengineering 2023, 10(7), 86