Domain shift has been a long-standing issue for medical image segmentation.
Recently, unsupervised domain adaptation (UDA) methods have achieved promising
cross-modality segmentation performance by distilling knowledge from a
label-rich source domain to an unlabeled target domain. In this work, we
propose a multi-scale self-ensembling-based UDA framework for automatic
segmentation of two key brain structures, i.e., the Vestibular Schwannoma (VS) and
the Cochlea, on high-resolution T2 images. First, a segmentation-enhanced contrastive unpaired image translation module is designed for image-level domain adaptation from source T1 to target T2.
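As a rough illustration of the contrastive objective underlying such unpaired translation, the PatchNCE-style loss below is a minimal PyTorch sketch (the function name, feature sampling, and temperature are assumptions, and the segmentation-enhancing terms of our module are not shown): each translated patch is pulled toward the source patch at the same location and pushed away from patches at other locations.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_out, temperature=0.07):
    """InfoNCE over patch features: the translated patch at location i
    (row i of feat_out) should match the source patch at the same
    location (row i of feat_src) against all other locations."""
    feat_src = F.normalize(feat_src, dim=1)
    feat_out = F.normalize(feat_out, dim=1)
    logits = feat_out @ feat_src.t() / temperature   # (N, N) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)           # positives on the diagonal
```

Next, multi-scale deep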
supervision and consistency regularization are introduced to a mean teacher
network for self-ensemble learning to further close the domain gap.
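A minimal sketch of this mean-teacher machinery is given below (PyTorch; the EMA decay, scale averaging, and function names are illustrative assumptions, not the exact training recipe): the teacher tracks an exponential moving average of the student, and the student is penalized for disagreeing with the teacher at each deep-supervision scale.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: teacher weights follow an exponential
    moving average of the student weights."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def multiscale_consistency(student_outs, teacher_outs):
    """Consistency regularization: MSE between student and teacher
    softmax predictions, averaged over the deep-supervision scales."""
    losses = [F.mse_loss(s.softmax(dim=1), t.softmax(dim=1).detach())
              for s, t in zip(student_outs, teacher_outs)]
    return sum(losses) / len(losses)
```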
Furthermore, self-training and intensity augmentation techniques are utilized
to mitigate label scarcity and boost cross-modality segmentation performance.
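For concreteness, the sketch below shows one plausible form of these two techniques (PyTorch; the augmentation ranges and helper names are assumptions): random intensity perturbations of the input, and pseudo-labels obtained by running the current model on unlabeled target images.

```python
import torch

def intensity_augment(img, gamma_range=(0.7, 1.5), shift=0.1, scale=0.1):
    """Random gamma, contrast, and brightness jitter for an
    intensity-normalized image in [0, 1] (illustrative ranges)."""
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    img = img.clamp(0, 1).pow(gamma)
    img = img * (1.0 + (torch.rand(1).item() - 0.5) * 2 * scale)  # contrast
    img = img + (torch.rand(1).item() - 0.5) * 2 * shift          # brightness
    return img.clamp(0, 1)

@torch.no_grad()
def make_pseudo_labels(model, unlabeled_batch):
    """Self-training: argmax predictions on unlabeled target images
    serve as pseudo-labels for the next training round."""
    return model(unlabeled_batch).argmax(dim=1)
```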
Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4% and average symmetric surface distances (ASSD) of 0.55 mm and 0.26 mm for the VS and Cochlea, respectively, in the validation phase of the crossMoDA 2022 challenge.

Comment: Accepted to the BrainLes MICCAI proceedings (5th-place solution for the MICCAI 2022 Cross-Modality Domain Adaptation (crossMoDA) Challenge).