ProSFDA: Prompt Learning based Source-free Domain Adaptation for Medical Image Segmentation
The domain discrepancy between medical images acquired in different
situations poses a major hurdle to deploying pre-trained medical image
segmentation models for clinical use. Since it is often infeasible to
distribute the training data with a pre-trained model, owing to the huge data
size and privacy concerns, source-free unsupervised domain adaptation (SFDA)
has recently been increasingly studied, based on either pseudo labels or prior
knowledge. However, the image features and probability maps used by pseudo
label-based SFDA, and the consistent prior assumption and prior prediction
network used by prior-guided SFDA, may become less reliable when the domain
discrepancy is large. In this paper, we propose a \textbf{Pro}mpt learning
based \textbf{SFDA} (\textbf{ProSFDA}) method for medical image segmentation,
which aims to improve the quality of domain adaptation by explicitly
minimizing the domain discrepancy.
Specifically, in the prompt learning stage, we estimate source-domain images
by adding a domain-aware prompt to target-domain images, then optimize the
prompt by minimizing a statistics alignment loss, thereby prompting the
source model to generate reliable predictions on the (altered) target-domain
images. In the feature alignment stage, we also align the features of
target-domain images with those of their style-augmented counterparts to
optimize the source model, pushing it to extract compact features. We evaluate
ProSFDA on two multi-domain medical image segmentation benchmarks. Our
results indicate that the proposed ProSFDA substantially outperforms other
SFDA methods and is even comparable to UDA methods. Code will be available at
\url{https://github.com/ShishuaiHu/ProSFDA}
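The prompt learning stage described above can be sketched in a few lines. This is a minimal illustration under strong simplifications, not the authors' implementation: it assumes a frozen linear "feature extractor" `W`, a single image-agnostic additive prompt, and a statistics-alignment loss that matches only per-channel feature means against stored source statistics (`W`, `mu_src`, and `x_tgt` are all hypothetical toy stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): a frozen linear "feature extractor" W,
# source-domain feature means mu_src, and a batch of shifted target images.
D_IN, D_FEAT, BATCH = 16, 8, 32
W = rng.normal(size=(D_FEAT, D_IN)) / np.sqrt(D_IN)
mu_src = rng.normal(size=D_FEAT)
x_tgt = rng.normal(loc=1.5, size=(BATCH, D_IN))  # domain-shifted target batch

prompt = np.zeros(D_IN)  # learnable, image-agnostic visual prompt

def feat_mean(p):
    """Mean feature of the prompted target batch: W @ (mean(x) + p)."""
    return W @ (x_tgt.mean(axis=0) + p)

def stat_align_loss(p):
    """Statistics-alignment loss: match the source-domain feature means."""
    return float(np.sum((feat_mean(p) - mu_src) ** 2))

# Optimize the prompt only; the source model W stays frozen.
# For this linear sketch the gradient is 2 * W^T (W(x̄ + p) - mu_src).
lr = 0.05
for _ in range(1000):
    grad = 2.0 * W.T @ (feat_mean(prompt) - mu_src)
    prompt -= lr * grad

print(stat_align_loss(np.zeros(D_IN)), "->", stat_align_loss(prompt))
```

In the full method the extractor is a deep segmentation network and the statistics are richer than channel means, but the core idea is the same: only the additive prompt is updated, so the target images are "translated" toward the source style without touching source data.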
Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation
Extensive studies on Unsupervised Domain Adaptation (UDA) have propelled the
deployment of deep learning from limited experimental datasets into real-world
unconstrained domains. Most UDA approaches align features within a common
embedding space and apply a shared classifier for target prediction. However,
since a perfectly aligned feature space may not exist when the domain
discrepancy is large, these methods suffer from two limitations. First, the
coercive domain alignment deteriorates target-domain discriminability due to
the lack of target label supervision. Second, the source-supervised classifier
is inevitably biased toward source data, and may thus underperform in the
target domain. To
alleviate these issues, we propose to simultaneously conduct feature alignment
in two individual spaces focusing on different domains, and create for each
space a domain-oriented classifier tailored specifically for that domain.
Specifically, we design a Domain-Oriented Transformer (DOT) that has two
individual classification tokens to learn different domain-oriented
representations, and two classifiers to preserve domain-wise discriminability.
Theoretically guaranteed contrastive alignment and a source-guided
pseudo-label refinement strategy are utilized to explore both domain-invariant
and domain-specific information. Comprehensive experiments validate that our
method achieves state-of-the-art results on several benchmarks.
Comment: Accepted at ACMMM 202
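The two-token design can be illustrated with a toy forward pass. This is a minimal sketch, not the DOT implementation: it uses a single projection-free self-attention layer, hypothetical dimensions, and two separately learned classification tokens, each read out by its own classifier head:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: N patch tokens of width D, K classes.
N, D, K = 4, 8, 3

# Two learnable classification tokens: source-oriented and target-oriented.
cls_src = rng.normal(size=(1, D))
cls_tgt = rng.normal(size=(1, D))

def self_attention(tokens):
    """Single-head attention with no learned projections, for brevity."""
    scores = tokens @ tokens.T / np.sqrt(D)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ tokens

# Two domain-oriented classifier heads, one per classification token.
W_src = rng.normal(size=(D, K))
W_tgt = rng.normal(size=(D, K))

def forward(patches):
    """Prepend both CLS tokens, attend jointly, read each out separately."""
    tokens = np.concatenate([cls_src, cls_tgt, patches], axis=0)
    out = self_attention(tokens)
    logits_src = out[0] @ W_src  # source-oriented prediction
    logits_tgt = out[1] @ W_tgt  # target-oriented prediction
    return logits_src, logits_tgt

patches = rng.normal(size=(N, D))
ls, lt = forward(patches)
print(ls.shape, lt.shape)  # (3,) (3,)
```

Both tokens attend over the same patch sequence, so they share features, while the separate heads let each token specialize toward its own domain, which matches the paper's stated goal of preserving domain-wise discriminability.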
MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation
How to effectively learn from unlabeled data from the target domain is
crucial for domain adaptation, as it helps reduce the large performance gap due
to domain shift or distribution change. In this paper, we propose an
easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on
adversarial learning. Unlike most existing approaches which employ a generator
to deal with domain difference, MMEN focuses on learning the categorical
information from unlabeled target samples with the help of labeled source
samples. Specifically, we set up an unfair multi-class classifier, named the
categorical discriminator, which classifies source samples accurately but is
confused about the categories of target samples. The generator learns a common
subspace that aligns the unlabeled samples based on target pseudo-labels. We
also provide a theoretical explanation for MMEN, showing that learning feature
alignment reduces domain mismatch at the category level. Experimental results
on various benchmark datasets demonstrate the effectiveness of our method over
existing state-of-the-art baselines.
Comment: 8 pages, 6 figures
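The opposing objectives of the categorical discriminator and the generator can be sketched as entropy-based losses. This is a minimal illustration under assumed shapes and a hypothetical trade-off weight `lam`, not the MMEN code: the discriminator is trained to classify source samples while staying maximally confused (high entropy) on target samples, and the generator pushes the opposite way:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    """Mean categorical entropy over a batch of predicted distributions."""
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=-1).mean())

def cross_entropy(logits, labels):
    probs = softmax(logits)
    return float(-np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean())

# Toy logits from the shared classifier (the "categorical discriminator").
src_logits = rng.normal(size=(16, 5))
src_labels = rng.integers(0, 5, size=16)
tgt_logits = rng.normal(size=(16, 5))

lam = 0.1  # trade-off weight (hypothetical)

# Discriminator step: classify source correctly, be maximally confused on
# target, i.e. maximize target entropy (so its negation is minimized).
loss_disc = cross_entropy(src_logits, src_labels) - lam * entropy(softmax(tgt_logits))

# Generator step: minimize the same target entropy (opposite sign),
# producing the minimax game over category-level confusion.
loss_gen = lam * entropy(softmax(tgt_logits))

print(loss_disc, loss_gen)
```

Alternating updates on these two losses drive target features toward regions where the classifier assigns confident, category-consistent predictions, which is the category-level alignment the abstract describes.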