AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Model adaptation aims to solve the domain transfer problem under the
constraint of accessing only the pretrained source models. With growing
concerns over data privacy and transmission efficiency, this paradigm has
recently gained popularity. This paper studies the vulnerability of model
adaptation algorithms to universal attacks transferred from the source
domain, which may be planted by malicious providers. We explore both
universal adversarial perturbations and backdoor attacks as loopholes on the
source side and discover that they still survive in the target models after
adaptation. To address this issue, we propose a model preprocessing framework,
named AdaptGuard, to improve the security of model adaptation algorithms.
AdaptGuard avoids direct use of the risky source parameters through knowledge
distillation and utilizes pseudo-adversarial samples under an adjusted
perturbation radius to enhance robustness. AdaptGuard is a plug-and-play
module that requires neither robust pretrained models nor any changes to the
subsequent model adaptation algorithms. Extensive results on three commonly used datasets and
two popular adaptation methods validate that AdaptGuard can effectively defend
against universal attacks and maintain clean accuracy in the target domain
simultaneously. We hope this research will shed light on the safety and
robustness of transfer learning. Code is available at
https://github.com/TomSheng21/AdaptGuard.
Comment: ICCV 2023
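The abstract above states that AdaptGuard transfers source knowledge via distillation rather than copying the risky parameters. As a minimal, hypothetical sketch (the paper's exact objective is not given here), the standard temperature-scaled distillation loss that such a step typically builds on looks like:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional for distillation gradients."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(np.mean(kl) * T * T)
```

The temperature `T` and the KL direction are conventional choices, not details from the abstract; the point is only that the student never loads the source weights directly, so any backdoor embedded in them is not copied verbatim.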
Class reconstruction driven adversarial domain adaptation for hyperspectral image classification
We address the problem of cross-domain classification of hyperspectral image (HSI) pairs under the notion of unsupervised domain adaptation (UDA). The UDA problem aims at classifying the test samples of a target domain by exploiting the labeled training samples from a related but different source domain. In this respect, the use of adversarial-training-driven domain classifiers is popular, as it seeks to learn a shared feature space for both domains. However, such a formalism fails to ensure (i) the discriminativeness and (ii) the non-redundancy of the learned space. In general, the feature space learned by a domain classifier does not convey any meaningful insight regarding the data. We are instead interested in constraining the space to be simultaneously discriminative and reconstructive at the class scale. In particular, the reconstructive constraint enables the learning of category-specific, meaningful feature abstractions, and UDA in such a latent space is expected to better associate the domains. In addition, we consider an orthogonality constraint to ensure non-redundancy of the learned space. Experimental results obtained on benchmark HSI datasets (Botswana and Pavia) confirm the efficacy of the proposed approach.
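The non-redundancy constraint mentioned above can be made concrete. As an illustrative sketch (the paper's exact formulation is not reproduced here), an orthogonality penalty pushes the Gram matrix of class-wise feature vectors toward the identity, so that different class abstractions do not overlap:

```python
import numpy as np

def orthogonality_penalty(F):
    """||F^T F - I||_F^2 for a (dim, num_classes) matrix F whose columns
    are class-wise feature representations; columns are L2-normalized
    first so the penalty measures only angular overlap between classes."""
    F = np.asarray(F, dtype=float)
    F = F / np.linalg.norm(F, axis=0, keepdims=True)
    gram = F.T @ F
    return float(np.sum((gram - np.eye(gram.shape[0])) ** 2))
```

Orthonormal class features incur zero penalty, while collinear (redundant) features are penalized, which is the sense in which the constraint enforces a non-redundant latent space.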
Dual adversarial models with cross-coordination consistency constraint for domain adaptation in brain tumor segmentation
The brain tumor segmentation task across different domains remains a major challenge because tumors of different grades and severities may show different distributions, limiting the ability of a single segmentation model to label such tumors. Semi-supervised models (e.g., mean teacher) are strong unsupervised domain-adaptation learners. However, one of the main drawbacks of using a mean teacher is that, given a large number of iterations, the teacher model weights converge to those of the student model, and any biased and unstable predictions are carried over to the student. In this article, we propose a novel unsupervised domain-adaptation framework for the brain tumor segmentation task, which uses dual-student and adversarial training techniques to effectively tackle the domain shift in MR images. In this study, the adversarial strategy and consistency constraint for each student align the feature representations of the source and target domains. Furthermore, we introduce a cross-coordination constraint for the target-domain data to push the models toward more confident predictions. We validated our framework on the cross-subtype and cross-modality tasks in brain tumor segmentation and achieved better performance than current unsupervised domain-adaptation and semi-supervised frameworks.
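The teacher-collapse drawback described above follows directly from the mean-teacher update rule. A minimal sketch (names and constants are illustrative, not from the paper) shows the exponential-moving-average (EMA) teacher converging onto a fixed student:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher weight update: teacher <- alpha*teacher + (1-alpha)*student."""
    return alpha * np.asarray(teacher_w) + (1 - alpha) * np.asarray(student_w)

def consistency_loss(pred_a, pred_b):
    """Mean squared error between two models' predictions."""
    return float(np.mean((np.asarray(pred_a) - np.asarray(pred_b)) ** 2))

# With enough EMA steps against a fixed student, the teacher's weights
# collapse onto the student's -- the drift the dual-student design avoids.
teacher, student = np.array([0.0, 0.0]), np.array([1.0, -1.0])
for _ in range(2000):
    teacher = ema_update(teacher, student)
```

Because the residual gap shrinks by a factor of `alpha` per step, after many iterations the teacher mirrors the student exactly, including any biased predictions; a second, independently trained student breaks this feedback loop.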
Source-Relaxed Domain Adaptation for Image Segmentation
Domain adaptation (DA) has drawn great interest for its capacity to adapt a
model trained on labeled source data to perform well on unlabeled or weakly
labeled target data from a different domain. Most common DA techniques require
the concurrent access to the input images of both the source and target
domains. However, in practice, it is common that the source images are not
available in the adaptation phase. This is a very frequent DA scenario in
medical imaging, for instance, when the source and target images come from
different clinical sites. We propose a novel formulation for adapting
segmentation networks, which relaxes such a constraint. Our formulation is
based on minimizing a label-free entropy loss defined over target-domain data,
which we further guide with a domain invariant prior on the segmentation
regions. Many priors can be used, derived from anatomical information. Here, a
class-ratio prior is learned via an auxiliary network and integrated in the
form of a Kullback-Leibler (KL) divergence in our overall loss function. We
show the effectiveness of our prior-aware entropy minimization in adapting
spine segmentation across different MRI modalities. Our method yields
comparable results to several state-of-the-art adaptation techniques, even
though it has access to less information, as the source images are absent in
the adaptation phase. Our straightforward adaptation strategy uses only one
network, contrary to popular adversarial techniques, which cannot operate
without the source images. Our framework can be readily used with various
priors and segmentation problems.
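The loss described above has two parts: a label-free entropy term over target predictions and a KL term matching the predicted class marginal to a class-ratio prior. The sketch below is a simplified, hypothetical rendering of that objective; the paper's loss weighting and its auxiliary network for estimating the prior are not reproduced:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Mean Shannon entropy of per-pixel class distributions p, shape (N, C)."""
    p = np.asarray(p, dtype=float)
    return float(-np.mean(np.sum(p * np.log(p + eps), axis=-1)))

def kl_to_prior(p, prior, eps=1e-12):
    """KL(prior || predicted class marginal), where the marginal is the
    mean of the per-pixel class distributions over the image."""
    marginal = np.asarray(p, dtype=float).mean(axis=0)
    prior = np.asarray(prior, dtype=float)
    return float(np.sum(prior * np.log((prior + eps) / (marginal + eps))))

def prior_aware_loss(p, prior, lam=1.0):
    """Entropy minimization guided by a class-ratio prior; `lam` is an
    assumed trade-off weight, not a value from the paper."""
    return entropy(p) + lam * kl_to_prior(p, prior)
```

Entropy minimization alone can collapse all pixels onto one class; the KL term blocks that degenerate solution by anchoring the predicted class proportions to the anatomically informed prior.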