53 research outputs found
Rethinking the Role of Pre-Trained Networks in Source-Free Domain Adaptation
Source-free domain adaptation (SFDA) aims to adapt a source model trained on
a fully-labeled source domain to an unlabeled target domain. Large-data
pre-trained networks are used to initialize source models during source
training, and subsequently discarded. However, source training can cause the
model to overfit to source data distribution and lose applicable target domain
knowledge. We propose to integrate the pre-trained network into the target
adaptation process as it has diversified features important for generalization
and provides an alternate view of features and classification decisions
different from the source model. We propose to distil useful target domain
information through a co-learning strategy to improve target pseudolabel
quality for finetuning the source model. Evaluation on 4 benchmark datasets
shows that our proposed strategy improves adaptation performance and can be
successfully integrated with existing SFDA methods. Leveraging modern
pre-trained networks that have stronger representation learning ability in the
co-learning strategy further boosts performance.
Comment: Accepted to ICCV 2023
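The co-learning idea above could be sketched as follows. This is a minimal illustration, not the paper's actual method: the fusion rule (simple averaging), the prototype-based classification of pre-trained features, and all names are assumptions made for the example.

```python
import numpy as np

def co_learning_pseudolabels(source_probs, pretrained_feats, prototypes, alpha=0.5):
    """Hypothetical sketch: average the source model's class probabilities
    with a second view derived from the pre-trained network (nearest-prototype
    classification on its features), then take the argmax as the refined
    target pseudolabel. The averaging rule and names are illustrative."""
    # Cosine similarity between pre-trained features and class prototypes
    feats = pretrained_feats / np.linalg.norm(pretrained_feats, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = feats @ protos.T
    # Softmax over similarities gives the pre-trained network's "view"
    e = np.exp(sims - sims.max(axis=1, keepdims=True))
    pretrained_probs = e / e.sum(axis=1, keepdims=True)
    # Fuse the two views; argmax is the pseudolabel, max is its confidence
    fused = alpha * source_probs + (1 - alpha) * pretrained_probs
    return fused.argmax(axis=1), fused.max(axis=1)
```

The refined pseudolabels would then supervise finetuning of the source model, with the pre-trained network supplying a complementary view the source model has lost.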
PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in
improving the accuracy of models for unlabeled target domains. However, the
calibration of predictive uncertainty in the target domain, a crucial aspect of
the safe deployment of UDA models, has received limited attention. The
conventional in-domain calibration method, \textit{temperature scaling}
(TempScal), encounters challenges due to domain distribution shifts and the
absence of labeled target domain data. Recent approaches have employed
importance-weighting techniques to estimate the target-optimal temperature
based on re-weighted labeled source data. Nonetheless, these methods require
source data and suffer from unreliable density estimates under severe domain
shifts, rendering them unsuitable for source-free UDA settings. To overcome
these limitations, we propose PseudoCal, a source-free calibration method that
exclusively relies on unlabeled target data. Unlike previous approaches that
treat UDA calibration as a \textit{covariate shift} problem, we consider it as
an unsupervised calibration problem specific to the target domain. Motivated by
the factorization of the negative log-likelihood (NLL) objective in TempScal,
we generate a labeled pseudo-target set that captures the structure of the real
target. By doing so, we transform the unsupervised calibration problem into a
supervised one, enabling us to effectively address it using widely-used
in-domain methods like TempScal. Finally, we thoroughly evaluate the
calibration performance of PseudoCal by conducting extensive experiments on 10
UDA methods, considering both traditional UDA settings and recent source-free
UDA scenarios. The experimental results consistently demonstrate the superior
performance of PseudoCal, exhibiting significantly reduced calibration error
compared to existing calibration methods.
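Once a labeled pseudo-target set exists, the problem reduces to standard in-domain temperature scaling. The sketch below shows that reduction; the grid search over T is an assumed implementation detail, and in PseudoCal the (logits, labels) pairs would come from the synthesized pseudo-target set rather than real labeled target data.

```python
import numpy as np

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Sketch of TempScal fitting: pick the temperature T that minimizes
    the negative log-likelihood (NLL) of the given labels. PseudoCal
    would call this on a labeled *pseudo-target* set, since real target
    labels are unavailable in the source-free setting."""
    def nll(T):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)           # numerical stability
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()
    # Grid search stands in for the usual gradient-based optimization
    return min(grid, key=nll)
```

With an overconfident model (large logit margins but imperfect pseudo labels), the fitted temperature comes out above 1, softening the predicted probabilities toward better calibration.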
Point Discriminative Learning for Data-efficient 3D Point Cloud Analysis
3D point cloud analysis has drawn a lot of research attention due to its wide
applications. However, collecting massive labelled 3D point cloud data is both
time-consuming and labor-intensive. This calls for data-efficient learning
methods. In this work we propose PointDisc, a point discriminative learning
method to leverage self-supervisions for data-efficient 3D point cloud
classification and segmentation. PointDisc imposes a novel point discrimination
loss on the middle and global level features produced by the backbone network.
This point discrimination loss enforces learned features to be consistent with
points belonging to the corresponding local shape region and inconsistent with
randomly sampled noisy points. We conduct extensive experiments on 3D object
classification, 3D semantic and part segmentation, showing the benefits of
PointDisc for data-efficient learning. Detailed analysis demonstrates that
PointDisc learns unsupervised features that well capture local and global
geometry.
Comment: This work is published in 3DV 2022
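The point discrimination loss described above can be illustrated with a contrastive sketch: a point feature should score high against features from its local shape region and low against randomly sampled noisy points. The InfoNCE form and all names here are assumptions for illustration; PointDisc's exact loss may differ.

```python
import numpy as np

def point_discrimination_loss(anchor, positives, negatives, tau=0.07):
    """Illustrative contrastive loss: `anchor` is one point feature,
    `positives` are features of points from its local shape region,
    `negatives` are features of randomly sampled noisy points.
    Lower loss = anchor consistent with its region, inconsistent
    with noise. (InfoNCE form is an assumption.)"""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positives), norm(negatives)
    pos = np.exp(p @ a / tau)        # similarities to in-region points
    neg = np.exp(n @ a / tau).sum()  # similarities to noisy points
    # Average InfoNCE over the positives
    return float(-np.log(pos / (pos.sum() + neg)).mean())
```

Applied at both middle-level and global-level features of the backbone, such a loss needs no labels, which is what makes the learned features useful for data-efficient downstream classification and segmentation.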