15 research outputs found
A Robust Negative Learning Approach to Partial Domain Adaptation Using Source Prototypes
This work proposes a robust Partial Domain Adaptation (PDA) framework that
mitigates the negative-transfer problem through a robust target-supervision
strategy. It leverages ensemble learning with diverse, complementary label
feedback, alleviating the effect of incorrect feedback and promoting
pseudo-label refinement. Rather than relying exclusively
on first-order moments for distribution alignment, our approach offers explicit
objectives to optimize intra-class compactness and inter-class separation with
the inferred source prototypes and highly-confident target samples in a
domain-invariant fashion. Notably, we ensure source data privacy by eliminating
the need to access the source data during the adaptation phase through a priori
inference of source prototypes. We conducted extensive experiments, including
an ablation analysis, covering a range of partial domain adaptation tasks.
Evaluations on benchmark datasets corroborate our framework's enhanced
robustness and generalization, demonstrating its superiority over existing
state-of-the-art PDA approaches.
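The intra-class compactness and inter-class separation objective described in the abstract could be sketched roughly as follows. This is a toy NumPy illustration, not the paper's formulation: the function name, the squared-distance compactness term, and the margin hinge on prototype pairs are all illustrative assumptions.

```python
import numpy as np

def prototype_alignment_loss(features, pseudo_labels, prototypes, margin=1.0):
    """Toy sketch of intra-class compactness + inter-class separation.

    features:      (N, D) confident target features
    pseudo_labels: (N,)   class indices for those features
    prototypes:    (C, D) pre-inferred source prototypes
    """
    # Intra-class term: pull each target feature toward its class prototype.
    intra = np.mean(np.sum((features - prototypes[pseudo_labels]) ** 2, axis=1))

    # Inter-class term: hinge on pairwise prototype distances so that
    # prototypes of different classes stay at least `margin` apart.
    diff = prototypes[:, None, :] - prototypes[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=2))
    off_diag = ~np.eye(prototypes.shape[0], dtype=bool)
    inter = np.mean(np.maximum(0.0, margin - dist[off_diag]))

    return intra + inter
```

With well-separated prototypes and features sitting exactly on them, both terms vanish; moving a feature away from its prototype raises the intra-class term.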
Robust Class-Conditional Distribution Alignment for Partial Domain Adaptation
Unwanted samples from private source categories in the learning objective of
a partial domain adaptation setup can lead to negative transfer and reduce
classification performance. Existing methods, such as re-weighting or
aggregating target predictions, are vulnerable to this issue, especially during
initial training stages, and do not adequately address class-level feature
alignment. Our proposed approach seeks to overcome these limitations by delving
deeper than just the first-order moments to derive distinct and compact
categorical distributions. We employ objectives that optimize the intra- and
inter-class distributions in a domain-invariant fashion and design a robust
pseudo-labeling scheme for efficient target supervision. Our approach incorporates a
complement entropy objective module to reduce classification uncertainty and
flatten incorrect category predictions. The experimental findings and ablation
analysis of the proposed modules demonstrate the superior performance of our
model compared to benchmark methods.
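A complement entropy objective of the kind mentioned above could be sketched as follows: it measures the entropy of the predicted probability mass over the non-target classes, which training would maximize to flatten incorrect predictions. This is a minimal standalone NumPy sketch; the paper's exact normalization and weighting may differ.

```python
import numpy as np

def complement_entropy(probs, labels, eps=1e-12):
    """Mean entropy of the complement (non-target) class distribution.

    probs:  (N, C) softmax outputs
    labels: (N,)   (pseudo-)label indices
    Maximizing this quantity pushes the probability mass outside the
    labeled class toward a flat distribution.
    """
    n, c = probs.shape
    # Mask out the labeled class in each row, keeping the C-1 others.
    keep = np.ones_like(probs, dtype=bool)
    keep[np.arange(n), labels] = False
    comp = probs[keep].reshape(n, c - 1)

    # Renormalize the complement mass into a distribution per sample.
    comp = comp / (comp.sum(axis=1, keepdims=True) + eps)

    # Shannon entropy of each complement distribution, averaged over the batch.
    ent = -np.sum(comp * np.log(comp + eps), axis=1)
    return ent.mean()
```

For uniform predictions over three classes the complement distribution is uniform over the two remaining classes, so the value is log 2, the maximum achievable for C = 3.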