REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation
Fully test-time adaptation (F-TTA) can mitigate performance loss due to
distribution shifts between train and test data (1) without access to the
training data, and (2) without knowledge of the model training procedure. In
online F-TTA, a pre-trained model is adapted using a stream of test samples by
minimizing a self-supervised objective, such as entropy minimization. However,
models adapted online with entropy minimization are unstable, especially
in single-sample settings, leading to degenerate solutions and limiting the
adoption of TTA inference strategies. Prior works identify noisy, or
unreliable, samples as a cause of failure in online F-TTA. One solution is to
ignore these samples, but doing so can bias the update procedure, slow
adaptation, and hurt generalization. In this work, we present a general
framework for improving the robustness of F-TTA to these noisy samples, inspired by
self-paced learning and robust loss functions. Our proposed approach, Robust
Entropy Adaptive Loss Minimization (REALM), achieves better adaptation accuracy
than previous approaches throughout the adaptation process on corruptions of
CIFAR-10 and ImageNet-1K, demonstrating its effectiveness.
Comment: Accepted at WACV 2024; 17 pages, 7 figures, 11 tables
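To make the setup concrete, below is a minimal sketch of single-sample online test-time adaptation by entropy minimization, extended with a soft robust weighting that down-weights high-entropy (unreliable) predictions rather than discarding them. The weighting function robust_weight, its margin e0, and all names here are illustrative assumptions, not REALM's exact formulation.

```python
# Sketch: single-sample online TTA by weighted entropy minimization.
# `model` is a pre-trained classifier whose adaptable parameters (e.g.
# normalization layers) are registered with `optimizer`; the sigmoid robust
# weight is an assumed form, not the REALM objective itself.
import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax prediction, per sample."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)

def robust_weight(ent: torch.Tensor, e0: float) -> torch.Tensor:
    """Soft down-weighting: weight -> 0 as entropy grows past the margin e0."""
    return torch.sigmoid(e0 - ent)

def adapt_step(model, optimizer, x, e0: float = 0.4):
    """One online update on a single test sample x of shape [1, ...]."""
    logits = model(x)
    ent = entropy(logits)
    # Detach the weight so gradients flow only through the entropy term.
    loss = (robust_weight(ent.detach(), e0) * ent).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()  # prediction used for inference
```

Soft down-weighting keeps every sample in the update rather than hard-thresholding it away, which is the bias and slow-adaptation failure mode the abstract attributes to sample-filtering approaches.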
Partial Transfer Learning with Selective Adversarial Networks
Adversarial learning has been successfully embedded into deep networks to
learn transferable features, which reduce distribution discrepancy between the
source and target domains. Existing domain adversarial networks assume a fully
shared label space across domains. In the presence of big data, there is strong
motivation to transfer both classification and representation models from
existing big domains to unknown small domains. This paper introduces partial
transfer learning, which relaxes the shared label space assumption so that the
target label space need only be a subspace of the source label space. Previous
methods typically match the whole source domain to the target domain, making
them prone to negative transfer in the partial transfer problem. We present
Selective Adversarial Network (SAN), which simultaneously circumvents negative
transfer by filtering out the outlier source classes and promotes positive
transfer by maximally matching the data distributions in the shared label
space. Experiments demonstrate that our models exceed state-of-the-art results
for partial transfer learning tasks on several benchmark datasets.
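To illustrate the idea of class-selective adversarial alignment, the sketch below weights per-class domain discriminators by the target network's average predicted probability for each class, so outlier source classes (rarely predicted on target data) contribute little to the domain loss. The class-averaged weights, gradient-reversal scale lambda_, and discriminator interface are simplifying assumptions rather than SAN's exact formulation, which also trains a source classification loss alongside this term.

```python
# Sketch: class-weighted adversarial domain loss in the spirit of SAN.
# `discriminators` is an assumed list of K binary domain classifiers (one per
# source class) returning logits of shape [B, 1].
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated scaled gradient."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

def selective_domain_loss(feats_s, feats_t, probs_t, discriminators, lambda_=1.0):
    """Sum of per-class domain losses, weighted by target class relevance.

    feats_s, feats_t : source / target features, shape [B, D]
    probs_t          : target softmax predictions, shape [B, K]
    """
    class_w = probs_t.mean(dim=0)          # [K]: low for outlier source classes
    f_s = GradReverse.apply(feats_s, lambda_)
    f_t = GradReverse.apply(feats_t, lambda_)
    loss = feats_s.new_zeros(())
    for k, disc in enumerate(discriminators):
        d_s, d_t = disc(f_s), disc(f_t)
        dom_k = (F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s))
                 + F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t)))
        loss = loss + class_w[k] * dom_k   # outlier classes barely contribute
    return loss
```

Because the weights come from the target model's own predictions, classes absent from the target domain receive near-zero weight, which is how this construction circumvents negative transfer while still adversarially aligning the shared label space.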