Incremental Unsupervised Domain-Adversarial Training of Neural Networks
In the context of supervised statistical learning, it is typically assumed
that the training set is drawn from the same distribution as the test
samples. When this is not the case, the behavior of the learned model is
unpredictable and depends on the degree of similarity between the
distribution of the training set and the distribution of the test set. The
research topic that investigates this scenario is known as domain
adaptation. Deep neural networks have brought dramatic advances in pattern
recognition, which is why many attempts have been made to provide good
domain adaptation algorithms for these models. Here we take a different avenue
and approach the problem from an incremental point of view, where the model is
adapted to the new domain iteratively. We make use of an existing unsupervised
domain-adaptation algorithm to identify the target samples for which there is
greater confidence about their true label. The output of the model is analyzed
in different ways to determine the candidate samples. The selected set is then
added to the source training set, treating the labels provided by the
network as ground truth, and the process is repeated until all target samples
are labelled. Our results show a clear improvement over the
non-incremental case on several datasets, also outperforming other
state-of-the-art domain adaptation algorithms.

Comment: 26 pages, 7 figures
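The incremental self-labeling loop described in this abstract can be sketched as follows. This is a toy illustration only: the nearest-centroid classifier and the fixed confidence threshold are stand-ins for the paper's actual domain-adversarial network and its candidate-selection criteria.

```python
import numpy as np

def incremental_self_labeling(X_src, y_src, X_tgt, threshold=0.8, max_iters=10):
    """Iteratively pseudo-label the most confident target samples and fold
    them into the source training set. Illustrative sketch: a nearest-centroid
    classifier stands in for the trained network."""
    X, y = X_src.copy(), y_src.copy()
    unlabeled = np.ones(len(X_tgt), dtype=bool)
    pseudo = -np.ones(len(X_tgt), dtype=int)   # -1 = not yet labeled
    for _ in range(max_iters):
        if not unlabeled.any():
            break
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        # Distance-based class probabilities for the still-unlabeled targets.
        d = np.linalg.norm(X_tgt[unlabeled][:, None, :] - centroids[None], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, labels = p.max(axis=1), classes[p.argmax(axis=1)]
        pick = conf >= threshold
        if not pick.any():
            pick = conf == conf.max()   # always label at least one sample
        idx = np.where(unlabeled)[0][pick]
        # Treat the predicted labels as ground truth and grow the training set.
        pseudo[idx] = labels[pick]
        X = np.vstack([X, X_tgt[idx]])
        y = np.concatenate([y, labels[pick]])
        unlabeled[idx] = False
    return pseudo
```

The fallback to the single most confident sample guarantees progress each iteration, so the loop terminates with every target sample labeled (matching the abstract's "until all target samples are labelled"), up to the iteration cap.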
A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Machine learning methods strive to acquire a robust model during training
that can generalize well to test samples, even under distribution shifts.
However, these methods often suffer from a performance drop due to unknown test
distributions. Test-time adaptation (TTA), an emerging paradigm, has the
potential to adapt a pre-trained model to unlabeled data during testing, before
making predictions. Recent progress in this paradigm highlights the significant
benefits of utilizing unlabeled data for training self-adapted models prior to
inference. In this survey, we divide TTA into several distinct categories,
namely, test-time (source-free) domain adaptation, test-time batch adaptation,
online test-time adaptation, and test-time prior adaptation. For each category,
we provide a comprehensive taxonomy of advanced algorithms, followed by a
discussion of different learning scenarios. Furthermore, we analyze relevant
applications of TTA and discuss open challenges and promising areas for future
research. A comprehensive list of TTA methods can be found at
\url{https://github.com/tim-learn/awesome-test-time-adaptation}.

Comment: Discussions, comments, and questions are all welcome at
\url{https://github.com/tim-learn/awesome-test-time-adaptation}
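As a toy illustration of the TTA paradigm this survey covers (adapting a pre-trained model to unlabeled test data before predicting), the sketch below adapts only the bias of a fixed linear classifier by gradient descent on the mean prediction entropy of an unlabeled test batch. This is in the spirit of entropy-minimization TTA methods; the model, parameters, and gradient derivation here are illustrative assumptions, not any specific algorithm from the survey.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    """Mean per-sample prediction entropy of a batch of probabilities."""
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def tta_entropy_minimization(W, b, X_test, lr=0.1, steps=100):
    """Test-time adaptation sketch: freeze the weights W, and update only the
    bias b to minimize mean prediction entropy on an unlabeled test batch."""
    b = b.copy()
    for _ in range(steps):
        p = softmax(X_test @ W + b)
        H = -(p * np.log(p + 1e-12)).sum(axis=1, keepdims=True)
        grad_logits = -p * (np.log(p + 1e-12) + H)   # d(entropy)/d(logits)
        b -= lr * grad_logits.mean(axis=0)           # bias gets the mean gradient
    return b
```

Restricting adaptation to a small set of parameters (here just the bias) mirrors a common design choice in online TTA: it keeps the update cheap and reduces the risk of the model collapsing on the unlabeled test stream.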
Deep learning for cardiac image segmentation: A review
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) and major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.