Enhancing Visual Domain Adaptation with Source Preparation
Robotic perception in diverse domains such as low-light scenarios, where new
modalities like thermal imaging and specialized night-vision sensors are
increasingly employed, remains a challenge. Largely, this is due to the limited
availability of labeled data. Existing Domain Adaptation (DA) techniques, while
promising to leverage labels from existing well-lit RGB images, fail to
consider the characteristics of the source domain itself. We holistically
account for this factor by proposing Source Preparation (SP), a method to
mitigate source domain biases. Our Almost Unsupervised Domain Adaptation (AUDA)
framework, a label-efficient semi-supervised approach for robotic scenarios,
employs Source Preparation (SP), Unsupervised Domain Adaptation (UDA), and
Supervised Alignment (SA) from limited labeled data. We introduce
CityIntensified, a novel dataset comprising temporally aligned image pairs
captured from a high-sensitivity camera and an intensifier camera for semantic
segmentation and object detection in low-light settings. We demonstrate the
effectiveness of our method in semantic segmentation, with experiments showing
that SP enhances UDA across a range of visual domains, with improvements up to
40.64% in mIoU over baseline, while making target models more robust to
real-world shifts within the target domain. We show that AUDA is a
label-efficient framework for effective DA, significantly improving target
domain performance with only tens of labeled samples from the target domain.
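The three-stage AUDA structure described above (SP on the source, UDA toward the target, then SA on a handful of target labels) can be sketched on toy feature vectors. The statistics-standardization, moment-matching, and nearest-centroid steps below are illustrative stand-ins under assumed toy data, not the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: source (well-lit RGB) and target (low-light) with a domain shift.
Xs = rng.normal(0.0, 1.0, (200, 4)) + np.array([2.0, 0.0, 0.0, 0.0])
ys = (Xs[:, 0] > 2.0).astype(int)
Xt = 0.5 * (Xs + rng.normal(0.0, 0.2, Xs.shape)) + 1.0  # shifted/rescaled target
yt = ys  # each target sample is derived from the matching source sample

def source_preparation(X):
    """SP (sketch): standardize source features to mitigate source-domain bias."""
    return (X - X.mean(0)) / (X.std(0) + 1e-8)

def uda_align(X_target, Xs_prep):
    """UDA (sketch): match target feature statistics to the prepared source."""
    Z = (X_target - X_target.mean(0)) / (X_target.std(0) + 1e-8)
    return Z * Xs_prep.std(0) + Xs_prep.mean(0)

def supervised_alignment(X_few, y_few):
    """SA (sketch): nearest-centroid head fit on a few labeled target samples."""
    cents = np.stack([X_few[y_few == c].mean(0) for c in np.unique(y_few)])
    return lambda X: np.argmin(((X[:, None] - cents) ** 2).sum(-1), axis=1)

Xs_prep = source_preparation(Xs)
Xt_align = uda_align(Xt, Xs_prep)
predict = supervised_alignment(Xt_align[:20], yt[:20])  # only tens of labels
acc = (predict(Xt_align[20:]) == yt[20:]).mean()
```

Here SP and UDA reduce to feature standardization and moment matching only because the toy data is linear; the point is the pipeline ordering, in which the target head is fit last, from very few target labels.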
How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning
Do we need active learning? The rise of strong deep semi-supervised methods
raises doubt about the usability of active learning in limited labeled data
settings. This is caused by results showing that combining semi-supervised
learning (SSL) methods with a random selection for labeling can outperform
existing active learning (AL) techniques. However, these results are obtained
from experiments on well-established benchmark datasets, which can overestimate
external validity. Moreover, the literature lacks sufficient research on the
performance of active semi-supervised learning methods in realistic data
scenarios, leaving a notable gap in our understanding. We therefore present
three data challenges common in real-world applications: between-class
imbalance, within-class imbalance, and between-class similarity. These
challenges can hurt SSL performance due to confirmation bias. We conduct
experiments with SSL and AL on simulated data challenges and find that random
sampling does not mitigate confirmation bias and, in some cases, leads to worse
performance than supervised learning. In contrast, we demonstrate that AL can
overcome confirmation bias in SSL in these realistic settings. Our results
provide insights into the potential of combining active and semi-supervised
learning in the presence of common real-world challenges, which is a promising
direction for robust methods when learning with limited labeled data in
real-world applications.

Comment: Accepted @ ECML PKDD 2023. This is the author's version of the work.
The definitive Version of Record will be published in the Proceedings of ECML
PKDD 2023.
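The core AL mechanism the abstract contrasts with random selection, querying labels for the samples the current model is least certain about, can be illustrated with a minimal query loop. The 1-D imbalanced toy data and the nearest-centroid scorer below are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy between-class imbalance: 90 majority vs 10 minority samples in 1-D.
X = np.concatenate([rng.normal(-1.0, 0.5, 90), rng.normal(1.0, 0.5, 10)])
y = np.array([0] * 90 + [1] * 10)

def centroid_proba(X_lab, y_lab, X_all):
    """Distance-based P(class 1) from a nearest-centroid model."""
    c0 = X_lab[y_lab == 0].mean()
    c1 = X_lab[y_lab == 1].mean()
    d0, d1 = np.abs(X_all - c0), np.abs(X_all - c1)
    return d0 / (d0 + d1 + 1e-12)

def uncertainty_query(proba, unlabeled, k):
    """AL step: pick the k unlabeled points closest to the decision boundary."""
    order = np.argsort(np.abs(proba[unlabeled] - 0.5))
    return unlabeled[order[:k]]

labeled = np.array([0, 50, 92, 97])  # tiny stratified seed set (both classes)
for _ in range(3):  # AL rounds
    proba = centroid_proba(X[labeled], y[labeled], X)
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
    query = uncertainty_query(proba, unlabeled, k=2)
    labeled = np.concatenate([labeled, query])  # oracle reveals true labels
```

Random sampling would replace `uncertainty_query` with a uniform draw from `unlabeled`; under class imbalance it mostly returns majority-class points, which is one way confirmation bias in a downstream pseudo-labeling SSL step goes uncorrected.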