Incremental Adversarial Domain Adaptation for Continually Changing Environments
Continuous appearance shifts such as changes in weather and lighting
conditions can impact the performance of deployed machine learning models.
While unsupervised domain adaptation aims to address this challenge, current
approaches do not utilise the continuity of the occurring shifts. In
particular, many robotics applications exhibit these conditions and thus offer
the opportunity to incrementally adapt a learnt model over minor shifts which
accumulate into massive differences over time. Our work presents an
adversarial approach for lifelong, incremental domain adaptation which benefits
from unsupervised alignment to a series of intermediate domains which
successively diverge from the labelled source domain. We empirically
demonstrate that our incremental approach improves handling of large appearance
changes, e.g. day to night, on a traversable-path segmentation task compared
with a direct, single alignment step approach. Furthermore, by approximating
the feature distribution for the source domain with a generative adversarial
network, the deployment module can be rendered fully independent of retaining
potentially large amounts of the related source training data for only a minor
reduction in performance.
Comment: International Conference on Robotics and Automation 201
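The incremental scheme can be illustrated with a toy sketch. The snippet below composes a chain of per-step alignments across successively diverging domains; simple per-feature mean/std matching stands in for the paper's unsupervised adversarial alignment, and the domain parameters and names are illustrative, not from the paper.

```python
import numpy as np

def fit_moment_align(prev, cur):
    """Affine map matching the per-feature mean/std of `cur` to `prev`
    (a simple stand-in for one unsupervised adversarial alignment step)."""
    scale = prev.std(0) / cur.std(0)
    offset = prev.mean(0) - cur.mean(0) * scale
    return lambda x: x * scale + offset

rng = np.random.default_rng(0)
domains = [rng.normal(size=(300, 4))]        # labelled source domain
for _ in range(5):                           # successively diverging domains
    gain = 1.0 + rng.normal(scale=0.1, size=4)
    drift = rng.normal(scale=0.8, size=4)
    domains.append(domains[-1] * gain + drift)

# incremental adaptation: align each domain to its predecessor and compose
# the minor corrections, rather than attempting one large day-to-night jump
steps = [fit_moment_align(p, c) for p, c in zip(domains, domains[1:])]
aligned = domains[-1]
for step in reversed(steps):                 # undo the most recent shift first
    aligned = step(aligned)
```

Each step only has to undo a minor shift, which is exactly the structure the incremental approach exploits.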
Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation
Appearance changes due to weather and seasonal conditions represent a strong
impediment to the robust implementation of machine learning systems in outdoor
robotics. While supervised learning optimises a model for the training domain,
it will deliver degraded performance in application domains that are subject to
distributional shifts caused by these changes. Traditionally, this problem has
been addressed via the collection of labelled data in multiple domains or by
imposing priors on the type of shift between both domains. We frame the problem
in the context of unsupervised domain adaptation and develop a framework for
applying adversarial techniques to adapt popular, state-of-the-art network
architectures with the additional objective to align features across domains.
Moreover, as adversarial training is notoriously unstable, we first perform an
extensive ablation study, adapting many techniques known to stabilise
generative adversarial networks, and evaluate on a surrogate classification
task with the same appearance change. The distilled insights are applied to the
problem of free-space segmentation for motion planning in autonomous driving.
Comment: In Proceedings of the 2017 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2017)
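As a minimal illustration of the adversarial objective, the sketch below alternates between training a scalar logistic domain discriminator and updating a shift applied to target features so that they fool it. This 1-D toy (all values and learning rates are ours) is far simpler than the network-scale framework in the paper, but the two alternating gradient steps mirror its feature-alignment objective.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=200)   # features from the labelled source domain
tgt = rng.normal(3.0, 1.0, size=200)   # same features after an appearance change

w, c = 0.0, 0.0      # domain discriminator: p(source | x) = sigmoid(w*x + c)
shift = 0.0          # learnable correction applied to target features

for _ in range(500):
    x = np.concatenate([src, tgt + shift])
    y = np.concatenate([np.ones_like(src), np.zeros_like(tgt)])  # 1 = source
    p = sigmoid(w * x + c)
    # discriminator step: gradient descent on binary cross-entropy
    w -= 0.1 * np.mean((p - y) * x)
    c -= 0.1 * np.mean(p - y)
    # adversarial step: adjust the correction so corrected target features
    # are classified as source, i.e. descend -log p(source | tgt + shift)
    pt = sigmoid(w * (tgt + shift) + c)
    shift -= 0.05 * np.mean(-(1.0 - pt) * w)

gap_before = abs(src.mean() - tgt.mean())                # initial domain gap
gap_after = abs(src.mean() - (tgt + shift).mean())       # gap after alignment
```

The alternating updates shrink the gap between the domain means substantially; the instability the abstract mentions shows up here too, which is why fixed small learning rates are used.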
Return of Frustratingly Easy Domain Adaptation
Unlike human learning, machine learning often fails to handle changes between
training (source) and test (target) input distributions. Such domain shifts,
common in practical scenarios, severely damage the performance of conventional
machine learning methods. Supervised domain adaptation methods have been
proposed for the case when the target data have labels, including some that
perform very well despite being "frustratingly easy" to implement. However, in
practice, the target domain is often unlabeled, requiring unsupervised
adaptation. We propose a simple, effective, and efficient method for
unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL
minimizes domain shift by aligning the second-order statistics of source and
target distributions, without requiring any target labels. Even though it is
extraordinarily simple--it can be implemented in four lines of Matlab
code--CORAL performs remarkably well in extensive evaluations on standard
benchmark datasets.
Comment: Fixed typos. Full paper to appear in AAAI-16. Extended Abstract of
the full paper to appear in TASK-CV 2015 workshop
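The four-line recipe lends itself to a direct sketch. The snippet below is a NumPy rendering of the whiten-then-recolour procedure the abstract describes; the function names and the regularisation constant `lam` are ours, not the paper's.

```python
import numpy as np

def _sqrtm_psd(m):
    """Symmetric PSD matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def coral(source, target, lam=1.0):
    """Whiten source features with their own covariance, then re-colour
    them with the target covariance; `lam` regularises both covariances."""
    d = source.shape[1]
    cov_s = np.cov(source, rowvar=False) + lam * np.eye(d)
    cov_t = np.cov(target, rowvar=False) + lam * np.eye(d)
    whiten = np.linalg.inv(_sqrtm_psd(cov_s))
    recolour = _sqrtm_psd(cov_t)
    return source @ whiten @ recolour

# demo: small `lam` so the second-order statistics match almost exactly
rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3)) * np.array([1.0, 2.0, 3.0])
tgt = rng.normal(size=(400, 3)) @ rng.normal(size=(3, 3))
aligned_src = coral(src, tgt, lam=1e-5)
```

Because only second-order statistics are aligned, the transform is closed-form and needs no target labels, which is where the method's simplicity and efficiency come from.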
Residual Parameter Transfer for Deep Domain Adaptation
The goal of Deep Domain Adaptation is to make it possible to use Deep Nets
trained in one domain, where there is enough annotated training data, in
another, where there is little or none. Most current approaches have focused on learning
feature representations that are invariant to the changes that occur when going
from one domain to the other, which means using the same network parameters in
both domains. While some recent algorithms explicitly model the changes by
adapting the network parameters, they either severely restrict the possible
domain changes, or significantly increase the number of model parameters.
By contrast, we introduce a network architecture that includes auxiliary
residual networks, which we train to predict the parameters in the domain with
little annotated data from those in the other one. This architecture enables us
to flexibly preserve the similarities between domains where they exist and
model the differences when necessary. We demonstrate that our approach yields
higher accuracy than state-of-the-art methods without undue complexity.
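A toy rendering of the parameter-transfer idea: keep the source parameters fixed and fit only a small residual that maps them to the target's. The snippet reduces the auxiliary residual networks to a low-rank linear residual on a linear regressor, with the residual directions `B` assumed known; everything here is an illustrative stand-in, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
w_src = rng.normal(size=d)          # parameters learnt on the data-rich source
B = rng.normal(size=(d, 2))         # assumed shared residual directions
a_true = np.array([1.5, -0.7])
w_tgt = w_src + B @ a_true          # target params differ by a small residual

# only a handful of annotated target samples are available
X_t = rng.normal(size=(15, d))
y_t = X_t @ w_tgt

# residual transfer: freeze w_src and fit just the 2 residual coefficients,
# instead of re-estimating all 20 parameters from 15 samples
M = X_t @ B                         # design matrix for the residual coefficients
a_hat, *_ = np.linalg.lstsq(M, y_t - X_t @ w_src, rcond=None)
w_adapted = w_src + B @ a_hat
```

Fitting 2 coefficients instead of 20 parameters is what keeps the model size from blowing up while still modelling the inter-domain change where it exists.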
Domain Adaptation in LiDAR Semantic Segmentation by Aligning Class Distributions
LiDAR semantic segmentation provides 3D semantic information about the
environment, an essential cue for intelligent systems during their
decision-making processes. Deep neural networks are achieving state-of-the-art results
on large public benchmarks on this task. Unfortunately, finding models that
generalize well or adapt to additional domains, where data distribution is
different, remains a major challenge. This work addresses the problem of
unsupervised domain adaptation for LiDAR semantic segmentation models. Our
approach combines novel ideas on top of the current state-of-the-art approaches
and yields new state-of-the-art results. We propose simple but effective
strategies to reduce the domain shift by aligning the data distribution on the
input space. In addition, we propose a learning-based approach that aligns the
distribution of the semantic classes of the target domain to the source domain.
The presented ablation study shows how each part contributes to the final
performance. Our strategy is shown to outperform previous approaches for domain
adaptation with comparisons run on three different domains.
Comment: 7 pages, 3 figures
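The class-distribution alignment can be illustrated with a simple prior-matching iteration: rescale the predicted class probabilities on the target until their average matches the source's class distribution, renormalising each prediction after every rescale. This is a lightweight stand-in for the learning-based alignment in the paper; function and variable names are ours.

```python
import numpy as np

def align_class_distribution(probs, source_prior, iters=100):
    """Rescale per-point class probabilities (rows of `probs`) so their
    column-wise mean converges to `source_prior`, keeping each row a valid
    distribution. An iterative-proportional-fitting style stand-in for the
    paper's learning-based class-distribution alignment."""
    probs = probs.copy()
    for _ in range(iters):
        marginal = probs.mean(axis=0)                     # current target marginal
        probs = probs * (source_prior / marginal)         # pull towards the prior
        probs = probs / probs.sum(axis=1, keepdims=True)  # renormalise each row
    return probs

# demo: target predictions whose class balance drifts from the source prior
preds = np.random.default_rng(0).dirichlet(np.ones(3), size=200)
prior = np.array([0.5, 0.3, 0.2])
aligned = align_class_distribution(preds, prior)
```

The iteration needs only the source class frequencies, not source data, so it fits naturally into an unsupervised adaptation pipeline.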