A Domain Agnostic Normalization Layer for Unsupervised Adversarial Domain Adaptation
We propose a normalization layer for unsupervised domain adaptation in semantic
scene segmentation. Normalization layers are known to improve convergence and
generalization and are part of many state-of-the-art fully convolutional neural
networks. We show that conventional normalization layers worsen the performance
of current Unsupervised Adversarial Domain Adaptation (UADA), a method for
improving network performance on unlabeled datasets and the focus of our
research. We therefore propose a novel Domain Agnostic Normalization layer,
thereby unlocking the benefits of normalization layers for unsupervised
adversarial domain adaptation. In our evaluation, we adapt from the synthetic
GTA5 data set to the real Cityscapes data set, a common benchmark experiment,
and surpass the state of the art. As our normalization layer is domain agnostic
at test time, we furthermore demonstrate that UADA using Domain Agnostic
Normalization improves performance on unseen domains, specifically on
Apolloscape and Mapillary.
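The abstract's core observation is that standard batch normalization mixes statistics across domains, which an adversarial discriminator can exploit. A minimal numpy sketch of the general idea (not the authors' exact layer): normalize each domain's batch with its own statistics while sharing the learnable affine parameters, so that at test time only the input batch itself is needed and no domain label or domain-specific running statistics are involved. The class name and interface are illustrative assumptions.

```python
import numpy as np

class DomainAwareNorm:
    """Illustrative sketch: batch statistics are computed per incoming
    batch (so source and target domains are normalized separately when
    fed separately), while gamma/beta are shared across domains."""

    def __init__(self, num_channels, eps=1e-5):
        self.gamma = np.ones(num_channels)   # shared affine scale
        self.beta = np.zeros(num_channels)   # shared affine shift
        self.eps = eps

    def __call__(self, x):
        # x: (batch, channels, height, width)
        mean = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return (self.gamma[None, :, None, None] * x_hat
                + self.beta[None, :, None, None])

norm = DomainAwareNorm(3)
# a shifted, scaled "source-domain" batch
src = np.random.default_rng(0).normal(5.0, 2.0, size=(4, 3, 8, 8))
out = norm(src)  # per-channel zero mean, unit variance after normalization
```

Because the first and second moments are removed per batch, the domain discriminator in an adversarial setup cannot rely on trivial mean/variance shifts between domains.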
Domain Adaptive Transfer Learning for Fault Diagnosis
Thanks to digitization of industrial assets in fleets, the ambitious goal of
transferring fault diagnosis models fromone machine to the other has raised
great interest. Solving these domain adaptive transfer learning tasks has the
potential to save large efforts on manually labeling data and modifying models
for new machines in the same fleet. Although data-driven methods have shown
great potential in fault diagnosis applications, their ability to generalize on
new machines and new working conditions are limited because of their tendency
to overfit to the training set in reality. One promising solution to this
problem is to use domain adaptation techniques. It aims to improve model
performance on the target new machine. Inspired by its successful
implementation in computer vision, we introduced Domain-Adversarial Neural
Networks (DANN) to our context, along with two other popular methods existing
in previous fault diagnosis research. We then carefully justify the
applicability of these methods in realistic fault diagnosis settings, and offer
a unified experimental protocol for a fair comparison between domain adaptation
methods for fault diagnosis problems.Comment: Presented at 2019 Prognostics and System Health Management Conference
(PHM 2019) in Paris, Franc
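The mechanism at the heart of DANN (Ganin & Lempitsky) is the gradient reversal layer: an identity map in the forward pass whose gradient is negated and scaled in the backward pass, so the feature extractor learns representations that confuse a domain classifier. A self-contained numpy sketch of just that operation (autograd framework omitted; the class name is illustrative):

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer: identity forward, negated and scaled
    gradient backward. Sandwiched between the feature extractor and the
    domain classifier, it turns domain-classification minimization into
    domain confusion for the features."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        # gradient flowing back to the feature extractor is reversed
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
feats = np.array([1.0, -2.0, 3.0])
fwd = grl.forward(feats)          # identical to feats
bwd = grl.backward(np.ones(3))    # -0.5 for every incoming gradient of +1
```

In a full training loop, the domain classifier sits after `forward`, and `backward` is what the framework's autograd would apply when gradients reach this layer.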
Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers
In image restoration tasks such as denoising and super-resolution, continual
modulation of restoration levels is of great importance for real-world
applications, yet it eludes most existing deep learning based image
restoration methods. Having learned from discrete and fixed restoration
levels, deep models cannot easily generalize to data of continuous and unseen
levels. This topic is rarely touched in the literature, owing to the
difficulty of modulating well-trained models with certain hyper-parameters. We
take a step forward by proposing a unified CNN framework that requires only a
few parameters more than a single-level model yet can handle arbitrary
restoration levels between a start and an end level. The additional module,
the AdaFM layer, performs channel-wise feature modification and can adapt a
model to another restoration level with high accuracy. By simply tweaking an
interpolation coefficient, the intermediate model, AdaFM-Net, can generate
smooth and continuous restoration effects without artifacts. Extensive
experiments on three image restoration tasks demonstrate the effectiveness of
both model training and modulation testing. Besides, we carefully investigate
the properties of AdaFM layers, providing detailed guidance on the usage of
the proposed method.
Comment: Accepted by CVPR 2019 (oral); code is available:
https://github.com/hejingwenhejingwen/AdaF
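The interpolation idea can be sketched with a simplified channel-wise affine modification (the paper's AdaFM layer uses a learned depthwise filter; this numpy version keeps only the per-channel scale-and-shift essence, so the function and parameter names are assumptions). At coefficient `lam = 0` the layer is the identity (start level); at `lam = 1` it applies the parameters learned for the end level; intermediate values give the continuous in-between restoration strengths.

```python
import numpy as np

def adafm_modulate(x, g_end, b_end, lam):
    """Simplified AdaFM-style channel-wise modulation.
    x: (batch, channels, H, W); g_end, b_end: per-channel parameters
    learned for the end restoration level; lam in [0, 1]."""
    g = 1.0 + lam * (g_end - 1.0)  # lam=0 -> g=1 (identity scale)
    b = lam * b_end                # lam=0 -> b=0 (no shift)
    return g[None, :, None, None] * x + b[None, :, None, None]

x = np.random.default_rng(1).normal(size=(2, 3, 4, 4))
g_end = np.array([2.0, 0.5, 1.5])     # hypothetical learned scales
b_end = np.array([0.1, -0.2, 0.0])    # hypothetical learned shifts
y0 = adafm_modulate(x, g_end, b_end, 0.0)  # reproduces the input
y1 = adafm_modulate(x, g_end, b_end, 1.0)  # full end-level modification
```

Sweeping `lam` over [0, 1] then traces a smooth path between the two trained behaviors, which is the mechanism behind the "continuous restoration effects" claim.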
Kitting in the Wild through Online Domain Adaptation
Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks which allow us to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Differently from standard object recognition datasets, we provide images of the same objects acquired under various conditions where camera, illumination and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and unified. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which allows a model to be continuously adapted to the current working conditions. Differently from standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its capability to fill the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
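A minimal numpy sketch of the general mechanism behind such batch-norm-based online adaptation (in the spirit of the abstract, not the authors' exact algorithm; class and method names are illustrative): the layer's running statistics are updated from every incoming target batch at deployment time, so normalization gradually matches the current working conditions without any target images at training time.

```python
import numpy as np

class OnlineBatchNorm:
    """Illustrative online-adapting batch-norm layer: each target batch
    seen at test time nudges the running mean/variance via an exponential
    moving average, shifting normalization toward the new domain."""

    def __init__(self, num_channels, momentum=0.1, eps=1e-5):
        self.mean = np.zeros(num_channels)  # running mean (source init)
        self.var = np.ones(num_channels)    # running variance
        self.momentum = momentum
        self.eps = eps

    def adapt(self, x):
        # x: (batch, channels); update statistics from the target batch
        self.mean = (1 - self.momentum) * self.mean + self.momentum * x.mean(axis=0)
        self.var = (1 - self.momentum) * self.var + self.momentum * x.var(axis=0)
        return (x - self.mean) / np.sqrt(self.var + self.eps)

bn = OnlineBatchNorm(2)
rng = np.random.default_rng(0)
# stream of target-domain batches shifted to mean 5.0
for _ in range(200):
    out = bn.adapt(rng.normal(loc=5.0, size=(32, 2)))
# bn.mean has drifted from 0 toward the target-domain mean of ~5.0
```

The momentum controls how quickly the model tracks changing conditions; a larger value adapts faster but is noisier on small batches.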