Modulating Image Restoration with Continual Levels via Adaptive Feature Modification Layers
In image restoration tasks such as denoising and super-resolution, continual
modulation of restoration levels is of great importance for real-world
applications, yet it defeats most existing deep-learning-based image
restoration methods. Trained on discrete, fixed restoration levels, deep
models do not readily generalize to data of continuous and unseen levels.
This topic is rarely touched in the literature, owing to the difficulty of
modulating well-trained models via certain hyper-parameters. We take a step
forward by proposing a unified CNN framework that adds only a few parameters
to a single-level model yet can handle arbitrary restoration levels between
a start and an end level. The additional module, namely AdaFM layer, performs
channel-wise feature modification, and can adapt a model to another restoration
level with high accuracy. By simply tweaking an interpolation coefficient, the
intermediate model, AdaFM-Net, can generate smooth and continuous restoration
effects without artifacts. Extensive experiments on three image restoration
tasks demonstrate the effectiveness of both model training and modulation
testing. Besides, we carefully investigate the properties of AdaFM layers,
providing detailed guidance on the usage of the proposed method.
Comment: Accepted by CVPR 2019 (oral); code is available:
https://github.com/hejingwenhejingwen/AdaF
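
The channel-wise feature modification described above can be sketched as a per-channel scale and shift whose parameters are interpolated between the identity (the start-level model) and the values learned for the end level. This is a simplified illustration, not the authors' code: the function name `adafm_modulate` and the coefficient name `lam` are assumptions, and the real AdaFM layer may use richer per-channel filters than a plain affine.

```python
import numpy as np

def adafm_modulate(feat, gamma, beta, lam):
    """Channel-wise feature modification with an interpolation coefficient.

    feat  : (C, H, W) feature map from the start-level model
    gamma : (C,) learned per-channel scales for the end level
    beta  : (C,) learned per-channel shifts for the end level
    lam   : coefficient in [0, 1]; 0 keeps the start-level behaviour,
            1 applies the full end-level modification
    """
    # Blend each parameter with its identity value (scale 1, shift 0),
    # so intermediate lam yields a smooth transition between levels.
    g = 1.0 + lam * (gamma - 1.0)
    b = lam * beta
    return feat * g[:, None, None] + b[:, None, None]

# lam = 0 leaves features untouched; lam = 1 applies the learned affine.
feat = np.random.randn(4, 8, 8)
gamma = np.full(4, 1.5)
beta = np.full(4, 0.2)
out0 = adafm_modulate(feat, gamma, beta, 0.0)
out1 = adafm_modulate(feat, gamma, beta, 1.0)
```

Sweeping `lam` across [0, 1] is what produces the continuous restoration effects the abstract describes.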
Domain Adaptive Transfer Learning for Fault Diagnosis
Thanks to digitization of industrial assets in fleets, the ambitious goal of
transferring fault diagnosis models from one machine to another has raised
great interest. Solving these domain adaptive transfer learning tasks has the
potential to save large efforts on manually labeling data and modifying models
for new machines in the same fleet. Although data-driven methods have shown
great potential in fault diagnosis applications, their ability to generalize
to new machines and new working conditions is limited by their tendency to
overfit the training set. One promising solution to this problem is domain
adaptation, which aims to improve model performance on the target
machine. Inspired by its successful
implementation in computer vision, we introduced Domain-Adversarial Neural
Networks (DANN) to our context, along with two other popular methods from
previous fault diagnosis research. We then carefully justify the
applicability of these methods in realistic fault diagnosis settings, and offer
a unified experimental protocol for a fair comparison between domain adaptation
methods for fault diagnosis problems.
Comment: Presented at the 2019 Prognostics and System Health Management
Conference (PHM 2019) in Paris, France
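
The key component of DANN is a gradient reversal layer: identity in the forward pass, sign-flipped (and scaled) gradient in the backward pass, so the feature extractor learns representations that confuse the domain classifier. The framework-free toy below illustrates only that layer's behaviour; in a real implementation this would be a custom autograd function inside the training graph, and the names here are illustrative.

```python
import numpy as np

def grl_forward(x):
    """Gradient reversal layer: plain identity on the forward pass."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip the gradient's sign and scale it by lam, so
    minimising the domain-classifier loss downstream pushes the feature
    extractor upstream toward domain-indistinguishable features."""
    return -lam * grad_output

features = np.array([0.3, -1.2, 0.7])
out = grl_forward(features)                       # activations unchanged
upstream = grl_backward(np.array([0.5, 0.5, 0.5]), lam=0.1)
```

The scaling factor (often ramped from 0 to 1 over training) controls how strongly the adversarial signal shapes the shared features.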
AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs
The ability to categorize is a cornerstone of visual intelligence, and a key
functionality for artificial, autonomous visual machines. This problem will
never be solved without algorithms able to adapt and generalize across visual
domains. Within the context of domain adaptation and generalization, this paper
focuses on the predictive domain adaptation scenario, namely the case where no
target data are available and the system has to learn to generalize from
annotated source images plus unlabeled samples with associated metadata from
auxiliary domains. Our contribution is the first deep architecture that
tackles predictive domain adaptation, able to leverage the information
brought by the auxiliary domains through a graph. Moreover, we present a simple yet
effective strategy that allows us to take advantage of the incoming target data
at test time, in a continuous domain adaptation scenario. Experiments on three
benchmark databases support the value of our approach.
Comment: CVPR 2019 (oral)
Return of Frustratingly Easy Domain Adaptation
Unlike human learning, machine learning often fails to handle changes between
training (source) and test (target) input distributions. Such domain shifts,
common in practical scenarios, severely damage the performance of conventional
machine learning methods. Supervised domain adaptation methods have been
proposed for the case when the target data have labels, including some that
perform very well despite being "frustratingly easy" to implement. However, in
practice, the target domain is often unlabeled, requiring unsupervised
adaptation. We propose a simple, effective, and efficient method for
unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL
minimizes domain shift by aligning the second-order statistics of source and
target distributions, without requiring any target labels. Even though it is
extraordinarily simple--it can be implemented in four lines of Matlab
code--CORAL performs remarkably well in extensive evaluations on standard
benchmark datasets.
Comment: Fixed typos. Full paper to appear in AAAI-16. Extended abstract of
the full paper to appear in the TASK-CV 2015 workshop
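
The alignment step CORAL performs is indeed tiny: whiten the source features with their own covariance, then re-colour them with the target covariance, using no target labels. Below is a NumPy sketch of that recipe; the helper name `matrix_pow` and the regularisation constant `eps` (an identity term added for numerical stability) are choices made here, not prescribed by the paper.

```python
import numpy as np

def matrix_pow(m, p):
    """Real matrix power of a symmetric positive-definite matrix
    via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(vals ** p) @ vecs.T

def coral(source, target, eps=1e-5):
    """Align second-order statistics of source features to the target.

    source, target : (n_samples, n_features) arrays
    Returns the source features whitened by their own covariance and
    re-coloured with the target covariance.
    """
    c_s = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    c_t = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # source @ C_s^{-1/2} @ C_t^{1/2}
    return source @ matrix_pow(c_s, -0.5) @ matrix_pow(c_t, 0.5)

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3)) * np.array([3.0, 0.5, 1.0])  # mismatched scales
tgt = rng.normal(size=(200, 3))
aligned = coral(src, tgt)
```

After alignment, the covariance of the transformed source features closely matches the target covariance, which is the entire adaptation step before training an ordinary classifier on the aligned source data.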