6,871 research outputs found
Image to Image Translation for Domain Adaptation
We propose a general framework for unsupervised domain adaptation, which
allows deep neural networks trained on a source domain to be tested on a
different target domain without requiring any training annotations in the
target domain. This is achieved by adding extra networks and losses that help
regularize the features extracted by the backbone encoder network. To this end
we propose the novel use of the recently introduced unpaired image-to-image
translation framework to constrain the features extracted by the encoder
network. Specifically, we require that the extracted features be able to
reconstruct the images in both domains. In addition, we require that the
distributions of features extracted from images in the two domains be
indistinguishable. Many recent works can be seen as special cases of our
general framework. We apply our method for domain adaptation between MNIST,
USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in
classification tasks, and also between GTA5 and Cityscapes datasets for a
segmentation task. We demonstrate state-of-the-art performance on each of these
datasets.
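The abstract's two constraints on the encoder, reconstruction in both domains and indistinguishable feature distributions, can be sketched as a combined loss. This is a minimal numpy illustration with made-up tiny linear networks (`W_enc`, `W_dec`, `w_disc` are placeholders, not the paper's architecture), showing only the structure of the objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear networks; the paper uses deep CNN encoders
# and decoders -- these stand-ins only illustrate the loss structure.
W_enc = rng.normal(size=(8, 4))   # encoder: "image" (8-dim) -> feature (4-dim)
W_dec = rng.normal(size=(4, 8))   # shared decoder: feature -> image
w_disc = rng.normal(size=4)       # domain discriminator acting on features

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

def disc(z):
    # Probability (per sample) that a feature came from the source domain.
    return 1.0 / (1.0 + np.exp(-(z @ w_disc)))

x_src = rng.normal(size=(16, 8))  # batch of source-domain inputs (labeled)
x_tgt = rng.normal(size=(16, 8))  # batch of target-domain inputs (unlabeled)

z_src, z_tgt = encode(x_src), encode(x_tgt)

# 1) Reconstruction constraint: features must reconstruct images in BOTH domains.
recon_loss = (np.mean((decode(z_src) - x_src) ** 2)
              + np.mean((decode(z_tgt) - x_tgt) ** 2))

# 2) Distribution-matching constraint: a discriminator should be unable to
#    tell source features from target features (GAN-style adversarial loss).
eps = 1e-8
disc_loss = (-np.mean(np.log(disc(z_src) + eps))
             - np.mean(np.log(1.0 - disc(z_tgt) + eps)))

# The encoder is trained so features satisfy both constraints; a supervised
# classification loss on the labeled source batch would be added here.
total_loss = recon_loss + disc_loss
```

In the full adversarial setup the discriminator is trained to minimize `disc_loss` while the encoder is trained to fool it; the sketch above only evaluates the losses once.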
Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets
We are concerned with the vulnerability of computer vision models to
distributional shifts. We formulate a combinatorial optimization problem that
allows evaluating the regions in the image space where a given model is more
vulnerable, in terms of image transformations applied to the input, and face it
with standard search algorithms. We further embed this idea in a training
procedure, where we define new data augmentation rules according to the image
transformations that the current model is most vulnerable to, over iterations.
An empirical evaluation on classification and semantic segmentation problems
suggests that the devised algorithm allows us to train models that are more
robust against content-preserving image manipulations and, in general, against
distributional shifts.
Comment: ICCV 2019 (camera-ready)
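The loop described above, search the transformation set for the model's weak spot, then augment with it, can be sketched as follows. The transformation set, the toy images, and the stand-in `model_loss` are all assumptions for illustration; the paper searches much larger combinatorial sets with standard search algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# An assumed toy set of content-preserving transformations to search over.
transforms = {
    "identity": lambda x: x,
    "flip":     lambda x: x[:, ::-1],                     # horizontal flip
    "brighten": lambda x: np.clip(x + 0.3, 0.0, 1.0),
    "darken":   lambda x: np.clip(x - 0.3, 0.0, 1.0),
}

def model_loss(images):
    # Stand-in for a trained model's loss on a batch; this fixed "model"
    # simply prefers mid-gray pixels, just to make the search concrete.
    return float(np.mean((images - 0.5) ** 2))

def most_vulnerable(images, transforms):
    """Exhaustively search for the transformation that maximizes model loss."""
    return max(transforms, key=lambda name: model_loss(transforms[name](images)))

images = rng.uniform(0.4, 0.6, size=(8, 16, 16))  # batch of toy images

# One iteration of the procedure: find the currently worst transformation
# and use it as a new data-augmentation rule; retraining on `augmented`
# (not shown) and repeating yields augmentation that tracks the model's
# evolving weaknesses.
worst = most_vulnerable(images, transforms)
augmented = transforms[worst](images)
```

Exhaustive search is used here only because the toy set has four elements; over larger transformation sets one would substitute the standard search algorithms the abstract mentions.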