Zero-Annotation Object Detection with Web Knowledge Transfer
Object detection is one of the major problems in computer vision, and has
been extensively studied. Most of the existing detection works rely on
labor-intensive supervision, such as ground truth bounding boxes of objects or
at least image-level annotations. In contrast, we propose an object
detection method that requires no human annotation on the target task,
instead exploiting freely available web images. To facilitate
effective knowledge transfer from web images, we introduce a multi-instance
multi-label domain adaptation learning framework with two key innovations.
First, we propose an instance-level adversarial domain adaptation network with
attention on foreground objects to transfer object appearances from the web
domain to the target domain. Second, to preserve the class-specific semantic
structure of transferred object features, we propose a simultaneous transfer
mechanism to transfer the supervision across domains through pseudo strong
label generation. With our end-to-end framework that simultaneously learns a
weakly supervised detector and transfers knowledge across domains, we achieved
significant improvements over baseline methods on the benchmark datasets.
Comment: Accepted in ECCV 201
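The first innovation above weights the domain-confusion signal by foreground attention at the instance level. A minimal numpy sketch of one such attention-weighted domain loss (the function name, sigmoid/BCE formulation, and weighting scheme are illustrative assumptions, not the paper's exact objective; in a full model a gradient-reversal layer would sit between the feature extractor and this classifier):

```python
import numpy as np

def attention_weighted_domain_loss(domain_logits, domain_labels, attention):
    """Binary cross-entropy over per-instance domain predictions,
    weighted by foreground-attention scores.

    domain_logits: (N,) raw scores from a hypothetical domain classifier
    domain_labels: (N,) 0 = web domain, 1 = target domain
    attention:     (N,) foreground-attention weights in [0, 1]
    """
    p = 1.0 / (1.0 + np.exp(-domain_logits))          # sigmoid
    bce = -(domain_labels * np.log(p + 1e-8)
            + (1 - domain_labels) * np.log(1 - p + 1e-8))
    # Foreground instances contribute more to the domain-confusion signal;
    # with gradient reversal, the feature extractor would be updated to
    # *maximize* this loss while the domain classifier minimizes it.
    return float(np.sum(attention * bce) / (np.sum(attention) + 1e-8))
```

Down-weighting background instances keeps the adversarial alignment focused on object appearance rather than scene statistics.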
Co-regularized Alignment for Unsupervised Domain Adaptation
Deep neural networks, trained with large amounts of labeled data, can fail to
generalize well when tested on examples from a \emph{target domain} whose
distribution differs from that of the training data, referred to as the
\emph{source domain}. It can be expensive or even infeasible to obtain the
required amount of labeled data in all possible domains. Unsupervised domain adaptation
sets out to address this problem, aiming to learn a good predictive model for
the target domain using labeled examples from the source domain but only
unlabeled examples from the target domain. Domain alignment approaches this
problem by matching the source and target feature distributions, and has been
used as a key component in many state-of-the-art domain adaptation methods.
However, matching the marginal feature distributions does not guarantee that
the corresponding class conditional distributions will be aligned across the
two domains. We propose co-regularized domain alignment for unsupervised domain
adaptation, which constructs multiple diverse feature spaces and aligns source
and target distributions in each of them individually, while encouraging the
alignments to agree with each other on the class predictions for the
unlabeled target examples. The proposed method is generic and can be used to
improve any domain adaptation method which uses domain alignment. We
instantiate it in the context of a recent state-of-the-art method and observe
that it provides significant performance improvements on several domain
adaptation benchmarks.
Comment: NIPS 2018 accepted version
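The agreement term above can be sketched as a penalty on the disagreement between the class posteriors produced by two diverse feature spaces on the same unlabeled target batch. A minimal numpy illustration (the squared-difference form is one simple choice of agreement term, not necessarily the paper's exact formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def agreement_penalty(logits_a, logits_b):
    """Mean squared difference between the class posteriors of two
    diverse hypotheses on unlabeled target examples; minimizing it
    encourages the individually aligned feature spaces to agree."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return float(np.mean(np.sum((pa - pb) ** 2, axis=1)))
```

The penalty is zero exactly when the two hypotheses produce identical posteriors, and grows as their target predictions diverge.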
Addressing Dataset Bias in Deep Neural Networks
Deep Learning has achieved tremendous success in recent years in areas such as image classification, text translation, and autonomous agents. Deep Neural Networks are able to learn non-linear features in a data-driven fashion from complex, large-scale datasets. However, some fundamental issues remain unresolved: the data provided to a neural network directly influences its ability to generalize. This is especially true when training and test data come from different distributions (the so-called domain gap, or domain shift, problem): in this case, the network may learn a representation that fits the training data but not the test data, and thus performs poorly when deployed in real scenarios. The domain gap problem is addressed by so-called Domain Adaptation, on which a large literature has recently developed.
In this thesis, we first present a novel method for Unsupervised Domain Adaptation. Starting from the typical scenario, in which we have access to labeled source distributions and an unlabeled target distribution, we pursue a pseudo-labeling approach to assign labels to the target data, and then iteratively refine those labels using Generative Adversarial Networks.
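The pseudo-labeling-then-refinement loop can be illustrated in miniature with a centroid-based stand-in: assign each target sample the label of its nearest source class centroid, then iteratively re-estimate the centroids with the pseudo-labeled target data. This numpy sketch is a deliberately simplified proxy; the thesis performs the refinement with GANs, not centroids:

```python
import numpy as np

def pseudo_label_refinement(source_feats, source_labels, target_feats,
                            n_iters=5):
    """Assign pseudo-labels to target features from the nearest source
    class centroid, then iteratively refine the centroids using the
    currently pseudo-labeled target data."""
    classes = np.unique(source_labels)
    centroids = np.stack([source_feats[source_labels == c].mean(axis=0)
                          for c in classes])
    pseudo = None
    for _ in range(n_iters):
        # Distance from every target sample to every class centroid.
        d = np.linalg.norm(target_feats[:, None, :] - centroids[None],
                           axis=2)
        pseudo = classes[d.argmin(axis=1)]
        # Refine each centroid with source + pseudo-labeled target samples.
        for i, c in enumerate(classes):
            both = np.concatenate([source_feats[source_labels == c],
                                   target_feats[pseudo == c]])
            centroids[i] = both.mean(axis=0)
    return pseudo
```

The key idea carried over from the thesis is the iteration itself: each round of refinement produces cleaner pseudo-labels, which in turn produce a better model of the target distribution.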
Subsequently, we address the debiasing problem. Simply put, bias occurs when factors in the data are spuriously correlated with the task label; for example, the background of an image may be a strong clue to the depicted class. When this happens, neural networks may erroneously learn such spurious correlations as predictive factors, and may therefore fail when deployed in different scenarios. A debiased model can be learned either with supervision about the type of bias affecting the data, or without any annotation of what the spurious correlations are.
We tackle the problem of supervised debiasing -- where a ground-truth annotation for the bias is given -- through the lens of information theory. We design a neural network architecture that learns to solve the task while simultaneously achieving statistical independence of the data embedding with respect to the bias label.
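One crude, differentiable proxy for such an independence objective is to penalize the cross-covariance between the embedding and the one-hot bias label; zero cross-covariance is necessary (though not sufficient) for statistical independence. This numpy sketch is an illustrative assumption, not the thesis's actual information-theoretic criterion:

```python
import numpy as np

def bias_covariance_penalty(embeddings, bias_labels, n_bias):
    """Squared Frobenius norm of the cross-covariance between the data
    embedding and the one-hot bias label. Driving this to zero removes
    linear dependence between embedding and bias."""
    onehot = np.eye(n_bias)[bias_labels]
    e = embeddings - embeddings.mean(axis=0)   # center embedding
    b = onehot - onehot.mean(axis=0)           # center bias indicator
    cov = e.T @ b / len(embeddings)            # (dim, n_bias) cross-cov
    return float(np.sum(cov ** 2))
```

Stronger criteria (e.g., penalizing mutual information rather than covariance) capture non-linear dependence as well, which is why information-theoretic formulations are attractive here.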
Finally, we address the unsupervised debiasing problem, in which no bias annotation is available. We tackle this challenging problem with a two-stage approach: first, we coarsely split the training dataset into two subsets, samples that exhibit spurious correlations and samples that do not; second, we learn a feature representation that can accommodate both subsets, together with an augmented version of them.
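The first stage of such a split is often implemented by thresholding per-sample loss under an intentionally biased model: bias-aligned samples are easy (low loss) precisely because the spurious shortcut works for them. A minimal numpy sketch of that splitting heuristic (the quantile threshold and the loss-based criterion are illustrative assumptions, not necessarily the thesis's exact procedure):

```python
import numpy as np

def split_by_loss(per_sample_loss, quantile=0.5):
    """Coarsely split a training set into bias-aligned (low-loss) and
    bias-conflicting (high-loss) index sets, given per-sample losses
    computed under a deliberately biased reference model."""
    thresh = np.quantile(per_sample_loss, quantile)
    aligned = np.where(per_sample_loss <= thresh)[0]
    conflicting = np.where(per_sample_loss > thresh)[0]
    return aligned, conflicting
```

The second stage would then train on both index sets (and augmented versions of them) so that the learned representation does not collapse onto the shortcut that makes the aligned subset easy.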