A Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts
Most existing zero-shot learning methods treat the task as a visual-semantic
embedding problem. Given the demonstrated capability of Generative
Adversarial Networks (GANs) to generate images, we instead leverage GANs to
imagine unseen categories from text descriptions, and hence recognize novel
classes without having seen any examples. Specifically, we propose a simple yet
effective generative model that takes as input noisy text descriptions about an
unseen class (e.g. Wikipedia articles) and generates synthesized visual features
for this class. With added pseudo data, zero-shot learning is naturally
converted to a traditional classification problem. Additionally, to preserve
the inter-class discrimination of the generated features, a visual pivot
regularization is proposed as an explicit supervision. Unlike previous methods
using complex engineered regularizers, our approach can suppress the noise well
without additional regularization. Empirically, we show that our method
consistently outperforms the state of the art on the largest available
benchmarks on Text-based Zero-shot Learning. Comment: To appear in CVPR1
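The pipeline this abstract describes — synthesize visual features for unseen classes from their text descriptions, then treat zero-shot recognition as ordinary classification on the pseudo data — can be sketched as follows. Everything here is illustrative: the random linear map is a hypothetical stand-in for the paper's trained GAN generator, and all dimensions and class counts are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
TEXT_DIM, NOISE_DIM, FEAT_DIM = 8, 4, 16

# Hypothetical stand-in for the trained generator G(text, noise); the real
# model is learned adversarially against true visual features of seen classes.
W = rng.normal(size=(TEXT_DIM + NOISE_DIM, FEAT_DIM))

def generate_features(text_emb, n_samples):
    """Synthesize pseudo visual features for an unseen class from its
    (noisy) text embedding, conditioning on fresh noise per sample."""
    noise = rng.normal(size=(n_samples, NOISE_DIM))
    inp = np.hstack([np.tile(text_emb, (n_samples, 1)), noise])
    return np.tanh(inp @ W)

# Two unseen classes, described only by text embeddings (e.g. from articles).
text_a, text_b = rng.normal(size=TEXT_DIM), rng.normal(size=TEXT_DIM)
X = np.vstack([generate_features(text_a, 50), generate_features(text_b, 50)])
y = np.array([0] * 50 + [1] * 50)

# With pseudo data in hand, zero-shot learning reduces to a traditional
# classification problem; nearest class mean is used here for brevity.
means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(feat):
    return int(np.argmin(((means - feat) ** 2).sum(axis=1)))
```

The abstract's visual pivot regularization would act on the class means computed above, pulling each class's synthesized features toward a representative visual pivot.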
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. The comprehensive problem-oriented review
of the advances in transfer learning with respect to each problem has
revealed not only the challenges in transfer learning for visual recognition,
but also the problems (eight of the seventeen) that have scarcely been
studied. This survey not only presents an up-to-date technical review for
researchers, but also offers a systematic approach and a reference for
machine learning practitioners to categorise a real problem and look up a
possible solution accordingly.
Transductive Multi-label Zero-shot Learning
Zero-shot learning has received increasing interest as a means to alleviate
the often prohibitive expense of annotating training data for large scale
recognition problems. These methods have achieved great success via learning
intermediate semantic representations in the form of attributes and more
recently, semantic word vectors. However, they have thus far been constrained
to the single-label case, in contrast to the growing popularity and importance
of more realistic multi-label data. In this paper, for the first time, we
investigate and formalise a general framework for multi-label zero-shot
learning, addressing the unique challenge therein: how to exploit multi-label
correlation at test time with no training data for those classes? In
particular, we propose (1) a multi-output deep regression model to project an
image into a semantic word space, which explicitly exploits the correlations in
the intermediate semantic layer of word vectors; (2) a novel zero-shot learning
algorithm for multi-label data that exploits the unique compositionality
property of semantic word vector representations; and (3) a transductive
learning strategy to enable the regression model learned from seen classes to
generalise well to unseen classes. Our zero-shot learning experiments on a
number of standard multi-label datasets demonstrate that our method outperforms
a variety of baselines. Comment: 12 pages, 6 figures, Accepted to BMVC 2014 (oral)
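Contribution (2) above rests on the compositionality of word vectors: a multi-label image should project near the (normalized) sum of its labels' word vectors. A toy sketch of that idea, with random vectors standing in for real word embeddings (e.g. word2vec) and a brute-force search over label pairs standing in for the paper's actual algorithm:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical word vectors for five labels; stand-ins for real semantic
# word embeddings that a regression model would map images onto.
labels = ["dog", "car", "tree", "boat", "bird"]
vecs = {w: rng.normal(size=10) for w in labels}

def compose(label_set):
    """Compositionality assumption: a multi-label image projects near the
    normalized sum of its labels' word vectors."""
    v = sum(vecs[w] for w in label_set)
    return v / np.linalg.norm(v)

# Candidate multi-label sets (here: all pairs), scored by cosine similarity
# against the image's projected embedding.
candidates = [frozenset(c) for c in combinations(labels, 2)]

def predict(image_emb):
    image_emb = image_emb / np.linalg.norm(image_emb)
    return max(candidates, key=lambda c: float(image_emb @ compose(c)))

# Simulated regression output for an image labelled {dog, car}: the true
# composite vector plus a little projection noise.
projected = compose({"dog", "car"}) + 0.05 * rng.normal(size=10)
```

Enumerating all label subsets is exponential in general; the attraction of the compositional representation is that it lets smarter search or decoding strategies replace this brute force.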
A Survey on Negative Transfer
Transfer learning (TL) tries to utilize data or knowledge from one or more
source domains to facilitate the learning in a target domain. It is
particularly useful when the target domain has few or no labeled data, due to
annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of
TL is not always guaranteed. Negative transfer (NT), i.e., the phenomenon in
which source domain data/knowledge reduce learning performance in the target
domain, has been a long-standing and challenging problem in TL. Various
approaches to handle NT have been proposed in the literature. However, this
field lacks a systematic survey on the formalization of NT, its contributing
factors, and the algorithms that handle it. This paper aims to fill that gap.
First, the definition of negative transfer is considered and a taxonomy of
its factors is discussed. Then, nearly fifty representative approaches for
handling NT are categorized and reviewed from four perspectives: secure
transfer, domain similarity estimation, distant transfer, and negative
transfer mitigation. NT in related fields, e.g., multi-task learning,
lifelong learning, and adversarial attacks, is also discussed.
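The diagnostic at the heart of negative transfer is simple to state: transfer is negative when a transferred model underperforms a target-only baseline on target data. A minimal sketch on a toy linear task — the two "transferred" weight vectors, the domains, and the 0.1 decision margin are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target task: a simple linear concept with a small labelled training sample.
w_true = np.array([1.0, -1.0])
X_tr, X_te = rng.normal(size=(20, 2)), rng.normal(size=(200, 2))
y_tr, y_te = X_tr @ w_true > 0, X_te @ w_true > 0

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

# Target-only baseline: least-squares fit of a linear scorer to +/-1 labels.
w_base = np.linalg.lstsq(X_tr, 2.0 * y_tr - 1.0, rcond=None)[0]

# Two hypothetical predictors "transferred" from different source domains.
w_related = np.array([0.9, -1.1])    # similar concept: transfer should help
w_unrelated = np.array([-1.0, 1.0])  # opposite concept: transfer should hurt

def is_negative_transfer(w_transfer, margin=0.1):
    """Flag NT when the transferred model clearly underperforms the
    target-only baseline on held-out target data."""
    return accuracy(w_transfer, X_te, y_te) < accuracy(w_base, X_te, y_te) - margin
```

This held-out comparison is only a detector; the four perspectives the survey reviews (secure transfer, domain similarity estimation, distant transfer, NT mitigation) are about preventing or repairing the drop rather than merely measuring it.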
Transductive Multi-View Zero-Shot Learning