Zero Shot Learning with the Isoperimetric Loss
We introduce the isoperimetric loss as a regularization criterion for
learning the map from a visual representation to a semantic embedding, to be
used to transfer knowledge to unknown classes in a zero-shot learning setting.
We use a pre-trained deep neural network model as a visual representation of
image data, a Word2Vec embedding of class labels, and linear maps between the
visual and semantic embedding spaces. However, the spaces themselves are not
linear, and we postulate the sample embedding to be populated by noisy samples
near otherwise smooth manifolds. We exploit the graph structure defined by the
sample points to regularize the estimates of the manifolds by inferring the
graph connectivity using a generalization of the isoperimetric inequalities
from Riemannian geometry to graphs. Surprisingly, this regularization alone,
paired with the simplest baseline model, outperforms the state-of-the-art among
fully automated methods in zero-shot learning benchmarks such as AwA and CUB.
This improvement is achieved solely by learning the structure of the underlying
spaces by imposing regularity.
Comment: Accepted to AAAI-2
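The baseline pipeline the abstract builds on, a linear map from CNN features to Word2Vec class embeddings with unseen classes predicted by nearest semantic neighbor, can be sketched as follows. This is a minimal toy example with random stand-in data and a ridge-regularized least-squares map; the paper's isoperimetric graph regularizer is not reproduced here, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 training images with 512-d visual features,
# 300-d word embeddings for 5 classes (3 seen, 2 unseen).
X = rng.normal(size=(100, 512))          # visual features (e.g., CNN activations)
word_vecs = rng.normal(size=(5, 300))    # one Word2Vec-style embedding per class
word_vecs /= np.linalg.norm(word_vecs, axis=1, keepdims=True)
labels = rng.integers(0, 3, size=100)    # training labels come from seen classes only
Y = word_vecs[labels]                    # target semantic embedding per training image

# Ridge-regularized linear map W: visual space -> semantic space
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(512), X.T @ Y)

# Zero-shot prediction: embed a new image, pick the nearest class embedding
# by cosine similarity, searching only over the unseen classes (indices 3, 4).
x_new = rng.normal(size=(512,))
z = x_new @ W
scores = word_vecs[3:] @ z / np.linalg.norm(z)
pred = 3 + int(np.argmax(scores))
```

The paper's point is that regularizing the geometry of the embedding spaces makes even this simplest compatibility model competitive.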
f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
When labeled training data is scarce, a promising data augmentation approach
is to generate visual features of unknown classes using their attributes. To
learn the class conditional distribution of CNN features, these models rely on
pairs of image features and class attributes. Hence, they cannot make use of
the abundance of unlabeled data samples. In this paper, we tackle any-shot
learning problems, i.e. zero-shot and few-shot, in a unified feature generating
framework that operates in both inductive and transductive learning settings.
We develop a conditional generative model that combines the strength of VAE and
GANs and in addition, via an unconditional discriminator, learns the marginal
feature distribution of unlabeled images. We empirically show that our model
learns highly discriminative CNN features for the CUB, SUN, AWA
and ImageNet datasets, and establish a new state-of-the-art in any-shot learning, i.e.
inductive and transductive (generalized) zero- and few-shot learning settings.
We also demonstrate that our learned features are interpretable: we visualize
them by inverting them back to the pixel space and we explain them by
generating textual arguments of why they are associated with a certain label.
Comment: Accepted at CVPR 201
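The feature-generating idea, sampling synthetic CNN features for a class from its attribute vector plus noise and then training any standard classifier on them, can be illustrated with a toy sketch. The stand-in generator below is an untrained linear map purely to show the data flow; f-VAEGAN-D2 instead trains a conditional VAE/GAN hybrid, and all dimensions here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 85-d class attributes, 64-d noise, 2048-d features.
attr_dim, noise_dim, feat_dim = 85, 64, 2048
G = rng.normal(scale=0.01, size=(attr_dim + noise_dim, feat_dim))  # toy "generator"

def generate_features(class_attr, n):
    """Sample n synthetic visual features for a class from its attribute vector."""
    z = rng.normal(size=(n, noise_dim))          # per-sample noise
    a = np.tile(class_attr, (n, 1))              # repeated class condition
    return np.concatenate([a, z], axis=1) @ G

# Generate 50 features for each of two unseen classes; a softmax classifier
# could then be trained on (feats, labs) as if they were real labeled data.
unseen_attrs = rng.normal(size=(2, attr_dim))
feats = np.vstack([generate_features(a, 50) for a in unseen_attrs])
labs = np.repeat([0, 1], 50)
```

The transductive variant in the paper additionally fits the marginal distribution of real unlabeled features via an unconditional discriminator, which this sketch omits.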
Learning Hypergraph-regularized Attribute Predictors
We present a novel attribute learning framework named Hypergraph-based
Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the
attribute relations in the data. Then the attribute prediction problem is
cast as a regularized hypergraph cut problem in which HAP jointly learns a
collection of attribute projections from the feature space to a hypergraph
embedding space aligned with the attribute space. The learned projections
directly act as attribute classifiers (linear and kernelized). This formulation
leads to a very efficient approach. By considering our model as a multi-graph
cut task, our framework can flexibly incorporate other available information,
in particular class labels. We apply our approach to attribute prediction,
zero-shot and N-shot learning tasks. The results on the AWA, USAA and CUB
databases demonstrate the value of our methods in comparison with the
state-of-the-art approaches.
Comment: This is an attribute learning paper accepted by CVPR 201
Zero-Shot Learning -- A Comprehensive Evaluation of the Good, the Bad and the Ugly
Due to the importance of zero-shot learning, i.e. classifying images where
there is a lack of labeled training data, the number of proposed approaches has
recently increased steadily. We argue that it is time to take a step back and
to analyze the status quo of the area. The purpose of this paper is three-fold.
First, given that there is no agreed-upon zero-shot learning
benchmark, we define a new benchmark by unifying both the evaluation
protocols and data splits of publicly available datasets used for this task.
This is an important contribution as published results are often not comparable
and sometimes even flawed due to, e.g. pre-training on zero-shot test classes.
Moreover, we propose a new zero-shot learning dataset, the Animals with
Attributes 2 (AWA2) dataset which we make publicly available both in terms of
image features and the images themselves. Second, we compare and analyze a
significant number of the state-of-the-art methods in depth, both in the
classic zero-shot setting but also in the more realistic generalized zero-shot
setting. Finally, we discuss in detail the limitations of the current status of
the area, which can be taken as a basis for advancing it.
Comment: Accepted by TPAMI in July 2018. We introduce Proposed Split Version
2.0 (please download it from our project webpage). arXiv admin note:
substantial text overlap with arXiv:1703.0439
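Benchmark papers in this line summarize generalized zero-shot performance with the harmonic mean of per-class accuracies on seen and unseen classes, which penalizes methods that sacrifice one side for the other far more than an arithmetic mean would. A minimal sketch:

```python
def harmonic_mean(acc_seen, acc_unseen):
    """Harmonic mean of seen/unseen per-class accuracies, the usual GZSL summary."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

# A method with balanced accuracies scores higher than one that is
# strong on seen classes but weak on unseen ones.
h_imbalanced = harmonic_mean(0.8, 0.4)   # ~0.533
h_balanced = harmonic_mean(0.6, 0.6)     # 0.6
```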
Structure propagation for zero-shot learning
The key to zero-shot learning (ZSL) is finding an information transfer
model that bridges the gap between images and semantic information (texts or
attributes). Existing ZSL methods usually construct the compatibility function
between images and class labels while accounting for the relevance among
semantic classes (the manifold structure of semantic classes). However, the
relationships among image classes (the manifold structure of image classes) are
also very important for constructing the compatibility model. Because unseen
classes make these relationships difficult to capture, the manifold structure
of image classes is often ignored in ZSL. To let the manifold structure of
image classes and that of semantic classes complement each other, we propose
structure propagation (SP) to improve ZSL classification performance. SP
jointly considers the manifold structure of image classes and that of semantic
classes to approximate the intrinsic structure of object classes. Moreover, SP
imposes a constraint between the compatibility function and these manifold
structures to balance the influence of the structure propagation iterations.
The SP solution provides not only unseen class labels but also the relationship
between the two manifold structures, which encodes the positive transfer in
structure propagation. Experimental results demonstrate that SP attains
promising results on the AwA, CUB, Dogs and SUN databases.
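SP propagates structure jointly over the image-class and semantic-class manifolds; plain label propagation on a single similarity graph, sketched below on toy data, illustrates the underlying mechanism of spreading label information along manifold neighbors (all data and parameters here are illustrative, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two well-separated clusters; one labeled point per cluster.
X = np.vstack([rng.normal(0, 0.1, size=(5, 2)), rng.normal(3, 0.1, size=(5, 2))])
Y = np.zeros((10, 2))
Y[0, 0] = 1.0   # node 0 labeled as class 0
Y[5, 1] = 1.0   # node 5 labeled as class 1

# Gaussian affinity graph, then a row-normalized transition matrix.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2 / 0.5)
np.fill_diagonal(A, 0.0)
P = A / A.sum(axis=1, keepdims=True)

# Iterative propagation: mix neighbor labels with the original clamped labels.
alpha, F = 0.9, Y.copy()
for _ in range(100):
    F = alpha * P @ F + (1 - alpha) * Y

pred = F.argmax(axis=1)   # unlabeled nodes inherit their cluster's label
```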
Zero-Shot Recognition using Dual Visual-Semantic Mapping Paths
Zero-shot recognition aims to accurately recognize objects of unseen classes
by using a shared visual-semantic mapping between the image feature space and
the semantic embedding space. This mapping is learned on training data of seen
classes and is expected to have transfer ability to unseen classes. In this
paper, we tackle this problem by exploiting the intrinsic relationship between
the semantic space manifold and the transfer ability of visual-semantic
mapping. We formalize their connection and cast zero-shot recognition as a
joint optimization problem. Motivated by this, we propose a novel framework for
zero-shot recognition, which contains dual visual-semantic mapping paths. Our
analysis shows this framework can not only apply prior semantic knowledge to
infer underlying semantic manifold in the image feature space, but also
generate optimized semantic embedding space, which can enhance the transfer
ability of the visual-semantic mapping to unseen classes. The proposed method
is evaluated for zero-shot recognition on four benchmark datasets, achieving
outstanding results.
Comment: Accepted as a full paper in IEEE Computer Vision and Pattern
Recognition (CVPR) 201
Multi-Target Prediction: A Unifying View on Problems and Methods
Multi-target prediction (MTP) is concerned with the simultaneous prediction
of multiple target variables of diverse type. Due to its enormous application
potential, it has developed into an active and rapidly expanding research field
that combines several subfields of machine learning, including multivariate
regression, multi-label classification, multi-task learning, dyadic prediction,
zero-shot learning, network inference, and matrix completion. In this paper, we
present a unifying view on MTP problems and methods. First, we formally discuss
commonalities and differences between existing MTP problems. To this end, we
introduce a general framework that covers the above subfields as special cases.
As a second contribution, we provide a structured overview of MTP methods. This
is accomplished by identifying a number of key properties, which distinguish
such methods and determine their suitability for different types of problems.
Finally, we discuss a few challenges for future research.