Semantic Graph for Zero-Shot Learning
Zero-shot learning aims to classify visual objects without any training data
via knowledge transfer between seen and unseen classes. This is typically
achieved by exploring a semantic embedding space where the seen and unseen
classes can be related. Previous works differ in what embedding space is used
and how different classes and a test image can be related. In this paper, we
utilize the annotation-free semantic word space for the former and focus on
solving the latter issue of modeling relatedness. Specifically, in contrast to
previous work, which ignores the semantic relationships between seen classes and
focuses merely on those between seen and unseen classes, in this paper a novel
approach based on a semantic graph is proposed to represent the relationships
between all the seen and unseen classes in a semantic word space. Based on this
semantic graph, we design a special absorbing Markov chain process, in which
each unseen class is viewed as an absorbing state. After incorporating one test
image into the semantic graph, the absorbing probabilities from the test data
to each unseen class can be effectively computed; and zero-shot classification
can be achieved by finding the class label with the highest absorbing
probability. The proposed model has a closed-form solution which is linear with
respect to the number of test images. We demonstrate the effectiveness and
computational efficiency of the proposed method over the state-of-the-arts on
the AwA (Animals with Attributes) dataset.

Comment: 9 pages, 5 figures
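The absorbing-chain computation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the semantic graph has already been turned into a row-stochastic transition matrix over transient nodes (seen classes plus the test image) and absorbing nodes (unseen classes), and the random weights stand in for real graph affinities.

```python
import numpy as np

# Hypothetical sizes: n transient nodes (seen classes + a test image),
# m absorbing nodes (one per unseen class).
rng = np.random.default_rng(0)
n, m = 4, 2

# Transition weights among transient states (Q) and from transient to
# absorbing states (R); normalize rows so [Q R] is row-stochastic.
Q = rng.random((n, n))
R = rng.random((n, m))
row_sums = (Q.sum(axis=1) + R.sum(axis=1))[:, None]
Q, R = Q / row_sums, R / row_sums

# Absorbing probabilities in closed form: B = (I - Q)^{-1} R.
# One linear solve, so cost is linear in the number of test images
# once the factorization over the fixed class graph is reused.
B = np.linalg.solve(np.eye(n) - Q, R)

# Zero-shot classification: pick the unseen class (absorbing state)
# with the highest absorbing probability for the test node.
pred = B.argmax(axis=1)
```

Because every transient state has positive probability of reaching an absorbing state, each row of `B` sums to one, i.e. it is a proper distribution over unseen classes.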
Semantic Autoencoder for Zero-Shot Learning
Existing zero-shot learning (ZSL) models typically learn a projection
function from a feature space to a semantic embedding space (e.g.~attribute
space). However, such a projection function is only concerned with predicting
the training seen class semantic representation (e.g.~attribute prediction) or
classification. When applied to test data, which in the context of ZSL contains
different (unseen) classes without training data, a ZSL model typically suffers
from the projection domain shift problem. In this work, we present a novel
solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the
encoder-decoder paradigm, an encoder aims to project a visual feature vector
into the semantic space as in the existing ZSL models. However, the decoder
exerts an additional constraint, that is, the projection/code must be able to
reconstruct the original visual feature. We show that with this additional
reconstruction constraint, the learned projection function from the seen
classes is able to generalise better to the new unseen classes. Importantly,
the encoder and decoder are linear and symmetric, which enables us to develop an
extremely efficient learning algorithm. Extensive experiments on six benchmark
datasets demonstrate that the proposed SAE significantly outperforms the
existing ZSL models with the additional benefit of lower computational cost.
Furthermore, when the SAE is applied to the supervised clustering problem, it
also beats the state-of-the-art.

Comment: accepted to CVPR2017
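The linear, symmetric encoder-decoder described above admits a closed-form solution: tying the decoder to the transpose of the encoder reduces the objective min_W ||X - W^T S||^2 + lambda ||W X - S||^2 to a Sylvester equation. The sketch below illustrates this with random toy data; the shapes, the weight `lam`, and the variable names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Toy data: d-dim visual features X (d x N) and k-dim semantic
# vectors S (k x N) for N seen-class training samples.
rng = np.random.default_rng(0)
d, k, N = 10, 3, 50
X = rng.standard_normal((d, N))
S = rng.standard_normal((k, N))
lam = 1.0  # weight on the reconstruction constraint (hyperparameter)

# The tied-weight objective reduces to the Sylvester equation
#   A W + W B = C,  with the terms below.
A = S @ S.T                # k x k
B = lam * (X @ X.T)        # d x d
C = (1 + lam) * (S @ X.T)  # k x d
W = solve_sylvester(A, B, C)  # encoder; the decoder is W.T

# Encode a visual feature into the semantic space, then reconstruct it.
s_hat = W @ X[:, :1]       # projection into semantic space
x_rec = W.T @ s_hat        # reconstruction via the symmetric decoder
```

A single call to `scipy.linalg.solve_sylvester` (Bartels-Stewart) is what makes the learning step so cheap compared with iterative projection learning.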