f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
When labeled training data is scarce, a promising data augmentation approach
is to generate visual features of unknown classes using their attributes. To
learn the class conditional distribution of CNN features, these models rely on
pairs of image features and class attributes. Hence, they cannot make use of
the abundance of unlabeled data samples. In this paper, we tackle any-shot
learning problems, i.e. zero-shot and few-shot learning, in a unified feature
generating framework that operates in both inductive and transductive learning settings.
We develop a conditional generative model that combines the strengths of VAEs
and GANs and, via an unconditional discriminator, additionally learns the
marginal feature distribution of unlabeled images. We empirically show that our model
learns highly discriminative CNN features on five datasets, i.e. CUB, FLO, SUN,
AWA, and ImageNet, and establishes a new state of the art in any-shot learning, i.e.
inductive and transductive (generalized) zero- and few-shot learning settings.
We also demonstrate that our learned features are interpretable: we visualize
them by inverting them back to pixel space, and we explain them by generating
textual arguments for why they are associated with a certain label.
Comment: Accepted at CVPR 2019
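Below is a minimal PyTorch sketch of the architecture this abstract describes: a VAE whose decoder doubles as a conditional GAN generator, plus a second, unconditional discriminator that sees features alone and so can be trained on unlabeled (transductive) data. All dimensions, layer sizes, and the WGAN-style objective are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

FEAT, ATTR, Z = 2048, 312, 64  # assumed sizes: ResNet features, CUB-style attributes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT + ATTR, 1024), nn.ReLU())
        self.mu, self.logvar = nn.Linear(1024, Z), nn.Linear(1024, Z)
    def forward(self, x, a):
        h = self.net(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):  # VAE decoder doubling as GAN generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z + ATTR, 1024), nn.ReLU(),
                                 nn.Linear(1024, FEAT))
    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class CondCritic(nn.Module):  # D1: judges (feature, attribute) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT + ATTR, 1024), nn.ReLU(),
                                 nn.Linear(1024, 1))
    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))

class UncondCritic(nn.Module):  # D2: judges features alone (sees unlabeled data)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT, 1024), nn.ReLU(),
                                 nn.Linear(1024, 1))
    def forward(self, x):
        return self.net(x)

def generator_step(E, G, D1, D2, x_lab, a_lab, a_novel):
    # One sketch of the VAE/generator objective against WGAN-style critics.
    mu, logvar = E(x_lab, a_lab)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    x_rec = G(z, a_lab)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    vae = nn.functional.mse_loss(x_rec, x_lab) + kl       # recon + KL
    x_fake = G(torch.randn_like(z), a_lab)                # conditional samples
    x_novel = G(torch.randn(len(a_novel), Z), a_novel)    # unseen-class samples
    adv = -D1(x_fake, a_lab).mean() - D2(x_novel).mean()  # fool both critics
    return vae + adv

Once trained, features sampled from the generator for unseen-class attributes can be used to fit an ordinary softmax classifier, which is the usual final step in feature-generating zero-shot pipelines.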
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
Large-scale knowledge graphs (KGs) have become increasingly important in
current information systems. To expand the coverage of KGs, previous studies on
knowledge graph completion need to collect adequate training instances for
newly added relations. In this paper, we consider a novel formulation,
zero-shot learning, that frees us from this cumbersome curation. For newly added
relations, we attempt to learn their semantic features from their text
descriptions and hence recognize the facts of unseen relations with no examples
being seen. For this purpose, we leverage Generative Adversarial Networks
(GANs) to establish the connection between text and knowledge graph domain: The
generator learns to produce reasonable relation embeddings from noisy text
descriptions alone. Under this setting, zero-shot learning is naturally
converted to a traditional supervised classification task. Empirically, our
method is model-agnostic, can potentially be applied to any KG embedding model,
and consistently yields performance improvements on the NELL and Wiki datasets.
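A minimal sketch of the core move, assuming TransE-style embeddings and illustrative dimensions (TEXT_DIM, NOISE_DIM, and REL_DIM are assumptions, not the authors' choices): a generator maps a text-description embedding plus noise into the pretrained KG-embedding space, and the generated embedding for an unseen relation is then used to score candidate facts like any ordinary relation embedding.

import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, REL_DIM = 768, 16, 100  # assumed sizes

class RelationGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + NOISE_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, REL_DIM))
    def forward(self, text_emb):
        noise = torch.randn(text_emb.size(0), NOISE_DIM)
        return self.net(torch.cat([text_emb, noise], dim=1))

def score_fact(head, rel, tail):
    # TransE-style plausibility: higher (less negative) is more plausible.
    return -(head + rel - tail).norm(p=2, dim=-1)

# Zero-shot usage: encode the unseen relation's text description with any
# sentence encoder, generate its relation embedding, then rank candidate tails:
#   rel_emb = RelationGenerator()(description_embedding)
#   scores = score_fact(h, rel_emb, candidate_tails)

The discriminator (omitted here) would be trained to separate generated embeddings from the real relation embeddings of seen relations, which is what ties the text space to the KG-embedding space.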
A Generative Model For Zero Shot Learning Using Conditional Variational Autoencoders
Zero-shot learning in image classification refers to the setting where images
from some novel classes are absent from the training data, but other
information, such as natural language descriptions or attribute vectors of the
classes, is available. This setting is important in the real world, since one
may not be able to obtain images of all possible classes at training time. While previous
approaches have tried to model the relationship between the class attribute
space and the image space via some kind of transfer function, so as to model
the image space for an unseen class, we take a different approach: we generate
samples from the given attributes using a conditional variational autoencoder
and use the generated samples to classify the unseen classes. Through extensive
testing on four benchmark datasets, we show that our model outperforms the
state of the art, particularly in the more realistic generalized setting, where
the training classes can also appear at test time along with the novel classes.
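A minimal sketch of this recipe, with assumed sizes rather than the authors' exact architecture: train a conditional VAE on (feature, attribute) pairs from seen classes, then sample pseudo-features for unseen classes from their attribute vectors and fit an ordinary classifier on those samples.

import torch
import torch.nn as nn

FEAT, ATTR, Z = 2048, 85, 50  # assumed sizes: ResNet features, AWA-style attributes

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FEAT + ATTR, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, Z), nn.Linear(512, Z)
        self.dec = nn.Sequential(nn.Linear(Z + ATTR, 512), nn.ReLU(),
                                 nn.Linear(512, FEAT))
    def forward(self, x, a):
        h = self.enc(torch.cat([x, a], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, a], dim=1)), mu, logvar

def loss_fn(x, x_rec, mu, logvar):
    recon = nn.functional.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

@torch.no_grad()
def sample_unseen(model, attrs, n_per_class=100):
    # Generate pseudo-features for unseen classes from their attribute vectors;
    # these samples then train a standard classifier over the unseen labels.
    a = attrs.repeat_interleave(n_per_class, dim=0)
    z = torch.randn(a.size(0), Z)
    return model.dec(torch.cat([z, a], dim=1))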