f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
When labeled training data is scarce, a promising data augmentation approach
is to generate visual features of unknown classes using their attributes. To
learn the class-conditional distribution of CNN features, these models rely on
pairs of image features and class attributes. Hence, they cannot make use of
the abundance of unlabeled data samples. In this paper, we tackle any-shot
learning problems, i.e. zero-shot and few-shot, in a unified feature-generating
framework that operates in both inductive and transductive learning settings.
We develop a conditional generative model that combines the strengths of VAEs
and GANs and, in addition, learns the marginal feature distribution of
unlabeled images via an unconditional discriminator. We empirically show that our model
learns highly discriminative CNN features on four datasets, i.e. CUB, SUN, AWA,
and ImageNet, and establishes a new state of the art in any-shot learning, i.e.
inductive and transductive (generalized) zero- and few-shot learning settings.
We also demonstrate that our learned features are interpretable: we visualize
them by inverting them back to pixel space, and we explain them by
generating textual arguments for why they are associated with a certain label.
Comment: Accepted at CVPR 2019
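Schematically, the model pairs a VAE with two GAN discriminators: a conditional
one for labeled feature-attribute pairs and an unconditional one that matches
the marginal distribution of unlabeled features. The PyTorch sketch below is a
minimal illustration under assumed layer sizes (FEAT, ATTR, LATENT) and
simplified losses; it is not the paper's exact architecture or training
schedule.

import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT, ATTR, LATENT = 2048, 312, 64   # assumed sizes (e.g. CNN features, CUB attributes)

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 4096), nn.LeakyReLU(0.2), nn.Linear(4096, d_out))

class Encoder(nn.Module):
    """VAE encoder q(z | x, c)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(FEAT + ATTR, 4096), nn.LeakyReLU(0.2))
        self.mu = nn.Linear(4096, LATENT)
        self.logvar = nn.Linear(4096, LATENT)
    def forward(self, x, c):
        h = self.body(torch.cat([x, c], dim=1))
        return self.mu(h), self.logvar(h)

E = Encoder()
G = mlp(LATENT + ATTR, FEAT)   # feature generator / VAE decoder p(x | z, c)
D1 = mlp(FEAT + ATTR, 1)       # conditional discriminator on (feature, attribute) pairs
D2 = mlp(FEAT, 1)              # unconditional discriminator on features alone

def generator_loss(x_lab, c_lab, c_any):
    """Generator-side objective: VAE term plus the two adversarial terms.
    x_lab/c_lab are labeled feature-attribute pairs; c_any are attributes used
    to synthesize features scored by the unconditional D2 (in the transductive
    setting, D2 is trained against real unlabeled features)."""
    mu, logvar = E(x_lab, c_lab)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    recon = F.mse_loss(G(torch.cat([z, c_lab], dim=1)), x_lab)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    z1 = torch.randn(x_lab.size(0), LATENT)
    adv1 = -D1(torch.cat([G(torch.cat([z1, c_lab], dim=1)), c_lab], dim=1)).mean()
    z2 = torch.randn(c_any.size(0), LATENT)
    adv2 = -D2(G(torch.cat([z2, c_any], dim=1))).mean()    # marginal matching
    return recon + kl + adv1 + adv2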
Class Anchor Margin Loss for Content-Based Image Retrieval
The performance of neural networks in content-based image retrieval (CBIR) is
highly influenced by the chosen loss (objective) function. The majority of
objective functions for neural models can be divided into metric learning and
statistical learning. Metric learning approaches require a pair mining strategy
that often lacks efficiency, while statistical learning approaches do not
produce highly compact features because they optimize features only indirectly.
To this end, we propose a novel repeller-attractor loss that falls in the
metric learning paradigm, yet directly optimizes for the L2 metric without the
need to generate pairs. Our loss consists of three components. The leading
objective ensures that each learned feature is attracted to its designated
learnable class anchor. The second loss component regulates the anchors and
forces them to be separable by a margin, while the third objective ensures that
the anchors do not collapse to zero. Furthermore, we develop a more efficient
two-stage retrieval system by harnessing the learned class anchors during the
first stage of the retrieval process, eliminating the need to compare the
query with every image in the database. We establish a set of four datasets
(CIFAR-100, Food-101, SVHN, and Tiny ImageNet) and evaluate the proposed
objective in the context of few-shot and full-set training on the CBIR task, by
using both convolutional and transformer architectures. Compared with existing
objective functions, our empirical evidence shows that the proposed objective
yields superior and more consistent results.
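The three loss components map directly onto a few lines of PyTorch. The sketch
below is one plausible reading of the abstract; the margin, the component
weights, and the exact form of each term are assumptions, not the paper's
published formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAnchorMarginLoss(nn.Module):
    """Repeller-attractor loss over learnable class anchors (sketch)."""
    def __init__(self, num_classes, dim, margin=2.0, w_repel=0.1, w_norm=0.01):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_classes, dim))
        self.margin, self.w_repel, self.w_norm = margin, w_repel, w_norm

    def forward(self, feats, labels):
        # 1) attractor: pull each feature toward its class anchor in L2
        attract = (feats - self.anchors[labels]).pow(2).sum(dim=1).mean()
        # 2) repeller: push distinct anchors at least `margin` apart
        dists = torch.cdist(self.anchors, self.anchors)
        off_diag = ~torch.eye(self.anchors.size(0), dtype=torch.bool,
                              device=dists.device)
        repel = F.relu(self.margin - dists[off_diag]).pow(2).mean()
        # 3) keep anchor norms bounded away from zero so they cannot collapse
        norm_reg = F.relu(1.0 - self.anchors.norm(dim=1)).mean()
        return attract + self.w_repel * repel + self.w_norm * norm_reg

# loss = ClassAnchorMarginLoss(num_classes=100, dim=512)(embeddings, labels)

The same anchors would support the two-stage retrieval described above: rank
the anchors by distance to the query first, then compare the query only
against images assigned to the top-ranked anchors.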
Generating Visual Representations for Zero-Shot Classification
This paper addresses the task of learning an image classifier when some
categories are defined by semantic descriptions only (e.g. visual attributes)
while the others are defined by exemplar images as well. This task is often
referred to as the Zero-Shot Classification task (ZSC). Most previous
methods rely on learning a common embedding space in which visual
features of unknown categories can be compared with semantic descriptions. This paper argues
that these approaches are limited because i) efficient discriminative classifiers
cannot be used and ii) classification tasks with seen and unseen categories
(Generalized Zero-Shot Classification, or GZSC) cannot be addressed efficiently.
In contrast, this paper proposes to address ZSC and GZSC by i) learning a
conditional generator using the seen classes and ii) generating artificial training
examples for the categories without exemplars. ZSC is then turned into a
standard supervised learning problem. Experiments with four generative models and
five datasets validate the approach, yielding state-of-the-art
results on both ZSC and GZSC.
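The reduction to standard supervised learning is simple to state in code. The
sketch below assumes a generic conditional generator G (standing in for any of
the four generative models evaluated) and hypothetical dimensions; after
synthesis, any off-the-shelf discriminative classifier can be fit on real
seen-class features together with the artificial unseen-class features.

import torch
import torch.nn as nn

FEAT, ATTR, NOISE = 2048, 85, 100   # assumed sizes (e.g. CNN features, attributes)

# Generic conditional generator, trained on seen classes only (placeholder).
G = nn.Sequential(nn.Linear(ATTR + NOISE, 1024), nn.ReLU(), nn.Linear(1024, FEAT))

def synthesize(unseen_attrs, per_class=300):
    """Sample artificial training features for every exemplar-free category,
    turning (G)ZSC into ordinary supervised classification."""
    feats, labels = [], []
    for y, attr in enumerate(unseen_attrs):        # attr: (ATTR,) class description
        z = torch.randn(per_class, NOISE)
        c = attr.unsqueeze(0).expand(per_class, -1)
        feats.append(G(torch.cat([c, z], dim=1)))
        labels.append(torch.full((per_class,), y, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

# fake_x, fake_y = synthesize(unseen_attrs)
# For GZSC, train a standard softmax classifier on the union of real
# seen-class features and (fake_x, fake_y).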
Zero-Shot Visual Recognition using Semantics-Preserving Adversarial Embedding Networks
We propose a novel framework called Semantics-Preserving Adversarial
Embedding Network (SP-AEN) for zero-shot visual recognition (ZSL), where test
images and their classes are both unseen during training. SP-AEN aims to tackle
the inherent problem of semantic loss in the prevailing family of
embedding-based ZSL, where some semantics would be discarded during training if
they are non-discriminative for training classes, but could become critical for
recognizing test classes. Specifically, SP-AEN prevents the semantic loss by
introducing an independent visual-to-semantic space embedder which disentangles
the semantic space into two subspaces for the two arguably conflicting
objectives: classification and reconstruction. Through adversarial learning of
the two subspaces, SP-AEN can transfer the semantics from the reconstructive
subspace to the discriminative one, accomplishing the improved zero-shot
recognition of unseen classes. Compared with prior works, SP-AEN not only
improves classification but also generates photo-realistic images, demonstrating
the effectiveness of semantic preservation. On four popular benchmarks: CUB,
AWA, SUN, and aPY, SP-AEN considerably outperforms other state-of-the-art
methods by absolute margins of 12.2%, 9.3%, 4.0%, and
3.6% in harmonic mean.
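At a schematic level, the disentanglement can be pictured as two
visual-to-semantic embedders playing an adversarial game over a shared
semantic space. The sketch below is a loose, assumption-laden reading of the
abstract (the names F_cls, E_rec, D_adv and the BCE-based game are
illustrative); the published model additionally reconstructs images in pixel
space, which is omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

VIS, SEM = 2048, 300   # assumed visual-feature and semantic-space sizes

F_cls = nn.Linear(VIS, SEM)   # discriminative embedder, trained for classification
E_rec = nn.Linear(VIS, SEM)   # independent embedder, trained for reconstruction
D_adv = nn.Sequential(nn.Linear(SEM, 256), nn.ReLU(), nn.Linear(256, 1))

def adversarial_transfer(x):
    """D_adv tries to tell the two subspaces apart; F_cls is trained to fool
    it, which pulls reconstructive (non-discriminative but transferable)
    semantics into the discriminative subspace."""
    s_cls, s_rec = F_cls(x), E_rec(x)
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)
    d_loss = (F.binary_cross_entropy_with_logits(D_adv(s_rec.detach()), ones)
              + F.binary_cross_entropy_with_logits(D_adv(s_cls.detach()), zeros))
    g_loss = F.binary_cross_entropy_with_logits(D_adv(s_cls), ones)
    return d_loss, g_loss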