f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning
When labeled training data is scarce, a promising data augmentation approach
is to generate visual features of unknown classes using their attributes. To
learn the class conditional distribution of CNN features, these models rely on
pairs of image features and class attributes. Hence, they cannot make use of
the abundance of unlabeled data samples. In this paper, we tackle any-shot
learning problems, i.e., zero-shot and few-shot learning, in a unified feature-generating
framework that operates in both inductive and transductive learning settings.
We develop a conditional generative model that combines the strengths of VAEs
and GANs and, via an unconditional discriminator, additionally learns the marginal
feature distribution of unlabeled images. We empirically show that our model
learns highly discriminative CNN features on several datasets, including CUB, SUN,
AWA, and ImageNet, and we establish a new state of the art in any-shot learning, i.e.,
inductive and transductive (generalized) zero- and few-shot learning settings.
We also demonstrate that our learned features are interpretable: we visualize
them by inverting them back to pixel space, and we explain them by generating
textual arguments for why they are associated with a certain label.
Comment: Accepted at CVPR 2019.
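A minimal sketch of the core idea, assuming PyTorch: a conditional VAE-GAN generates CNN features from class attributes, while a second, unconditional discriminator matches generated features to the marginal distribution of unlabeled features. The layer widths, dimensions, and BCE (non-saturating) GAN loss below are illustrative assumptions; the paper's exact architecture and its WGAN-based objective differ.

```python
# Illustrative sketch only: sizes and the BCE GAN loss are assumptions;
# the paper uses a WGAN-style objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, ATTR_DIM, Z_DIM, H = 2048, 312, 64, 512  # hypothetical sizes

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, H), nn.ReLU(), nn.Linear(H, d_out))

class FVAEGAND2Sketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = mlp(FEAT_DIM + ATTR_DIM, 2 * Z_DIM)  # VAE encoder q(z|x,a)
        self.gen = mlp(Z_DIM + ATTR_DIM, FEAT_DIM)      # shared decoder/generator p(x|z,a)
        self.d_cond = mlp(FEAT_DIM + ATTR_DIM, 1)       # conditional discriminator
        self.d_marg = mlp(FEAT_DIM, 1)                  # unconditional discriminator

    def generator_loss(self, x, a):
        # Conditional VAE branch on labeled (feature, attribute) pairs.
        mu, logvar = self.enc(torch.cat([x, a], 1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = F.mse_loss(self.gen(torch.cat([z, a], 1)), x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # Conditional + unconditional GAN branches on freshly sampled noise.
        x_fake = self.gen(torch.cat([torch.randn_like(mu), a], 1))
        ones = torch.ones(x.size(0), 1)
        g_cond = F.binary_cross_entropy_with_logits(
            self.d_cond(torch.cat([x_fake, a], 1)), ones)
        g_marg = F.binary_cross_entropy_with_logits(self.d_marg(x_fake), ones)
        return recon + kl + g_cond + g_marg

    def discriminator_loss(self, x, a, x_unlab):
        with torch.no_grad():
            x_fake = self.gen(torch.cat([torch.randn(x.size(0), Z_DIM), a], 1))
        ones, zeros = torch.ones(x.size(0), 1), torch.zeros(x.size(0), 1)
        d_cond = (F.binary_cross_entropy_with_logits(
                      self.d_cond(torch.cat([x, a], 1)), ones) +
                  F.binary_cross_entropy_with_logits(
                      self.d_cond(torch.cat([x_fake, a], 1)), zeros))
        # The unconditional discriminator treats *unlabeled* features as real,
        # which is how the marginal distribution of unlabeled data reaches the
        # generator in the transductive setting.
        d_marg = (F.binary_cross_entropy_with_logits(
                      self.d_marg(x_unlab), torch.ones(x_unlab.size(0), 1)) +
                  F.binary_cross_entropy_with_logits(self.d_marg(x_fake), zeros))
        return d_cond + d_marg
```

Synthetic features for unseen classes would then come from the generator conditioned on their attributes, with a standard softmax classifier trained on the generated features.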
NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding
Research on depth-based human activity analysis has achieved outstanding
performance and demonstrated the effectiveness of 3D representations for action
recognition. Existing depth-based and RGB+D-based action recognition
benchmarks have a number of limitations, including the lack of large-scale
training samples, of a realistic number of distinct class categories, and of
diversity in camera views, environmental conditions, and human subjects.
In this work, we introduce a large-scale dataset for RGB+D human action
recognition, which is collected from 106 distinct subjects and contains more
than 114 thousand video samples and 8 million frames. This dataset contains 120
different action classes including daily, mutual, and health-related
activities. We evaluate the performance of a series of existing 3D activity
analysis methods on this dataset, and show the advantage of applying deep
learning methods for 3D-based human action recognition. Furthermore, we
investigate a novel one-shot 3D activity recognition problem on our dataset
and propose a simple yet effective Action-Part Semantic Relevance-aware (APSR)
framework for this task, which yields promising results for
recognition of the novel action classes. We believe the introduction of this
large-scale dataset will enable the community to apply, adapt, and develop
various data-hungry learning techniques for depth-based and RGB+D-based human
activity understanding. [The dataset is available at:
http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp]
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
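As a point of reference for the one-shot protocol (this is a generic baseline, not the paper's APSR framework), a novel-class query can be matched against a single exemplar per class in a learned embedding space. Here `embed` stands in for any skeleton-sequence encoder trained on the auxiliary classes; all names and shapes are assumptions.

```python
# Generic one-shot baseline sketch: classify a query by cosine similarity
# to one exemplar embedding per novel class.
import torch
import torch.nn.functional as F

def one_shot_classify(embed, query, exemplars):
    """embed: encoder trained on auxiliary (seen) classes; query: one sample
    tensor; exemplars: dict mapping class name -> one exemplar tensor."""
    names = list(exemplars)
    q = F.normalize(embed(query.unsqueeze(0)), dim=1)             # (1, D)
    e = F.normalize(embed(torch.stack([exemplars[n] for n in names])), dim=1)  # (C, D)
    return names[(q @ e.T).argmax().item()]  # nearest exemplar wins
```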
Zero-Shot Deep Domain Adaptation
Domain adaptation is an important tool for transferring knowledge about a task
(e.g., classification) learned in a source domain to a second (target) domain.
Current approaches assume that task-relevant target-domain data is available
during training. We demonstrate how to perform domain adaptation when no such
task-relevant target-domain data is available. To tackle this issue, we propose
zero-shot deep domain adaptation (ZDDA), which uses privileged information from
task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation
which is not only tailored for the task of interest but also close to the
target-domain representation. Therefore, the solution to the source-domain
task of interest (e.g., a classifier), which is jointly trained with the
source-domain representation, is applicable to both the source and
target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN
RGB-D datasets, we show that ZDDA can perform domain adaptation in
classification tasks without access to task-relevant target-domain training
data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene
classification task by simulating task-relevant target-domain representations
with task-relevant source-domain data. To the best of our knowledge, ZDDA is
the first domain adaptation and sensor fusion method which requires no
task-relevant target-domain data. The underlying principle is not particular to
computer vision data, but should be extensible to other domains.
Comment: Accepted at the European Conference on Computer Vision (ECCV), 2018.
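A minimal sketch of the ZDDA training signal, assuming PyTorch and MNIST-sized inputs: the task loss uses labeled source data only, while task-irrelevant dual-domain pairs align the two representations. The encoder shapes, the MSE alignment loss, and the choice to freeze the target encoder are illustrative simplifications, not the paper's exact setup.

```python
# Illustrative sketch: encoders, shapes, and loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZDDASketch(nn.Module):
    def __init__(self, feat_dim=256, n_classes=10):
        super().__init__()
        self.enc_src = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, feat_dim))  # source encoder
        self.enc_tgt = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, feat_dim))  # target encoder
        self.clf = nn.Linear(feat_dim, n_classes)  # task head, trained on source only

    def training_loss(self, x_src_task, y_src_task, x_src_irr, x_tgt_irr):
        # 1) Task of interest, using labeled *source-domain* data only.
        task = F.cross_entropy(self.clf(self.enc_src(x_src_task)), y_src_task)
        # 2) Alignment on *task-irrelevant* dual-domain pairs: push source
        #    features toward the (here frozen) target features of the same
        #    scenes, so the classifier transfers across domains.
        with torch.no_grad():
            f_tgt = self.enc_tgt(x_tgt_irr)
        align = F.mse_loss(self.enc_src(x_src_irr), f_tgt)
        return task + align
```

At test time the jointly trained classifier runs on target-domain features it never saw labeled, i.e. `model.clf(model.enc_tgt(x_target))`, which is what makes the adaptation "zero-shot" with respect to task-relevant target data.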