3,736 research outputs found
Exploring semantic inter-class relationships (SIR) for zero-shot action recognition
© Copyright 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Automatically recognizing a large number of action categories from videos is of significant importance for video understanding. Most existing works have focused on designing more discriminative feature representations and have achieved promising results when positive samples are plentiful. However, very little effort has been spent on recognizing a novel action without any positive exemplars, which is often the case in real settings due to the large number of action classes and the dramatic variation in users' queries. To address this issue, we propose to perform action recognition when no positive exemplars of a class are provided, a setting commonly known as zero-shot learning. Unlike other zero-shot learning approaches, which exploit attributes as an intermediate layer for knowledge transfer, our main contribution is SIR, which directly leverages the semantic inter-class relationships between known and unknown actions, followed by label transfer learning. The inter-class semantic relationships are automatically measured with continuous word vectors, which are learned by the skip-gram model on a large-scale text corpus. Extensive experiments on the UCF101 dataset validate the superiority of our method over fully supervised approaches that use few positive exemplars.
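The label-transfer idea sketched in the abstract above can be illustrated with cosine similarity over word vectors. This is a minimal sketch: the class names, vectors, and classifier scores below are toy stand-ins, not the paper's actual skip-gram embeddings or models.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy word vectors for known action classes (stand-ins for skip-gram
# embeddings learned on a large text corpus).
known = {
    "running": np.array([0.9, 0.1, 0.0]),
    "swimming": np.array([0.1, 0.9, 0.2]),
}
unknown_vec = np.array([0.8, 0.2, 0.1])  # an unseen class, e.g. "jogging"

# Semantic inter-class relationships: similarity of the unseen class to
# each seen class, used as weights for label transfer.
weights = {name: cosine(vec, unknown_vec) for name, vec in known.items()}

# Hypothetical seen-class classifier scores for one test video.
seen_scores = {"running": 0.7, "swimming": 0.2}

# Label transfer: the unseen-class score is a similarity-weighted
# combination of the seen-class scores.
unseen_score = sum(weights[c] * seen_scores[c] for c in known)
```

Intuitively, a video that scores high on semantically close seen classes (here, "running") contributes most to the unseen class's score.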
Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications
Trained on large datasets, deep learning (DL) can accurately classify videos into hundreds of diverse classes. However, video data is expensive to annotate. Zero-shot learning (ZSL) proposes one solution to this problem. ZSL trains a model once, and generalizes to new tasks whose classes are not present in the training dataset. We propose the first end-to-end algorithm for ZSL in video classification. Our training procedure builds on insights from recent video classification literature and uses a trainable 3D CNN to learn the visual features. This is in contrast to previous video ZSL methods, which use pretrained feature extractors. We also extend the current benchmarking paradigm: Previous techniques aim to make the test task unknown at training time but fall short of this goal. We encourage domain shift across training and test data and disallow tailoring a ZSL model to a specific test dataset. We outperform the state-of-the-art by a wide margin. Our code, evaluation procedure and model weights are available at this http URL
Learning joint feature adaptation for zero-shot recognition
Zero-shot recognition (ZSR) aims to recognize target-domain data instances of unseen classes based on models learned from associated pairs of seen-class source- and target-domain data. One of the key challenges in ZSR is the relative scarcity of source-domain features (e.g. one feature vector per class), which do not fully account for the wide variability of target-domain instances. In this paper we propose a novel framework that learns data-dependent feature transforms for scoring similarity between an arbitrary pair of source and target data instances, accounting for the wide variability in the target domain. Our approach optimizes over a parameterized family of local feature displacements that maximize the source-target adaptive similarity functions. Accordingly, we formulate zero-shot learning (ZSL) using latent structural SVMs to learn our similarity functions from training data. As a demonstration, we design a specific algorithm under the proposed framework involving bilinear similarity functions and regularized least squares as penalties for feature displacement. We test our approach on several benchmark datasets for ZSR and show significant improvement over the state-of-the-art. For instance, on the aP&Y dataset we achieve 80.89% recognition accuracy, outperforming the state-of-the-art by 11.15%.
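The bilinear similarity function and feature-displacement penalty mentioned above can be sketched as follows. This is an illustrative toy, not the paper's learned model: `W`, the feature vectors, and the regularization strength `lam` are assumed random/hypothetical values, and the displacement uses a simple L2 penalty for which a closed-form optimum exists.

```python
import numpy as np

rng = np.random.default_rng(0)

d_src, d_tgt = 4, 6                       # toy feature dimensions
W = rng.standard_normal((d_src, d_tgt))   # stand-in for a learned bilinear map

def bilinear_score(x_src, y_tgt, W):
    # s(x, y) = x^T W y : similarity between a source-class vector
    # and a target-domain instance.
    return float(x_src @ W @ y_tgt)

lam = 2.0  # assumed regularization strength for the displacement penalty

def adapted_score(x_src, y_tgt, W, lam):
    # Maximize s(x, y + delta) - (lam/2) * ||delta||^2 over delta.
    # With this L2 penalty the optimum is delta* = (W^T x) / lam.
    delta = (W.T @ x_src) / lam
    return bilinear_score(x_src, y_tgt + delta, W)

x = rng.standard_normal(d_src)        # one source-class semantic vector
ys = rng.standard_normal((3, d_tgt))  # three target-domain instances

scores = [bilinear_score(x, y, W) for y in ys]
```

Because the optimal displacement adds the non-negative term `||W^T x||^2 / lam` to the plain score, the adapted similarity never decreases relative to the unadapted one.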
CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition
Zero-shot action recognition is the task of recognizing action classes
without visual examples, only with a semantic embedding which relates unseen to
seen classes. The problem can be seen as learning a function which generalizes
well to instances of unseen classes without losing discrimination between
classes. Neural networks can model the complex boundaries between visual
classes, which explains their success as supervised models. However, in
zero-shot learning, these highly specialized class boundaries may not transfer
well from seen to unseen classes. In this paper, we propose a clustering-based
model, which considers all training samples at once, instead of optimizing for
each instance individually. We optimize the clustering using Reinforcement
Learning which we show is critical for our approach to work. We call the
proposed method CLASTER and observe that it consistently improves over the
state-of-the-art on all standard datasets (UCF101, HMDB51, and Olympic
Sports), both in the standard zero-shot evaluation and in generalized
zero-shot learning.
Fairer Evaluation of Zero Shot Action Recognition in Videos
Zero-shot learning (ZSL) for human action recognition (HAR) aims to recognise video action classes that have never been seen during model training. This is achieved by building mappings between visual and semantic embeddings. These visual embeddings are typically provided via a pre-trained deep neural network (DNN). The premise of ZSL is that the training and testing classes should be disjoint. In the parallel domain of ZSL for image input, the widespread poor evaluation protocol of pre-training on ZSL test classes has been highlighted. This is akin to providing a sneak preview of the evaluation classes. In this work, we investigate the extent to which this evaluation protocol has been used in ZSL research for human action recognition. We show that in the field of ZSL for HAR, accuracies for overlapping classes are boosted by between 5.75% and 51.94%, depending on the visual and semantic features used, as a result of this flawed evaluation protocol. To help other researchers avoid this problem in the future, we provide annotated versions of the relevant benchmark ZSL test datasets in the HAR field, UCF101 and HMDB51, highlighting overlaps with pre-training datasets in the field.
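The flawed protocol described above boils down to class overlap between pre-training and ZSL test sets, which is a set intersection. A minimal sketch, with illustrative class names rather than the authors' released annotations:

```python
# Hypothetical class lists; the real annotations are provided by the authors
# for UCF101 and HMDB51.
pretraining_classes = {"archery", "basketball", "diving", "fencing"}
zsl_test_classes = {"diving", "fencing", "juggling", "yoyo"}

# Classes the pre-trained feature extractor has effectively already seen,
# i.e. the "sneak preview" that inflates reported ZSL accuracy.
overlap = pretraining_classes & zsl_test_classes

# A fairer ZSL evaluation excludes the overlapping classes.
fair_test_classes = zsl_test_classes - overlap
```

In practice the check would be run against the full class lists of the pre-training dataset (e.g. Kinetics) and each ZSL test split, after normalizing class-name spelling.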
Zero-shot Skeleton-based Action Recognition via Mutual Information Estimation and Maximization
Zero-shot skeleton-based action recognition aims to recognize actions of
unseen categories after training on data of seen categories. The key is to
build the connection between visual and semantic space from seen to unseen
classes. Previous studies have primarily focused on encoding each sequence
into a single feature vector and then mapping those features to a fixed
anchor point in the embedding space. Their performance is hindered by 1)
ignoring global visual/semantic distribution alignment, which limits their
ability to capture the true interdependence between the two spaces, and 2)
neglecting temporal information, since frame-wise features rich in action
cues are directly pooled into a single feature vector. We propose a new
zero-shot skeleton-based action recognition method via mutual information (MI)
estimation and maximization. Specifically, 1) we maximize the MI between visual
and semantic space for distribution alignment; 2) we leverage the temporal
information for estimating the MI by encouraging MI to increase as more frames
are observed. Extensive experiments on three large-scale skeleton action
datasets confirm the effectiveness of our method. Code:
https://github.com/YujieOuO/SMIE
Comment: Accepted by ACM MM 202
DASZL: Dynamic Action Signatures for Zero-shot Learning
There are many realistic applications of activity recognition where the set
of potential activity descriptions is combinatorially large. This makes
end-to-end supervised training of a recognition system impractical as no
training set is practically able to encompass the entire label set. In this
paper, we present an approach to fine-grained recognition that models
activities as compositions of dynamic action signatures. This compositional
approach allows us to reframe fine-grained recognition as zero-shot activity
recognition, where a detector is composed "on the fly" from simple
first-principles state machines supported by deep-learned components. We
evaluate our method on the Olympic Sports and UCF101 datasets, where our model
establishes a new state of the art under multiple experimental paradigms. We
also extend this method to form a unique framework for zero-shot joint
segmentation and classification of activities in video and demonstrate the
first results in zero-shot decoding of complex action sequences on a
widely-used surgical dataset. Lastly, we show that we can use off-the-shelf
object detectors to recognize activities in completely de-novo settings with no
additional training.
Comment: 10 pages, 4 figures, 3 tables, AAAI 2021 submission