Objects2action: Classifying and localizing actions without any video example
The goal of this paper is to recognize actions in video without the need for
examples. Different from traditional zero-shot approaches, we do not demand the
design and specification of attribute classifiers and class-to-attribute
mappings to allow for transfer from seen classes to unseen classes. Our key
contribution is objects2action, a semantic word embedding that is spanned by a
skip-gram model of thousands of object categories. Action labels are assigned
to an object encoding of unseen video based on a convex combination of action
and object affinities. Our semantic embedding has three main characteristics to
accommodate the specifics of actions. First, we propose a mechanism to
exploit multiple-word descriptions of actions and objects. Second, we
incorporate the automated selection of the most responsive objects per action.
And finally, we demonstrate how to extend our zero-shot approach to the
spatio-temporal localization of actions in video. Experiments on four action
datasets demonstrate the potential of our approach.
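The label-assignment step described above can be sketched in a few lines. The vectors, object names, and cosine scoring below are illustrative stand-ins (a tiny hand-made 3-d space rather than a trained skip-gram model), not the paper's actual embedding:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "word vectors" (hand-made stand-ins for a trained skip-gram space).
obj_vecs = {
    "racket": np.array([1.0, 0.1, 0.0]),
    "ball":   np.array([0.9, 0.2, 0.1]),
    "horse":  np.array([0.0, 1.0, 0.1]),
}
action_vecs = {
    "playing tennis": np.array([1.0, 0.0, 0.1]),
    "riding horse":   np.array([0.1, 1.0, 0.0]),
}

def zero_shot_action(video_obj_scores, top_k=2):
    """Assign an action label to a video from its object-classifier scores.

    The video is encoded as a convex combination of the embeddings of its
    most responsive objects; the action whose embedding is closest wins.
    """
    # Keep only the top-k most responsive objects per video.
    top = sorted(video_obj_scores.items(), key=lambda kv: -kv[1])[:top_k]
    weights = np.array([s for _, s in top])
    weights = weights / weights.sum()          # convex combination
    video_vec = sum(w * obj_vecs[o] for (o, _), w in zip(top, weights))
    return max(action_vecs, key=lambda a: cosine(video_vec, action_vecs[a]))

# A video whose object classifiers fire on "racket" and "ball":
print(zero_shot_action({"racket": 0.8, "ball": 0.6, "horse": 0.05}))
# → playing tennis
```

Multi-word action descriptions would be embedded by combining their per-word vectors before the comparison, in the spirit of the paper's first contribution.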
Learning Multimodal Latent Attributes
The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multi-modal content and their complex, unstructured nature relative to the density of annotations. To solve this problem, we (1) introduce a concept of semi-latent attribute space, expressing user-defined and latent attributes in a unified framework, and (2) propose a novel scalable probabilistic topic model for learning multi-modal semi-latent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multi-task learning, learning with label noise, N-shot transfer learning and, importantly, zero-shot learning.
Learning Hypergraph-regularized Attribute Predictors
We present a novel attribute learning framework named Hypergraph-based
Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the
attribute relations in the data. Then the attribute prediction problem is
cast as a regularized hypergraph cut problem in which HAP jointly learns a
collection of attribute projections from the feature space to a hypergraph
embedding space aligned with the attribute space. The learned projections
directly act as attribute classifiers (linear and kernelized). This formulation
leads to a very efficient approach. By considering our model as a multi-graph
cut task, our framework can flexibly incorporate other available information,
in particular class labels. We apply our approach to attribute prediction,
zero-shot and N-shot learning tasks. The results on AWA, USAA and CUB
databases demonstrate the value of our methods in comparison with the
state-of-the-art approaches.
Comment: This is an attribute learning paper accepted by CVPR 201
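A rough illustration of the underlying idea — learning linear attribute projections under a graph-style smoothness penalty — can be written with a plain graph Laplacian and a closed-form ridge solution. This is a simplification for intuition, not HAP's actual hypergraph-cut objective:

```python
import numpy as np

def fit_attribute_projections(X, A, L, lam=0.1, mu=1e-3):
    """Learn linear attribute predictors W (features -> attributes) with a
    graph-Laplacian smoothness term, so that samples joined in the graph
    receive similar attribute scores.  Closed-form minimizer of
        ||X W - A||^2 + lam * tr(W^T X^T L X W) + mu * ||W||^2
    (an illustrative simplification of a hypergraph-cut regularizer).
    """
    d = X.shape[1]
    G = X.T @ X + lam * (X.T @ L @ X) + mu * np.eye(d)
    return np.linalg.solve(G, X.T @ A)

# Toy data: 4 samples, 3 features, 2 binary attributes.
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.2],
              [0.0, 0.9, 0.1]])
A = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
# Laplacian of a graph linking the two similar sample pairs (0-1 and 2-3).
adj = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                [0, 0, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(adj.sum(axis=1)) - adj
W = fit_attribute_projections(X, A, L)
print((X @ W).round(2))   # predicted attribute scores track A
```

The learned columns of `W` act directly as linear attribute classifiers, mirroring the property the abstract highlights.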
Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets for Future Tasks
How can we reuse existing knowledge, in the form of available datasets, when
solving a new and apparently unrelated target task from a set of unlabeled
data? In this work we make a first contribution to answer this question in the
context of image classification. We frame this quest as an active learning
problem and use zero-shot classifiers to guide the learning process by linking
the new task to the existing classifiers. By revisiting the dual formulation of
adaptive SVM, we reveal two basic conditions to choose greedily only the most
relevant samples to be annotated. On this basis we propose an effective active
learning algorithm which learns the best possible target classification model
with minimum human labeling effort. Extensive experiments on two challenging
datasets show the value of our approach compared to the state-of-the-art active
learning methodologies, as well as its potential to reuse past datasets with
minimal effort for future tasks.
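The overall loop — a zero-shot prior scores the unlabeled pool and the least certain samples are annotated first — can be sketched as follows. The top-2-margin criterion here is an illustrative stand-in, not the paper's conditions derived from the dual of the adaptive SVM:

```python
import numpy as np

def greedy_active_selection(prior_scores, budget):
    """Greedily pick the samples to annotate next, guided by zero-shot
    prior scores (one score per class, per sample).  "Most useful" here
    means smallest margin between the top two class scores — a common
    uncertainty criterion used purely for illustration.
    """
    sorted_scores = np.sort(prior_scores, axis=1)
    margins = sorted_scores[:, -1] - sorted_scores[:, -2]  # top-2 margin
    return list(np.argsort(margins)[:budget])              # smallest first

# Zero-shot prior scores for 4 unlabeled images over 3 classes.
priors = np.array([[0.9, 0.05, 0.05],   # confident  -> low priority
                   [0.4, 0.35, 0.25],   # ambiguous  -> annotate
                   [0.5, 0.45, 0.05],   # ambiguous  -> annotate
                   [0.8, 0.1, 0.1]])    # confident  -> low priority
print(greedy_active_selection(priors, budget=2))
```

Each annotated sample would then update the target classifier before the next greedy pick, which is what keeps the human labeling effort minimal.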
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises the cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented
review of the advances in transfer learning has not only revealed the
challenges in transfer learning for visual recognition, but also identified
the problems (eight of the seventeen) that have scarcely been studied. This
survey not only presents an up-to-date technical review for
researchers, but also a systematic approach and a reference for a machine
learning practitioner to categorise a real problem and to look up a
possible solution accordingly.
Probabilistic Label Relation Graphs with Ising Models
We consider classification problems in which the label space has structure. A
common example is hierarchical label spaces, corresponding to the case where
one label subsumes another (e.g., animal subsumes dog). But labels can also be
mutually exclusive (e.g., dog vs cat) or unrelated (e.g., furry, carnivore). To
jointly model hierarchy and exclusion relations, the notion of a HEX (hierarchy
and exclusion) graph was introduced in [7]. This combined a conditional random
field (CRF) with a deep neural network (DNN), resulting in state of the art
results when applied to visual object classification problems where the
training labels were drawn from different levels of the ImageNet hierarchy
(e.g., an image might be labeled with the basic level category "dog", rather
than the more specific label "husky"). In this paper, we extend the HEX model
to allow for soft or probabilistic relations between labels, which is useful
when there is uncertainty about the relationship between two labels (e.g., an
antelope is "sort of" furry, but not to the same degree as a grizzly bear). We
call our new model pHEX, for probabilistic HEX. We show that the pHEX graph can
be converted to an Ising model, which allows us to use existing off-the-shelf
inference methods (in contrast to the HEX method, which needed specialized
inference algorithms). Experimental results show significant improvements in a
number of large-scale visual object classification tasks, outperforming the
previous HEX model.
Comment: International Conference on Computer Vision (2015)
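A toy version of the Ising-model view — binary label variables with unary scores and soft pairwise couplings, solved here by brute force rather than the off-the-shelf inference methods the abstract refers to — might look like this (the label names and numbers are illustrative, not the pHEX construction):

```python
import itertools
import numpy as np

def ising_map(h, J):
    """Brute-force MAP state of a small Ising model over spins y_i in {-1,+1}:
        E(y) = -sum_i h[i]*y[i] - sum_{i<j} J[i][j]*y[i]*y[j]
    Positive J encourages two labels to co-occur (soft hierarchy); negative J
    discourages co-occurrence (soft exclusion).  Exhaustive search is only
    viable for tiny label sets; real models need standard inference methods.
    """
    n = len(h)
    best, best_e = None, np.inf
    for y in itertools.product([-1, 1], repeat=n):
        e = -sum(h[i] * y[i] for i in range(n))
        e -= sum(J[i][j] * y[i] * y[j]
                 for i in range(n) for j in range(i + 1, n))
        if e < best_e:
            best, best_e = y, e
    return best

# Labels: 0=animal, 1=dog, 2=cat.  Unary scores stand in for classifier
# logits; dog/cat each softly imply animal (J > 0), while dog and cat
# softly exclude each other (J < 0).
h = [0.2, 1.0, 0.3]
J = [[0, 0.8, 0.8],
     [0, 0, -2.0],
     [0, 0, 0]]
print(ising_map(h, J))   # → (1, 1, -1): animal and dog on, cat off
```

Softening the couplings (smaller |J|) is what lets a relation hold only "sort of", in contrast to the hard constraints of the original HEX graph.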