PANDA: Pose Aligned Networks for Deep Attribute Modeling
We propose a method for inferring human attributes (such as gender, hair
style, clothes style, expression, action) from images of people under large
variations in viewpoint, pose, appearance, articulation, and occlusion.
Convolutional Neural Nets (CNN) have been shown to perform very well on large
scale object recognition problems. In the context of attribute classification,
however, the signal is often subtle and it may cover only a small part of the
image, while the image is dominated by the effects of pose and viewpoint.
Discounting for pose variation would require training on very large labeled
datasets, which are not presently available. Part-based models, such as poselets
and DPM, have been shown to perform well for this problem, but they are limited
by shallow low-level features. We propose a new method that combines
part-based models and deep learning by training pose-normalized CNNs. We show
substantial improvement over state-of-the-art methods on challenging attribute
classification tasks in unconstrained settings. Experiments confirm that our
method outperforms both the best part-based methods on this problem and
conventional CNNs trained on the full bounding box of the person.
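The pose-normalized idea can be illustrated with a small sketch: a separate CNN is applied to each pose-aligned part crop, and the concatenated part features feed a multi-label attribute classifier. The layer sizes, number of parts, and the PartCNN / PoseNormalizedAttributeNet names below are illustrative placeholders rather than the PANDA architecture, and part crops are assumed to come from a poselet-style detector.

```python
# Illustrative sketch of pose-normalized attribute prediction (not the exact
# PANDA architecture): one small CNN per pose-aligned part crop, concatenated
# part features, and a multi-label attribute head. Layer sizes are placeholders.
import torch
import torch.nn as nn

class PartCNN(nn.Module):
    """Tiny CNN applied to a single pose-aligned part crop (e.g. 64x64 RGB)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class PoseNormalizedAttributeNet(nn.Module):
    """Concatenate per-part features and predict binary attributes jointly."""
    def __init__(self, num_parts=4, num_attributes=9, feat_dim=256):
        super().__init__()
        self.part_nets = nn.ModuleList(PartCNN(feat_dim) for _ in range(num_parts))
        self.classifier = nn.Linear(num_parts * feat_dim, num_attributes)

    def forward(self, part_crops):
        # part_crops: list of num_parts tensors, each of shape (batch, 3, 64, 64),
        # assumed to come from a poselet-style part detector.
        feats = [net(crop) for net, crop in zip(self.part_nets, part_crops)]
        return self.classifier(torch.cat(feats, dim=1))   # attribute logits

# Toy forward pass with 4 hypothetical part crops for a batch of 2 people.
model = PoseNormalizedAttributeNet()
crops = [torch.randn(2, 3, 64, 64) for _ in range(4)]
logits = model(crops)                                     # (2, 9)
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
```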
DAP3D-Net: Where, What and How Actions Occur in Videos?
Action parsing in videos with complex scenes is an interesting but
challenging task in computer vision. In this paper, we propose a generic 3D
convolutional neural network in a multi-task learning manner for effective Deep
Action Parsing (DAP3D-Net) in videos. Specifically, in the training phase,
action localization, classification, and attribute learning are jointly
optimized on our appearance-motion data via DAP3D-Net. For an unseen test
video, we can then simultaneously describe each individual action as:
Where the action occurs, What the action is and How the action is performed. To
well demonstrate the effectiveness of the proposed DAP3D-Net, we also
contribute a new Numerous-category Aligned Synthetic Action dataset, i.e.,
NASA, which consists of 200,000 action clips spanning more than 300 categories,
annotated with 33 pre-defined action attributes at two hierarchical levels (i.e.,
low-level attributes of basic body part movements and high-level attributes
related to action motion). We learn DAP3D-Net using the NASA dataset and then
evaluate it on our collected Human Action Understanding (HAU) dataset.
Experimental results show that our approach can accurately localize, categorize
and describe multiple actions in realistic videos.
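As a rough illustration of the multi-task formulation, the sketch below shares a 3D convolutional backbone across three heads that answer Where (box regression), What (category logits), and How (attribute logits). The backbone, head definitions, and unweighted loss sum are assumptions made for this example, not the DAP3D-Net architecture.

```python
# Rough multi-task 3D-CNN sketch in the spirit of DAP3D-Net; layer sizes and
# the unweighted loss sum are placeholders, not the paper's design.
import torch
import torch.nn as nn

class ActionParser3D(nn.Module):
    def __init__(self, num_classes=300, num_attributes=33):
        super().__init__()
        # Shared 3D convolutional backbone over (batch, 3, frames, H, W) clips.
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.where = nn.Linear(64, 4)             # Where: box regression (x, y, w, h)
        self.what = nn.Linear(64, num_classes)    # What: action category logits
        self.how = nn.Linear(64, num_attributes)  # How: attribute logits (multi-label)

    def forward(self, clip):
        f = self.backbone(clip).flatten(1)
        return self.where(f), self.what(f), self.how(f)

model = ActionParser3D()
clip = torch.randn(2, 3, 16, 112, 112)            # two 16-frame RGB clips
box, cls_logits, attr_logits = model(clip)

# Joint training objective: localization + classification + attribute losses.
box_t = torch.rand(2, 4)
cls_t = torch.randint(0, 300, (2,))
attr_t = torch.randint(0, 2, (2, 33)).float()
loss = (nn.SmoothL1Loss()(box, box_t)
        + nn.CrossEntropyLoss()(cls_logits, cls_t)
        + nn.BCEWithLogitsLoss()(attr_logits, attr_t))
```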
CNN Features off-the-shelf: an Astounding Baseline for Recognition
Recent results indicate that the generic descriptors extracted from
convolutional neural networks are very powerful. This paper adds to the
mounting evidence that this is indeed the case. We report on a series of
experiments conducted for different recognition tasks using the publicly
available code and model of the OverFeat network, which was trained to perform
object classification on ILSVRC13. We use features extracted from the OverFeat
network as a generic image representation to tackle a diverse range of
recognition tasks: object image classification, scene recognition, fine-grained
recognition, attribute detection, and image retrieval, applied to a
diverse set of datasets. We selected these tasks and datasets as they gradually
move further away from the original task and data the OverFeat network was
trained to solve. Astonishingly, we report consistently superior results compared
to the highly tuned state-of-the-art systems on all the visual classification
tasks across various datasets. For instance retrieval, it consistently outperforms
methods with a low memory footprint, except on the sculptures dataset. The results are
achieved using a linear SVM classifier (or a distance measure, in the case of
retrieval) applied to a feature representation of size 4096 extracted from a
layer in the net. The representations are further modified using simple
augmentation techniques, e.g., jittering. The results strongly suggest that features obtained
from deep learning with convolutional nets should be the primary candidate in
most visual recognition tasks.
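The recipe itself is easy to reproduce in outline: take a fixed 4096-dimensional activation from a pretrained network and train a linear SVM on top. The sketch below substitutes a pretrained torchvision AlexNet for OverFeat (its penultimate fully connected layer is also 4096-dimensional), and the image paths and labels are hypothetical.

```python
# Sketch of the off-the-shelf recipe: a fixed 4096-d CNN activation plus a
# linear SVM. The paper used OverFeat; a pretrained torchvision AlexNet stands
# in here, and the image paths and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC
from PIL import Image

weights = models.AlexNet_Weights.IMAGENET1K_V1
net = models.alexnet(weights=weights).eval()
# Drop the final classification layer so the network outputs the 4096-d fc activation.
net.classifier = nn.Sequential(*list(net.classifier.children())[:-1])
preprocess = weights.transforms()                 # resize / crop / normalize preset

@torch.no_grad()
def extract(paths):
    """Return an (N, 4096) feature matrix for a list of image paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    feats = net(batch)
    # L2-normalisation is a common, cheap post-processing step for such features.
    return torch.nn.functional.normalize(feats, dim=1).numpy()

# Hypothetical file lists and labels for the target recognition task.
train_paths, train_labels = ["img1.jpg", "img2.jpg"], [0, 1]
test_paths = ["img3.jpg"]

svm = LinearSVC(C=1.0).fit(extract(train_paths), train_labels)
predictions = svm.predict(extract(test_paths))
```

For retrieval, the same normalised features can instead be ranked by Euclidean or cosine distance to the query, in line with the "distance in the case of retrieval" variant mentioned above.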
A Discriminative Representation of Convolutional Features for Indoor Scene Recognition
Indoor scene recognition is a multi-faceted and challenging problem due to
the diverse intra-class variations and the confusing inter-class similarities.
This paper presents a novel approach which exploits rich mid-level
convolutional features to categorize indoor scenes. Traditionally used
convolutional features preserve the global spatial structure, which is a
desirable property for general object recognition. However, we argue that this
structure is of little help when scene layouts vary widely, as they do in indoor
scenes. We propose to transform the structured
convolutional activations to another highly discriminative feature space. The
representation in the transformed space not only incorporates the
discriminative aspects of the target dataset, but it also encodes the features
in terms of the general object categories that are present in indoor scenes. To
this end, we introduce a new large-scale dataset of 1300 object categories
which are commonly present in indoor scenes. Our proposed approach achieves a
significant performance boost over previous state-of-the-art approaches on five
major scene classification datasets.
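One simple way to see why discarding the global spatial structure of convolutional activations can help is to treat each spatial position of a mid-level feature map as a local descriptor and pool the set order-lessly. The sketch below does only that, with a pretrained VGG-16 trunk as a stand-in feature extractor; it is not the discriminative transformation proposed in the paper.

```python
# Minimal sketch of treating mid-level convolutional activations as order-less
# local descriptors (a pretrained VGG-16 trunk is only a stand-in here; this is
# not the discriminative transformation proposed in the paper).
import torch
from torchvision import models

weights = models.VGG16_Weights.IMAGENET1K_V1
trunk = models.vgg16(weights=weights).features.eval()   # convolutional layers only

@torch.no_grad()
def orderless_descriptor(images):
    """Average the last conv feature map over spatial positions, discarding layout."""
    fmap = trunk(images)                 # (N, 512, H', W') mid-level activations
    local = fmap.flatten(2)              # (N, 512, H'*W') set of local descriptors
    return local.mean(dim=2)             # (N, 512) order-less pooled representation

feats = orderless_descriptor(torch.randn(2, 3, 224, 224))   # e.g. two indoor scene crops
```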
Subset Feature Learning for Fine-Grained Category Classification
Fine-grained categorisation has been a challenging problem due to small
inter-class variation, large intra-class variation, and the small number of training
images. We propose a learning system which first clusters visually similar
classes and then learns deep convolutional neural network features specific to
each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset
show that the proposed method outperforms recent fine-grained categorisation
methods under the most difficult setting: no bounding boxes are presented at
test time. It achieves a mean accuracy of 77.5%, compared to the previous best
performance of 73.2%. We also show that progressive transfer learning allows us
to first learn domain-generic features (for bird classification) which can then
be adapted to a specific set of bird classes, yielding improvements in accuracy
- …
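For the subset feature learning approach above, the grouping step can be sketched as k-means over per-class mean features from a generic network; a subset-specific CNN would then be fine-tuned on each group of classes. The feature dimensionality, number of subsets, and data below are placeholders, not the paper's setup.

```python
# Sketch of the subset step: k-means over per-class mean features groups
# visually similar classes; a subset-specific CNN would then be fine-tuned on
# each group. Feature dimensionality, subset count, and data are placeholders.
import numpy as np
from sklearn.cluster import KMeans

def cluster_classes(features, labels, num_subsets=5, seed=0):
    """Group classes into visually similar subsets via k-means on class means."""
    classes = np.unique(labels)
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    subset_ids = KMeans(n_clusters=num_subsets, random_state=seed, n_init=10).fit_predict(class_means)
    return dict(zip(classes, subset_ids))

# Hypothetical generic CNN features (e.g. 4096-d) for 600 images of 20 bird classes.
features = np.random.randn(600, 4096)
labels = np.random.randint(0, 20, size=600)
class_to_subset = cluster_classes(features, labels)

# Images whose class falls in subset 0 would train that subset's specific network.
subset0_mask = np.array([class_to_subset[c] == 0 for c in labels])
```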