Deep Poselets for Human Detection
We address the problem of detecting people in natural scenes using a
part-based approach built on poselets. We propose a bootstrapping method that allows us to
collect millions of weakly labeled examples for each poselet type. We use these
examples to train a Convolutional Neural Net to discriminate different poselet
types and separate them from the background class. We then use the trained CNN
as a way to represent poselet patches with a Pose Discriminative Feature (PDF)
vector -- a compact 256-dimensional feature vector that is effective at
discriminating pose from appearance. We train the poselet model on top of PDF
features and combine them with object-level CNNs for detection and bounding box
prediction. The resulting model leads to state-of-the-art performance for human
detection on the PASCAL datasets.
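As a rough sketch of the pipeline this abstract describes (not the authors' code; the layer sizes, patch size, and poselet count below are placeholder assumptions), a small PyTorch module can expose a 256-d embedding as the PDF vector, with a classification head over poselet types plus a background class used during training:

import torch
import torch.nn as nn

class PDFNet(nn.Module):
    """Toy stand-in for a poselet CNN: maps a 64x64 RGB poselet patch
    to a 256-d Pose Discriminative Feature (PDF) vector. Layer sizes
    are illustrative, not the paper's architecture."""
    def __init__(self, num_poselets=150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256),  # the 256-d PDF embedding
        )
        # Training head: poselet types + 1 background class.
        self.classifier = nn.Linear(256, num_poselets + 1)

    def forward(self, patch):
        pdf = self.features(patch)      # 256-d PDF vector
        logits = self.classifier(pdf)   # used only during training
        return pdf, logits

net = PDFNet()
pdf, logits = net(torch.randn(1, 3, 64, 64))
print(pdf.shape, logits.shape)  # torch.Size([1, 256]) torch.Size([1, 151])

Downstream, the poselet detector and bounding-box predictor would be trained on these PDF vectors rather than on raw pixels.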
PANDA: Pose Aligned Networks for Deep Attribute Modeling
We propose a method for inferring human attributes (such as gender, hair
style, clothes style, expression, action) from images of people under large
variations in viewpoint, pose, appearance, articulation and occlusion.
Convolutional Neural Nets (CNNs) have been shown to perform very well on
large-scale object recognition problems. In the context of attribute classification,
however, the signal is often subtle and it may cover only a small part of the
image, while the image is dominated by the effects of pose and viewpoint.
Discounting pose variation would require training on very large labeled
datasets, which are not presently available. Part-based models, such as poselets
and DPM, have been shown to perform well for this problem, but they are limited
by shallow low-level features. We propose a new method which combines
part-based models and deep learning by training pose-normalized CNNs. We show
substantial improvement over state-of-the-art methods on challenging attribute
classification tasks in unconstrained settings. Experiments confirm that our
method outperforms both the best part-based methods on this problem and
conventional CNNs trained on the full bounding box of the person.
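A minimal sketch of the pose-normalization idea, assuming a hypothetical part_cnn feature extractor and a fixed 256-d part feature (neither is the paper's actual interface): crop each detected poselet, extract a part feature, zero-fill parts that were not detected, and concatenate everything into one pose-aligned vector for a downstream per-attribute classifier:

import numpy as np

def pose_normalized_features(image, poselet_boxes, part_cnn):
    """Concatenate per-part CNN features; zero-fill undetected parts
    (hypothetical interface, not the paper's exact code)."""
    feats = []
    for box in poselet_boxes:
        if box is None:                # poselet not detected in this image
            feats.append(np.zeros(256))
        else:
            x0, y0, x1, y1 = box
            patch = image[y0:y1, x0:x1]
            feats.append(part_cnn(patch))  # pose-normalized part feature
    return np.concatenate(feats)

# Toy usage with a stand-in "CNN" that just summarizes the patch.
toy_cnn = lambda patch: np.full(256, patch.mean())
img = np.random.rand(200, 120, 3)
vec = pose_normalized_features(img, [(0, 0, 60, 60), None], toy_cnn)
print(vec.shape)  # (512,) -> input to a per-attribute linear classifier

Because each slot of the concatenated vector always corresponds to the same body part, the classifier sees features that are aligned across pose and viewpoint.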
Hierarchical Cascade of Classifiers for Efficient Poselet Evaluation
Poselets have been used in a variety of computer vision tasks, such as detection, segmentation, action classification and pose estimation, often achieving state-of-the-art performance. Poselet evaluation, however, is computationally intensive as it involves running thousands of scanning-window classifiers. We present an algorithm for training a hierarchical cascade of part-based detectors and apply it to speed up poselet evaluation. Our cascade hierarchy leverages common components shared across poselets. We generate a family of cascade hierarchies, including trees that grow logarithmically in the number of poselet classifiers. Our algorithm, under some reasonable assumptions, finds the optimal tree structure that maximizes speed for a given target detection rate. We test our system on the PASCAL dataset and show an order-of-magnitude speedup at less than a 1% loss in AP.
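The pruning logic behind such a cascade can be pictured with a toy tree (an illustrative sketch, not the paper's algorithm for constructing the optimal tree): internal nodes hold classifiers shared across poselets, and an entire subtree is skipped the moment a shared node scores below its rejection threshold:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CascadeNode:
    """One node in the cascade tree: a (possibly shared) classifier plus
    a rejection threshold. Leaves correspond to individual poselets."""
    score_fn: Callable[[Dict[str, float]], float]
    threshold: float
    poselet_id: int = -1                    # valid only at leaves
    children: List["CascadeNode"] = field(default_factory=list)

def evaluate_cascade(window, node, hits):
    """Depth-first evaluation: prune a whole subtree as soon as a
    shared classifier scores below its threshold."""
    if node.score_fn(window) < node.threshold:
        return                              # reject: skip all descendants
    if not node.children:
        hits.append(node.poselet_id)        # leaf poselet fires
    for child in node.children:
        evaluate_cascade(window, child, hits)

# Toy tree: one shared root filtering two poselet leaves.
leaf1 = CascadeNode(lambda w: w["head"], 0.5, poselet_id=1)
leaf2 = CascadeNode(lambda w: w["torso"], 0.5, poselet_id=2)
root = CascadeNode(lambda w: w["edges"], 0.2, children=[leaf1, leaf2])
hits = []
evaluate_cascade({"edges": 0.9, "head": 0.7, "torso": 0.1}, root, hits)
print(hits)  # [1] -- the torso poselet never runs its full classifier

The speedup comes from most scanning windows being rejected by a few cheap shared nodes near the root instead of by thousands of individual poselet classifiers.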
Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues
We explore the task of recognizing people's identities in photo albums in an
unconstrained setting. To facilitate this, we introduce the new People In Photo
Albums (PIPA) dataset, consisting of over 60,000 instances of 2,000 individuals
collected from public Flickr photo albums. With only about half of the person
images containing a frontal face, the recognition task is very challenging due
to the large variations in pose, clothing, camera viewpoint, image resolution
and illumination. We propose the Pose Invariant PErson Recognition (PIPER)
method, which accumulates cues from poselet-level person recognizers trained
with deep convolutional networks to discount pose variation, combined
with a face recognizer and a global recognizer. Experiments on three different
settings confirm that in our unconstrained setup PIPER significantly improves
on the performance of DeepFace, which is one of the best face recognizers as
measured on the LFW dataset.
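A simplified view of the cue accumulation (hypothetical function and cue names; the paper learns the combination rather than hand-weighting it): each available recognizer contributes a per-identity score vector, and the fusion sums whichever cues actually fired on this person instance:

import numpy as np

def fuse_identity_scores(cues, weights):
    """Accumulate per-identity scores from the cues that fired
    (a simplified sketch of PIPER-style fusion)."""
    total = np.zeros_like(next(iter(cues.values())))
    for name, score in cues.items():
        total += weights.get(name, 1.0) * score
    return total

num_ids = 5
cues = {
    "global": np.random.rand(num_ids),      # whole-person recognizer
    "face": np.random.rand(num_ids),        # frontal-face recognizer
    "poselet_12": np.random.rand(num_ids),  # one fired poselet recognizer
}
weights = {"face": 2.0}                     # e.g. trust the face cue more
print(int(np.argmax(fuse_identity_scores(cues, weights))))

The robustness to missing frontal faces comes from the fact that the sum still carries signal from the global and poselet-level cues when the face cue is absent.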
Improving Image Classification with Location Context
With the widespread availability of cellphones and cameras that have GPS
capabilities, it is common for images uploaded to the Internet today to
have GPS coordinates associated with them. In addition to research that tries
to predict GPS coordinates from visual features, this also opens the door to
problems that are conditioned on the availability of GPS coordinates. In this
work, we tackle the problem of performing image classification with location
context, in which we are given the GPS coordinates for images in both the train
and test phases. We explore different ways of encoding and extracting features
from the GPS coordinates, and show how to naturally incorporate these features
into a Convolutional Neural Network (CNN), the current state-of-the-art for
most image classification and recognition problems. We also show how it is
possible to simultaneously learn the optimal pooling radii for a subset of our
features within the CNN framework. To evaluate our model and to help promote
research in this area, we identify a set of location-sensitive concepts and
annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has
GPS coordinates with these concepts, which we make publicly available. By
leveraging location context, we are able to achieve almost a 7% gain in mean
average precision.
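One way to picture the fusion described here (a toy sketch with assumed dimensions and a stand-in GPS encoder, not the paper's architecture or its learned pooling radii): encode the coordinates into a small feature vector and concatenate it with the image feature before the final classifier:

import torch
import torch.nn as nn

class LocationAwareHead(nn.Module):
    """Toy fusion head: concatenate an image feature with an encoded
    GPS feature before the final classifier. The paper explores several
    GPS encodings and learns pooling radii; this shows only the fusion."""
    def __init__(self, img_dim=4096, gps_dim=32, num_classes=100):
        super().__init__()
        self.gps_encoder = nn.Sequential(  # stand-in GPS feature encoder
            nn.Linear(2, gps_dim), nn.ReLU())
        self.classifier = nn.Linear(img_dim + gps_dim, num_classes)

    def forward(self, img_feat, latlng):
        gps_feat = self.gps_encoder(latlng)       # encode (lat, lng)
        return self.classifier(torch.cat([img_feat, gps_feat], dim=1))

head = LocationAwareHead()
logits = head(torch.randn(4, 4096), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 100])

Because both branches feed one classifier, the network can learn when the location feature should override an ambiguous visual cue for location-sensitive concepts.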