Recurrent Attention Models for Depth-Based Person Identification
We present an attention-based model that reasons on human body shape and
motion dynamics to identify individuals in the absence of RGB information,
hence in the dark. Our approach leverages unique 4D spatio-temporal signatures
to address the identification problem across days. Formulated as a
reinforcement learning task, our model is based on a combination of
convolutional and recurrent neural networks with the goal of identifying small,
discriminative regions indicative of human identity. We demonstrate that our
model produces state-of-the-art results on several published datasets given
only depth images. We further study the robustness of our model towards
viewpoint, appearance, and volumetric changes. Finally, we share insights
gleaned from interpretable 2D, 3D, and 4D visualizations of our model's
spatio-temporal attention.
Comment: Computer Vision and Pattern Recognition (CVPR) 2016
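The glimpse-based recurrent loop described in the abstract can be sketched roughly as follows. This is a minimal numpy illustration, not the authors' code: the glimpse size, the tanh state update, and the rule for proposing the next location are all simplifying assumptions made here for clarity.

```python
import numpy as np

def glimpse(depth_map, center, size=8):
    """Crop a small square 'glimpse' around `center`, zero-padding
    when the crop runs past the image border."""
    r, c = int(center[0]), int(center[1])
    half = size // 2
    r0, c0 = max(r - half, 0), max(c - half, 0)
    patch = depth_map[r0:r0 + size, c0:c0 + size]
    out = np.zeros((size, size))
    out[:patch.shape[0], :patch.shape[1]] = patch
    return out

def recurrent_attention(depth_map, W_h, W_g, W_loc, steps=4):
    """Unrolled recurrent attention over a single depth image:
    at each step, extract a glimpse, fold it into the hidden state,
    and propose the next glimpse location from that state.
    (In the paper the location policy is trained with reinforcement
    learning; here it is just a fixed linear readout.)"""
    h = np.zeros(W_h.shape[0])
    loc = np.array(depth_map.shape) // 2  # start at the image centre
    for _ in range(steps):
        g = glimpse(depth_map, loc).ravel()
        h = np.tanh(W_h @ h + W_g @ g)
        # next location: linear readout, wrapped/clipped into the image
        loc = np.clip((W_loc @ h).astype(int) % depth_map.shape[0],
                      0, depth_map.shape[0] - 1)
    return h  # final state would feed an identity classifier
```

The final hidden state plays the role of the identity descriptor; in the actual model this would be followed by a classification head and trained end to end.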
A Discriminatively Learned CNN Embedding for Person Re-identification
We revisit two popular convolutional neural networks (CNN) in person
re-identification (re-ID), i.e., verification and classification models. The two
models have their respective advantages and limitations due to different loss
functions. In this paper, we shed light on how to combine the two models to
learn more discriminative pedestrian descriptors. Specifically, we propose a
new siamese network that simultaneously computes identification loss and
verification loss. Given a pair of training images, the network predicts the
identities of the two images and whether they belong to the same identity. Our
network learns a discriminative embedding and a similarity measurement at the
same time, thus making full use of the annotations. Albeit simple, the
learned embedding improves the state-of-the-art performance on two public
person re-ID benchmarks. Further, we show that our architecture can also be
applied to image retrieval.
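The combined objective can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the weight names (`W_id`, `W_ver`) and the 2-way softmax over the squared embedding difference are simplifications introduced here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(f1, f2, W_id, W_ver, y1, y2, same):
    """Combined identification + verification loss for a pair of
    embeddings f1, f2 (illustrative, not the paper's exact form).
    - Identification: softmax cross-entropy over identity classes,
      computed independently for each image.
    - Verification: 2-way cross-entropy (same / different identity)
      on the element-wise squared difference of the embeddings."""
    # identification loss for each image of the pair
    p1, p2 = softmax(W_id @ f1), softmax(W_id @ f2)
    id_loss = -np.log(p1[y1]) - np.log(p2[y2])
    # verification loss on the squared embedding difference
    q = softmax(W_ver @ (f1 - f2) ** 2)
    ver_loss = -np.log(q[1 if same else 0])
    return id_loss + ver_loss
```

Minimizing both terms jointly is what lets one network learn the embedding (via the identification branch) and the similarity measurement (via the verification branch) at the same time.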
Group Membership Prediction
The group membership prediction (GMP) problem involves predicting whether or
not a collection of instances share a certain semantic property. For instance,
in kinship verification given a collection of images, the goal is to predict
whether or not they share a {\it familial} relationship. In this context we
propose a novel probability model and introduce latent {\em view-specific} and
{\em view-shared} random variables to jointly account for the view-specific
appearance and cross-view similarities among data instances. Our model posits
that data from each view is independent conditioned on the shared variables.
This postulate leads to a parametric probability model that decomposes group
membership likelihood into a tensor product of data-independent parameters and
data-dependent factors. We propose learning the data-independent parameters in
a discriminative way with bilinear classifiers, and test our prediction
algorithm on challenging visual recognition tasks such as multi-camera person
re-identification and kinship verification. On most benchmark datasets, our
method significantly outperforms the current state-of-the-art.
Comment: accepted for ICCV 2015
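One way to realize the bilinear scoring idea is sketched below. This is an illustrative simplification, not the paper's model: it uses a single data-independent matrix `W` to score all instance pairs, omitting the latent view-specific/view-shared variables and the tensor decomposition.

```python
import numpy as np

def group_score(X, W):
    """Toy group-membership score for a set of instance features
    X (n x d): sum of bilinear compatibilities x_i^T W x_j over all
    unordered pairs. W is the data-independent parameter; the
    features x_i are the data-dependent factors. A higher score
    suggests the instances share the semantic property."""
    n = X.shape[0]
    score = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            score += X[i] @ W @ X[j]
    return score
```

In this toy form, a group-membership decision would threshold the score; the paper instead learns the data-independent parameters discriminatively and accounts for view-specific appearance through latent variables.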