3D PersonVLAD: Learning Deep Global Representations for Video-based Person Re-identification
In this paper, we introduce a global video representation for video-based
person re-identification (re-ID) that aggregates local 3D features across the
entire video extent. Most existing methods rely on 2D convolutional
networks (ConvNets) to extract frame-wise deep features, which are pooled
temporally to generate the video-level representations. However, 2D ConvNets
lose temporal input information immediately after the convolution, and a
separate temporal pooling step is limited in capturing human motion in shorter
sequences. To this end, we present a \textit{global} video representation (3D
PersonVLAD), complementary to 3D ConvNets, as a novel layer to capture the
appearance and motion dynamics in full-length videos. However, encoding each
video frame in its entirety and computing an aggregate global representation
across all frames is tremendously challenging due to occlusions and
misalignments. To resolve this, our proposed network is further augmented with
a 3D part alignment module that learns local features through soft attention.
These attended features are statistically aggregated to yield
identity-discriminative representations. Our global 3D features are
demonstrated to achieve state-of-the-art results on three benchmark datasets:
MARS \cite{MARS}, iLIDS-VID \cite{VideoRanking}, and PRID 2011.

Comment: Accepted to appear at IEEE Transactions on Neural Networks and
Learning Systems
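The VLAD-style aggregation the abstract describes (soft-assigning local features to a set of anchors and pooling the residuals over the whole video) can be sketched in NumPy. This is a minimal illustration, not the paper's layer: the anchor centers, the assignment sharpness `alpha`, and the random features stand in for parameters that are learned end-to-end in the real model.

```python
import numpy as np

def vlad_aggregate(features, centers, alpha=1.0):
    """NetVLAD-style aggregation: soft-assign each local feature to K
    anchor centers and accumulate the weighted residuals.
    features: (N, D) local descriptors pooled from all frames
    centers:  (K, D) anchor points (learned in the real model)
    Returns an L2-normalized (K*D,) global descriptor."""
    # squared distances between every feature and every center -> (N, K)
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    # numerically stable softmax over negative scaled distances
    logits = -alpha * d2
    logits -= logits.max(axis=1, keepdims=True)
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)
    # residuals between features and centers, weighted by assignment
    resid = features[:, None, :] - centers[None, :, :]   # (N, K, D)
    v = (a[:, :, None] * resid).sum(axis=0)              # (K, D)
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 16))    # stand-in for 3D ConvNet features of a clip
anchors = rng.normal(size=(8, 16))   # K = 8 illustrative anchors
desc = vlad_aggregate(feats, anchors)
print(desc.shape)  # (128,)
```

The key property the sketch preserves is that the output size depends only on K and D, not on the number of frames, which is what makes the descriptor a fixed-length global representation for videos of any length.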
Multi-shot Pedestrian Re-identification via Sequential Decision Making
The multi-shot pedestrian re-identification problem is at the core of
surveillance video analysis: it matches two tracks of pedestrians from
different cameras. In contrast to existing works that aggregate single-frame
features with a time-series model such as a recurrent neural network, in this
paper we propose an interpretable reinforcement-learning-based approach to this
problem. In particular, we train an agent to verify a pair of images at each
time step. The agent can choose to output the result (same or different) or
to request another pair of images to verify (unsure). In this way, our model
implicitly learns the difficulty of image pairs and postpones the decision when
it has not accumulated enough evidence. Moreover, by adjusting the
reward for the unsure action, we can easily trade off between speed and accuracy.
On three open benchmarks, our method is competitive with state-of-the-art
methods while using only 3% to 6% of the images. These promising results
demonstrate that our method is favorable in both efficiency and performance.
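The same/different/unsure decision loop the abstract describes can be sketched as a simple sequential evidence accumulator. This is only an illustration of the control flow: the paper's agent learns its stopping policy with reinforcement learning, whereas the fixed `threshold` and the hand-supplied similarity scores below are stand-ins.

```python
def sequential_verify(pair_scores, threshold=2.0, max_pairs=10):
    """Toy sequential verifier: accumulate evidence from per-pair
    similarity scores and stop as soon as the evidence is decisive.
    Scores > 0 favor 'same', scores < 0 favor 'different'; staying
    below the threshold corresponds to the 'unsure' action, i.e.
    requesting another pair. Returns (decision, pairs_used)."""
    evidence = 0.0
    for t, s in enumerate(pair_scores[:max_pairs], start=1):
        evidence += s
        if evidence >= threshold:
            return "same", t
        if evidence <= -threshold:
            return "different", t
    # budget exhausted: fall back to the sign of the evidence
    return ("same" if evidence >= 0 else "different"), min(len(pair_scores), max_pairs)

# an easy pair pushes the evidence past the threshold immediately
print(sequential_verify([2.5, 0.1]))        # ('same', 1)
# a harder track stays 'unsure' and requests more pairs before committing
print(sequential_verify([0.9, -0.3, 1.5]))  # ('same', 3)
```

Easy tracks terminate after one comparison while ambiguous ones consume more pairs, which is the mechanism behind the reported 3% to 6% image usage; raising the threshold plays the role of increasing the reward for the unsure action, trading speed for accuracy.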
- …