STA: Spatial-Temporal Attention for Large-Scale Video-based Person Re-Identification
In this work, we propose a novel Spatial-Temporal Attention (STA) approach to
tackle the large-scale person re-identification task in videos. Unlike most
existing methods, which simply compute representations of video clips by
frame-level aggregation (e.g., average pooling), the proposed STA adopts a
more effective way of producing robust clip-level feature representations.
Concretely, STA fully exploits the discriminative parts of a target person in
both the spatial and temporal dimensions: a 2-D attention score matrix,
computed with inter-frame regularization, measures the importance of each
spatial part across different frames. A more robust clip-level feature
representation can then be generated by a weighted sum guided by the mined
2-D attention score matrix. In this way, challenging cases for video-based
person re-identification, such as pose variation and partial occlusion, can
be handled well by STA. We conduct extensive experiments on two large-scale
benchmarks, i.e., MARS and DukeMTMC-VideoReID. In particular, mAP reaches
87.7% on MARS, significantly outperforming the state of the art by a large
margin of more than 11.6%.
Comment: Accepted as a conference paper at AAAI 2019
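The aggregation step this abstract describes can be pictured with a short sketch. The following is a minimal, hypothetical illustration (not the authors' code): per-frame features are split into spatial parts, a 2-D parts-by-frames attention score matrix is derived from the parts' feature norms and normalized across frames, and the clip-level feature is the attention-weighted sum. The tensor shapes and the norm-based scoring are assumptions for illustration.

```python
# Minimal sketch of spatial-temporal attention aggregation (illustrative only).
import torch
import torch.nn.functional as F

def sta_aggregate(frame_feats: torch.Tensor) -> torch.Tensor:
    """frame_feats: (T, K, D) = frames x spatial parts x feature dim."""
    # Score each spatial part of each frame by its feature magnitude.
    scores = frame_feats.norm(p=2, dim=-1)                      # (T, K)
    # Normalize each part's scores across frames, yielding the
    # 2-D attention score matrix described in the abstract.
    attn = F.softmax(scores, dim=0)                             # (T, K)
    # Weighted sum over frames for every part, then flatten the parts.
    clip_feat = (attn.unsqueeze(-1) * frame_feats).sum(dim=0)   # (K, D)
    return clip_feat.reshape(-1)                                # (K * D,)

clip = sta_aggregate(torch.randn(8, 4, 256))  # 8 frames, 4 spatial parts
print(clip.shape)                             # torch.Size([1024])
```

Because the attention weights are computed per part rather than per frame, an occluded part in one frame can be down-weighted without discarding that frame's other, still-visible parts.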
Clip-level feature aggregation: a key factor for video-based person re-identification
In the task of video-based person re-identification, features of persons in
the query and gallery sets are compared to find the best match. Most existing
methods aggregate frame-level features with a temporal method to generate
clip-level features, rather than sequence-level representations. In this
paper, we propose a new method that aggregates clip-level features to obtain
sequence-level representations of persons; it consists of two parts, i.e.,
the Average Aggregation Strategy (AAS) and Raw Feature Utilization (RFU).
AAS makes use of all frames in a video sequence to generate a better
representation of a person, while RFU investigates how the batch
normalization operation influences feature representations in person
re-identification. The experimental results demonstrate that our method
boosts the accuracy of existing models. In particular, we achieve 87.7%
rank-1 and 82.3% mAP on the MARS dataset without any post-processing, which
outperforms the existing state of the art.
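As a rough illustration of the Average Aggregation Strategy, the sketch below is an assumption-laden reading of the abstract, not the paper's code: it covers every frame of a sequence with consecutive clips, encodes each clip with an arbitrary clip-level model, and averages the clip-level features into one sequence-level representation. The `encode_clip` stand-in and the tail-padding scheme are hypothetical.

```python
# Minimal sketch of clip-level -> sequence-level aggregation (illustrative only).
import torch

def aas(sequence: torch.Tensor, clip_len: int, encode_clip) -> torch.Tensor:
    """sequence: (T, C, H, W) frames; returns one sequence-level feature."""
    # Cover all T frames with consecutive clips (pad the tail by repeating
    # the last frame so every frame contributes, per the "all frames" idea).
    pad = (-sequence.shape[0]) % clip_len
    if pad:
        tail = sequence[-1:].expand(pad, *sequence.shape[1:])
        sequence = torch.cat([sequence, tail])
    clips = sequence.split(clip_len)                            # (clip_len, C, H, W) each
    clip_feats = torch.stack([encode_clip(c) for c in clips])   # (num_clips, D)
    return clip_feats.mean(dim=0)                               # sequence-level feature

# Toy usage: a dummy encoder that average-pools a clip to a flat vector.
dummy = lambda clip: clip.mean(dim=0).flatten()
feat = aas(torch.randn(10, 3, 8, 4), clip_len=4, encode_clip=dummy)
print(feat.shape)  # torch.Size([96])
```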
Real-time Person Re-identification at the Edge: A Mixed Precision Approach
A critical part of multi-person multi-camera tracking is the person
re-identification (re-ID) algorithm, which recognizes and retains the
identities of all detected unknown people throughout the video stream. Many
re-ID algorithms today achieve state-of-the-art results, but little work has
been done to explore deploying such algorithms in computation- and
power-constrained real-time scenarios. In this paper, we study the effect of
using a lightweight model, MobileNet-V2, for re-ID and investigate the impact
of single (FP32) precision versus half (FP16) precision for training on the
server and inference on the edge nodes. We further compare the results with
the baseline model, which uses ResNet-50, on state-of-the-art benchmarks
including CUHK03, Market-1501,
and Duke-MTMC. The MobileNet-V2 mixed precision training method improves both
inference throughput on the edge node, reaching 27.77 fps, and training time
on the server, and it decreases power consumption on the edge node, while
degrading accuracy by only 5.6% with respect to single-precision ResNet-50 on
average across the three datasets. The code and pre-trained networks are publicly
available at https://github.com/TeCSAR-UNCC/person-reid.
Comment: This is a pre-print of an article published in the International
Conference on Image Analysis and Recognition (ICIAR 2019), Lecture Notes in
Computer Science. The final authenticated version is available online at
https://doi.org/10.1007/978-3-030-27272-2_
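For readers unfamiliar with the recipe this abstract refers to, a generic mixed precision training step in PyTorch looks roughly like the sketch below. It uses standard AMP utilities rather than the linked repository's actual code; the loss, batch shapes, and the 751-class head (Market-1501's identity count) are illustrative assumptions, and a CUDA device is required.

```python
# Generic mixed precision (FP16/FP32) training step, then FP16 edge inference.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(num_classes=751).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
scaler = torch.cuda.amp.GradScaler()                  # loss scaling for FP16

images = torch.randn(32, 3, 256, 128, device="cuda")  # dummy re-ID batch
labels = torch.randint(0, 751, (32,), device="cuda")

with torch.cuda.amp.autocast():                       # FP16 where safe
    loss = torch.nn.functional.cross_entropy(model(images), labels)
scaler.scale(loss).backward()                         # scaled FP16 gradients
scaler.step(opt)
scaler.update()

# Edge-side inference with half-precision weights and inputs.
model_fp16 = model.half().eval()
with torch.no_grad():
    out = model_fp16(images.half())
```

The gradient scaler is the key piece: it multiplies the loss before backprop so small FP16 gradients do not underflow to zero, then unscales before the optimizer step.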