Part-Based Deep Hashing for Large-Scale Person Re-Identification
Large-scale is a trend in person re-identification (re-id), so it is important that real-time search can be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper attempts to integrate deep learning and hashing into one framework to evaluate the efficiency and accuracy of large-scale person re-id. We integrate spatial information for a discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) with a different identity. A triplet loss function is employed with the constraint that the Hamming distance between pedestrian images (or parts) with the same identity is smaller than that between images with different identities. In the experiments, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.
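The triplet constraint described above can be sketched as a hinge loss over Hamming distances. This is a minimal illustrative sketch, not the paper's implementation: the 48-bit code length, the margin value, and the ±1 code convention are assumptions for the example.

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two binary (+1/-1) codes of equal length
    return int(np.sum(a != b))

def triplet_hinge_loss(anchor, positive, negative, margin=2):
    # Penalize triplets where the same-identity Hamming distance is not
    # at least `margin` smaller than the different-identity distance
    return max(0, hamming(anchor, positive) - hamming(anchor, negative) + margin)

# Toy 48-bit part codes for one triplet (length and margin are illustrative)
rng = np.random.default_rng(0)
anchor = rng.choice([-1, 1], size=48)
positive = anchor.copy()
positive[:4] *= -1                      # same identity: only 4 bits differ
negative = rng.choice([-1, 1], size=48) # different identity: random code

loss = triplet_hinge_loss(anchor, positive, negative)
```

In practice the Hamming distance is non-differentiable, so training typically relaxes the binary codes to real values and binarizes only at search time; the sketch shows the target constraint, not the training relaxation.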
Faster Person Re-Identification
Fast person re-identification (ReID) aims to search person images quickly and
accurately. The main idea of recent fast ReID methods is the hashing algorithm,
which learns compact binary codes and performs fast Hamming distance and
counting sort. However, a very long code is needed for high accuracy (e.g.
2048), which compromises search speed. In this work, we introduce a new
solution for fast ReID by formulating a novel Coarse-to-Fine (CtF) hashing code
search strategy, which complementarily uses short and long codes, achieving
both faster speed and better accuracy. It uses shorter codes to coarsely rank
broad matching similarities and longer codes to refine only a few top
candidates for more accurate instance ReID. Specifically, we design an
All-in-One (AiO) framework together with a Distance Threshold Optimization
(DTO) algorithm. In AiO, we simultaneously learn and enhance multiple codes of
different lengths in a single model. It learns multiple codes in a pyramid
structure, and encourages shorter codes to mimic longer codes by
self-distillation. DTO solves a complex threshold search problem by a simple
optimization process, and the balance between accuracy and speed is easily
controlled by a single parameter. It formulates the optimization target as a
score that can be optimised by Gaussian cumulative distribution
functions. Experimental results on two datasets show that our proposed method
(CtF) is not only 8% more accurate but also 5x faster than contemporary hashing
ReID methods. Compared with non-hashing ReID methods, CtF is faster
with comparable accuracy. Code is available at
https://github.com/wangguanan/light-reid.

Comment: accepted by ECCV 2020, https://github.com/wangguanan/light-reid
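The coarse-to-fine search strategy can be sketched as a two-stage ranking: short codes rank the whole gallery cheaply, then long codes re-rank only the top candidates. This is a minimal sketch under assumed conventions (0/1 codes, precomputed gallery code matrices, illustrative function names and `k`), not the paper's released implementation.

```python
import numpy as np

def hamming_dists(query, gallery):
    # Row-wise Hamming distance between one query code and a gallery matrix
    return np.count_nonzero(gallery != query, axis=1)

def coarse_to_fine_search(q_short, q_long, g_short, g_long, k=100):
    # Coarse stage: rank the entire gallery with cheap short codes
    coarse = np.argsort(hamming_dists(q_short, g_short), kind="stable")
    top = coarse[:k]
    # Fine stage: re-rank only the k surviving candidates with long codes
    fine = np.argsort(hamming_dists(q_long, g_long[top]), kind="stable")
    return top[fine]   # gallery indices, best match first
```

The speed gain comes from the fine stage touching only `k` rows of the long-code gallery instead of all of them, which is why a single `k`-like parameter can trade accuracy against speed.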