Improved Res2Net model for Person re-identification

Abstract

Person re-identification has become a very popular research topic in the computer vision community owing to its numerous applications and growing importance in visual surveillance. Person re-identification remains challenging due to occlusion, illumination changes, and significant intra-class variations across different cameras. In this paper, we propose a multi-task network based on an improved Res2Net model that simultaneously computes the identification loss and verification loss of two pedestrian images. Given a pair of pedestrian images, the system predicts the identities of the two input images and whether they belong to the same identity. To obtain richer, multi-scale feature information of pedestrians, we adopt the recent Res2Net model for feature extraction on each input image. Experiments on several large-scale person re-identification benchmark datasets demonstrate the effectiveness of our approach. For example, rank-1 accuracies are 83.18% (+1.38) and 93.14% (+0.84) on the DukeMTMC and Market-1501 datasets, respectively. The proposed method shows encouraging improvements compared with state-of-the-art methods.
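To make the multi-task formulation concrete, the following is a minimal PyTorch sketch of a two-branch network of the kind described above: a shared backbone (e.g. a Res2Net feature extractor) processes each image of the pair, an identification head classifies each image's identity, and a verification head predicts whether the two images depict the same person. The head structure, the element-wise squared-difference pair descriptor, and the unweighted sum of losses are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class PairReIDNet(nn.Module):
    """Sketch of a multi-task re-ID network: a shared backbone extracts a
    feature vector from each image in a pair; an identification head
    classifies each identity and a verification head decides whether the
    two images belong to the same identity."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_identities: int):
        super().__init__()
        self.backbone = backbone                              # shared feature extractor (e.g. Res2Net), weights tied across the pair
        self.id_head = nn.Linear(feat_dim, num_identities)    # identification (multi-class) head
        self.verif_head = nn.Linear(feat_dim, 2)               # verification (same / different) head

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor):
        feat_a = self.backbone(img_a)          # (B, feat_dim) features for image A
        feat_b = self.backbone(img_b)          # (B, feat_dim) features for image B
        logits_a = self.id_head(feat_a)        # identity prediction for image A
        logits_b = self.id_head(feat_b)        # identity prediction for image B
        # element-wise squared difference as the pair descriptor (an assumed choice)
        verif_logits = self.verif_head((feat_a - feat_b) ** 2)
        return logits_a, logits_b, verif_logits


def multitask_loss(logits_a, logits_b, verif_logits, ids_a, ids_b):
    """Identification loss for each image plus verification loss on the pair
    (label 1 when the two identities match); equal weighting is assumed."""
    ce = nn.CrossEntropyLoss()
    same = (ids_a == ids_b).long()
    return ce(logits_a, ids_a) + ce(logits_b, ids_b) + ce(verif_logits, same)
```

In practice the backbone would be a Res2Net (or any CNN) truncated before its classifier and followed by global pooling so it returns a `(B, feat_dim)` embedding; the sketch leaves that construction to the caller.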
