Modeling feature distances by orientation driven classifiers for person re-identification

Abstract

To tackle the person re-identification challenge, existing methods propose to directly match image features or to learn the feature transformation that occurs between two cameras. Other methods learn optimal similarity measures. However, the performance of all these methods depends strongly on the person's pose and orientation. We focus on this aspect and introduce three main contributions to the field: (i) a method to extract multiple frames of the same person with different orientations, in order to capture the complete person appearance; (ii) learning the pairwise feature dissimilarities space (PFDS) formed by the subspaces of similar and different image-pair orientations; and (iii) within each subspace, a classifier trained to capture the multi-modal inter-camera transformation of pairwise image dissimilarities and to discriminate between positive and negative pairs. Experiments on two publicly available benchmark datasets show the superior performance of the proposed approach with respect to state-of-the-art methods. © 2016 Elsevier Inc. All rights reserved.

García, Jorge; Martinel, Niki; Gardel, Alfredo; Bravo, Ignacio; Foresti, Gian Luca; Micheloni, Christian
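The core idea of contributions (ii) and (iii) can be illustrated with a minimal sketch: compute a dissimilarity vector for each image pair, group pairs by their orientation combination, and train one binary classifier per orientation subspace. This is not the authors' code; the elementwise absolute difference as the dissimilarity measure, the toy features, and the logistic-regression classifier are all illustrative assumptions.

```python
# Hedged sketch of a pairwise-feature-dissimilarity space with one
# classifier per orientation subspace (assumed details, not the paper's
# exact method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def dissimilarity(f_a, f_b):
    # One common choice of pairwise dissimilarity: elementwise
    # absolute difference of the two feature vectors.
    return np.abs(f_a - f_b)

def make_pairs(n, positive):
    # Toy data: positive pairs are near-duplicates of the same feature
    # vector; negative pairs are independent random vectors.
    X, y = [], []
    for _ in range(n):
        f_a = rng.normal(size=16)
        f_b = (f_a + rng.normal(scale=0.1, size=16)) if positive \
              else rng.normal(size=16)
        X.append(dissimilarity(f_a, f_b))
        y.append(1 if positive else 0)
    return X, y

# One binary classifier per orientation-pair subspace, e.g. both images
# frontal vs. one frontal and one from the back.
subspaces = {}
for orient_pair in [("front", "front"), ("front", "back")]:
    Xp, yp = make_pairs(40, True)
    Xn, yn = make_pairs(40, False)
    clf = LogisticRegression().fit(np.array(Xp + Xn), np.array(yp + yn))
    subspaces[orient_pair] = clf

# At test time, a new pair is scored by the classifier of its
# orientation subspace.
f_a, f_b = rng.normal(size=16), rng.normal(size=16)
score = subspaces[("front", "front")].predict_proba(
    dissimilarity(f_a, f_b)[None, :])[0, 1]
```

In this setup, positive pairs produce small dissimilarity vectors and negative pairs large ones, so each subspace classifier learns a decision boundary in the dissimilarity space rather than in the raw feature space.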
