Adversarial erasing attention for person re-identification in camera networks under complex environments


Person re-identification (Re-ID) in camera networks under complex environments has achieved promising performance using deep feature representations. However, most approaches neglect to learn features from the non-salient parts of a pedestrian, which results in an incomplete pedestrian representation. In this paper, we propose a novel person Re-ID method named Adversarial Erasing Attention (AEA) to mine discriminative and complete features in an adversarial way. Specifically, the proposed AEA consists of a basic network and a complementary network. On the one hand, original pedestrian images are used to train the basic network, which extracts global and local deep features. On the other hand, to learn features complementary to those of the basic network, we propose an adversarial erasing operation that locates non-salient areas with the help of an attention map and generates erased pedestrian images. We then use these images to train the complementary network, adopting a dynamic strategy to match the changing status of AEA during learning. Hence, the diversity of training samples is enriched, and the complementary network can discover new clues when learning deep features. Finally, we combine the features learned by the basic and complementary networks to represent the pedestrian image. Experiments on three databases (Market1501, CUHK03 and DukeMTMC-reID) demonstrate that the proposed AEA achieves strong performance.
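The erasing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attention map here is a simple channel-wise mean of feature activations, the threshold and nearest-neighbour upsampling are assumptions, and the sketch erases the most-attended regions (a common choice in erasing-based augmentation) so that a complementary network must learn from what remains; AEA's actual attention computation, region selection, and dynamic erasing schedule follow the paper.

```python
import numpy as np

def attention_map(features):
    """Spatial attention map from a (C, H, W) feature tensor:
    channel-wise mean of absolute activations, normalized to [0, 1]."""
    amap = np.abs(features).mean(axis=0)
    return (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)

def adversarial_erase(image, features, threshold=0.6):
    """Generate an erased pedestrian image: zero out pixels whose
    attention exceeds `threshold` (illustrative selection rule).

    image:    (3, H, W) float array
    features: (C, h, w) float array, with H, W integer multiples of h, w
    """
    amap = attention_map(features)                 # (h, w) in [0, 1]
    # Upsample the attention map to image resolution by
    # nearest-neighbour repetition (an assumption for this sketch).
    h_rep = image.shape[1] // amap.shape[0]
    w_rep = image.shape[2] // amap.shape[1]
    mask = np.kron(amap >= threshold,
                   np.ones((h_rep, w_rep), dtype=bool))
    erased = image.copy()
    erased[:, mask] = 0.0                          # erase selected pixels
    return erased, mask

# Toy usage: a 3x8x8 "image" with a 64-channel 4x4 feature map.
rng = np.random.default_rng(0)
img = rng.random((3, 8, 8)).astype(np.float32)
feat = rng.random((64, 4, 4)).astype(np.float32)
erased_img, mask = adversarial_erase(img, feat)
```

In AEA, `erased_img` would feed the complementary network while the original `img` trains the basic network; both feature sets are concatenated at test time to represent the pedestrian.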