Visible-infrared person re-identification (VI-ReID) aims to retrieve images
of the same pedestrian from different modalities, where the main challenge lies in
the significant modality discrepancy. To alleviate the modality gap, recent
methods generate intermediate images via GANs, grayscaling, or mixup strategies.
However, these methods could introduce extra noise, and the semantic
correspondence between the two modalities is not well learned. In this paper,
we propose a Patch-Mixed Cross-Modality framework (PMCM), where two images of
the same person from two modalities are split into patches and stitched into a
new one for model learning. In this way, the model learns to recognize a person
through patches of different styles, and the modality semantic correspondence
is directly embodied. With this flexible image generation strategy, the ratio of
patches from the two modalities in a mixed image can be freely adjusted, which
could further alleviate the modality imbalance problem. In addition, the
relationship between identity centers across modalities is explored to further
reduce the modality variance, and a global-to-part constraint is introduced
to regularize the representation learning of part features. On two VI-ReID
datasets, we report new state-of-the-art performance with the proposed method.
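
To make the patch-mixing operation concrete, the sketch below stitches a visible and an infrared crop of the same identity into one image over a fixed patch grid. It is a minimal illustration under stated assumptions: the function name patch_mix, the 32x32 grid cells, and the ir_ratio parameter controlling the fraction of infrared patches are hypothetical choices, not the paper's exact implementation.

    import torch

    def patch_mix(img_vis, img_ir, patch_size=32, ir_ratio=0.5):
        """Stitch patches of a visible and an infrared image of the same
        identity into one mixed image (illustrative sketch only).

        img_vis, img_ir: (C, H, W) tensors of identical shape, with H and W
        divisible by patch_size. ir_ratio sets the fraction of patches drawn
        from the infrared image, mirroring the adjustable modality ratio
        described in the abstract.
        """
        c, h, w = img_vis.shape
        assert img_ir.shape == img_vis.shape
        assert h % patch_size == 0 and w % patch_size == 0
        gh, gw = h // patch_size, w // patch_size
        n_cells = gh * gw
        n_ir = int(round(ir_ratio * n_cells))
        # Randomly pick which grid cells come from the infrared image.
        cell_mask = torch.zeros(n_cells, dtype=torch.bool)
        cell_mask[torch.randperm(n_cells)[:n_ir]] = True
        cell_mask = cell_mask.view(gh, gw)
        # Expand the cell-level mask to pixel resolution and stitch.
        pix_mask = cell_mask.repeat_interleave(patch_size, dim=0)
        pix_mask = pix_mask.repeat_interleave(patch_size, dim=1)
        return torch.where(pix_mask, img_ir, img_vis)

    # Example: a 256x128 person crop split into an 8x4 grid of 32x32 patches,
    # with a quarter of the patches taken from the infrared image.
    vis, ir = torch.rand(3, 256, 128), torch.rand(3, 256, 128)
    mixed = patch_mix(vis, ir, patch_size=32, ir_ratio=0.25)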