Weakly supervised text-based person re-identification (TPRe-ID) seeks to
retrieve images of a target person using textual descriptions without relying
on identity annotations, which makes the task both more challenging and more
practical. The primary challenge lies in intra-class differences, encompassing
intra-modal feature variations and cross-modal semantic gaps. Prior works have
focused on instance-level samples and ignored the prototypical features of each
person, which are intrinsic and invariant. To address this, we propose a Cross-Modal Prototypical
Contrastive Learning (CPCL) method. In practice, CPCL introduces the CLIP
model to weakly supervised TPRe-ID for the first time, mapping visual and
textual instances into a shared latent space. Subsequently, the proposed
Prototypical Multi-modal Memory (PMM) module captures associations between
heterogeneous modalities of image-text pairs belonging to the same person
through the Hybrid Cross-modal Matching (HCM) module in a many-to-many mapping
fashion. Moreover, the Outlier Pseudo Label Mining (OPLM) module further
distinguishes valuable outlier samples from each modality, enhancing the
creation of more reliable clusters by mining implicit relationships between
image-text pairs. Experimental results demonstrate that our proposed CPCL
attains state-of-the-art performance on all three public datasets, with
significant improvements of 11.58%, 8.77%, and 5.25% in Rank@1 accuracy on the
CUHK-PEDES, ICFG-PEDES, and RSTPReid datasets, respectively. The code is
available at https://github.com/codeGallery24/CPCL.
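
To make the prototypical contrastive objective concrete, below is a minimal sketch (in PyTorch) of one common formulation: L2-normalized instance features from a CLIP image or text encoder are contrasted against a momentum-updated memory of per-pseudo-identity prototypes. The function names, momentum update rule, and temperature value are illustrative assumptions rather than the paper's exact PMM/HCM implementation; in a cross-modal variant, image features would be scored against text prototypes and vice versa.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(features, pseudo_labels, prototypes, temperature=0.07):
    """Contrast instance features against per-cluster prototypes.

    features:      (B, D) image or text embeddings from a CLIP encoder
    pseudo_labels: (B,)   long tensor of pseudo-identity (cluster) indices
    prototypes:    (K, D) one prototype vector per pseudo identity (memory bank)
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / temperature  # (B, K) cosine similarities
    return F.cross_entropy(logits, pseudo_labels)

@torch.no_grad()
def update_prototypes(prototypes, features, pseudo_labels, momentum=0.9):
    """Momentum update of the prototype memory with the current batch."""
    features = F.normalize(features, dim=1)
    for feat, label in zip(features, pseudo_labels):
        prototypes[label] = momentum * prototypes[label] + (1.0 - momentum) * feat
        prototypes[label] = F.normalize(prototypes[label], dim=0)
    return prototypes
```

A full weakly supervised pipeline of this kind would typically alternate between clustering to refresh pseudo labels, computing this loss for both modalities against both prototype memories, and momentum-updating the memories after each batch.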