Cloth-Changing Person Re-Identification (CC-ReID) is a common and realistic
problem since fashion constantly changes over time and people's aesthetic
preferences are not set in stone. While most existing cloth-changing ReID
methods focus on learning cloth-agnostic identity representations from coarse
semantic cues (e.g., silhouettes and part segmentation maps), they neglect the
continuous shape distributions at the pixel level. In this paper, we propose
Continuous Surface Correspondence Learning (CSCL), a new shape embedding
paradigm for cloth-changing ReID. CSCL establishes continuous correspondences
between a 2D image plane and a canonical 3D body surface via pixel-to-vertex
classification, which naturally aligns a person image to the surface of a 3D
human model and simultaneously obtains pixel-wise surface embeddings.
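For illustration, a minimal PyTorch-style sketch of the pixel-to-vertex classification idea is given below: a per-pixel classifier over the vertices of a canonical body mesh, whose soft assignments are combined with learned per-vertex embeddings to yield pixel-wise surface embeddings. The vertex count (6890, as in SMPL), channel sizes, and all names are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class PixelToVertexHead(nn.Module):
        def __init__(self, in_channels=256, num_vertices=6890, emb_dim=64):
            super().__init__()
            # A 1x1 conv acts as an independent per-pixel linear
            # classifier over the canonical mesh vertices.
            self.classifier = nn.Conv2d(in_channels, num_vertices, kernel_size=1)
            # One learnable embedding per canonical mesh vertex.
            self.vertex_emb = nn.Embedding(num_vertices, emb_dim)

        def forward(self, feats):
            # feats: (B, C, H, W) per-pixel backbone features.
            logits = self.classifier(feats)               # (B, V, H, W)
            probs = logits.softmax(dim=1)                 # soft 2D-3D assignment
            # Pixel-wise surface embedding = expected vertex embedding
            # under the assignment: (B, V, H, W) x (V, D) -> (B, D, H, W).
            surface = torch.einsum("bvhw,vd->bdhw", probs, self.vertex_emb.weight)
            return probs, surface
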
We further extract fine-grained shape features from the learned surface
embeddings and then integrate them with global RGB features via a carefully
designed cross-modality fusion module. This shape embedding paradigm, grounded
in 2D-3D correspondences, markedly enhances the model's global understanding of
human body shape.
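A similarly minimal sketch of one possible cross-modality fusion follows, assuming a single cross-attention block in which the global RGB feature queries fine-grained shape tokens; all dimensions and names are assumptions, and this is one plausible instantiation rather than the paper's actual module.

    import torch
    import torch.nn as nn

    class CrossModalityFusion(nn.Module):
        def __init__(self, dim=512, num_heads=4):
            super().__init__()
            # The global RGB feature attends to the shape tokens.
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, rgb_feat, shape_tokens):
            # rgb_feat: (B, D) global appearance feature (query).
            # shape_tokens: (B, N, D) fine-grained shape features (keys/values).
            q = rgb_feat.unsqueeze(1)                     # (B, 1, D)
            attended, _ = self.attn(q, shape_tokens, shape_tokens)
            # Residual add keeps the appearance cue intact while the
            # attended shape context refines it.
            return self.norm(rgb_feat + attended.squeeze(1))
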
To promote the study of ReID under clothing change, we construct 3D Dense
Persons (DP3D), the first large-scale cloth-changing ReID dataset that provides
densely annotated 2D-3D correspondences and a precise 3D mesh for each person
image, while covering diverse cloth-changing cases across all four seasons.
Experiments on both cloth-changing and cloth-consistent ReID benchmarks
validate the effectiveness of our method.

Comment: Accepted by ACM MM 202