In this study, we focus on the problem of 3D human mesh recovery from a
single image under occluded conditions. Most state-of-the-art methods aim to
improve 2D alignment techniques, such as spatial averaging and 2D joint
sampling. However, they tend to neglect the equally crucial aspect of 3D
alignment, i.e., improving the 3D representations themselves. Furthermore,
recent methods struggle to separate the target human from occlusion or the
background in crowded scenes, as they optimize the 3D space of the target
human with 3D joint coordinates as only local supervision. To address these
issues, a desirable method would involve a
framework for fusing 2D and 3D features and a strategy for optimizing the 3D
space globally. Therefore, this paper presents the 3D JOint contrastive
learning with TRansformers (JOTR) framework for occluded 3D human mesh
recovery. Our method includes an encoder-decoder transformer architecture
that fuses 2D and 3D representations to achieve 2D- and 3D-aligned results in
a coarse-to-fine manner, and a novel 3D joint contrastive learning approach
that adds explicit global supervision to the 3D feature space. The contrastive
learning approach includes two contrastive losses: joint-to-joint contrast for
enhancing the similarity of semantically similar voxels (i.e., human joints),
and joint-to-non-joint contrast for ensuring discrimination from others (e.g.,
occlusions and background). Qualitative and quantitative analyses demonstrate
that our method outperforms state-of-the-art competitors on both
occlusion-specific and standard benchmarks, significantly improving the
reconstruction of occluded humans.Comment: Camera Ready Version for ICCV 202