Recovering a 3D human mesh from a single RGB image is challenging, as
depth ambiguity and self-occlusion introduce a high degree of
uncertainty. Meanwhile, diffusion models have recently seen much success in
generating high-quality outputs by progressively denoising noisy inputs.
Inspired by this capability, we explore a diffusion-based approach to human
mesh recovery and propose a Human Mesh Diffusion (HMDiff) framework that
frames mesh recovery as a reverse diffusion process. We also propose a
Distribution Alignment Technique (DAT) that infuses prior distribution
information into the mesh diffusion process, providing useful guidance
that facilitates the mesh recovery task. Our method achieves
state-of-the-art performance on three widely used datasets. Project page:
https://gongjia0208.github.io/HMDiff/

Accepted to ICCV 2023.
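As a rough illustration of the reverse-diffusion framing described above, the sketch below runs a generic DDPM-style sampling loop over SMPL vertex coordinates (6890 vertices) conditioned on an image feature vector. The `MeshDenoiser` network, its dimensions, and the linear noise schedule here are illustrative assumptions; they do not reproduce HMDiff's actual architecture or its DAT component.

```python
import torch
import torch.nn as nn


class MeshDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to mesh vertices,
    conditioned on an image feature vector. A hypothetical stand-in
    for HMDiff's actual network, not the paper's architecture."""

    def __init__(self, n_verts=6890, feat_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_verts * 3 + feat_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, n_verts * 3),
        )

    def forward(self, verts, img_feat, t):
        # verts: (B, n_verts, 3), img_feat: (B, feat_dim), t: (B,)
        b = verts.shape[0]
        x = torch.cat(
            [verts.reshape(b, -1), img_feat, t[:, None].float()], dim=-1
        )
        return self.net(x).reshape_as(verts)


@torch.no_grad()
def reverse_diffusion(denoiser, img_feat, n_verts=6890, steps=50):
    """Generic DDPM-style reverse process: start from Gaussian noise
    over vertex coordinates and iteratively denoise toward a mesh."""
    b = img_feat.shape[0]
    betas = torch.linspace(1e-4, 0.02, steps)  # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    verts = torch.randn(b, n_verts, 3)  # pure-noise initial "mesh"
    for t in reversed(range(steps)):
        eps = denoiser(verts, img_feat, torch.full((b,), t))
        # Posterior mean of the reverse step (standard DDPM update).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        verts = (verts - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            verts = verts + torch.sqrt(betas[t]) * torch.randn_like(verts)
    return verts  # denoised vertex coordinates


# Example usage with a random (untrained) denoiser and image feature.
denoiser = MeshDenoiser()
mesh = reverse_diffusion(denoiser, img_feat=torch.randn(1, 512))
```

In this framing, the DAT described in the abstract would presumably adjust the per-step update with prior-distribution guidance; that component is omitted here since the abstract does not specify its form.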