Thanks to the development of 2D keypoint detectors, monocular 3D human pose
estimation (HPE) via 2D-to-3D uplifting approaches has achieved remarkable
improvements. Still, monocular 3D HPE remains a challenging problem due to
inherent depth ambiguities and occlusions. Many previous works mitigate these
difficulties by exploiting temporal information.
However, there are many real-world applications where frame sequences are not
accessible. This paper focuses on reconstructing a 3D pose from a single 2D
keypoint detection. Rather than exploiting temporal information, we alleviate
the depth ambiguity by generating multiple 3D pose candidates that map to the
same 2D keypoints. We build a novel diffusion-based framework to effectively
sample diverse 3D poses conditioned on the output of an off-the-shelf 2D
detector. By replacing the conventional denoising U-Net with a graph
convolutional network, our approach captures the correlations between human
joints and achieves further performance improvements. We evaluate our method on the widely adopted
Human3.6M and HumanEva-I datasets. Comprehensive experiments demonstrate the
efficacy of the proposed method and confirm that our model outperforms
state-of-the-art multi-hypothesis 3D HPE methods.
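To make the sampling idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of DDPM-style ancestral sampling with a GCN denoiser conditioned on detected 2D keypoints. All names and hyperparameters (GCNDenoiser, sample_poses, the 17-joint skeleton, 50 diffusion steps, the placeholder identity adjacency) are illustrative assumptions rather than details from the paper.

```python
# Minimal, illustrative sketch: conditional DDPM sampling with a GCN denoiser.
# All names and settings are hypothetical, assumed for illustration only.
import torch
import torch.nn as nn

J, T = 17, 50  # assumed: joints (e.g., a Human3.6M-style skeleton) and diffusion steps

class GCNDenoiser(nn.Module):
    """Predicts the noise on 3D joint coordinates, mixing features over the
    skeleton graph instead of using a denoising U-Net."""
    def __init__(self, adj, hidden=64):
        super().__init__()
        self.register_buffer("adj", adj / adj.sum(-1, keepdim=True))  # row-normalized adjacency
        self.inp = nn.Linear(3 + 2 + 1, hidden)  # noisy 3D pose + 2D condition + timestep
        self.gc1 = nn.Linear(hidden, hidden)
        self.gc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 3)

    def forward(self, x_t, kp2d, t):
        # x_t: (B, J, 3) noisy 3D pose; kp2d: (B, J, 2) detected 2D keypoints
        tt = t.view(-1, 1, 1).float().expand(-1, J, 1) / T
        h = torch.relu(self.inp(torch.cat([x_t, kp2d, tt], -1)))
        h = torch.relu(self.adj @ self.gc1(h))  # graph convolution: aggregate over neighbors
        h = torch.relu(self.adj @ self.gc2(h))
        return self.out(h)  # predicted noise, (B, J, 3)

@torch.no_grad()
def sample_poses(model, kp2d, n_hyp=5):
    """Draw n_hyp 3D pose hypotheses for one set of 2D keypoints via ancestral DDPM sampling."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, 0)
    x = torch.randn(n_hyp, J, 3)           # start each hypothesis from Gaussian noise
    cond = kp2d.expand(n_hyp, J, 2)        # all hypotheses share the same 2D condition
    for t in reversed(range(T)):
        eps = model(x, cond, torch.full((n_hyp,), t))
        x = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x += betas[t].sqrt() * torch.randn_like(x)  # inject noise except at the last step
    return x  # (n_hyp, J, 3): diverse candidates consistent with the same 2D input

adj = torch.eye(J)  # placeholder graph; a real one would encode bone connectivity
hyps = sample_poses(GCNDenoiser(adj), torch.randn(1, J, 2))
print(hyps.shape)  # torch.Size([5, 17, 3])
```

Because every hypothesis starts from a different noise sample but is denoised under the same 2D condition, the candidates spread over the depth directions the 2D evidence cannot resolve, which is the multi-hypothesis behavior the abstract describes.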