Hands, among the most dynamic parts of the human body, often suffer from blur due to
their fast movements. However, previous 3D hand mesh recovery methods have
mainly focused on sharp hand images rather than considering blur, owing to the
absence of datasets providing blurry hand images. We first present a novel
dataset, BlurHand, which contains blurry hand images with 3D ground truths.
BlurHand is constructed by synthesizing motion blur from sequential sharp hand
images, imitating realistic and natural motion blur. In addition to the new
dataset, we propose BlurHandNet, a baseline network for accurate 3D hand mesh
recovery from a blurry hand image. Whereas previous works output a single static
hand mesh, our BlurHandNet unfolds a blurry input image into a 3D hand mesh
sequence, exploiting the temporal information contained in the blur. Our
experiments demonstrate the usefulness of BlurHand for 3D hand mesh recovery
from blurry images: the proposed BlurHandNet produces substantially more robust
results on blurry images while generalizing well to in-the-wild images. The
training code and BlurHand dataset are available at
https://github.com/JaehaKim97/BlurHand_RELEASE.

Comment: Accepted at CVPR 202
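The abstract's blur-synthesis idea can be illustrated with a minimal sketch. A common recipe for synthesizing motion blur from a sequence of sharp frames is to average consecutive frames; the function name and the frame-averaging choice here are assumptions for illustration, not the authors' exact pipeline (which may, e.g., interpolate intermediate frames before averaging).

```python
import numpy as np

def synthesize_motion_blur(sharp_frames):
    """Approximate motion blur by averaging a sequence of sharp frames.

    Assumption: simple frame averaging; the actual BlurHand construction
    may use a more elaborate pipeline (e.g., frame interpolation first).
    """
    # Stack frames as float to avoid uint8 overflow during summation.
    stack = np.stack([f.astype(np.float64) for f in sharp_frames])
    # The temporal mean approximates the exposure integral over the window.
    return stack.mean(axis=0).round().astype(np.uint8)

# Usage: blend 5 sequential frames (random stand-ins for real hand images).
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
blurry = synthesize_motion_blur(frames)
```

The averaged image retains the shape and dtype of the inputs, so it can be dropped into the same training pipeline as the sharp frames.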