Generating a 3D hand model from frontal color and range scans
Realistic 3D modeling of human hand anatomy has a number of important applications, including real-time tracking, pose estimation, and human-computer interaction. However, the use of RGB-D sensors to accurately capture the full 3D shape of a hand is limited by self-occlusions, the relatively small size of the hand, and the need to capture multiple images. In this paper, we propose a method for generating a detailed, realistic hand model from a single frontal range scan and registered color image. In essence, our method converts this 2.5D data into a fully 3D model. The proposed approach extracts joint locations from the color image using a fingertip and inter-finger region detector with a Naive Bayes probabilistic model. Direct correspondences between these joint locations in the range scan and a synthetic hand model are used to perform rigid registration, followed by a thin-plate-spline deformation that non-rigidly registers the synthetic model. The reconstructed model preserves the geometric properties of the range scan while also including the back side of the hand. Experimental results demonstrate the promise of the method to produce detailed and realistic 3D hand geometry.
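The registration pipeline the abstract describes (rigid alignment from joint correspondences, then a thin-plate-spline deformation) can be sketched as follows. This is a minimal illustration, not the authors' code: the joint and vertex arrays are hypothetical stand-ins, the rigid step uses the standard Kabsch solution, and the non-rigid step uses SciPy's thin-plate-spline RBF interpolator.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch): rotation R and translation t
    mapping (N, 3) points src onto their correspondences dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

# Hypothetical inputs: joints detected in the scan and the matching joints
# of the synthetic hand model (random stand-ins for illustration).
rng = np.random.default_rng(0)
scan_joints = rng.normal(size=(21, 3))
model_joints = rng.normal(size=(21, 3))
model_verts = rng.normal(size=(1000, 3))

# 1) Rigid registration from the joint correspondences.
R, t = rigid_align(model_joints, scan_joints)
model_joints = model_joints @ R.T + t
model_verts = model_verts @ R.T + t

# 2) Thin-plate-spline deformation that carries the remaining non-rigid
#    discrepancy between model joints and scan joints onto all vertices.
tps = RBFInterpolator(model_joints, scan_joints, kernel='thin_plate_spline')
model_verts = tps(model_verts)
```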
Expressive 3D Pose and Shape Estimation of Multiple Persons from a Single Image
Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: Kyoung Mu Lee.

Humans are the most central and interesting subjects in our lives: many human-centric techniques and studies, such as motion capture and human-computer interaction, have been proposed in both industry and academia. Recovering the accurate 3D geometry of humans (i.e., 3D human pose and shape) is a key component of these human-centric techniques and studies. With the rapid spread of cameras, a single RGB image has become a popular input, and many single-RGB-based 3D human pose and shape estimation methods have been proposed.
The 3D pose and shape of the whole body, including the hands and face, provides expressive and rich information, such as human intention and feeling. Unfortunately, recovering the whole-body 3D pose and shape is highly challenging; thus, it has been attempted by only a few works, called expressive methods. Instead of directly solving expressive 3D pose and shape estimation, the literature has largely developed methods that recover the 3D pose and shape of each part (i.e., body, hands, and face) separately, called part-specific methods. There are several further simplifications. For example, many works estimate only the 3D pose without the shape, because additional 3D shape estimation makes the problem much harder. In addition, most works assume a single-person case and do not consider the multi-person case. The current literature can therefore be categorized in several ways: 1) part-specific methods versus expressive methods, 2) 3D human pose estimation methods versus 3D human pose and shape estimation methods, and 3) single-person methods versus multi-person methods. The difficulty increases, while the outputs become richer, when moving from part-specific to expressive methods, from 3D pose estimation to 3D pose and shape estimation, and from the single-person case to the multi-person case.
This dissertation introduces three approaches towards expressive 3D multi-person pose and shape estimation from a single image, so that the output can finally provide the richest information. The first approach addresses 3D multi-person body pose estimation, the second 3D multi-person body pose and shape estimation, and the final one expressive 3D multi-person pose and shape estimation. Each approach tackles critical limitations of previous state-of-the-art methods, bringing the literature closer to real-world environments.
First, a 3D multi-person body pose estimation framework is introduced. In contrast to the single-person case, the multi-person case additionally requires the camera-relative 3D positions of the persons. Estimating a camera-relative 3D position from a single image involves severe depth ambiguity. The proposed framework combines a deep image feature with the camera pinhole model to recover the camera-relative 3D position. The framework can be combined with any single-person 3D pose and shape estimation method to obtain 3D multi-person pose and shape; therefore, the following two approaches focus on the single-person case and can easily be extended to the multi-person case using the framework of the first approach. Second, a 3D multi-person body pose and shape estimation method is introduced. It extends the first approach to additionally predict an accurate 3D shape, and, by proposing a new target representation, the lixel-based 1D heatmap, it significantly outperforms previous state-of-the-art methods. Finally, an expressive 3D multi-person pose and shape estimation method is introduced. It integrates the part-specific 3D pose and shape of the above approaches and can thus provide expressive 3D human pose and shape. In addition, it boosts the accuracy of the estimated 3D pose and shape by proposing a 3D positional pose-guided 3D rotational pose prediction system.
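The pinhole-model component of the first framework admits a compact illustration. Under pinhole geometry, an object of known real-world area A_real imaged at distance k with focal lengths (f_x, f_y) covers A_img = f_x * f_y * A_real / k^2 pixels, so k can be recovered from a detected bounding box. The sketch below is a toy under stated assumptions: the function name and the 2 m x 2 m person extent are illustrative, and the dissertation refines such a geometric estimate with a deep image feature rather than using it directly.

```python
import math

def root_depth_from_bbox(fx, fy, bbox_w, bbox_h,
                         real_w_mm=2000.0, real_h_mm=2000.0):
    """Pinhole geometry: an object of real-world area A_real imaged at
    distance k covers A_img = fx * fy * A_real / k^2 pixels, hence
    k = sqrt(fx * fy * A_real / A_img). The 2 m x 2 m real-world extent
    is an assumed constant for a typical person."""
    a_img = bbox_w * bbox_h
    a_real = real_w_mm * real_h_mm
    return math.sqrt(fx * fy * a_real / a_img)

# e.g. focal lengths of 1500 px and a 200 x 400 px person bounding box
k_mm = root_depth_from_bbox(1500.0, 1500.0, 200.0, 400.0)  # ~10.6 m
```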
The proposed approaches successfully overcome the limitations of previous state-of-the-art methods. Extensive experimental results demonstrate the superiority of the proposed approaches both qualitatively and quantitatively.
1 Introduction
1.1 Background and Research Issues
1.2 Outline of the Dissertation
2 3D Multi-Person Pose Estimation
2.1 Introduction
2.2 Related works
2.3 Overview of the proposed model
2.4 DetectNet
2.5 PoseNet
2.5.1 Model design
2.5.2 Loss function
2.6 RootNet
2.6.1 Model design
2.6.2 Camera normalization
2.6.3 Network architecture
2.6.4 Loss function
2.7 Implementation details
2.8 Experiment
2.8.1 Dataset and evaluation metric
2.8.2 Experimental protocol
2.8.3 Ablation study
2.8.4 Comparison with state-of-the-art methods
2.8.5 Running time of the proposed framework
2.8.6 Qualitative results
2.9 Conclusion
3 3D Multi-Person Pose and Shape Estimation
3.1 Introduction
3.2 Related works
3.3 I2L-MeshNet
3.3.1 PoseNet
3.3.2 MeshNet
3.3.3 Final 3D human pose and mesh
3.3.4 Loss functions
3.4 Implementation details
3.5 Experiment
3.5.1 Datasets and evaluation metrics
3.5.2 Ablation study
3.5.3 Comparison with state-of-the-art methods
3.6 Conclusion
4 Expressive 3D Multi-Person Pose and Shape Estimation
4.1 Introduction
4.2 Related works
4.3 Pose2Pose
4.3.1 PositionNet
4.3.2 RotationNet
4.4 Expressive 3D human pose and mesh estimation
4.4.1 Body part
4.4.2 Hand part
4.4.3 Face part
4.4.4 Training the networks
4.4.5 Integration of all parts in the testing stage
4.5 Implementation details
4.6 Experiment
4.6.1 Training sets and evaluation metrics
4.6.2 Ablation study
4.6.3 Comparison with state-of-the-art methods
4.6.4 Running time
4.7 Conclusion
5 Conclusion and Future Work
5.1 Summary and Contributions of the Dissertation
5.2 Future Directions
5.2.1 Global Context-Aware 3D Multi-Person Pose Estimation
5.2.2 Unified Framework for Expressive 3D Human Pose and Shape Estimation
5.2.3 Enhancing Appearance Diversity of Images Captured from Multi-View Studio
5.2.4 Extension to the video for temporally consistent estimation
5.2.5 3D clothed human shape estimation in the wild
5.2.6 Robust human action recognition from a video
Bibliography
Abstract (in Korean)
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image
To facilitate the analysis of human actions, interactions and emotions, we
compute a 3D model of human body pose, hand pose, and facial expression from a
single monocular image. To achieve this, we use thousands of 3D scans to train
a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with
fully articulated hands and an expressive face. Learning to regress the
parameters of SMPL-X directly from images is challenging without paired images
and 3D ground truth. Consequently, we follow the approach of SMPLify, which
estimates 2D features and then optimizes model parameters to fit the features.
We improve on SMPLify in several significant ways: (1) we detect 2D features
corresponding to the face, hands, and feet and fit the full SMPL-X model to
these; (2) we train a new neural network pose prior using a large MoCap
dataset; (3) we define a new interpenetration penalty that is both fast and
accurate; (4) we automatically detect gender and the appropriate body models
(male, female, or neutral); (5) our PyTorch implementation achieves a speedup
of more than 8x over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to
both controlled images and images in the wild. We evaluate 3D accuracy on a new
curated dataset comprising 100 images with pseudo ground-truth. This is a step
towards automatic expressive human capture from monocular RGB data. The models,
code, and data are available for research purposes at
https://smpl-x.is.tue.mpg.de. To appear in CVPR 2019.
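The optimize-to-fit idea behind SMPLify-style methods (estimate 2D features, then adjust model parameters until the projected model matches them) can be sketched as follows. This is a toy PyTorch illustration, not the SMPLify-X implementation: toy_model, the parameter dimensions, and the quadratic priors are stand-ins for the real SMPL-X forward pass and its learned pose prior and interpenetration penalty.

```python
import torch

def project(joints_3d, focal, center):
    """Pinhole projection of (N, 3) camera-space joints to (N, 2) pixels."""
    xy = joints_3d[:, :2] / joints_3d[:, 2:3].clamp(min=1e-6)
    return focal * xy + center

def fit(model, kp2d, conf, focal, center, steps=300):
    """SMPLify-style fitting: optimize pose/shape parameters so that the
    model's projected joints match detected 2D keypoints, under priors."""
    pose = torch.zeros(72, requires_grad=True)   # axis-angle pose (assumed size)
    shape = torch.zeros(10, requires_grad=True)  # shape coefficients (assumed size)
    opt = torch.optim.Adam([pose, shape], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        joints = model(pose, shape)                           # (N, 3)
        reproj = conf * (project(joints, focal, center) - kp2d) ** 2
        # Stand-in quadratic priors; SMPLify-X uses a learned pose prior
        # and an interpenetration penalty instead.
        loss = reproj.sum() + 1e-3 * pose.pow(2).sum() + 1e-2 * shape.pow(2).sum()
        loss.backward()
        opt.step()
    return pose.detach(), shape.detach()

# Toy "body model": a skeleton 2 m in front of the camera, perturbed
# linearly by the parameters (a stand-in for a real SMPL-X forward pass).
torch.manual_seed(0)
W_p, W_s = 0.01 * torch.randn(17 * 3, 72), 0.01 * torch.randn(17 * 3, 10)
base = torch.tensor([0.0, 0.0, 2.0]).repeat(17, 1)
toy_model = lambda p, s: base + (W_p @ p + W_s @ s).view(17, 3)

kp2d = torch.randn(17, 2) * 50          # fake detected keypoints (pixels)
conf = torch.ones(17, 1)                # per-keypoint detection confidences
pose, shape = fit(toy_model, kp2d, conf, focal=1500.0, center=torch.zeros(2))
```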