26,826 research outputs found

    Monocular Expressive Body Regression through Body-Driven Attention

    To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image. Most existing methods focus only on parts of the body. A few recent approaches reconstruct full expressive 3D humans from images using 3D body models that include the face and hands. These methods are optimization-based and thus slow, prone to local optima, and require 2D keypoints as input. We address these limitations by introducing ExPose (EXpressive POse and Shape rEgression), which directly regresses the body, face, and hands, in SMPL-X format, from an RGB image. This is a hard problem due to the high dimensionality of the body and the lack of expressive training data. Additionally, hands and faces are much smaller than the body, occupying very few image pixels. This makes hand and face estimation hard when body images are downscaled for neural networks. We make three main contributions. First, we account for the lack of training data by curating a dataset of SMPL-X fits on in-the-wild images. Second, we observe that body estimation localizes the face and hands reasonably well. We introduce body-driven attention for face and hand regions in the original image to extract higher-resolution crops that are fed to dedicated refinement modules. Third, these modules exploit part-specific knowledge from existing face- and hand-only datasets. ExPose estimates expressive 3D humans more accurately than existing optimization methods at a small fraction of the computational cost. Our data, model and code are available for research at https://expose.is.tue.mpg.de .
    Comment: Accepted at ECCV'20. Project page: https://expose.is.tue.mpg.de
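    The body-driven attention step described above lends itself to a short sketch: the body network's 2D keypoint estimates localize the face and hands, and a padded box around each part is cropped from the original full-resolution image before being resized for a dedicated refinement module. Below is a minimal sketch of such a part crop; the names (`part_crop`, the `scale` padding factor) are illustrative assumptions, not the actual ExPose implementation:

```python
import cv2
import numpy as np

def part_crop(image, part_keypoints_2d, scale=2.0, out_size=224):
    """Crop a square, padded region around a part (hand or face) from the
    ORIGINAL image, so the part keeps far more pixels than it would have
    in the downscaled body-network input.

    part_keypoints_2d: (N, 2) array of the part's 2D keypoints in pixels.
    """
    x_min, y_min = part_keypoints_2d.min(axis=0)
    x_max, y_max = part_keypoints_2d.max(axis=0)
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half = scale * max(x_max - x_min, y_max - y_min) / 2.0

    # Clamp the padded box to the image bounds before slicing.
    h, w = image.shape[:2]
    x0, x1 = int(max(cx - half, 0)), int(min(cx + half, w))
    y0, y1 = int(max(cy - half, 0)), int(min(cy + half, h))

    # Resize the high-resolution crop to the refinement module's input size.
    return cv2.resize(image[y0:y1, x0:x1], (out_size, out_size))
```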

    Using Synthetic Data for 3D Hand Pose Estimation

    Doctoral dissertation -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Intelligent Convergence Systems Major), August 2021. Advisor: μ–‘ν•œμ—΄.

    3D hand pose estimation (HPE) from RGB images has been studied for a long time. Relevant methods have focused mainly on optimizing neural frameworks for the graphically connected finger joints. Training RGB-based HPE models has not been easy because of the scarcity of RGB hand pose datasets: unlike human body poses, the finger joints that form hand postures are delicately and intricately structured. This structure makes it difficult to annotate each joint accurately with unique 3D world coordinates, which is why many conventional methods rely on synthetic data samples to cover large variations of hand postures. A synthetic dataset provides very precise ground-truth annotations and allows control over the variety of data samples, so a learning model can be trained over a large pose space. Most studies, however, have performed frame-by-frame estimation on independent static images. Synthetic visual data can provide practically infinite diversity and rich labels while avoiding the ethical issues of privacy and bias. However, for many tasks, current models trained on synthetic data generalize poorly to real data. 3D hand pose estimation is a particularly interesting example of this synthetic-to-real problem: learning-based approaches perform reasonably well given real training data, yet labeled 3D poses are extremely difficult to obtain in the wild, limiting scalability. In this dissertation, we attempt not only to consider the appearance of a hand but also to incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation, which requires a large-scale dataset of sequential RGB hand images. We propose a novel method that generates a synthetic dataset mimicking natural human hand movements by re-engineering the annotations of an existing static hand pose dataset into pose-flows. With the generated dataset, we train a newly proposed recurrent framework that exploits visuo-temporal features from sequential images of synthetic hands in motion and emphasizes temporal smoothness of estimations with a temporal consistency constraint. Our novel training strategy of detaching the recurrent layer of the framework during domain finetuning from synthetic to real preserves the visuo-temporal features learned from sequential synthetic hand images. Sequentially estimated hand poses consequently produce natural and smooth hand movements, leading to more robust estimation. We show that utilizing temporal information for 3D hand pose estimation significantly enhances general pose estimation, outperforming state-of-the-art methods on hand pose estimation benchmarks. Since a fixed dataset provides a finite distribution of data samples, the generalization of a learned pose estimation network is limited in terms of pose, RGB, and viewpoint spaces. We further propose to augment the data automatically, such that pose sampling is performed in favor of the pose estimator's generalization performance. This auto-augmentation of poses is performed within a learned feature space to avoid the computational burden of generating a synthetic sample at every update iteration. The proposed approach can be viewed as generating and utilizing synthetic samples for network training in the feature space. It improves training efficiency by requiring fewer real data samples, while the efficient augmentation enhances both generalization across multiple dataset domains and estimation performance.

    Research on recognizing and reconstructing the shape and pose of a human hand from 2D images aims to detect the 3D positions of the individual finger joints. A hand pose consists of the finger joints, the anatomical elements that make up the human hand from the wrist joint to the MCP, PIP, and DIP joints. Hand pose information can be exploited in many fields, and it serves as an excellent input feature in hand gesture recognition. Applying hand pose estimation to real systems requires high accuracy, real-time operation, and models light enough to run on various devices, and training such neural network models requires large amounts of data. However, the devices that measure hand poses are fairly unstable, and images of hands wearing such devices differ greatly from bare skin, making them unsuitable for training. This dissertation therefore re-engineers and augments synthetic data for training in order to achieve better learning performance. Although synthetic hand images may resemble real skin color, they differ greatly in fine texture, so models trained on synthetic data perform markedly worse on real hand data. To reduce the gap between the two domains, we first teach the network the structure of the human hand by re-engineering hand motions and learning their movement structure; everything except the temporal components is then finetuned on real hand image data, which proves highly effective, and we present a methodology for mimicking real human hand motion. Second, we align data from the two domains in the network feature space. Moreover, instead of augmenting synthetic poses from a fixed set of data, we formulate pose sampling as a probabilistic model so that poses the network has rarely seen are generated. In summary, this dissertation uses synthetic data more effectively, avoiding the labor of collecting hard-to-annotate real data, and proposes methods that exploit local and temporal features to improve pose estimation, together with an automatic data augmentation method in which the network finds and learns from the data it needs. Combining the proposed methods further improves hand pose estimation performance.

    Contents: 1. Introduction · 2. Related Works · 3. Preliminaries: 3D Hand Mesh Model · 4. SeqHAND: RGB-sequence-based 3D Hand Pose and Shape Estimation · 5. Hand Pose Auto-Augment · 6. Conclusion · Abstract (Korean) · Acknowledgements
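    Two of the ideas above lend themselves to a short sketch: a temporal consistency constraint that penalizes frame-to-frame jitter in the estimated joint sequence, and the finetuning strategy of detaching (freezing) the recurrent layer when moving from synthetic to real data. The following PyTorch sketch is a minimal stand-in with a toy encoder and hypothetical names (`SeqHandNet`, `temporal_consistency_loss`); it is not the dissertation's actual architecture:

```python
import torch
import torch.nn as nn

class SeqHandNet(nn.Module):
    """Toy stand-in for a recurrent hand pose framework: a small visual
    encoder followed by an LSTM over the frame sequence."""
    def __init__(self, feat_dim=512, num_joints=21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.rnn = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_joints * 3)

    def forward(self, frames):              # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        feats, _ = self.rnn(feats)          # visuo-temporal features
        return self.head(feats)             # (B, T, num_joints * 3)

def temporal_consistency_loss(pred_seq):
    """Penalize frame-to-frame jitter in the estimated joint sequence."""
    return (pred_seq[:, 1:] - pred_seq[:, :-1]).pow(2).mean()

model = SeqHandNet()

# Domain finetuning (synthetic -> real): detach/freeze the recurrent layer
# so the visuo-temporal features learned from synthetic motion sequences
# are preserved while the visual encoder adapts to real images.
for p in model.rnn.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```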

    단일 μ΄λ―Έμ§€λ‘œλΆ€ν„° μ—¬λŸ¬ μ‚¬λžŒμ˜ ν‘œν˜„μ  μ „μ‹  3D μžμ„Έ 및 ν˜•νƒœ μΆ”μ •

    Doctoral dissertation -- Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: 이경무.

    Humans are the most central and interesting subjects in our lives: many human-centric techniques and studies, such as motion capture and human-computer interaction, have been proposed by both industry and academia. Recovering the accurate 3D geometry of a human (i.e., 3D human pose and shape) is a key component of these human-centric techniques and studies. With the rapid spread of cameras, a single RGB image has become a popular input, and many single-RGB-based 3D human pose and shape estimation methods have been proposed. The 3D pose and shape of the whole body, which includes the hands and face, provides expressive and rich information, including human intention and feeling. Unfortunately, recovering the whole-body 3D pose and shape is greatly challenging; thus, only a few works, called expressive methods, have attempted it. Instead of directly solving expressive 3D pose and shape estimation, the literature has developed methods that recover the 3D pose and shape of each part (i.e., body, hands, and face) separately, called part-specific methods. There are several further simplifications. For example, many works estimate only the 3D pose without the shape, because additional 3D shape estimation makes the problem much harder. In addition, most works assume the single-person case and do not consider the multi-person case. Current literature can therefore be categorized in several ways: 1) part-specific methods vs. expressive methods, 2) 3D human pose estimation methods vs. 3D human pose and shape estimation methods, and 3) methods for a single person vs. methods for multiple persons. The difficulty increases, and the outputs become richer, in moving from part-specific to expressive methods, from 3D pose estimation to 3D pose and shape estimation, and from the single-person to the multi-person case. This dissertation introduces three approaches towards expressive 3D multi-person pose and shape estimation from a single image, so that the output can finally provide the richest information. The first approach is for 3D multi-person body pose estimation, the second for 3D multi-person body pose and shape estimation, and the final one for expressive 3D multi-person pose and shape estimation. Each approach tackles critical limitations of previous state-of-the-art methods, bringing the literature closer to the real-world environment. First, a 3D multi-person body pose estimation framework is introduced. In contrast to the single-person case, the multi-person case additionally requires the camera-relative 3D position of each person. Estimating the camera-relative 3D position from a single image involves high depth ambiguity. The proposed framework utilizes a deep image feature with the camera pinhole model to recover the camera-relative 3D position. The framework can be combined with any single-person 3D pose and shape estimation method; therefore, the following two approaches focus on the single-person case and can easily be extended to the multi-person case using the framework of the first approach. Second, a 3D multi-person body pose and shape estimation method is introduced. It extends the first approach to additionally predict an accurate 3D shape, and it significantly outperforms previous state-of-the-art methods in accuracy by proposing a new target representation, the lixel-based 1D heatmap. Finally, an expressive 3D multi-person pose and shape estimation method is introduced.
    It integrates the part-specific 3D pose and shape of the above approaches; thus, it can provide an expressive 3D human pose and shape. In addition, it boosts the accuracy of the estimated 3D pose and shape by proposing a 3D positional pose-guided 3D rotational pose prediction system. The proposed approaches successfully overcome the limitations of the previous state-of-the-art methods. Extensive experimental results demonstrate the superiority of the proposed approaches both qualitatively and quantitatively.

    Humans are the most central and interesting subjects in our daily lives; accordingly, many human-centric techniques and studies, such as motion capture and human-computer interaction, have been proposed by industry and academia. Recovering the accurate 3D geometry of a human (i.e., 3D human pose and shape) is one of the most important parts of such human-centric work. With the rapid spread of cameras, the single image has become a widely used input, and many single-image-based 3D human pose and shape estimation algorithms have been proposed. The whole-body 3D pose and shape, including the hands and face, provides expressive and rich information, including human intention and feeling. However, because recovering the whole-body 3D pose and shape is very difficult, only a handful of methods, called expressive methods, have been proposed to solve it. Instead of recovering the expressive 3D pose and shape at once, methods that recover the 3D pose and shape of the body, hands, and face separately have been proposed; these are called part-specific methods. Beyond this simplification of the problem there are several others. For example, many methods estimate only the 3D pose without the 3D shape, because the additional 3D shape estimation makes the problem harder. Also, most methods consider only the single-person case, not the multi-person case. Current methods can therefore be categorized by several criteria: 1) part-specific vs. expressive methods, 2) 3D pose estimation vs. 3D pose and shape estimation, and 3) methods for a single person vs. methods for multiple persons. Moving from part-specific to expressive, from 3D pose estimation to 3D pose and shape estimation, and from a single person to multiple persons makes estimation harder but yields richer outputs. This dissertation introduces three approaches toward expressive 3D multi-person pose and shape estimation from a single image, so the final proposed method can provide the richest information. The first approach is 3D pose estimation for multiple persons, the second is 3D pose and shape estimation for multiple persons, and the last is expressive 3D pose and shape estimation for multiple persons. Each approach resolves important limitations of existing methods so that the proposed methods can be used in the real world. The first approach is a 3D pose estimation framework for multiple persons. Unlike the single-person case, the multi-person case requires a camera-relative 3D position for each person. Estimating the camera-relative 3D position from a single image involves very high depth ambiguity. The proposed framework recovers the camera-relative 3D position using a deep image feature and the camera pinhole model. Since this framework can be combined with any single-person 3D pose and shape estimation method, the next two approaches focus only on single-person 3D pose and shape estimation. The single-person methods proposed in those two approaches can easily be extended to the multi-person case using the multi-person framework introduced in the first approach. The second approach is a 3D pose and shape estimation method for multiple persons. It extends the first approach to additionally estimate the 3D shape while maintaining accuracy. For high accuracy, it proposes the lixel-based 1D heatmap, which yields substantially higher performance than previously published methods. The last approach is an expressive 3D pose and shape estimation method for multiple persons. It integrates the per-part 3D pose and shape of the body, hands, and face into one, obtaining an expressive 3D pose and shape. Furthermore, it proposes a 3D positional-pose-guided 3D rotational pose estimation technique, achieving much higher performance than previously published methods. The proposed approaches successfully overcome the limitations of previously published methods. Extensive experimental results demonstrate the effectiveness of the proposed methods both qualitatively and quantitatively.

    Contents:
    1 Introduction
      1.1 Background and Research Issues
      1.2 Outline of the Dissertation
    2 3D Multi-Person Pose Estimation
      2.1 Introduction
      2.2 Related works
      2.3 Overview of the proposed model
      2.4 DetectNet
      2.5 PoseNet
        2.5.1 Model design
        2.5.2 Loss function
      2.6 RootNet
        2.6.1 Model design
        2.6.2 Camera normalization
        2.6.3 Network architecture
        2.6.4 Loss function
      2.7 Implementation details
      2.8 Experiment
        2.8.1 Dataset and evaluation metric
        2.8.2 Experimental protocol
        2.8.3 Ablation study
        2.8.4 Comparison with state-of-the-art methods
        2.8.5 Running time of the proposed framework
        2.8.6 Qualitative results
      2.9 Conclusion
    3 3D Multi-Person Pose and Shape Estimation
      3.1 Introduction
      3.2 Related works
      3.3 I2L-MeshNet
        3.3.1 PoseNet
        3.3.2 MeshNet
        3.3.3 Final 3D human pose and mesh
        3.3.4 Loss functions
      3.4 Implementation details
      3.5 Experiment
        3.5.1 Datasets and evaluation metrics
        3.5.2 Ablation study
        3.5.3 Comparison with state-of-the-art methods
      3.6 Conclusion
    4 Expressive 3D Multi-Person Pose and Shape Estimation
      4.1 Introduction
      4.2 Related works
      4.3 Pose2Pose
        4.3.1 PositionNet
        4.3.2 RotationNet
      4.4 Expressive 3D human pose and mesh estimation
        4.4.1 Body part
        4.4.2 Hand part
        4.4.3 Face part
        4.4.4 Training the networks
        4.4.5 Integration of all parts in the testing stage
      4.5 Implementation details
      4.6 Experiment
        4.6.1 Training sets and evaluation metrics
        4.6.2 Ablation study
        4.6.3 Comparison with state-of-the-art methods
        4.6.4 Running time
      4.7 Conclusion
    5 Conclusion and Future Work
      5.1 Summary and Contributions of the Dissertation
      5.2 Future Directions
        5.2.1 Global Context-Aware 3D Multi-Person Pose Estimation
        5.2.2 Unified Framework for Expressive 3D Human Pose and Shape Estimation
        5.2.3 Enhancing Appearance Diversity of Images Captured from Multi-View Studio
        5.2.4 Extension to the video for temporally consistent estimation
        5.2.5 3D clothed human shape estimation in the wild
        5.2.6 Robust human action recognition from a video
    Bibliography
    Abstract (Korean)