756 research outputs found
GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB
We address the highly challenging problem of real-time 3D hand tracking based
on a monocular RGB-only sequence. Our tracking method combines a convolutional
neural network with a kinematic 3D hand model, such that it generalizes well to
unseen data, is robust to occlusions and varying camera viewpoints, and leads
to anatomically plausible as well as temporally smooth hand motions. For
training our CNN we propose a novel approach for the synthetic generation of
training data that is based on a geometrically consistent image-to-image
translation network. To be more specific, we use a neural network that
translates synthetic images to "real" images, such that the so-generated images
follow the same statistical distribution as real-world hand images. For
training this translation network we combine an adversarial loss and a
cycle-consistency loss with a geometric consistency loss in order to preserve
geometric properties (such as hand pose) during translation. We demonstrate
that our hand tracking system outperforms the current state-of-the-art on
challenging RGB-only footage.
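As a rough illustration of the combined objective the abstract describes (an adversarial loss, a cycle-consistency loss, and a geometric-consistency loss for the synthetic-to-"real" translation network), a minimal PyTorch sketch follows. The module names (G_s2r, G_r2s, D_real, geo_net) and weighting factors are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def translation_loss(G_s2r, G_r2s, D_real, geo_net, synth_img, synth_mask,
                     lambda_cyc=10.0, lambda_geo=1.0):
    """Sketch of a combined objective for translating synthetic hand images
    into "real"-looking ones: adversarial + cycle-consistency + geometric-
    consistency terms. All modules and weights are illustrative placeholders."""
    fake_real = G_s2r(synth_img)          # synthetic -> "real" domain
    recon_synth = G_r2s(fake_real)        # translate back to synthetic

    # Adversarial term: translated images should fool the real-domain discriminator.
    logits = D_real(fake_real)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Cycle-consistency term: a round trip should reproduce the input image.
    cyc = F.l1_loss(recon_synth, synth_img)

    # Geometric-consistency term: hand geometry (e.g. a silhouette predicted
    # by an auxiliary network) must be preserved across the translation.
    geo = F.binary_cross_entropy_with_logits(geo_net(fake_real), synth_mask)

    return adv + lambda_cyc * cyc + lambda_geo * geo
```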
Use of Synthetic Data for 3D Hand Pose Estimation
Thesis (Ph.D.) -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems major), August 2021.
3D hand pose estimation (HPE) based on RGB images has been studied for a long time. Relevant methods have focused mainly on optimizing neural frameworks for the graphically connected finger joints. RGB-based HPE models have not been easy to train because of the scarcity of RGB hand pose datasets; unlike human body pose datasets, the finger joints that span a hand posture are structured delicately and intricately. Such structure makes it difficult to annotate each joint accurately with unique 3D world coordinates, which is why many conventional methods rely on synthetic data samples to cover large variations of hand postures.
Synthetic datasets provide very precise ground-truth annotations and further allow control over the variety of data samples, so a learning model can be trained over a large pose space. Most studies, however, have performed frame-by-frame estimation based on independent static images. Synthetic visual data can provide practically infinite diversity and rich labels while avoiding ethical issues with privacy and bias. However, for many tasks, current models trained on synthetic data generalize poorly to real data. The task of 3D human hand pose estimation is a particularly interesting example of this synthetic-to-real problem: learning-based approaches perform reasonably well given real training data, yet labeled 3D poses are extremely difficult to obtain in the wild, limiting scalability.
In this dissertation, we attempt not only to consider the appearance of a hand but also to incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation, which calls for a large-scale dataset of sequential RGB hand images.
We propose a novel method that generates a synthetic dataset mimicking natural human hand movements by re-engineering the annotations of an extant static hand pose dataset into pose-flows. With the generated dataset, we train a newly proposed recurrent framework that exploits visuo-temporal features from sequential images of synthetic hands in motion and emphasizes temporal smoothness of the estimations with a temporal consistency constraint. Our novel training strategy of detaching the recurrent layer of the framework during domain fine-tuning from synthetic to real data preserves the visuo-temporal features learned from sequential synthetic hand images. The sequentially estimated hand poses consequently form natural, smooth hand movements, which leads to more robust estimations. We show that utilizing temporal information for 3D hand pose estimation significantly improves pose estimation, outperforming state-of-the-art methods in experiments on hand pose estimation benchmarks.
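As a rough sketch of the recurrent setup described above (CNN features fed to a recurrent layer, a temporal-consistency penalty, and freezing the recurrent layer when fine-tuning from synthetic to real data), one might write something like the following in PyTorch. All module names and sizes are illustrative assumptions, not the dissertation's SeqHAND code.

```python
import torch
import torch.nn as nn

class RecurrentPoseEstimator(nn.Module):
    """Sketch of a recurrent 3D hand pose estimator: a CNN encoder feeds an
    LSTM whose hidden state carries visuo-temporal context across frames.
    Layer sizes and names are illustrative, not the SeqHAND implementation."""
    def __init__(self, feat_dim=512, hidden=256, num_joints=21):
        super().__init__()
        self.encoder = nn.Sequential(              # stand-in CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim), nn.ReLU())
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_joints * 3)

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        h, _ = self.rnn(feats)
        return self.head(h).view(B, T, -1, 3)      # (B, T, J, 3) joint coords

def temporal_consistency(poses):
    """Penalize frame-to-frame jitter so the estimated motion stays smooth."""
    return (poses[:, 1:] - poses[:, :-1]).pow(2).mean()

# Synthetic-to-real fine-tuning: freeze ("detach") the recurrent layer so the
# visuo-temporal features learned on synthetic sequences are preserved.
model = RecurrentPoseEstimator()
for p in model.rnn.parameters():
    p.requires_grad = False
```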
Since a fixed dataset provides only a finite distribution of data samples, the generalization of a learned pose estimation network is limited in terms of the pose, RGB, and viewpoint spaces. We further propose to augment the data automatically such that the augmented poses are sampled in favor of the pose estimator's generalization performance. This auto-augmentation of poses is performed within a learned feature space to avoid the computational burden of generating a synthetic sample for every update iteration. The proposed effort can thus be viewed as generating and utilizing synthetic samples for network training directly in the feature space. This improves training efficiency by requiring fewer real data samples, and yields stronger generalization across multiple dataset domains and better estimation performance thanks to efficient augmentation.
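A minimal sketch of what such feature-space pose auto-augmentation could look like is given below, assuming a learnable Gaussian perturbation model over pose features; the distribution choice and all names are assumptions for illustration, not the dissertation's exact formulation.

```python
import torch
import torch.nn as nn

class FeatureSpacePoseAugment(nn.Module):
    """Sketch of auto-augmentation in a learned feature space: instead of
    rendering new synthetic images each iteration, perturb pose features by
    sampling offsets from a learnable Gaussian. The Gaussian parameters could
    be updated (e.g. adversarially) to favor poses the estimator handles
    poorly. Purely illustrative."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(feat_dim))
        self.log_sigma = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, pose_feat):                  # pose_feat: (B, feat_dim)
        eps = torch.randn_like(pose_feat)
        offset = self.mu + eps * self.log_sigma.exp()
        return pose_feat + offset                  # augmented feature sample

# Usage: augmented features pass through the same pose regression head, so each
# update effectively sees "new" synthetic samples without re-rendering images.
aug = FeatureSpacePoseAugment()
features = torch.randn(8, 512)                     # features from the backbone
augmented = aug(features)
```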
Research on recognizing the shape and pose of a human hand from 2D images aims to detect the 3D position of each finger joint. A hand pose is composed of the finger joints: the anatomical elements that make up the human hand, from the wrist joint to the MCP, PIP, and DIP joints. Hand pose information can be exploited in many fields, and in gesture recognition research it serves as a very powerful input feature.
To deploy hand pose estimation in real systems, we need high accuracy, real-time performance, and a model light enough to run on a variety of devices; training a neural network that satisfies these requirements demands a large amount of data. However, devices that measure hand poses directly are fairly inaccurate, and images of hands wearing such devices look very different from bare skin, making them unsuitable for training. This dissertation therefore re-engineers and augments synthetically generated data for training in order to achieve better learning performance.
Although synthetically generated hand images may resemble real skin color, their fine textures differ considerably, so a model trained on synthetic data performs markedly worse on real hand data. To narrow the gap between the two domains, we first let the network learn the structure of the human hand: we re-engineer static hand poses into motion sequences, and during fine-tuning on real hand images we keep the learned motion structure fixed and update only the remaining parts of the network, which proved highly effective. This amounts to a methodology that mimics real human hand movements.
Second, we align data from the two different domains in the network's feature space. In addition, instead of augmenting synthetic poses with particular data samples, we model pose augmentation as a probability model and sample from it so that poses the network has rarely seen are generated.
In summary, this dissertation proposes methods that use synthetic data more effectively, removing the need to collect more real data that is hard to annotate, and that exploit more stable visual and temporal features to improve pose estimation. We also propose an automatic data augmentation method with which the network finds and learns the data it needs on its own. Combining the proposed methods yields higher hand pose estimation performance.
1. Introduction 1
2. Related Works 14
3. Preliminaries: 3D Hand Mesh Model 27
4. SeqHAND: RGB-sequence-based 3D Hand Pose and Shape Estimation 31
5. Hand Pose Auto-Augment 66
6. Conclusion 85
Abstract (Korean) 101
Acknowledgments 103
Detection of hand gestures with human computer recognition by using support vector machine
Many applications, such as interactive data analysis and sign detection, can benefit from hand gesture recognition. We offer a low-cost, human-computer-interaction-based approach for predicting hand movements in real time. Our technique uses a color glove to train a random forest classifier and then predicts a bare hand at the pixel level. Our algorithm predicts all pixels at a rate of around 3 frames per second and is unaffected by differences in the surroundings. It has also been shown that HCI-based data augmentation is more effective than alternative approaches for enhancing interactive data. In addition, the augmentation experiment was carried out on multiple subsets of the original hand skeleton sequence dataset, each with a different number of classes, as well as on the entire dataset. On practically all subsets, the proposed base architecture improved classification accuracy; when the entire dataset was used, there was even a modest improvement. Correct identification could be regarded as a quality indicator. The best accuracy score was 94.02 percent for the HCI model with a support vector machine (SVM) classifier.
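A minimal two-stage sketch along the lines described (pixel-level segmentation trained with color-glove labels, then gesture classification with an SVM on skeleton-sequence features), using scikit-learn with synthetic placeholder arrays; the feature layout and class counts are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stage 1: pixel-level hand segmentation trained with color-glove labels.
# X_pix: per-pixel color features; y_pix: glove-derived labels (placeholders).
X_pix = np.random.rand(10_000, 3)
y_pix = np.random.randint(0, 2, size=10_000)
segmenter = RandomForestClassifier(n_estimators=50).fit(X_pix, y_pix)

# Stage 2: gesture classification from hand-skeleton sequence descriptors.
# X_seq: one flattened skeleton-sequence descriptor per sample; y_seq: gesture class.
X_seq = np.random.rand(500, 63)
y_seq = np.random.randint(0, 14, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X_seq, y_seq, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("gesture accuracy:", clf.score(X_te, y_te))
```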
- …