
    3D Human Pose Estimation using Spatio-Temporal Networks with Explicit Occlusion Training

    Estimating 3D poses from a monocular video remains a challenging task, despite the significant progress made in recent years. Generally, the performance of existing methods drops when the target person is too small/large, or the motion is too fast/slow, relative to the scale and speed of the training data. Moreover, to our knowledge, many of these methods are not explicitly designed or trained to handle severe occlusion, which compromises their performance when occlusion occurs. Addressing these problems, we introduce a spatio-temporal network for robust 3D human pose estimation. As humans in videos may appear at different scales and move at various speeds, we apply multi-scale spatial features for 2D joint or keypoint prediction in each individual frame, and multi-stride temporal convolutional networks (TCNs) to estimate 3D joints or keypoints. Furthermore, we design a spatio-temporal discriminator based on body structures as well as limb motions to assess whether the predicted pose forms a valid pose and a valid movement. During training, we explicitly mask out some keypoints to simulate occlusion cases ranging from minor to severe, so that our network learns better and becomes robust to various degrees of occlusion. As 3D ground-truth data are limited, we further utilize 2D video data to inject a semi-supervised learning capability into our network. Experiments on public datasets validate the effectiveness of our method, and our ablation studies show the strengths of our network's individual submodules.
    Comment: 8 pages, AAAI 2020
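
    A minimal sketch of the occlusion-aware training idea described above, assuming PyTorch; the tensor shapes, masking probability, and function name are illustrative assumptions, not the paper's values. Some 2D keypoints are randomly dropped from each clip before the temporal network sees them, simulating minor to severe occlusion.

# Sketch: simulate occlusion by randomly masking 2D keypoints (illustrative only).
import torch

def mask_keypoints(keypoints_2d, confidences, mask_prob=0.2):
    """Randomly zero out joints to mimic occlusion.

    keypoints_2d: (batch, frames, joints, 2) pixel coordinates
    confidences:  (batch, frames, joints) detection confidences
    """
    # Independent Bernoulli "occluded" flag per joint per frame.
    occluded = torch.rand(confidences.shape) < mask_prob
    masked_kpts = keypoints_2d.clone()
    masked_conf = confidences.clone()
    masked_kpts[occluded] = 0.0  # drop the coordinates of "occluded" joints
    masked_conf[occluded] = 0.0  # and flag them as unreliable for the temporal model
    return masked_kpts, masked_conf

# Example with dummy data: 4 clips, 81 frames, 17 joints.
kpts = torch.rand(4, 81, 17, 2)
conf = torch.ones(4, 81, 17)
masked_kpts, masked_conf = mask_keypoints(kpts, conf)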

    ๋งˆ์Šคํฌ ์–ธ์–ด ๋ชจ๋ธ์„ ์ด์šฉํ•œ 3์ฐจ์› ์† ์ขŒํ‘œ์˜ ๋ฏธ์„ธ์กฐ์ •

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2021.8. ๋ฌธ๋ณ‘๋กœ.3D Hand Pose Estimation ๋ฌธ์ œ๋Š” ํ•œ ์žฅ์˜ 2์ฐจ์› image๋ฅผ ์ด์šฉํ•˜์—ฌ 3์ฐจ์›์˜ ์† ์ขŒํ‘œ๋ฅผ ์ถ”์ •ํ•˜๋Š” ๋ฌธ์ œ๋กœ, 2์ฐจ์› image์—์„œ ์†์˜ ์ผ๋ถ€๊ฐ€ ๊ฐ€๋ ค์ง€๋Š” ๊ฒฝ์šฐ๋“ค ๋•Œ๋ฌธ์— ํ˜„์กดํ•˜๋Š” ์ ‘๊ทผ๋ฐฉ์‹์œผ๋กœ ํ’€๊ธฐ์— ๊นŒ๋‹ค๋กœ์šด ๋ฌธ์ œ์ด๋‹ค. ์ตœ๊ทผ์— ์—ฐ๊ตฌ์ž๋“ค์€ ๋งŽ์€ ์–‘์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ์œผ๊ฑฐ๋‚˜, ํ•ฉ์„ฑํ•˜์—ฌ ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋ ค ํ–ˆ๋‹ค. ํ•˜์ง€๋งŒ, ์ด๋Ÿฌํ•œ ๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜ ์ ‘๊ทผ๋“ค์€ ๊ฐ ๊ด€์ ˆ๋“ค์ด ๋…๋ฆฝ์ ์ด๋ผ ๊ฐ€์ •ํ•˜๊ณ  ๋ฌธ์ œ๋ฅผ ํ’€๊ฑฐ๋‚˜, ๋ฌผ๋ฆฌ์ ์œผ๋กœ ๋“œ๋Ÿฌ๋‚˜๋Š” ๊ด€์ ˆ๋“ค์˜ ์—ฐ๊ฒฐ ๊ด€๊ณ„๋งŒ ๊ฐ€์ง€๊ณ ์„œ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋ ค ํ–ˆ๊ธฐ๋•Œ๋ฌธ์— ์„ฑ๋Šฅ ํ–ฅ์ƒ์— ํ•œ๊ณ„๊ฐ€ ์žˆ์—ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋ฅผ ์™„ํ™”ํ•˜๊ธฐ ์œ„ํ•ด, ์† ๊ด€์ ˆ์˜ 3์ฐจ์› ์ขŒํ‘œ๋“ค ๊ฐ„์˜ ๊ด€๊ณ„๋ฅผ ํ•™์Šต์‹œํ‚ค๊ณ  ์ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๋ฏธ์„ธ์กฐ์ •์„ ํ•˜์—ฌ ์ „๋ฐ˜์ ์ธ ์„ฑ๋Šฅ์„ ๋Œ์–ด ์˜ฌ๋ฆฌ๋Š” ๋ฐฉ๋ฒ•์„ ์ด ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆ ํ•œ๋‹ค. ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ๋ถ„์•ผ์—์„œ ๊ฐ€์žฅ ๋งŽ์ด ์“ฐ์ด๋Š” BERT ๋ชจ๋ธ์—์„œ ์˜๊ฐ์„ ๋ฐ›์•˜์œผ๋ฉฐ, BERT๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ณด์ด์ง€ ์•Š๋Š” ์†์— ๋Œ€ํ•˜์—ฌ ์ž˜ ์ถ”์ •ํ•˜๋„๋ก ํ•˜๋Š” ๋ชจ๋“ˆ์„ ์ถ”๊ฐ€ํ•จ์œผ๋กœ์จ ๊ธฐ์กด์— ์žˆ๋˜ ์ ‘๊ทผ๋ฐฉ์‹๋“ค ๋ณด๋‹ค ๋” ์ข‹์€ ๊ฒฐ๊ณผ๋ฅผ ์‹คํ—˜์—์„œ ์–ป์„ ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๋˜ํ•œ, ๋ฌผ๋ฆฌ์ ์ธ ๊ด€์ ˆ๊ฐ„์˜ ์—ฐ๊ฒฐ ๊ด€๊ณ„์— ๊ฐ‡ํ˜€์žˆ์ง€ ์•Š๊ณ , ๋ชจ๋ธ์ด ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ ๊ฐ ๊ด€์ ˆ๊ฐ„์˜ ์˜ํ–ฅ๋ ฅ์„ ํŒŒ์•…ํ•˜์—ฌ ์œ„์น˜๋ฅผ ์„ธ๋ถ€ ์กฐ์ •ํ•˜๊ฒŒ ํ•˜์˜€๋‹ค. ์ด๋ ‡๊ฒŒ ํ•™์Šตํ•œ ์—ฐ๊ฒฐ ๊ด€๊ณ„๋ฅผ ์‹œ๊ฐํ™” ํ•˜์—ฌ ์ด ๋…ผ๋ฌธ์— ์ผ๋ถ€ ์†Œ๊ฐœํ•˜์˜€๊ณ , ์ด๋ฅผ ํ†ตํ•ด ๋ˆˆ์— ๋ณด์ด๋Š” ๋ฌผ๋ฆฌ์ ์ธ ์—ฐ๊ฒฐ๊ด€๊ณ„ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๊ด€๊ณ„์—†์–ด ๋ณด์ด๋Š” ๊ด€์ ˆ๋“ค ๊ฐ„์—๋„ ์˜ํ–ฅ์„ ์ฃผ๊ณ ๋ฐ›๊ณ  ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋Š” ๊ฒƒ์ด ํ›จ์”ฌ ๋” ์ข‹์€ ์ ‘๊ทผ๋ฐฉ๋ฒ•์ž„์„ ๊ด€์ฐฐํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค.Accurately estimating hand/body pose from a single viewpoint under occlusion is challenging for most of the current approaches. Recent approaches have tried to address the occlusion problem by collecting or synthesizing images having joint occlusions. However, the data-driven approaches failed to tackle the occlusion because they assumed that joints are independent or they only used explicit joint connection. To mitigate this problem, I propose a method that learns joint relations and refines the occluded information based on their relation. Inspired by BERT in Natural Language Processing, I pre-train a refinement module and add it at the end of the proposed framework. Refinement improves not only the accuracy of occluded joints but also the accuracy of whole joints. In addition, instead of using a physical connection between joints, the proposed model learns their relation from the data. I visualized the learned joint relation in this paper, and it implies that assuming explicit connection hinders the model from accurately predicting joint locations accurately.1 INTRODUCTION 1 2 RELATEDWORKS 3 3 PRELIMINARIES 5 3.1 Attention Mechanism 5 3.2 Transformer 5 3.3 Masked Language Model 6 4 Method 9 4.1 Problem Definition 9 4.2 3D Hand Pose Estimation Framework 9 4.2.1 Dense Representation Module 10 4.2.2 3D Regression Module 10 4.2.3 Joint Refinement Module 11 4.3 Pre-training 11 4.3.1 Stacked HourGlass 12 4.3.2 Joint Refinement Module 13 4.4 Training 13 5 EXPERIMENTS 17 5.1 Dataset 17 5.2 Experimental Results 18 5.2.1 Quantative Results 18 5.2.2 Qualitative Results 18 5.2.3 Computational Complexity 19 6 CONCLUSION 26 Bibliography 27 Abstract (In Korean) 34์„

    A Grid-based Representation for Human Action Recognition

    Human action recognition (HAR) in videos is a fundamental research topic in computer vision. It consists mainly in understanding actions performed by humans from a sequence of visual observations. In recent years, HAR has witnessed significant progress, especially with the emergence of deep learning models. However, most existing approaches to action recognition rely on information that is not always relevant to this task, and are limited in how they fuse temporal information. In this paper, we propose a novel method for human action recognition that efficiently encodes the most discriminative appearance information of an action, with explicit attention on representative pose features, into a new compact grid representation. Our GRAR (Grid-based Representation for Action Recognition) method is tested on several benchmark datasets, demonstrating that our model can accurately recognize human actions despite intra-class appearance variations and occlusion challenges.
    Comment: Accepted at the 25th International Conference on Pattern Recognition (ICPR 2020)
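
    The grid idea can be illustrated with a small sketch; everything here (function names, grid size, frame scoring) is an assumption for illustration rather than the GRAR implementation. A few representative frames of a clip, ranked by some pose-based relevance score, are tiled into one compact image that an ordinary 2D image CNN can then classify.

# Sketch: tile the most informative frames of a clip into a compact grid image (illustrative only).
import numpy as np

def resize_nearest(img, size):
    """Crude nearest-neighbour resize to size x size, to keep the sketch dependency-free."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

def build_grid(frames, scores, rows=2, cols=2, cell_size=112):
    """Tile the top-scoring frames into one rows x cols grid image.

    frames: list of HxWx3 uint8 arrays; scores: per-frame relevance (e.g., pose-based attention).
    """
    top_idx = np.argsort(scores)[::-1][: rows * cols]   # keep the most informative frames
    top_idx = np.sort(top_idx)                           # restore temporal order inside the grid
    grid = np.zeros((rows * cell_size, cols * cell_size, 3), dtype=np.uint8)
    for k, idx in enumerate(top_idx):
        r, c = divmod(k, cols)
        grid[r * cell_size:(r + 1) * cell_size,
             c * cell_size:(c + 1) * cell_size] = resize_nearest(frames[idx], cell_size)
    return grid                                          # fed to an ordinary image CNN classifier

# Example with dummy data: 16 random frames and random relevance scores.
frames = [np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(16)]
grid_image = build_grid(frames, np.random.rand(16))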
    • โ€ฆ
    corecore