1,086 research outputs found

    ๋™์˜์ƒ ์† ์‚ฌ๋žŒ ๋™์ž‘์˜ ๋ฌผ๋ฆฌ ๊ธฐ๋ฐ˜ ์žฌ๊ตฌ์„ฑ ๋ฐ ๋ถ„์„

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2021. 2. ์ด์ œํฌ.In computer graphics, simulating and analyzing human movement have been interesting research topics started since the 1960s. Still, simulating realistic human movements in a 3D virtual world is a challenging task in computer graphics. In general, motion capture techniques have been used. Although the motion capture data guarantees realistic result and high-quality data, there is lots of equipment required to capture motion, and the process is complicated. Recently, 3D human pose estimation techniques from the 2D video are remarkably developed. Researchers in computer graphics and computer vision have attempted to reconstruct the various human motions from video data. However, existing methods can not robustly estimate dynamic actions and not work on videos filmed with a moving camera. In this thesis, we propose methods to reconstruct dynamic human motions from in-the-wild videos and to control the motions. First, we developed a framework to reconstruct motion from videos using prior physics knowledge. For dynamic motions such as backspin, the poses estimated by a state-of-the-art method are incomplete and include unreliable root trajectory or lack intermediate poses. We designed a reward function using poses and hints extracted from videos in the deep reinforcement learning controller and learned a policy to simultaneously reconstruct motion and control a virtual character. Second, we simulated figure skating movements in video. Skating sequences consist of fast and dynamic movements on ice, hindering the acquisition of motion data. Thus, we extracted 3D key poses from a video to then successfully replicate several figure skating movements using trajectory optimization and a deep reinforcement learning controller. Third, we devised an algorithm for gait analysis through video of patients with movement disorders. After acquiring the patients joint positions from 2D video processed by a deep learning network, the 3D absolute coordinates were estimated, and gait parameters such as gait velocity, cadence, and step length were calculated. Additionally, we analyzed the optimization criteria of human walking by using a 3D musculoskeletal humanoid model and physics-based simulation. For two criteria, namely, the minimization of muscle activation and joint torque, we compared simulation data with real human data for analysis. To demonstrate the effectiveness of the first two research topics, we verified the reconstruction of dynamic human motions from 2D videos using physics-based simulations. For the last two research topics, we evaluated our results with real human data.์ปดํ“จํ„ฐ ๊ทธ๋ž˜ํ”ฝ์Šค์—์„œ ์ธ๊ฐ„์˜ ์›€์ง์ž„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ๋ฐ ๋ถ„์„์€ 1960 ๋…„๋Œ€๋ถ€ํ„ฐ ๋‹ค๋ฃจ์–ด์ง„ ํฅ๋ฏธ๋กœ์šด ์—ฐ๊ตฌ ์ฃผ์ œ์ด๋‹ค. ๋ช‡ ์‹ญ๋…„ ๋™์•ˆ ํ™œ๋ฐœํ•˜๊ฒŒ ์—ฐ๊ตฌ๋˜์–ด ์™”์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , 3์ฐจ์› ๊ฐ€์ƒ ๊ณต๊ฐ„ ์ƒ์—์„œ ์‚ฌ์‹ค์ ์ธ ์ธ๊ฐ„์˜ ์›€์ง์ž„์„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ํ•˜๋Š” ์—ฐ๊ตฌ๋Š” ์—ฌ์ „ํžˆ ์–ด๋ ต๊ณ  ๋„์ „์ ์ธ ์ฃผ์ œ์ด๋‹ค. ๊ทธ๋™์•ˆ ์‚ฌ๋žŒ์˜ ์›€์ง์ž„ ๋ฐ์ดํ„ฐ๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด์„œ ๋ชจ์…˜ ์บก์ณ ๊ธฐ์ˆ ์ด ์‚ฌ์šฉ๋˜์–ด ์™”๋‹ค. ๋ชจ์…˜ ์บก์ฒ˜ ๋ฐ์ดํ„ฐ๋Š” ์‚ฌ์‹ค์ ์ธ ๊ฒฐ๊ณผ์™€ ๊ณ ํ’ˆ์งˆ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด์žฅํ•˜์ง€๋งŒ ๋ชจ์…˜ ์บก์ณ๋ฅผ ํ•˜๊ธฐ ์œ„ํ•ด์„œ ํ•„์š”ํ•œ ์žฅ๋น„๋“ค์ด ๋งŽ๊ณ , ๊ทธ ๊ณผ์ •์ด ๋ณต์žกํ•˜๋‹ค. ์ตœ๊ทผ์— 2์ฐจ์› ์˜์ƒ์œผ๋กœ๋ถ€ํ„ฐ ์‚ฌ๋žŒ์˜ 3์ฐจ์› ์ž์„ธ๋ฅผ ์ถ”์ •ํ•˜๋Š” ์—ฐ๊ตฌ๋“ค์ด ๊ด„๋ชฉํ•  ๋งŒํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์—ฌ์ฃผ๊ณ  ์žˆ๋‹ค. 
์ด๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ์ปดํ“จํ„ฐ ๊ทธ๋ž˜ํ”ฝ์Šค์™€ ์ปดํ“จํ„ฐ ๋น„์ ผ ๋ถ„์•ผ์˜ ์—ฐ๊ตฌ์ž๋“ค์€ ๋น„๋””์˜ค ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ ๋‹ค์–‘ํ•œ ์ธ๊ฐ„ ๋™์ž‘์„ ์žฌ๊ตฌ์„ฑํ•˜๋ ค๋Š” ์‹œ๋„๋ฅผ ํ•˜๊ณ  ์žˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ธฐ์กด์˜ ๋ฐฉ๋ฒ•๋“ค์€ ๋น ๋ฅด๊ณ  ๋‹ค์ด๋‚˜๋ฏนํ•œ ๋™์ž‘๋“ค์€ ์•ˆ์ •์ ์œผ๋กœ ์ถ”์ •ํ•˜์ง€ ๋ชปํ•˜๋ฉฐ ์›€์ง์ด๋Š” ์นด๋ฉ”๋ผ๋กœ ์ดฌ์˜ํ•œ ๋น„๋””์˜ค์— ๋Œ€ํ•ด์„œ๋Š” ์ž‘๋™ํ•˜์ง€ ์•Š๋Š”๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋น„๋””์˜ค๋กœ๋ถ€ํ„ฐ ์—ญ๋™์ ์ธ ์ธ๊ฐ„ ๋™์ž‘์„ ์žฌ๊ตฌ์„ฑํ•˜๊ณ  ๋™์ž‘์„ ์ œ์–ดํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ๋จผ์ € ์‚ฌ์ „ ๋ฌผ๋ฆฌํ•™ ์ง€์‹์„ ์‚ฌ์šฉํ•˜์—ฌ ๋น„๋””์˜ค์—์„œ ๋ชจ์…˜์„ ์žฌ๊ตฌ์„ฑํ•˜๋Š” ํ”„๋ ˆ์ž„ ์›Œํฌ๋ฅผ ์ œ์•ˆํ•œ๋‹ค. ๊ณต์ค‘์ œ๋น„์™€ ๊ฐ™์€ ์—ญ๋™์ ์ธ ๋™์ž‘๋“ค์— ๋Œ€ํ•ด์„œ ์ตœ์‹  ์—ฐ๊ตฌ ๋ฐฉ๋ฒ•์„ ๋™์›ํ•˜์—ฌ ์ถ”์ •๋œ ์ž์„ธ๋“ค์€ ์บ๋ฆญํ„ฐ์˜ ๊ถค์ ์„ ์‹ ๋ขฐํ•  ์ˆ˜ ์—†๊ฑฐ๋‚˜ ์ค‘๊ฐ„์— ์ž์„ธ ์ถ”์ •์— ์‹คํŒจํ•˜๋Š” ๋“ฑ ๋ถˆ์™„์ „ํ•˜๋‹ค. ์šฐ๋ฆฌ๋Š” ์‹ฌ์ธต๊ฐ•ํ™”ํ•™์Šต ์ œ์–ด๊ธฐ์—์„œ ์˜์ƒ์œผ๋กœ๋ถ€ํ„ฐ ์ถ”์ถœํ•œ ํฌ์ฆˆ์™€ ํžŒํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋ณด์ƒ ํ•จ์ˆ˜๋ฅผ ์„ค๊ณ„ํ•˜๊ณ  ๋ชจ์…˜ ์žฌ๊ตฌ์„ฑ๊ณผ ์บ๋ฆญํ„ฐ ์ œ์–ด๋ฅผ ๋™์‹œ์— ํ•˜๋Š” ์ •์ฑ…์„ ํ•™์Šตํ•˜์˜€๋‹ค. ๋‘˜ ์งธ, ๋น„๋””์˜ค์—์„œ ํ”ผ๊ฒจ ์Šค์ผ€์ดํŒ… ๊ธฐ์ˆ ์„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ํ•œ๋‹ค. ํ”ผ๊ฒจ ์Šค์ผ€์ดํŒ… ๊ธฐ์ˆ ๋“ค์€ ๋น™์ƒ์—์„œ ๋น ๋ฅด๊ณ  ์—ญ๋™์ ์ธ ์›€์ง์ž„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์–ด ๋ชจ์…˜ ๋ฐ์ดํ„ฐ๋ฅผ ์–ป๊ธฐ๊ฐ€ ๊นŒ๋‹ค๋กญ๋‹ค. ๋น„๋””์˜ค์—์„œ 3์ฐจ์› ํ‚ค ํฌ์ฆˆ๋ฅผ ์ถ”์ถœํ•˜๊ณ  ๊ถค์  ์ตœ์ ํ™” ๋ฐ ์‹ฌ์ธต๊ฐ•ํ™”ํ•™์Šต ์ œ์–ด๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์—ฌ๋Ÿฌ ํ”ผ๊ฒจ ์Šค์ผ€์ดํŒ… ๊ธฐ์ˆ ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์‹œ์—ฐํ•œ๋‹ค. ์…‹ ์งธ, ํŒŒํ‚จ์Šจ ๋ณ‘์ด๋‚˜ ๋‡Œ์„ฑ๋งˆ๋น„์™€ ๊ฐ™์€ ์งˆ๋ณ‘์œผ๋กœ ์ธํ•˜์—ฌ ์›€์ง์ž„ ์žฅ์• ๊ฐ€ ์žˆ๋Š” ํ™˜์ž์˜ ๋ณดํ–‰์„ ๋ถ„์„ํ•˜๊ธฐ ์œ„ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ œ์•ˆํ•œ๋‹ค. 2์ฐจ์› ๋น„๋””์˜ค๋กœ๋ถ€ํ„ฐ ๋”ฅ๋Ÿฌ๋‹์„ ์‚ฌ์šฉํ•œ ์ž์„ธ ์ถ”์ •๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ํ™˜์ž์˜ ๊ด€์ ˆ ์œ„์น˜๋ฅผ ์–ป์–ด๋‚ธ ๋‹ค์Œ, 3์ฐจ์› ์ ˆ๋Œ€ ์ขŒํ‘œ๋ฅผ ์–ป์–ด๋‚ด์–ด ์ด๋กœ๋ถ€ํ„ฐ ๋ณดํญ, ๋ณดํ–‰ ์†๋„์™€ ๊ฐ™์€ ๋ณดํ–‰ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๊ณ„์‚ฐํ•œ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ๊ทผ๊ณจ๊ฒฉ ์ธ์ฒด ๋ชจ๋ธ๊ณผ ๋ฌผ๋ฆฌ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ์ด์šฉํ•˜์—ฌ ์ธ๊ฐ„ ๋ณดํ–‰์˜ ์ตœ์ ํ™” ๊ธฐ์ค€์— ๋Œ€ํ•ด ํƒ๊ตฌํ•œ๋‹ค. ๊ทผ์œก ํ™œ์„ฑ๋„ ์ตœ์†Œํ™”์™€ ๊ด€์ ˆ ๋Œ๋ฆผํž˜ ์ตœ์†Œํ™”, ๋‘ ๊ฐ€์ง€ ๊ธฐ์ค€์— ๋Œ€ํ•ด ์‹œ๋ฎฌ๋ ˆ์ด์…˜ํ•œ ํ›„, ์‹ค์ œ ์‚ฌ๋žŒ ๋ฐ์ดํ„ฐ์™€ ๋น„๊ตํ•˜์—ฌ ๊ฒฐ๊ณผ๋ฅผ ๋ถ„์„ํ•œ๋‹ค. ์ฒ˜์Œ ๋‘ ๊ฐœ์˜ ์—ฐ๊ตฌ ์ฃผ์ œ์˜ ํšจ๊ณผ๋ฅผ ์ž…์ฆํ•˜๊ธฐ ์œ„ํ•ด, ๋ฌผ๋ฆฌ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด์ฐจ์› ๋น„๋””์˜ค๋กœ๋ถ€ํ„ฐ ์žฌ๊ตฌ์„ฑํ•œ ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์—ญ๋™์ ์ธ ์‚ฌ๋žŒ์˜ ๋™์ž‘๋“ค์„ ์žฌํ˜„ํ•œ๋‹ค. ๋‚˜์ค‘ ๋‘ ๊ฐœ์˜ ์—ฐ๊ตฌ ์ฃผ์ œ๋Š” ์‚ฌ๋žŒ ๋ฐ์ดํ„ฐ์™€์˜ ๋น„๊ต ๋ถ„์„์„ ํ†ตํ•˜์—ฌ ํ‰๊ฐ€ํ•œ๋‹ค.1 Introduction 1 2 Background 9 2.1 Pose Estimation from 2D Video . . . . . . . . . . . . . . . . . . . . 9 2.2 Motion Reconstruction from Monocular Video . . . . . . . . . . . . 10 2.3 Physics-Based Character Simulation and Control . . . . . . . . . . . 12 2.4 Motion Reconstruction Leveraging Physics . . . . . . . . . . . . . . 13 2.5 Human Motion Control . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.5.1 Figure Skating Simulation . . . . . . . . . . . . . . . . . . . 16 2.6 Objective Gait Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.7 Optimization for Human Movement Simulation . . . . . . . . . . . . 17 2.7.1 Stability Criteria . . . . . . . . . . . . . . . . . . . . . . . . 18 3 Human Dynamics from Monocular Video with Dynamic Camera Movements 19 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 3.2 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.3 Pose and Contact Estimation . . . . . . . . . . . . . 
. . . . . . . . . 21 3.4 Learning Human Dynamics . . . . . . . . . . . . . . . . . . . . . . . 24 3.4.1 Policy Learning . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.4.2 Network Training . . . . . . . . . . . . . . . . . . . . . . . . 28 3.4.3 Scene Estimator . . . . . . . . . . . . . . . . . . . . . . . . 29 3.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.5.1 Video Clips . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.5.2 Comparison of Contact Estimators . . . . . . . . . . . . . . . 33 3.5.3 Ablation Study . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.5.4 Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 4 Figure Skating Simulation from Video 42 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 4.2 System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 4.3 Skating Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4.3.1 Non-holonomic Constraints . . . . . . . . . . . . . . . . . . 46 4.3.2 Relaxation of Non-holonomic Constraints . . . . . . . . . . . 47 4.4 Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 4.5 Trajectory Optimization and Control . . . . . . . . . . . . . . . . . . 50 4.5.1 Trajectory Optimization . . . . . . . . . . . . . . . . . . . . 50 4.5.2 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 4.6 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 56 4.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 5 Gait Analysis Using Pose Estimation Algorithm with 2D-video of Patients 61 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 5.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.2.1 Patients and video recording . . . . . . . . . . . . . . . . . . 63 5.2.2 Standard protocol approvals, registrations, and patient consents 66 5.2.3 3D Pose estimation from 2D video . . . . . . . . . . . . . . . 66 5.2.4 Gait parameter estimation . . . . . . . . . . . . . . . . . . . 67 5.2.5 Statistical analysis . . . . . . . . . . . . . . . . . . . . . . . 68 5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 5.3.1 Validation of video-based analysis of the gait . . . . . . . . . 68 5.3.2 gait analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 5.4.1 Validation with the conventional sensor-based method . . . . 75 5.4.2 Analysis of gait and turning in TUG . . . . . . . . . . . . . . 75 5.4.3 Correlation with clinical parameters . . . . . . . . . . . . . . 76 5.4.4 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 5.5 Supplementary Material . . . . . . . . . . . . . . . . . . . . . . . . . 77 6 Control Optimization of Human Walking 80 6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 6.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 6.2.1 Musculoskeletal model . . . . . . . . . . . . . . . . . . . . . 82 6.2.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . 82 6.2.3 Control co-activation level . . . . . . . . . . . . . . . . . . . 83 6.2.4 Push-recovery experiment . . . . . . . . . . . . . . . . . . . 84 6.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
84 6.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 7 Conclusion 90 7.1 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91Docto
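    The gait-parameter step in the third topic reduces to a concrete computation once 3D joint trajectories are available. Below is a minimal sketch, assuming heel trajectories in metres with a vertical z axis and a known frame rate; the local-minimum heel-strike heuristic and all names are illustrative, not the thesis implementation.

```python
import numpy as np

def heel_strikes(heel_z, fps, min_gap=0.3):
    """Frame indices of heel strikes, taken as local minima of heel
    height -- a simple heuristic standing in for proper event detection."""
    idx, last = [], -1e9
    for t in range(1, len(heel_z) - 1):
        is_min = heel_z[t] <= heel_z[t - 1] and heel_z[t] < heel_z[t + 1]
        if is_min and (t - last) / fps >= min_gap:  # debounce double hits
            idx.append(t)
            last = t
    return idx

def gait_parameters(left_heel, right_heel, fps):
    """left_heel, right_heel: (T, 3) arrays of 3D heel positions in
    metres, z up. Returns (mean step length, cadence, gait velocity)."""
    events = sorted(
        [(t, left_heel[t, :2]) for t in heel_strikes(left_heel[:, 2], fps)]
        + [(t, right_heel[t, :2]) for t in heel_strikes(right_heel[:, 2], fps)],
        key=lambda e: e[0],
    )
    if len(events) < 2:
        raise ValueError("need at least two heel strikes")
    frames = np.array([t for t, _ in events], dtype=float)
    pos = np.array([p for _, p in events])
    # Step length: horizontal distance between successive heel strikes
    # of alternating feet (assumes no missed or spurious detections).
    steps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    duration = (frames[-1] - frames[0]) / fps          # seconds
    cadence = 60.0 * len(steps) / duration             # steps per minute
    velocity = steps.sum() / duration                  # metres per second
    return steps.mean(), cadence, velocity
```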

    Robust and Versatile Bipedal Jumping Control through Reinforcement Learning

    Full text link
    This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world. We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions. To improve performance on these challenging tasks, we develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history. In order to train a versatile jumping policy, we utilize a multi-stage training scheme that includes different training stages for different objectives. After multi-stage training, the policy can be directly transferred to a real bipedal Cassie robot. Training on different tasks and exploring more diverse scenarios lead to highly robust policies that can exploit the diverse set of learned maneuvers to recover from perturbations or poor landings during real-world deployment. Such robustness in the proposed policy enables Cassie to succeed in completing a variety of challenging jump tasks in the real world, such as standing long jumps, jumping onto elevated platforms, and multi-axes jumps.

    Comment: Accepted in Robotics: Science and Systems 2023 (RSS 2023). The accompanying video is at https://youtu.be/aAPSZ2QFB-
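    The dual-history policy structure can be sketched as a small network in which a recurrent encoder summarizes the long-term I/O history while the short-term history bypasses it and feeds the action head directly. The LSTM encoder, layer sizes, and the 46-dimensional observation below are illustrative assumptions, not the authors' published architecture; only the 10-dimensional action (Cassie's ten actuated motors) comes from the robot itself.

```python
import torch
import torch.nn as nn

class DualHistoryPolicy(nn.Module):
    """Illustrative policy: an LSTM encodes a long I/O history, while a
    short I/O history is concatenated directly into the action head."""
    def __init__(self, io_dim=46, act_dim=10, short_len=4, hidden=128):
        super().__init__()
        self.long_enc = nn.LSTM(io_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + short_len * io_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
            nn.Tanh(),  # actions normalized to [-1, 1]
        )

    def forward(self, long_hist, short_hist):
        # long_hist: (B, T_long, io_dim), e.g. a couple of seconds of
        # observation/action pairs; short_hist: (B, short_len, io_dim).
        _, (h, _) = self.long_enc(long_hist)
        z = torch.cat([h[-1], short_hist.flatten(1)], dim=-1)
        return self.head(z)

# Hypothetical dimensions, just to show the call shape.
policy = DualHistoryPolicy()
action = policy(torch.randn(1, 100, 46), torch.randn(1, 4, 46))
```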

    Training Physics-based Controllers for Articulated Characters with Deep Reinforcement Learning

    Get PDF
    In this thesis, two different applications are discussed that use machine learning techniques to train coordinated motion controllers for arbitrary characters in the absence of motion capture data. The methods highlight how physical simulation can generate synthetic, generic motion data for learning various targeted skills. First, we present an unsupervised method for learning locomotion skills in virtual characters from a low-dimensional latent space that captures the coordination between multiple joints. We use a technique called motor babble, wherein a character interacts with its environment by actuating its joints through uncoordinated, low-level (motor) excitation, resulting in a corpus of motion data from which a latent manifold can be extracted. Using deep reinforcement learning (DRL), we then train the character to learn locomotion (such as walking or running) in the low-dimensional latent space instead of the full-dimensional joint action space. The thesis also presents an end-to-end automated framework for training physics-based characters to rhythmically dance to user-input songs. A generative adversarial network (GAN) architecture is proposed that learns to generate physically stable dance moves through repeated interactions with the environment. These moves are then used to construct a dance network that can be used for choreography. Using DRL, the character is then trained to perform these moves, without losing balance and rhythm, in the presence of physical forces such as gravity and friction.
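    The motor-babble pipeline lends itself to a compact sketch: random low-level excitations produce a motion corpus, and a low-dimensional basis is extracted from it. Below, PCA (via SVD) stands in for whatever manifold-extraction method the thesis actually uses, and the `env` interface is hypothetical.

```python
import numpy as np

def motor_babble(env, episodes=100, horizon=200):
    """Drive the character with uncoordinated random joint excitations
    and record the resulting poses (hypothetical `env` interface)."""
    poses = []
    for _ in range(episodes):
        env.reset()
        for _ in range(horizon):
            action = np.random.uniform(-1.0, 1.0, env.action_dim)
            pose = env.step(action)  # joint angles after the step
            poses.append(pose)
    return np.array(poses)  # (episodes * horizon, n_joints)

def latent_space(poses, dim=8):
    """Fit a linear latent basis with PCA; an RL policy can then act in
    this `dim`-dimensional space instead of the full joint space."""
    mean = poses.mean(axis=0)
    _, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
    basis = vt[:dim]  # (dim, n_joints), top principal directions

    def decode(z):
        """Map a latent action back to full-dimensional joint targets."""
        return mean + z @ basis

    return basis, decode
```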

    Intelligent approaches in locomotion - a review

    Get PDF

    Reinforcement Learning Algorithms in Humanoid Robotics

    Get PDF

    Expressive movement generation with machine learning

    Get PDF
    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing: developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute to the field through the datasets, tools, and libraries that we have developed during our research. We start our research by reviewing the works on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability. Therefore, WalkNet integrates controlling the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent. It allows us to control the generation in real time based on the valence and arousal levels of affect, the walking direction, and the mover's movement signature. Following WalkNet, we look at controlling movement generation using more complex stimuli such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (i.e., the audio signal) and movements, which makes the modelling problem more challenging. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
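    As a rough illustration of the controllable-generation interface WalkNet describes, the sketch below conditions an autoregressive pose model on valence/arousal, a 2D walking direction, and a mover-signature embedding. The GRU backbone, dimensions, and all names are guesses for illustration, not the published WalkNet architecture.

```python
import torch
import torch.nn as nn

class ConditionalWalkGenerator(nn.Module):
    """Illustrative autoregressive pose generator conditioned on affect
    (valence, arousal), a 2D walking direction, and a mover-signature
    embedding. A guess at the interface, not the WalkNet model."""
    def __init__(self, pose_dim=63, sig_dim=8, hidden=256):
        super().__init__()
        cond_dim = 2 + 2 + sig_dim  # (valence, arousal) + direction + signature
        self.rnn = nn.GRU(pose_dim + cond_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, prev_poses, valence_arousal, direction, signature, h=None):
        # prev_poses: (B, T, pose_dim); conditions: (B, 2), (B, 2), (B, sig_dim).
        B, T, _ = prev_poses.shape
        cond = torch.cat([valence_arousal, direction, signature], dim=-1)
        x = torch.cat([prev_poses, cond.unsqueeze(1).expand(B, T, -1)], dim=-1)
        y, h = self.rnn(x, h)
        return self.out(y), h  # per-step next-pose predictions

# Conditions can be updated frame to frame for real-time control by
# re-invoking the model with the carried hidden state.
gen = ConditionalWalkGenerator()
poses, h = gen(torch.randn(2, 30, 63), torch.rand(2, 2),
               torch.randn(2, 2), torch.randn(2, 8))
```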

    Humanoid Robots

    Get PDF
    For many years, human beings have tried, in many ways, to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with increasing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research is intended not only to create humanoid robots, a great part of them autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, and Cybernetics, among other areas. This book presents a series of research efforts inspired by this ideal, carried out by various researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.