159 research outputs found

    ์‚ฌ๋žŒ์˜ ์ž์—ฐ์Šค๋Ÿฌ์šด ๋ณดํ–‰ ๋™์ž‘ ์ƒ์„ฑ์„ ์œ„ํ•œ ๋ฌผ๋ฆฌ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ๊ธฐ๋ฐ˜ ํœด๋จธ๋…ธ์ด๋“œ ์ œ์–ด ๋ฐฉ๋ฒ•

    Ph.D. dissertation -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2014. Advisor: Jehee Lee.
Controlling artificial humanoids to generate realistic human locomotion has long been considered an important problem in computer graphics and robotics. However, it is known to be very difficult because of the underactuated characteristics of the locomotion dynamics and the complexity of the human body structure to be imitated and simulated. This thesis presents controllers for physically simulated humanoids that exhibit a rich set of human-like and resilient simulated locomotion skills.
Our approach exploits observable and measurable data of a human to overcome the difficulties of the problem. More specifically, it utilizes human motion data collected by motion capture systems and reconstructs measured physical and physiological properties of the human body. We propose a data-driven algorithm that controls torque-actuated biped models to walk with a wide range of locomotion skills. The algorithm uses human motion capture data and achieves human-like locomotion control by exploiting the inherent robustness of the captured motion. Concretely, it takes a reference motion and generates a set of joint torques that produce a human-like walking simulation. The key idea is to continuously modulate the reference motion such that even a simple tracking controller can reproduce it. A number of existing data-driven techniques, such as motion blending, motion warping, and motion graphs, can be combined with biped control in this framework. We also present a locomotion control system that drives detailed models of the human body, actuated through the musculotendon process, to create more human-like simulated locomotion. The simulated humanoids are based on measured properties of the human body and contain up to 120 muscles. Our algorithm computes the optimal coordination of muscle activations and actively modulates the reference motion, either to faithfully reproduce it or to adapt it to new conditions. This scalable algorithm can control various types of musculoskeletal humanoids while seeking harmonious coordination of many muscles and maintaining balance.
We demonstrate the strength of our approach with examples in which simulated humanoids walk and run in various styles; adapt to changes in the model (e.g., muscle weakness, tightness, joint dislocation), the environment (e.g., external pushes), and the goals (e.g., pain reduction and efficiency maximization); and perform more challenging locomotion tasks such as turning, spinning, and walking while steering direction interactively.
Contents: 1 Introduction (Motivation: Computer Graphics, Robotics, and Biomechanics Perspectives; Aim of the Thesis; Requirements; Approach; Thesis Outline) -- 2 Previous Work (Biped Control: Controllers with Optimization, Controllers with Motion Capture Data; Simulation of Musculoskeletal Humanoids: Specific Body Parts, Full-Body Models, Controllers for Musculoskeletal Humanoids) -- 3 Data-Driven Biped Control (Overview; System Overview; Data-Driven Control: Balancing, Synchronization; Results; Discussion) -- 4 Locomotion Control for Many-Muscle Humanoids (Overview; Humanoid Models: Muscle Force Generation, Muscle Force Transfer, Equation of Motion; Muscle Optimization: Objectives, Constraints, Quadratic Programming Formulation; Trajectory Optimization; Results; Discussion) -- 5 Conclusion -- A Mathematical Definitions -- B Humanoid Models (Torque-Actuated Biped Models, Many-Muscle Humanoid Models) -- C Dynamics of Musculotendon Actuators (Contraction Dynamics, Initial Muscle States) -- Glossary for Medical Terms -- Bibliography -- Abstract (Korean)
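The key idea above, modulating a reference motion so that a simple tracking controller can reproduce it, rests on proportional-derivative (PD) tracking of joint angles. A minimal single-joint sketch in Python (unit inertia and hypothetical gains, for illustration only; the thesis's actual controller is not reproduced here):

```python
import math

def pd_tracking_torque(q, qd, q_ref, qd_ref, kp=400.0, kd=40.0):
    """PD servo: torque pulling the joint toward the reference state."""
    return kp * (q_ref - q) + kd * (qd_ref - qd)

def simulate_joint(q_ref_fn, steps=2000, dt=0.001):
    """Integrate a unit-inertia joint tracking a reference trajectory."""
    q, qd = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        q_ref = q_ref_fn(t)
        # reference velocity via central differences
        qd_ref = (q_ref_fn(t + dt) - q_ref_fn(t - dt)) / (2.0 * dt)
        tau = pd_tracking_torque(q, qd, q_ref, qd_ref)
        qd += tau * dt  # unit inertia: qdd = tau
        q += qd * dt
    return q

# track a slow sinusoidal reference for two simulated seconds
q_final = simulate_joint(lambda t: 0.5 * math.sin(t))
```

With gains near critical damping (kd = 2·sqrt(kp) for unit inertia), the joint follows a slowly varying reference closely; the thesis's contribution is precisely the continuous modulation of the reference that keeps such a simple servo sufficient.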

    A survey on human performance capture and animation

    With the rapid development of computing technology, three-dimensional (3D) human body models and their dynamic motions are widely used in the digital entertainment industry. Human performance mainly involves human body shapes and motions. Key research problems include how to capture and analyze the static geometric appearance and dynamic movement of human bodies, and how to simulate human body motions with physical effects. In this survey, following the main research directions of human body performance capture and animation, we summarize recent advances in key research topics, namely human body surface reconstruction, motion capture and synthesis, and physics-based motion simulation, and further discuss future research problems and directions. We hope this will help readers gain a comprehensive understanding of human performance capture and animation.

    Humanoid Robots

    For many years, human beings have tried in every way to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not totally satisfactory. However, with increasing technological advances based on theoretical and experimental research, humans have managed, to some extent, to copy or imitate some systems of the human body. This research is intended not only to create humanoid robots, a great part of them constituting autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with possible applications in rehabilitation technology, bringing together studies related not only to robotics but also to biomechanics, biomimetics, cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by various researchers worldwide, which analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision, and locomotion.

    Biologically-inspired control framework for insect animation.

    Insects such as ants, spiders, and cockroaches are common in our world. Virtual representations of them have wide applications in Virtual Reality (VR), video games, and films. Compared with the large volume of work on biped animation, the problem of insect animation is less explored. Insects' small body parts, complex structures, and high-speed movements challenge standard motion-synthesis techniques. This thesis addresses that challenge by presenting a framework that efficiently automates the modelling and authoring of insect locomotion. The framework is inspired by two key observations of real insects: fixed gait patterns and a distributed neural system. At the top level, a Triangle Placement Engine (TPE), modelled on the double-tripod gait of insects, determines the location and orientation of insect foot contacts given various user inputs. At the low level, a Central Pattern Generator (CPG) controller actuates individual joints by mimicking the distributed neural system of insects. A Controller Look-Up Table (CLUT) translates the high-level commands from the TPE into the low-level control parameters of the CPG. In addition, a novel strategy determines when legs start to swing. During high-speed movements, the swing mode is triggered when the Centre of Mass (COM) steps outside the Supporting Triangle. This simplified mechanism is not sufficient, however, to produce the gait variations seen when insects move at slow speed. The proposed strategy handles the slow-speed case by considering four independent factors: the relative distance to the extreme poses, the stance period, the relative distance to the neighbouring legs, and the load information. This strategy avoids the collisions between legs and the over-stretching of leg joints produced by conventional methods. 
The framework developed in this thesis allows sufficient control and fits seamlessly into the existing animation-production pipeline. With it, animators can model the motion of a single insect intuitively by specifying the walking path, terrain, speed, and so on. The success of this framework demonstrates that introducing biological components can synthesise insect animation in a natural and interactive fashion.
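The high-speed swing trigger described above, starting a swing when the COM leaves the Supporting Triangle, reduces to a point-in-triangle test on the ground plane. A minimal sketch using the sign-of-cross-product method (function names are illustrative, not the thesis's API):

```python
def cross2d(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def com_inside_support_triangle(com, p1, p2, p3):
    """True if the centre of mass lies inside the triangle of stance feet."""
    d1 = cross2d(p1, p2, com)
    d2 = cross2d(p2, p3, com)
    d3 = cross2d(p3, p1, com)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all same sign -> inside

def should_trigger_swing(com, stance_feet):
    """High-speed rule: start the swing when the COM steps outside."""
    return not com_inside_support_triangle(com, *stance_feet)

feet = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
trigger_inside = should_trigger_swing((0.5, 0.3), feet)   # COM well inside
trigger_outside = should_trigger_swing((2.0, 2.0), feet)  # COM outside
```

The sign test works for either winding order of the three stance feet, which is convenient since the supporting tripod alternates during the double-tripod gait.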

    Physics-based character locomotion control with large simulation time steps.

    Physically simulated locomotion allows rich and varied interactions with environments and other characters. However, control is difficult due to factors such as a typical character's numerous degrees of freedom and small stability region, discontinuous ground contacts, and indirect control over the centre of mass. Previous academic work has made significant progress in addressing these problems, but typically uses simulation time steps much smaller than those suitable for games. This project deals with developing control strategies using larger time steps. After describing some introductory work showing the difficulties of implementing a handcrafted controller with large physics time steps, three major areas of work are discussed. The first area uses trajectory optimization to minimally alter reference motions to ensure physical validity, in order to improve simulated tracking. The approach builds on previous work which allows ground contacts to be modified as part of the optimization process, extending it to 3D problems. Incorporating contacts introduces difficult complementarity constraints, and an exact penalty method is shown here to improve solver robustness and performance compared to previous relaxation methods. Trajectory optimization is also used to modify reference motions to alter characteristics such as timing, stride length and heading direction, whilst maintaining physical validity, and to generate short transitions between existing motions. The second area uses a sampling-based approach, previously demonstrated with small time steps, to formulate open loop control policies which reproduce reference motions. As a prerequisite, the reproducibility of simulation output from a common game physics engine, PhysX, is examined and conditions leading to highly reproducible behaviour are determined. 
For large time steps, sampling is shown to be susceptible to physical invalidities in the reference motion but, using physically optimized motions, is successfully applied at 60 time steps per second. Finally, adaptations to an existing method using evolutionary algorithms to learn feedback policies are described. With large time steps, it is found to be necessary to use a dense feedback formulation and to introduce phase dependence in order to obtain a successful controller, which is able to recover from impulses of several hundred newtons applied for 0.1 s. Additionally, it is shown that a recent machine learning approach based on support vector machines can identify whether disturbed character states will lead to failure, with high accuracy (99%) and with prediction times on the order of microseconds. Together, the trajectory optimization, open loop control, and feedback developments allow successful control of a walking motion at 60 time steps per second, with a combined control and simulation time of 0.62 ms per time step. This means it could plausibly be used within the demanding performance constraints of games. Furthermore, the availability of rapid failure prediction for the controller will allow more high-level control strategies to be explored in future.
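The exact penalty idea mentioned above can be illustrated on a toy problem: a complementarity constraint 0 <= x, 0 <= y, x*y = 0 is folded into the objective as an added term rho*x*y, which is exact for nonnegative variables once rho is large enough. A minimal projected-gradient sketch (a generic illustration of the technique, not the project's solver or contact formulation):

```python
RHO = 10.0   # penalty weight; exact beyond a problem-dependent threshold
STEP = 0.02  # gradient step size

def grad(x, y):
    """Gradient of f(x, y) = (x - 1)^2 + (y - 1)^2 + RHO * x * y."""
    return 2.0 * (x - 1.0) + RHO * y, 2.0 * (y - 1.0) + RHO * x

# projected gradient descent keeping x and y nonnegative
x, y = 0.9, 0.1
for _ in range(2000):
    gx, gy = grad(x, y)
    x = max(0.0, x - STEP * gx)
    y = max(0.0, y - STEP * gy)
```

Without the penalty the minimizer would sit at (1, 1); with it, the iterate is driven onto the boundary y = 0 and settles at (1, 0), satisfying the complementarity exactly rather than approximately as relaxation methods do.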

    Developing agile motor skills on virtual and real humanoids

    Demonstrating strength and agility on virtual and real humanoids has been an important goal in computer graphics and robotics. However, developing physics-based controllers for various agile motor skills requires a tremendous amount of prior knowledge and manual labor due to the complex mechanisms of those skills. The focus of this dissertation is to develop a set of computational tools to expedite the design of physics-based controllers that can execute a variety of agile motor skills on virtual and real humanoids. Instead of designing controllers directly for real humanoids, this dissertation develops appropriate theories and models in virtual simulation and systematically transfers the solutions to hardware systems. The algorithms and frameworks in this dissertation span various topics, from specific physics-based controllers to general learning frameworks. We first present an online algorithm for controlling falling and landing motions of virtual characters. The proposed algorithm is effective and efficient enough to generate falling motions for a wide range of arbitrary initial conditions in real time. Next, we present a robust falling strategy for real humanoids that can manage a wide range of perturbations by planning optimal contact sequences. We then introduce an iterative learning framework, inspired by human learning techniques, that makes it easy to design various agile motions. The framework is complemented by novel algorithms that efficiently optimize control parameters for the target tasks, especially when they have many constraints or parameterized goals. Finally, we introduce an iterative approach for exporting simulation-optimized control policies to robot hardware that reduces the number of hardware experiments, which incur high costs and labor.
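The parameter-optimization step described above can be sketched, in spirit, as a simple (1+1) evolution strategy: perturb the current control parameters and keep the perturbation only if a simulated rollout improves the task cost. (A toy sketch with a quadratic stand-in cost; the dissertation's actual algorithms and cost functions are more involved.)

```python
import random

def task_cost(params):
    """Stand-in for an expensive simulation rollout; minimum at (0.3, -0.7)."""
    target = (0.3, -0.7)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def one_plus_one_es(params, sigma=0.5, iters=300, seed=0):
    """(1+1)-ES: accept a Gaussian perturbation only when it lowers the cost."""
    rng = random.Random(seed)
    best = task_cost(params)
    for _ in range(iters):
        cand = [p + rng.gauss(0.0, sigma) for p in params]
        c = task_cost(cand)
        if c < best:
            params, best = cand, c
        else:
            sigma *= 0.98  # shrink the search radius after a failure
    return params, best

params, cost = one_plus_one_es([0.0, 0.0])
```

Each candidate evaluation corresponds to one simulation rollout, so schemes like this trade many cheap virtual experiments for few expensive hardware ones.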

    Implementation of a robot platform to study bipedal walking

    In this project, a modification of an open-source, 3D-printed robot was implemented, with the purpose of creating a more affordable bipedal platform suitable for studying bipedal walking algorithms. The original robot is part of an open-source platform called Poppy, formed by an interdisciplinary community of beginners and experts. One of the robots of this platform is the Poppy Humanoid. The rigid parts of the Poppy Humanoid (as well as those of the other Poppy platform robots) are 3D printed, a key factor in lowering the cost of a robot. The actuators used, though, are expensive commercial DC motors that increase the total cost of the robot drastically. This high cost of Poppy's actuators led this project to modify cheaper actuators while maintaining the same performance as their predecessors. After taking the cheaper actuator apart, only the motor, the gears, and the case that hosts them were kept, and a new design was made to control the motor and to meet the requirements set by the commercial motors. This new actuator design includes a 12-bit-resolution magnetic encoder to read the position of the motor shaft, a driver to run the motor, and an embedded Arduino microcontroller. Having an Arduino as part of the actuator gives it an advantage over the commercial motor, as the user has the freedom to upload custom code and implement custom motor controllers. The result is a fully programmable actuator hosted in the same motor case. The size of this actuator, though, differs from the commercial one. To mount the new actuators on the platform, Joan Guasch designed suitable 3D-printed parts. In addition to these parts, Joan also modified the leg design to add another joint at the ankle (roll), as this Degree of Freedom (DoF) is important for bipedal walking algorithms and was missing from the original Poppy Humanoid leg design. 
The modified robot is called Poppy-UPC and is a 12-DoF biped platform. For communication between the motors and the main computer unit, a serial communication protocol was implemented based on the RS-485 standard. Multiple receivers (motors and sensors) can be connected to such a network in a linear, multi-drop configuration. The main computer unit of Poppy-UPC is an Odroid-C1 board. Essentially, this board is a quad-core Linux computer fully capable of running ROS. The Odroid acts as the master of the network and gathers all the information from the connected nodes in order to publish it to ROS topics. In this way, the Poppy-UPC is connected to the ROS environment, and ROS packages can be used for any further development on this platform. Finally, following the open-source spirit of the Poppy platform, all the code and information are available at https://github.com/dimitris-zervas
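As a concrete detail, a 12-bit magnetic encoder reports shaft position as an integer count in [0, 4095]; converting counts to an angle and differencing successive readings across the wrap-around are the kind of low-level helpers such a custom actuator needs. (A sketch in Python for clarity; the actual firmware targets the embedded Arduino and lives in the repository above.)

```python
COUNTS_PER_REV = 4096  # 12-bit magnetic encoder resolution

def counts_to_degrees(count):
    """Map a raw encoder count in [0, 4095] onto a shaft angle in [0, 360)."""
    return (count % COUNTS_PER_REV) * 360.0 / COUNTS_PER_REV

def wrapped_delta(prev, curr):
    """Smallest signed count difference, correct across the 4095 -> 0 wrap."""
    d = (curr - prev) % COUNTS_PER_REV
    if d > COUNTS_PER_REV // 2:
        d -= COUNTS_PER_REV
    return d

angle = counts_to_degrees(1024)  # a quarter turn of the shaft
step = wrapped_delta(4090, 6)    # reading crossed the wrap: +12 counts
```

Signed wrap-around deltas are what a velocity estimator or a position PID on the shaft would consume, since raw counts jump discontinuously at the wrap.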
    • โ€ฆ
    corecore