475 research outputs found

    Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior

    Full text link
    Creating believable motions for various characters has long been a goal in computer graphics. Current learning-based motion synthesis methods depend on extensive motion datasets, which are often challenging, if not impossible, to obtain. Pose data, on the other hand, is more accessible, since static posed characters are easier to create and can even be extracted from images using recent advances in computer vision. In this paper, we exploit this alternative data source and introduce a neural motion synthesis approach through retargeting. Our method generates plausible motions for characters that have only pose data by transferring motion from an existing motion capture dataset of another character, which can have a drastically different skeleton. Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and performs robustly on small or noisy pose datasets, ranging from a few artist-created poses to noisy poses estimated directly from images. In addition, a user study indicated that a majority of participants found our retargeted motion more enjoyable to watch, more lifelike in appearance, and less prone to artifacts. Project page: https://cyanzhao42.github.io/pose2motion
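The abstract's core mechanism, combining the motion features of a source character with the pose features of a target character, can be sketched very loosely as blending each source frame toward the nearest example in the target's pose data. This is a toy stand-in for the paper's learned model, not its architecture; the nearest-neighbour "pose prior", the random data, and the blend weight are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_pose_prior(frame, pose_bank):
    """Snap a frame to the closest example pose of the target
    character (a crude stand-in for a learned pose prior)."""
    d = np.linalg.norm(pose_bank - frame, axis=1)
    return pose_bank[np.argmin(d)]

def retarget(source_motion, pose_bank, blend=0.5):
    """Blend each source frame with its nearest target pose, so the
    output keeps the source timing but leans toward the target
    character's pose statistics."""
    out = []
    for frame in source_motion:
        prior = nearest_pose_prior(frame, pose_bank)
        out.append((1 - blend) * frame + blend * prior)
    return np.stack(out)

# Toy data: 60 frames of a 10-DoF source motion, 5 target poses.
source_motion = rng.normal(size=(60, 10))
pose_bank = rng.normal(size=(5, 10))
motion = retarget(source_motion, pose_bank)
print(motion.shape)  # (60, 10)
```

With `blend=0` the source motion passes through unchanged; with `blend=1` every frame collapses onto the target pose bank, which is why the real method needs a learned prior rather than a nearest-neighbour snap.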

    Multicontact Motion Retargeting Using Whole-Body Optimization of Full Kinematics and Sequential Force Equilibrium

    Get PDF
    This article presents a multicontact motion adaptation framework that enables teleoperation of high degree-of-freedom robots, such as quadrupeds and humanoids, for loco-manipulation tasks in multicontact settings. Our algorithms optimize whole-body configurations and formulate the retargeting of multicontact motions as sequential quadratic programming, which is robust and stable near the edges of the feasibility constraints. The framework allows real-time operation of the robot and reduces the operator's cognitive load, because infeasible commands are automatically adapted into physically stable and viable motions on the robot. Results in full-dynamics simulations demonstrate the effectiveness of interactively teleoperating different legged robots and generating rich multicontact movements. We evaluated the computational efficiency of the proposed algorithms, and further validated and analyzed multicontact loco-manipulation tasks on humanoid and quadruped robots, including reaching, active pushing, and traversal of various uneven terrains.
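The key property described above, that infeasible operator commands are adapted into the nearest feasible motion, is what an SQP formulation buys. A minimal one-dimensional analogue, using SciPy's SLSQP solver (a sequential quadratic programming method) with a toy linear centre-of-mass constraint standing in for the article's whole-body kinematics and sequential force equilibrium:

```python
import numpy as np
from scipy.optimize import minimize

def retarget_step(q_cmd, com_of, com_lo, com_hi, q0):
    """One retargeting solve: stay as close as possible to the
    operator command q_cmd while keeping a (toy, linear) centre of
    mass inside [com_lo, com_hi]. SLSQP is SciPy's SQP solver; the
    real cost and constraints are whole-body and far richer."""
    res = minimize(
        lambda q: np.sum((q - q_cmd) ** 2),   # track the command
        q0,
        method="SLSQP",
        constraints=[
            {"type": "ineq", "fun": lambda q: com_of(q) - com_lo},
            {"type": "ineq", "fun": lambda q: com_hi - com_of(q)},
        ],
    )
    return res.x

# Toy 3-joint "robot": the CoM is just the mean joint value.
com = lambda q: q.mean()
q_infeasible = np.array([1.0, 1.2, 0.8])      # operator asks for CoM = 1.0
q_safe = retarget_step(q_infeasible, com, -0.2, 0.2, np.zeros(3))
print(round(com(q_safe), 3))  # CoM clipped to the feasible bound, 0.2
```

The solve projects the command onto the feasible set, so the relative joint configuration is preserved while the constraint is respected, which is the behaviour the article relies on for safe teleoperation.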

    ์‹ฌ์ธต ๊ฐ•ํ™”ํ•™์Šต์„ ์ด์šฉํ•œ ์‚ฌ๋žŒ์˜ ๋ชจ์…˜์„ ํ†ตํ•œ ์ดํ˜•์  ์บ๋ฆญํ„ฐ ์ œ์–ด๊ธฐ ๊ฐœ๋ฐœ

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2022. 8. ์„œ์ง„์šฑ.์‚ฌ๋žŒ์˜ ๋ชจ์…˜์„ ์ด์šฉํ•œ ๋กœ๋ด‡ ์ปจํŠธ๋กค ์ธํ„ฐํŽ˜์ด์Šค๋Š” ์‚ฌ์šฉ์ž์˜ ์ง๊ด€๊ณผ ๋กœ๋ด‡์˜ ๋ชจํ„ฐ ๋Šฅ๋ ฅ์„ ํ•ฉํ•˜์—ฌ ์œ„ํ—˜ํ•œ ํ™˜๊ฒฝ์—์„œ ๋กœ๋ด‡์˜ ์œ ์—ฐํ•œ ์ž‘๋™์„ ๋งŒ๋“ค์–ด๋‚ธ๋‹ค. ํ•˜์ง€๋งŒ, ํœด๋จธ๋…ธ์ด๋“œ ์™ธ์˜ ์‚ฌ์กฑ๋ณดํ–‰ ๋กœ๋ด‡์ด๋‚˜ ์œก์กฑ๋ณดํ–‰ ๋กœ๋ด‡์„ ์œ„ํ•œ ๋ชจ์…˜ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๋””์ž์ธ ํ•˜๋Š” ๊ฒƒ์€ ์‰ฌ์šด์ผ์ด ์•„๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ์‚ฌ๋žŒ๊ณผ ๋กœ๋ด‡ ์‚ฌ์ด์˜ ํ˜•ํƒœ ์ฐจ์ด๋กœ ์˜ค๋Š” ๋‹ค์ด๋‚˜๋ฏน์Šค ์ฐจ์ด์™€ ์ œ์–ด ์ „๋žต์ด ํฌ๊ฒŒ ์ฐจ์ด๋‚˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์šฐ๋ฆฌ๋Š” ์‚ฌ๋žŒ ์‚ฌ์šฉ์ž๊ฐ€ ์›€์ง์ž„์„ ํ†ตํ•˜์—ฌ ์‚ฌ์กฑ๋ณดํ–‰ ๋กœ๋ด‡์—์„œ ๋ถ€๋“œ๋Ÿฝ๊ฒŒ ์—ฌ๋Ÿฌ ๊ณผ์ œ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๊ฒŒ๋” ํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ชจ์…˜ ์ œ์–ด ์‹œ์Šคํ…œ์„ ์ œ์•ˆํ•œ๋‹ค. ์šฐ๋ฆฌ๋Š” ์šฐ์„  ์บก์ณํ•œ ์‚ฌ๋žŒ์˜ ๋ชจ์…˜์„ ์ƒ์‘ํ•˜๋Š” ๋กœ๋ด‡์˜ ๋ชจ์…˜์œผ๋กœ ๋ฆฌํƒ€๊ฒŸ ์‹œํ‚จ๋‹ค. ์ด๋•Œ ์ƒ์‘ํ•˜๋Š” ๋กœ๋ด‡์˜ ๋ชจ์…˜์€ ์œ ์ €๊ฐ€ ์˜๋„ํ•œ ์˜๋ฏธ๋ฅผ ๋‚ดํฌํ•˜๊ฒŒ ๋˜๋ฉฐ, ์šฐ๋ฆฌ๋Š” ์ด๋ฅผ ์ง€๋„ํ•™์Šต ๋ฐฉ๋ฒ•๊ณผ ํ›„์ฒ˜๋ฆฌ ๊ธฐ์ˆ ์„ ์ด์šฉํ•˜์—ฌ ๊ฐ€๋Šฅ์ผ€ ํ•˜์˜€๋‹ค. ๊ทธ ๋’ค ์šฐ๋ฆฌ๋Š” ๋ชจ์…˜์„ ๋ชจ์‚ฌํ•˜๋Š” ํ•™์Šต์„ ์ปค๋ฆฌํ˜๋Ÿผ ํ•™์Šต๊ณผ ๋ณ‘ํ–‰ํ•˜์—ฌ ์ฃผ์–ด์ง„ ๋ฆฌํƒ€๊ฒŸ๋œ ์ฐธ์กฐ ๋ชจ์…˜์„ ๋”ฐ๋ผ๊ฐ€๋Š” ์ œ์–ด ์ •์ฑ…์„ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์šฐ๋ฆฌ๋Š” "์ „๋ฌธ๊ฐ€ ์ง‘๋‹จ"์„ ํ•™์Šตํ•จ์œผ๋กœ ๋ชจ์…˜ ๋ฆฌํƒ€๊ฒŒํŒ… ๋ชจ๋“ˆ๊ณผ ๋ชจ์…˜ ๋ชจ์‚ฌ ๋ชจ๋“ˆ์˜ ์„ฑ๋Šฅ์„ ํฌ๊ฒŒ ์ฆ๊ฐ€์‹œ์ผฐ๋‹ค. ๊ฒฐ๊ณผ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ, ์šฐ๋ฆฌ์˜ ์‹œ์Šคํ…œ์„ ์ด์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž๊ฐ€ ์‚ฌ์กฑ๋ณดํ–‰ ๋กœ๋ด‡์˜ ์„œ์žˆ๊ธฐ, ์•‰๊ธฐ, ๊ธฐ์šธ์ด๊ธฐ, ํŒ” ๋ป—๊ธฐ, ๊ฑท๊ธฐ, ๋Œ๊ธฐ์™€ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ๋ชจํ„ฐ ๊ณผ์ œ๋“ค์„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ํ™˜๊ฒฝ๊ณผ ํ˜„์‹ค์—์„œ ๋‘˜ ๋‹ค ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. 
์šฐ๋ฆฌ๋Š” ์—ฐ๊ตฌ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ๋‹ค์–‘ํ•œ ๋ถ„์„์„ ํ•˜์˜€์œผ๋ฉฐ, ํŠนํžˆ ์šฐ๋ฆฌ ์‹œ์Šคํ…œ์˜ ๊ฐ๊ฐ์˜ ์š”์†Œ๋“ค์˜ ์ค‘์š”์„ฑ์„ ๋ณด์—ฌ์ค„ ์ˆ˜ ์žˆ๋Š” ์‹คํ—˜๋“ค์„ ์ง„ํ–‰ํ•˜์˜€๋‹ค.A human motion-based interface fuses operator intuitions with the motor capabilities of robots, enabling adaptable robot operations in dangerous environments. However, the challenge of designing a motion interface for non-humanoid robots, such as quadrupeds or hexapods, is emerged from the different morphology and dynamics of a human controller, leading to an ambiguity of control strategy. We propose a novel control framework that allows human operators to execute various motor skills on a quadrupedal robot by their motion. Our system first retargets the captured human motion into the corresponding robot motion with the operator's intended semantics. The supervised learning and post-processing techniques allow this retargeting skill which is ambiguity-free and suitable for control policy training. To enable a robot to track a given retargeted motion, we then obtain the control policy from reinforcement learning that imitates the given reference motion with designed curriculums. We additionally enhance the system's performance by introducing a set of experts. Finally, we randomize the domain parameters to adapt the physically simulated motor skills to real-world tasks. We demonstrate that a human operator can perform various motor tasks using our system including standing, tilting, manipulating, sitting, walking, and steering on both physically simulated and real quadruped robots. 
We also analyze the performance of each system component through an ablation study.
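The motion-imitation module described above rewards the policy for tracking the retargeted reference motion. A common shape for such a reward (in the style of DeepMimic-like imitation learning, not necessarily the thesis's exact formulation; the weights here are illustrative) is an exponentiated tracking error:

```python
import numpy as np

def imitation_reward(q_robot, q_ref, w_pose=2.0, w_vel=0.1,
                     dq_robot=None, dq_ref=None):
    """Exponentiated pose (and optional velocity) tracking error
    against the retargeted reference motion. Returns 1.0 for a
    perfect match and decays toward 0 as tracking degrades."""
    r = np.exp(-w_pose * np.sum((q_robot - q_ref) ** 2))
    if dq_robot is not None and dq_ref is not None:
        r *= np.exp(-w_vel * np.sum((dq_robot - dq_ref) ** 2))
    return r

# Toy 12-DoF quadruped pose vectors.
perfect = imitation_reward(np.zeros(12), np.zeros(12))
off = imitation_reward(np.full(12, 0.3), np.zeros(12))
print(perfect, round(off, 4))
```

The exponential keeps the reward bounded in (0, 1], which plays well with the curriculum over tasks and difficulties: early, loose tracking still earns a usable learning signal.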

    Sketch-based skeleton-driven 2D animation and motion capture.

    Get PDF
    This research is concerned with the development of a set of novel sketch-based skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional way is for experienced animators to draw the key-frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied. 
Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash and stretch effects.
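The "Cartoon Animation Filter" idea can be sketched as a simple signal operation: subtracting a smoothed second derivative from a motion channel exaggerates it, producing a dip before a move (anticipation) and an overshoot after it (follow-through). The exact kernel and gain below are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

def cartoon_filter(x, amount=0.6, smooth=5):
    """Exaggerate a 1-D motion signal by subtracting a smoothed
    second derivative, adding anticipation before and follow-through
    after sudden changes."""
    kernel = np.ones(smooth) / smooth
    d2 = np.convolve(np.gradient(np.gradient(x)), kernel, mode="same")
    return x - amount * d2

t = np.linspace(0, 1, 100)
x = np.where(t < 0.5, 0.0, 1.0)          # a sudden step in position
y = cartoon_filter(x)
print(y.min() < 0.0, y.max() > 1.0)      # dips before, overshoots after
```

Applied per joint channel of the captured skeleton, the same operation yields the squash-and-stretch feel mentioned in the abstract without any keyframe editing.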

    Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargeting Human Commands to Feasible Robot Control References

    Get PDF
    This article presents a robust and reliable human-robot collaboration (HRC) framework for bimanual manipulation. We propose an optimal motion adaptation method to retarget arbitrary human commands to feasible robot pose references while maintaining payload stability. The framework comprises three modules: 1) a task-space sequential equilibrium and inverse kinematics optimization (task-space SEIKO) that retargets human commands and enforces feasibility constraints, 2) an admittance controller that facilitates compliant human-robot physical interaction, and 3) a low-level controller that improves stability during physical interaction. Experimental results show that the proposed framework successfully adapts infeasible and dangerous human commands into continuous motions within safe boundaries, and achieves stable grasping and maneuvering of large and heavy objects on a real dual-arm robot via teleoperation and physical interaction. Furthermore, the framework demonstrated its capability on a building-block assembly task and an industrial power-connector insertion task.
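The admittance-control module above turns forces from physical interaction into compliant reference motion. A minimal one-dimensional sketch of the standard admittance law m*a + d*v = f_ext (gains and time step are illustrative, not the article's values):

```python
def admittance_step(x, v, f_ext, m=2.0, d=20.0, dt=0.01):
    """One Euler step of a 1-DoF admittance law  m*a + d*v = f_ext:
    an external force from physical interaction is converted into a
    compliant reference motion for the low-level controller."""
    a = (f_ext - d * v) / m
    v += a * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(200):                 # push with a steady 5 N for 2 s
    x, v = admittance_step(x, v, 5.0)
print(x > 0.0, abs(v - 5.0 / 20.0) < 0.01)  # drifts along; v settles at f/d
```

The damping d sets the steady drift speed under a constant push, while m sets how abruptly the reference reacts; feeding the resulting pose into the SEIKO retargeting layer is what keeps the compliant motion inside the feasibility constraints.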

    A simple footskate removal method for virtual reality applications

    Get PDF
    Footskate is a common problem in interactive applications dealing with virtual character animations. It has proven difficult to fix without complex numerical methods, which require expert skills to implement, along with a fair amount of user interaction to correct a motion. Meanwhile, deformable bodies are increasingly used in virtual reality (VR) applications, allowing users to customize their avatar as they wish. This introduces the need to adapt motions without any help from a designer, since a typical user seldom has the skills required to drive the existing algorithms towards the right solution. In this paper, we present a simple method to remove footskate artifacts in VR applications. Unlike previous algorithms, our approach does not rely on the skeletal animation to perform the correction but rather on the skin. This ensures that the final foot planting really matches the virtual character's motion. The changes are applied to the root joint of the skeleton only, so the resulting animation stays as close as possible to the original. Finally, thanks to the simplicity of its formulation, the method can be quickly and easily added to existing frameworks.
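The root-only correction described above can be sketched as: while a foot point (on the skin, in the paper; a single tracked point here) is near the ground, translate the root so that the point stays where it first touched down. The plant threshold and toy data are illustrative assumptions:

```python
import numpy as np

def remove_footskate(root_pos, foot_pos, plant_thresh=0.02):
    """Root-only footskate fix: while the foot point is planted
    (height below plant_thresh), shift the root, and with it the
    whole skeleton, so the foot stays at its touchdown position."""
    root_pos = root_pos.copy()
    anchor = None
    for i in range(len(foot_pos)):
        if foot_pos[i][1] < plant_thresh:        # y is height
            if anchor is None:
                anchor = foot_pos[i].copy()      # touchdown position
            root_pos[i] += anchor - foot_pos[i]  # cancel the slide
        else:
            anchor = None                        # foot lifted: release
    return root_pos

# Toy 3-frame clip: the foot is on the ground but slides along x.
foot = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.1, 0.0, 0.0]])
root = np.zeros((3, 3))
fixed = remove_footskate(root, foot)
print(fixed[:, 0])   # root shifted back by the slide, roughly [0, -0.05, -0.1]
```

Because only the root translation changes, every joint keeps its original local animation, which is why the corrected clip stays close to the input motion.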