
    Adaptive Whole-Body Manipulation in Human-to-Humanoid Multi-Contact Motion Retargeting

    Get PDF
    Submitted to Humanoids 2017. We propose a multi-robot quadratic program (QP) controller for retargeting a human's multi-contact loco-manipulation motions to a humanoid robot. Using this framework, the robot can track complex motions and automatically adapt to objects in the environment whose physical properties differ from those used when recording the human's reference motion. The whole-body multi-contact manipulation problem is formulated as a multi-robot QP that optimizes over the combined dynamics of the robot and any manipulated objects. The multi-robot QP maintains a dynamic partition of the robot's tracking links into fixed support contacts, manipulation contacts, and contact-free tracking links; the partition is recomputed and re-instantiated as QP constraints every time a contact event occurs in the human motion. We present experiments (bag retrieval, door opening, box lifting) using human motion data from an Xsens inertial motion capture system. We show in full-body dynamics simulation that the robot can perform difficult single-stance motions as well as multi-contact-stance motions (including hand supports), while adapting to objects of varying inertial properties.
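The dynamic contact partition described in this abstract can be sketched as a small re-partitioning step that fires on each contact event; the link names and the event interface below are hypothetical illustrations, not the paper's API:

```python
from enum import Enum

class ContactMode(Enum):
    FIXED_SUPPORT = "fixed_support"    # link anchored to the environment
    MANIPULATION = "manipulation"      # link in contact with a manipulated object
    FREE_TRACKING = "free_tracking"    # contact-free link tracking the human motion

def repartition(partition, event):
    """Update the link partition when a contact event occurs in the human motion.

    `partition` maps link name -> ContactMode; `event` is a (link, new_mode)
    pair extracted from the reference motion. The returned partition is what
    would be re-instantiated as constraints in the multi-robot QP.
    """
    link, new_mode = event
    updated = dict(partition)
    updated[link] = new_mode
    return updated

# Hypothetical example: the right hand switches from free tracking to a hand support.
partition = {
    "left_foot": ContactMode.FIXED_SUPPORT,
    "right_foot": ContactMode.FIXED_SUPPORT,
    "right_hand": ContactMode.FREE_TRACKING,
}
partition = repartition(partition, ("right_hand", ContactMode.FIXED_SUPPORT))
```

Each such re-partition changes which links contribute support-contact constraints versus tracking objectives in the next QP instantiation.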

    Development of a Heteromorphic Character Controller from Human Motion Using Deep Reinforcement Learning

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ปดํ“จํ„ฐ๊ณตํ•™๋ถ€, 2022. 8. ์„œ์ง„์šฑ.์‚ฌ๋žŒ์˜ ๋ชจ์…˜์„ ์ด์šฉํ•œ ๋กœ๋ด‡ ์ปจํŠธ๋กค ์ธํ„ฐํŽ˜์ด์Šค๋Š” ์‚ฌ์šฉ์ž์˜ ์ง๊ด€๊ณผ ๋กœ๋ด‡์˜ ๋ชจํ„ฐ ๋Šฅ๋ ฅ์„ ํ•ฉํ•˜์—ฌ ์œ„ํ—˜ํ•œ ํ™˜๊ฒฝ์—์„œ ๋กœ๋ด‡์˜ ์œ ์—ฐํ•œ ์ž‘๋™์„ ๋งŒ๋“ค์–ด๋‚ธ๋‹ค. ํ•˜์ง€๋งŒ, ํœด๋จธ๋…ธ์ด๋“œ ์™ธ์˜ ์‚ฌ์กฑ๋ณดํ–‰ ๋กœ๋ด‡์ด๋‚˜ ์œก์กฑ๋ณดํ–‰ ๋กœ๋ด‡์„ ์œ„ํ•œ ๋ชจ์…˜ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๋””์ž์ธ ํ•˜๋Š” ๊ฒƒ์€ ์‰ฌ์šด์ผ์ด ์•„๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ์‚ฌ๋žŒ๊ณผ ๋กœ๋ด‡ ์‚ฌ์ด์˜ ํ˜•ํƒœ ์ฐจ์ด๋กœ ์˜ค๋Š” ๋‹ค์ด๋‚˜๋ฏน์Šค ์ฐจ์ด์™€ ์ œ์–ด ์ „๋žต์ด ํฌ๊ฒŒ ์ฐจ์ด๋‚˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์šฐ๋ฆฌ๋Š” ์‚ฌ๋žŒ ์‚ฌ์šฉ์ž๊ฐ€ ์›€์ง์ž„์„ ํ†ตํ•˜์—ฌ ์‚ฌ์กฑ๋ณดํ–‰ ๋กœ๋ด‡์—์„œ ๋ถ€๋“œ๋Ÿฝ๊ฒŒ ์—ฌ๋Ÿฌ ๊ณผ์ œ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๊ฒŒ๋” ํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ชจ์…˜ ์ œ์–ด ์‹œ์Šคํ…œ์„ ์ œ์•ˆํ•œ๋‹ค. ์šฐ๋ฆฌ๋Š” ์šฐ์„  ์บก์ณํ•œ ์‚ฌ๋žŒ์˜ ๋ชจ์…˜์„ ์ƒ์‘ํ•˜๋Š” ๋กœ๋ด‡์˜ ๋ชจ์…˜์œผ๋กœ ๋ฆฌํƒ€๊ฒŸ ์‹œํ‚จ๋‹ค. ์ด๋•Œ ์ƒ์‘ํ•˜๋Š” ๋กœ๋ด‡์˜ ๋ชจ์…˜์€ ์œ ์ €๊ฐ€ ์˜๋„ํ•œ ์˜๋ฏธ๋ฅผ ๋‚ดํฌํ•˜๊ฒŒ ๋˜๋ฉฐ, ์šฐ๋ฆฌ๋Š” ์ด๋ฅผ ์ง€๋„ํ•™์Šต ๋ฐฉ๋ฒ•๊ณผ ํ›„์ฒ˜๋ฆฌ ๊ธฐ์ˆ ์„ ์ด์šฉํ•˜์—ฌ ๊ฐ€๋Šฅ์ผ€ ํ•˜์˜€๋‹ค. ๊ทธ ๋’ค ์šฐ๋ฆฌ๋Š” ๋ชจ์…˜์„ ๋ชจ์‚ฌํ•˜๋Š” ํ•™์Šต์„ ์ปค๋ฆฌํ˜๋Ÿผ ํ•™์Šต๊ณผ ๋ณ‘ํ–‰ํ•˜์—ฌ ์ฃผ์–ด์ง„ ๋ฆฌํƒ€๊ฒŸ๋œ ์ฐธ์กฐ ๋ชจ์…˜์„ ๋”ฐ๋ผ๊ฐ€๋Š” ์ œ์–ด ์ •์ฑ…์„ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์šฐ๋ฆฌ๋Š” "์ „๋ฌธ๊ฐ€ ์ง‘๋‹จ"์„ ํ•™์Šตํ•จ์œผ๋กœ ๋ชจ์…˜ ๋ฆฌํƒ€๊ฒŒํŒ… ๋ชจ๋“ˆ๊ณผ ๋ชจ์…˜ ๋ชจ์‚ฌ ๋ชจ๋“ˆ์˜ ์„ฑ๋Šฅ์„ ํฌ๊ฒŒ ์ฆ๊ฐ€์‹œ์ผฐ๋‹ค. ๊ฒฐ๊ณผ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ, ์šฐ๋ฆฌ์˜ ์‹œ์Šคํ…œ์„ ์ด์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž๊ฐ€ ์‚ฌ์กฑ๋ณดํ–‰ ๋กœ๋ด‡์˜ ์„œ์žˆ๊ธฐ, ์•‰๊ธฐ, ๊ธฐ์šธ์ด๊ธฐ, ํŒ” ๋ป—๊ธฐ, ๊ฑท๊ธฐ, ๋Œ๊ธฐ์™€ ๊ฐ™์€ ๋‹ค์–‘ํ•œ ๋ชจํ„ฐ ๊ณผ์ œ๋“ค์„ ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ํ™˜๊ฒฝ๊ณผ ํ˜„์‹ค์—์„œ ๋‘˜ ๋‹ค ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. 
์šฐ๋ฆฌ๋Š” ์—ฐ๊ตฌ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ๋‹ค์–‘ํ•œ ๋ถ„์„์„ ํ•˜์˜€์œผ๋ฉฐ, ํŠนํžˆ ์šฐ๋ฆฌ ์‹œ์Šคํ…œ์˜ ๊ฐ๊ฐ์˜ ์š”์†Œ๋“ค์˜ ์ค‘์š”์„ฑ์„ ๋ณด์—ฌ์ค„ ์ˆ˜ ์žˆ๋Š” ์‹คํ—˜๋“ค์„ ์ง„ํ–‰ํ•˜์˜€๋‹ค.A human motion-based interface fuses operator intuitions with the motor capabilities of robots, enabling adaptable robot operations in dangerous environments. However, the challenge of designing a motion interface for non-humanoid robots, such as quadrupeds or hexapods, is emerged from the different morphology and dynamics of a human controller, leading to an ambiguity of control strategy. We propose a novel control framework that allows human operators to execute various motor skills on a quadrupedal robot by their motion. Our system first retargets the captured human motion into the corresponding robot motion with the operator's intended semantics. The supervised learning and post-processing techniques allow this retargeting skill which is ambiguity-free and suitable for control policy training. To enable a robot to track a given retargeted motion, we then obtain the control policy from reinforcement learning that imitates the given reference motion with designed curriculums. We additionally enhance the system's performance by introducing a set of experts. Finally, we randomize the domain parameters to adapt the physically simulated motor skills to real-world tasks. We demonstrate that a human operator can perform various motor tasks using our system including standing, tilting, manipulating, sitting, walking, and steering on both physically simulated and real quadruped robots. 
We also analyze the performance of each system component ablation study.1 Introduction 1 2 Related Work 5 2.1 Legged Robot Control 5 2.2 Motion Imitation 6 2.3 Motion-based Control 7 3 Overview 9 4 Motion Retargeting Module 11 4.1 Motion Retargeting Network 12 4.2 Post-processing for Consistency 14 4.3 A Set of Experts for Multi-task Support 15 5 Motion Imitation Module 17 5.1 Background: Reinforcement Learning 18 5.2 Formulation of Motion Imitation 18 5.3 Curriculum Learning over Tasks and Difficulties 21 5.4 Hierarchical Control with States 21 5.5 Domain Randomization 22 6 Results and Analysis 23 6.1 Experimental Setup 23 6.2 Motion Performance 24 6.3 Analysis 28 6.4 Comparison to Other Methods 31 7 Conclusion And Future Work 32 Bibliography 34 Abstract (In Korean) 44 ๊ฐ์‚ฌ์˜ ๊ธ€ 45์„
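The motion-imitation objective in systems of this kind is commonly an exponentiated tracking error between the policy's pose and the retargeted reference; a minimal sketch, where the weighting constant is illustrative rather than the thesis's value:

```python
import numpy as np

def imitation_reward(q, q_ref, w_pose=2.0):
    """Reward for tracking a retargeted reference pose.

    q, q_ref: joint-position vectors of the robot and of the reference motion.
    Returns a value in (0, 1], maximal when the pose matches exactly.
    """
    err = np.sum((np.asarray(q) - np.asarray(q_ref)) ** 2)
    return float(np.exp(-w_pose * err))

perfect = imitation_reward([0.1, -0.3], [0.1, -0.3])  # exact tracking
off = imitation_reward([0.5, -0.3], [0.1, -0.3])      # tracking error lowers the reward
```

Curriculum learning then orders tasks and difficulties so the policy masters easy references before harder ones.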

    Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control

    Get PDF
    As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains like elderly care, collaborative manufacturing, and collaborative manipulation are considered the need of the hour, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will consist of a blended workforce in which human and robot partners work collaboratively in a symbiotic relationship. This thesis addresses some of the research challenges in enabling human-robot collaboration. In particular, holistic perception of a human partner, to continuously communicate their intentions and needs to a robot partner in real time, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. Conversely, leveraging assistance from a human partner can lead to improved human-robot collaboration. In this direction, we methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies that endow robots with the capacity to meaningfully engage in a collaborative task.

    Multicontact Motion Retargeting Using Whole-Body Optimization of Full Kinematics and Sequential Force Equilibrium

    Get PDF
    This article presents a multicontact motion adaptation framework that enables teleoperation of high degree-of-freedom robots, such as quadrupeds and humanoids, for loco-manipulation tasks in multicontact settings. Our algorithms optimize whole-body configurations and formulate the retargeting of multicontact motions as sequential quadratic programming, which is robust and stable near the edges of the feasibility constraints. The framework allows real-time operation of the robot and reduces the operator's cognitive load, because infeasible commands are automatically adapted into physically stable and viable motions on the robot. Results in full-dynamics simulations demonstrate the effectiveness of interactively teleoperating different legged robots and generating rich multicontact movements. We evaluated the computational efficiency of the proposed algorithms, and further validated and analyzed multicontact loco-manipulation tasks on humanoid and quadruped robots through reaching, active pushing, and various traversals of uneven terrain.
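The sequential force equilibrium that this framework optimizes rests on the quasi-static condition that, at each posture, contact forces balance gravity. A minimal residual check, with hypothetical point contacts and values (the article solves for the forces inside an SQP rather than checking them after the fact):

```python
import numpy as np

def equilibrium_residual(contact_forces, mass, g=9.81):
    """Net force on the robot under gravity and a set of contact forces.

    contact_forces: list of 3-D force vectors applied at the contacts.
    A feasible quasi-static posture drives this residual toward zero.
    """
    gravity = np.array([0.0, 0.0, -mass * g])
    return np.sum(np.asarray(contact_forces, dtype=float), axis=0) + gravity

# Two contacts each carrying half the weight of a 50 kg robot:
res = equilibrium_residual([[0, 0, 245.25], [0, 0, 245.25]], mass=50.0)
```

The full formulation also enforces friction cones and torque equilibrium, which this sketch omits.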

    Generating Humanoid Multi-Contact through Feasibility Visualization

    Full text link
    We present a feasibility-driven teleoperation framework designed to generate humanoid multi-contact maneuvers for use in unstructured environments. Our framework supports motions with arbitrary contact modes and postures. The operator configures a pre-execution preview robot through contact points and kinematic tasks. The preview robot's quasi-static feasibility is estimated quickly by checking contact stability and collisions along an interpolated trajectory. A visualization of the Center of Mass (CoM) stability margin, based on friction and actuation constraints, is displayed, and can be previewed when the operator chooses to add or remove contacts. Contact points can be placed anywhere on a mesh approximation of the robot surface, enabling motions with knee or forearm contacts. We demonstrate our approach in simulation and on hardware with a NASA Valkyrie humanoid, focusing on multi-contact trajectories that are challenging to generate autonomously or through alternative teleoperation approaches.
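In the friction-only planar case, the CoM stability margin visualized here reduces to the distance from the projected CoM to the boundary of the support region. A simplified sketch for an axis-aligned rectangular region (the actual method also accounts for actuation constraints and general contact geometry):

```python
def com_margin(com_xy, x_min, x_max, y_min, y_max):
    """Signed distance from the projected CoM to the edge of a rectangular
    support region: positive inside (stable), negative outside."""
    x, y = com_xy
    return min(x - x_min, x_max - x, y - y_min, y_max - y)

# CoM centered in a 0.4 m x 0.3 m support region:
margin = com_margin((0.2, 0.15), 0.0, 0.4, 0.0, 0.3)
```

Displaying this margin lets the operator see how a candidate contact addition or removal would shrink or grow the stable CoM region before committing to it.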

    Unsupervised human-to-robot motion retargeting via expressive latent space

    Full text link
    This paper introduces a novel approach for human-to-robot motion retargeting that enables robots to mimic human motion precisely while preserving its semantics. We propose a deep learning method for direct translation from human to robot motion. Our method does not require annotated, paired human-to-robot motion data, which reduces the effort of adopting new robots. We first propose a cross-domain similarity metric to compare poses from the two domains (human and robot). Our method then constructs a shared latent space via contrastive learning and decodes latent representations into robot motion control commands. The learned latent space is expressive: it captures motions precisely and allows direct motion control in the latent space. We show how to generate in-between motion through simple linear interpolation in the latent space between two projected human poses. Additionally, we conduct a comprehensive evaluation of robot control from diverse input modalities, such as text, RGB video, and key poses, which makes robot control easier for users of all backgrounds. Finally, we compare our model with existing works and demonstrate, quantitatively and qualitatively, the effectiveness of our approach, enhancing natural human-robot communication and fostering trust in integrating robots into daily life.
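The contrastive construction of a shared latent space pulls paired human and robot pose embeddings together while pushing mismatched pairs apart. A minimal NumPy sketch of an InfoNCE-style loss over a batch of embeddings (the paper's actual encoder, metric, and loss are learned end-to-end and differ in detail):

```python
import numpy as np

def info_nce(z_human, z_robot, temperature=0.1):
    """Contrastive loss: row i of z_human should match row i of z_robot.

    Both inputs are (batch, dim) arrays of embeddings; matched pairs sit on
    the diagonal of the pairwise similarity matrix.
    """
    z_h = z_human / np.linalg.norm(z_human, axis=1, keepdims=True)
    z_r = z_robot / np.linalg.norm(z_robot, axis=1, keepdims=True)
    logits = z_h @ z_r.T / temperature           # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # cross-entropy on matched pairs

# The loss is lower when matched pairs are more similar than mismatched ones:
aligned = info_nce(np.eye(4), np.eye(4))
shuffled = info_nce(np.eye(4), np.eye(4)[::-1])
```

Once trained this way, nearby latent points decode to similar robot motions, which is what makes linear interpolation between two projected human poses yield plausible in-between motion.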

    Methods to improve the coping capacities of whole-body controllers for humanoid robots

    Get PDF
    Current applications for humanoid robotics require autonomy in environments specifically adapted to humans, and safe coexistence with people. Whole-body control is promising in this sense, having been shown to successfully achieve locomotion and manipulation tasks. However, robustness remains an issue: whole-body controllers can still hardly cope with unexpected disturbances, changes in working conditions, or performing a variety of tasks without human intervention. In this thesis, we explore how whole-body control approaches can be designed to address these issues. Contributions have been developed along three main axes: joint limit avoidance, automatic parameter tuning, and generalizing whole-body motions achieved by a controller. We first establish a whole-body torque controller for the iCub, based on the stack-of-tasks approach and proposed feedback control laws in SE(3). From there, we develop a novel, theoretically guaranteed joint limit avoidance technique for torque control, through a parametrization of the feasible joint space. This technique allows the robot to remain compliant while resisting external perturbations that push joints closer to their limits, as demonstrated in experiments both in simulation and on the real robot. We then focus on automatically tuning the controller's parameters in order to improve its behavior across different situations. We show that our approach to learning task priorities, combining domain randomization with carefully selected fitness functions, allows the successful transfer of results between platforms subjected to different working conditions. Following these results, we propose a controller that allows generic, complex whole-body motions through real-time teleoperation. This approach is verified on the robot by following generic movements of the teleoperator while in double support, as well as by following the teleoperator's upper-body movements while walking, with footsteps adapted from the teleoperator's footsteps. The approaches proposed in this thesis therefore improve the capability of whole-body controllers to cope with external disturbances, different working conditions, and generic whole-body motions.

    Learning-based methods for planning and control of humanoid robots

    Get PDF
    Humans and robots are increasingly likely to coexist. The anthropomorphic nature of humanoid robots facilitates physical human-robot interaction and makes social human-robot interaction more natural. It also makes humanoids ideal candidates for many applications involving tasks and environments designed for humans. Whatever the application, a ubiquitous requirement for the humanoid is to possess proper locomotion skills. Despite long-lasting research, humanoid locomotion is still far from being a trivial task. A common approach to humanoid locomotion decomposes its complexity by means of a model-based hierarchical control architecture; to cope with computational constraints, simplified models of the humanoid are employed in some of the architectural layers. At the same time, the redundancy of the humanoid with respect to the locomotion task, together with the closeness of that task to human locomotion, suggests a data-driven approach that learns it directly from experience. This thesis investigates the application of learning-based techniques to the planning and control of humanoid locomotion. In particular, both deep reinforcement learning and deep supervised learning are used to address humanoid locomotion tasks in a crescendo of complexity. First, we employ deep reinforcement learning to study the spontaneous emergence of balancing and push recovery strategies, essential prerequisites for more complex locomotion tasks. Then, using motion capture data collected from human subjects, we employ deep supervised learning to shape the robot's walking trajectories towards improved human-likeness. The proposed approaches are validated on real and simulated humanoid robots, specifically on two versions of the iCub humanoid: iCub v2.7 and iCub v3.

    Dynamic Mobile Manipulation via Whole-Body Bilateral Teleoperation of a Wheeled Humanoid

    Full text link
    Humanoid robots have the potential to help human workers with physically demanding manipulation tasks, such as moving large boxes within warehouses. We define such tasks as Dynamic Mobile Manipulation (DMM). This paper presents a framework for DMM via whole-body teleoperation, built upon three key contributions. First, we propose a teleoperation framework employing a Human Machine Interface (HMI) and a bi-wheeled humanoid, SATYRR. Second, we introduce a dynamic locomotion mapping, utilizing human-robot reduced-order models, and a kinematic retargeting strategy for manipulation tasks. We also discuss the role of whole-body haptic feedback for wheeled humanoid control. Finally, the system's effectiveness and the mappings for DMM are validated through loco-manipulation experiments and heavy box pushing tasks. We show two forms of DMM: grasping a target moving at an average speed of 0.4 m/s, and pushing boxes weighing up to 105% of the robot's weight. By simultaneously adjusting the robot's pitch and using its arms, the pilot applies larger contact forces and moves a heavy box at a constant velocity of 0.2 m/s.
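One simple way to realize a dynamic locomotion mapping of this kind is to command robot velocity from the pilot's sagittal CoM offset measured by the HMI; the gain and deadband below are hypothetical illustrations, not SATYRR's actual parameters:

```python
import math

def locomotion_command(human_com_offset, k_v=2.0, deadband=0.02):
    """Map the pilot's sagittal CoM offset (m) on the HMI to a robot
    velocity command (m/s), with a small deadband around neutral stance."""
    if abs(human_com_offset) < deadband:
        return 0.0
    return k_v * (human_com_offset - math.copysign(deadband, human_com_offset))

# Leaning 12 cm forward past a 2 cm deadband commands forward motion:
v = locomotion_command(0.12)
```

Closing the loop with whole-body haptic feedback lets the pilot feel the reaction forces this command produces, e.g. when pushing a heavy box.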
    • โ€ฆ
    corecore