
    A Generative Human-Robot Motion Retargeting Approach Using a Single RGBD Sensor

    The goal of human-robot motion retargeting is to let a robot follow the movements performed by a human subject. In previous approaches, the human poses are typically precomputed by a human pose tracking system, after which explicit joint mapping strategies are specified to apply the estimated poses to a target robot. However, there is no generic mapping strategy that can map human joints to robots with different kinds of configurations. In this paper, we present a novel motion retargeting approach that combines human pose estimation and the motion retargeting procedure in a unified generative framework without relying on any explicit mapping. First, a 3D parametric human-robot (HUMROB) model is proposed which has the same joint and stability configurations as the target robot while its shape conforms to the source human subject. The robot's configuration, including its skeleton proportions, joint limits, and degrees of freedom (DoFs), is enforced in the HUMROB model and preserved during the tracking procedure. A single RGBD camera monitors the human subject, and the raw RGB and depth sequences are used as input. The HUMROB model is deformed to fit the input point cloud, from which the joint angles of the model are calculated and applied to the target robot for retargeting. In this way, the robot's joint angles are fitted globally rather than individually for each joint, so that the surface of the deformed model is as consistent as possible with the input point cloud. In the end, no explicit or pre-defined joint mapping strategies are needed. To demonstrate its effectiveness for human-robot motion retargeting, the approach is tested both in simulation and on real robots whose skeleton configurations and joint DoFs differ considerably from those of the source human subjects.
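
    The global-fitting idea described above can be illustrated with a small optimization sketch. The Python snippet below is a minimal sketch, not the paper's HUMROB implementation: the two-link planar "robot", its surface sampling, the joint limits, and the least-squares solver are all illustrative assumptions. It only shows the core pattern of fitting all joint angles jointly, under joint limits, so that the deformed model surface stays close to an observed point cloud, after which the fitted angles could be sent to the robot.

```python
# Hedged sketch (not the paper's method): globally fit joint angles so that the
# deformed toy model's surface matches an observed point cloud, under joint limits.
import numpy as np
from scipy.optimize import least_squares

LINK_LENGTHS = [0.5, 0.4]                          # toy skeleton proportions
JOINT_LIMITS = np.deg2rad([[-90, 90], [0, 135]])   # toy joint limits (2 DoFs)

def deform_model(theta, samples_per_link=20):
    """Return surface sample points of a toy 2-link planar model for joint angles theta."""
    pts, origin, heading = [], np.zeros(2), 0.0
    for angle, length in zip(theta, LINK_LENGTHS):
        heading += angle                                   # serial-chain forward kinematics
        direction = np.array([np.cos(heading), np.sin(heading)])
        for t in np.linspace(0.0, 1.0, samples_per_link):
            pts.append(origin + t * length * direction)    # points along the link "surface"
        origin = origin + length * direction
    return np.asarray(pts)

def residuals(theta, point_cloud):
    """Distance from each model surface sample to its nearest observed point."""
    surface = deform_model(theta)
    d = np.linalg.norm(surface[:, None, :] - point_cloud[None, :, :], axis=-1)
    return d.min(axis=1)

def fit_joint_angles(point_cloud, theta0=None):
    """Fit all joint angles at once (globally), subject to the robot's joint limits."""
    theta0 = np.zeros(len(LINK_LENGTHS)) if theta0 is None else theta0
    result = least_squares(residuals, theta0, args=(point_cloud,),
                           bounds=(JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1]))
    return result.x                                        # angles to apply to the robot

if __name__ == "__main__":
    true_theta = np.deg2rad([30.0, 45.0])
    clean = deform_model(true_theta)
    cloud = clean + 0.005 * np.random.default_rng(0).standard_normal(clean.shape)
    print("recovered joint angles (deg):", np.rad2deg(fit_joint_angles(cloud)))
```

    Because the objective is the distance between the whole deformed surface and the point cloud, no per-joint human-to-robot correspondence is specified anywhere in the sketch, which mirrors the abstract's claim that no explicit mapping strategy is needed.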

    Correspondence-free online human motion retargeting

    We present a novel data-driven framework for unsupervised human motion retargeting which animates a target body shape with a source motion. This allows motions to be retargeted between different characters by animating a target subject with the motion of a source subject. Our method is correspondence-free, i.e. neither spatial correspondences between the source and target shapes nor temporal correspondences between different frames of the source motion are required. Our proposed method directly animates a target shape with arbitrary sequences of humans in motion, possibly captured using 4D acquisition platforms or consumer devices. Our framework takes into account a long-term temporal context of one second during retargeting while accounting for surface details. To achieve this, we take inspiration from two lines of existing work: skeletal motion retargeting, which leverages long-term temporal context at the cost of surface detail, and surface-based retargeting, which preserves surface details without considering long-term temporal context. We unify the advantages of these works by combining a learnt skinning field with a skeletal retargeting approach. During inference, our method runs online, i.e. the input can be processed serially, and retargeting is performed in a single forward pass per frame. Experiments show that including long-term temporal context during training improves the method's accuracy both in terms of the retargeted skeletal motion and of detail preservation. Furthermore, our method generalizes well to unobserved motions and body shapes. We demonstrate that the proposed framework achieves state-of-the-art results on two test datasets.
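
    The online, one-pass-per-frame behaviour described above can be illustrated with a generic skinning sketch. The Python snippet below is a minimal sketch, not the authors' network: the random skinning weights, rest-pose vertices, and identity joint transforms are stand-ins. In the paper, the weights would come from a learnt skinning field and the per-frame joint transforms from a skeletal retargeting module with long-term temporal context; here they only demonstrate how such outputs could drive a target surface frame by frame.

```python
# Hedged sketch (not the authors' architecture): per-vertex skinning weights plus
# retargeted per-frame joint transforms animate a target shape serially, online.
import numpy as np

def linear_blend_skinning(rest_vertices, weights, joint_transforms):
    """Deform rest-pose vertices with linear blend skinning.

    rest_vertices:    (V, 3) target shape in rest pose
    weights:          (V, J) per-vertex skinning weights (rows sum to 1)
    joint_transforms: (J, 4, 4) per-joint rigid transforms for the current frame
    """
    V = rest_vertices.shape[0]
    homo = np.concatenate([rest_vertices, np.ones((V, 1))], axis=1)   # (V, 4)
    per_joint = np.einsum('jab,vb->vja', joint_transforms, homo)      # each joint's effect
    blended = np.einsum('vj,vja->va', weights, per_joint)             # weight-blend per vertex
    return blended[:, :3]

def retarget_online(rest_vertices, weights, frame_stream):
    """Process source frames serially; one skinning pass per frame (online inference)."""
    for joint_transforms in frame_stream:      # e.g. output of a skeletal retargeting step
        yield linear_blend_skinning(rest_vertices, weights, joint_transforms)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, J, T = 500, 24, 30                      # vertices, joints, frames (toy sizes)
    rest = rng.standard_normal((V, 3))
    w = rng.random((V, J)); w /= w.sum(axis=1, keepdims=True)
    frames = [np.tile(np.eye(4), (J, 1, 1)) for _ in range(T)]   # identity "motion"
    for deformed in retarget_online(rest, w, frames):
        pass                                   # each frame is available as soon as produced
    print("last frame deformed vertices:", deformed.shape)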