Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models
We introduce an approach to building a custom model from ready-made self-supervised models by associating them instead of training and fine-tuning. We demonstrate it with an example of a humanoid robot looking into a mirror and learning to detect the 3D pose of its own body from the image it perceives. To build our model, we first obtain features from the visual input and from the postures of the robot's body via models prepared before the robot's operation. Then, we map their corresponding latent spaces through the robot's sample-efficient self-exploration at the mirror. In this way, the robot builds the solicited 3D pose detector, whose quality is immediately perfect on the acquired samples rather than improving gradually. The mapping, which associates pairs of feature vectors, is implemented in the same way as the key-value mechanism of the well-known transformer models. Finally, deploying our model for imitation on a simulated robot allows us to study, tune, and systematically evaluate its hyperparameters without involving a human counterpart, advancing our previous research.

Comment: This work was funded (or co-funded) by the Horizon-Widera-2021 European Twinning project TERAIS G.A. n. 101079338. 32nd International Conference on Artificial Neural Networks, Heraklion, Greece, September 26-29, 2023. Citations: https://link.springer.com/chapter/10.1007/978-3-031-44207-0_39. Code: https://github.com/andylucny/learningImitation/tree/main/mirror. 12 pages, 3 figures, 0 tables.
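The key-value association can be pictured with a short sketch. The class below is a hypothetical illustration: the feature dimensions, the temperature, and the soft-attention retrieval are our assumptions about how such an associative mapping might look, not the paper's exact implementation.

import numpy as np

class KeyValueAssociator:
    """Hypothetical sketch: associate visual features (keys) with posture
    features (values) and retrieve via transformer-style soft attention."""

    def __init__(self, temperature: float = 0.1):
        self.keys = []    # visual feature vectors gathered at the mirror
        self.values = []  # posture feature vectors paired with them
        self.temperature = temperature

    def associate(self, visual_feat: np.ndarray, posture_feat: np.ndarray):
        """Store one self-exploration sample; no gradient training is needed,
        so the stored pairs are reproduced perfectly from the start."""
        self.keys.append(visual_feat)
        self.values.append(posture_feat)

    def query(self, visual_feat: np.ndarray) -> np.ndarray:
        """Attend over stored keys and return the value mixture, i.e. the
        estimated posture features for the observed image features."""
        K = np.stack(self.keys)           # (n, d_visual)
        V = np.stack(self.values)         # (n, d_posture)
        scores = K @ visual_feat / self.temperature
        w = np.exp(scores - scores.max())
        w /= w.sum()                      # softmax attention weights
        return w @ V                      # weighted average of stored postures

Storing a pair and querying with the same visual feature returns exactly the stored posture, which mirrors the paper's point that quality is immediately perfect on acquired samples.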
Robot Learning of Object Manipulation Task Actions from Human Demonstrations
Robot learning from demonstration is a method that enables robots to learn in a way similar to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is a high-level sequence of actions, as well as the low-level trajectories the robot must follow to perform the object manipulation task. Multiple human demonstrations are recorded, and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), a model of the demonstrated trajectories is learned. The learned trajectory is then generated from this model by Gaussian Mixture Regression (GMR). In the online working phase, the sequence of actions is identified, and experimental results show that the robot performs the learned task successfully.
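The GMM + GMR step lends itself to a compact sketch. The snippet below is illustrative only: the synthetic sine-wave demonstrations stand in for the DTW-aligned kinesthetic recordings, and the one-dimensional gmr helper is our own minimal formulation of Gaussian mixture regression, not the paper's code.

import numpy as np
from sklearn.mixture import GaussianMixture

# Aligned demonstrations: each row is a (time, position) sample.
t = np.linspace(0.0, 1.0, 100)
demos = [np.column_stack([t, np.sin(2 * np.pi * t) + 0.02 * np.random.randn(100)])
         for _ in range(5)]
data = np.vstack(demos)                        # pooled (time, position) samples

# Fit a joint GMM over (time, position).
gmm = GaussianMixture(n_components=5).fit(data)

def gmr(gmm: GaussianMixture, t_query: float) -> float:
    """Gaussian mixture regression: E[x | t] under the fitted joint GMM."""
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query time.
    h = np.array([p * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0])
                  / np.sqrt(2 * np.pi * c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()
    # Conditional mean of x given t for each component, mixed by h.
    x_cond = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
              for m, c in zip(means, covs)]
    return float(h @ np.array(x_cond))

reproduction = [gmr(gmm, tq) for tq in t]      # the generated learned trajectory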
A Generative Human-Robot Motion Retargeting Approach Using a Single RGBD Sensor
The goal of human-robot motion retargeting is to let a robot follow the movements performed by a human subject. In previous approaches, the human poses are typically precomputed by a human pose tracking system, after which explicit joint mapping strategies are specified to apply the estimated poses to a target robot. However, there is no generic mapping strategy for mapping human joints to robots with different kinds of configurations. In this paper, we present a novel motion retargeting approach that combines human pose estimation and the motion retargeting procedure in a unified generative framework without relying on any explicit mapping. First, a 3D parametric human-robot (HUMROB) model is proposed which has the same joint and stability configurations as the target robot while its shape conforms to the source human subject. The robot configurations, including its skeleton proportions, joint limits, and degrees of freedom (DoFs), are enforced in the HUMROB model and preserved during the tracking procedure. A single RGBD camera monitors the human pose, and the raw RGB and depth sequences are used as input. The HUMROB model is deformed to fit the input point cloud, from which the joint angles of the model are calculated and applied to the target robot for retargeting. In this way, instead of fitting each joint individually, the robot's joint angles are fitted globally so that the surface of the deformed model is as consistent as possible with the input point cloud. In the end, no explicit or predefined joint mapping strategies are needed. To demonstrate its effectiveness for human-robot motion retargeting, the approach is tested both in simulation and on real robots whose skeleton configurations and joint DoFs differ considerably from those of the source human subjects.
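The global fitting idea can be illustrated with a toy optimization. The 2-link planar chain, the chain_points and fit_cost helpers, and the synthetic point cloud below are all hypothetical stand-ins for the full HUMROB model and RGBD input; the sketch only shows how all joint angles can be fitted jointly under joint-limit bounds rather than mapped per joint.

import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.5, 0.4])                 # assumed link lengths
BOUNDS = [(-np.pi / 2, np.pi / 2)] * 2       # joint limits enforced as bounds

def chain_points(angles: np.ndarray, samples_per_link: int = 10) -> np.ndarray:
    """Sample surface points along a planar 2-link kinematic chain."""
    pts, origin, heading = [], np.zeros(2), 0.0
    for length, angle in zip(LINKS, angles):
        heading += angle
        tip = origin + length * np.array([np.cos(heading), np.sin(heading)])
        for s in np.linspace(0.0, 1.0, samples_per_link):
            pts.append(origin + s * (tip - origin))
        origin = tip
    return np.array(pts)

def fit_cost(angles: np.ndarray, cloud: np.ndarray) -> float:
    """Sum of nearest-point distances from the model samples to the cloud,
    so the whole pose is fitted globally against the observed points."""
    model = chain_points(angles)
    d = np.linalg.norm(model[:, None, :] - cloud[None, :, :], axis=2)
    return d.min(axis=1).sum()

# Synthetic "point cloud" observed from a ground-truth pose.
cloud = chain_points(np.array([0.6, -0.4])) + 0.01 * np.random.randn(20, 2)
result = minimize(fit_cost, x0=np.zeros(2), args=(cloud,), bounds=BOUNDS)
print("recovered joint angles:", result.x)

Because the cost couples all joints through the model surface, the recovered angles are consistent with the cloud as a whole, which is the property the paper attributes to its global fitting.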