
    Prediction of Intention during Interaction with iCub with Probabilistic Movement Primitives

    This article describes our open-source software for predicting the intention of a user physically interacting with the humanoid robot iCub. Our goal is to allow the robot to infer the intention of the human partner during collaboration, by predicting the future intended trajectory: this capability is critical to design anticipatory behaviors that are crucial in human–robot collaborative scenarios, such as in co-manipulation, cooperative assembly, or transportation. We propose an approach to endow the iCub with basic capabilities of intention recognition, based on Probabilistic Movement Primitives (ProMPs), a versatile method for representing, generalizing, and reproducing complex motor skills. The robot learns a set of motion primitives from several demonstrations, provided by the human via physical interaction. During training, we model the collaborative scenario using human demonstrations. During the reproduction of the collaborative task, we use the acquired knowledge to recognize the intention of the human partner. Using a few early observations of the state of the robot, we can not only infer the intention of the partner but also complete the movement, even if the user breaks the physical interaction with the robot. We evaluate our approach in simulation and on the real iCub. In simulation, the iCub is driven by the user using the Geomagic Touch haptic device. In the real robot experiment, we directly interact with the iCub by grabbing and manually guiding the robot’s arm. We realize two experiments on the real robot: one with simple reaching trajectories, and one inspired by collaborative object sorting. The software implementing our approach is open source and available on the GitHub platform. In addition, we provide tutorials and videos.
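
    The core mechanism behind this kind of intention prediction is ProMP conditioning: a Gaussian distribution over trajectory weights is learned from demonstrations and then conditioned on the first few observed samples to complete the movement. Below is a minimal sketch of that idea; the function names, radial-basis features, and toy data are illustrative assumptions, not the released software's API.

```python
# Minimal ProMP sketch (assumed API, not the authors' released code):
# learn a weight distribution from demonstrations, then condition it
# on a few early observations to predict the rest of the trajectory.
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized Gaussian basis functions over phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=10, reg=1e-6):
    """Fit per-demo weights by ridge regression, then pool mean/covariance."""
    t = np.linspace(0, 1, demos.shape[1])
    Phi = rbf_features(t, n_basis)                      # (T, K)
    W = np.array([np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_basis),
                                  Phi.T @ y) for y in demos])
    return W.mean(axis=0), np.cov(W.T) + reg * np.eye(n_basis)

def condition(mu, Sigma, t_obs, y_obs, noise=1e-4):
    """Gaussian conditioning of the weight distribution on observations."""
    Phi = rbf_features(np.asarray(t_obs))
    S = Phi @ Sigma @ Phi.T + noise * np.eye(len(t_obs))
    K = Sigma @ Phi.T @ np.linalg.solve(S, np.eye(len(t_obs)))
    mu_new = mu + K @ (np.asarray(y_obs) - Phi @ mu)
    return mu_new, Sigma - K @ Phi @ Sigma

# Toy usage: noisy sine demos, observe the first 20% and predict the rest.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((20, 100))
mu, Sigma = fit_promp(demos)
mu_c, _ = condition(mu, Sigma, t[:20], demos[0, :20])
prediction = rbf_features(t) @ mu_c                     # completed movement
```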

    Multi-Modal Prediction Using PRObabilistic Learning of Movement Primitives

    This paper proposes a method for multi-modal prediction of intention based on a probabilistic description of movement primitives and goals. We target dyadic interaction between a human and a robot in a collaborative scenario. The robot acquires multi-modal models of collaborative action primitives containing gaze cues from the human partner and kinetic information about the manipulation primitives of its arm. We show that if the partner guides the robot with the gaze cue, the robot recognizes the intended action primitive even in the case of ambiguous actions. Furthermore, this prior knowledge acquired by gaze greatly improves the prediction of the future intended trajectory during a physical interaction. Results with the humanoid iCub are presented and discussed.
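
    A plausible reading of the fusion described above is Bayesian: the gaze cue supplies a prior over candidate action primitives, and the early observed trajectory supplies a likelihood under each primitive's ProMP. The sketch below illustrates this combination; the data structures, toy primitives, and numbers are assumptions, not the paper's implementation.

```python
# Hedged sketch of multi-modal recognition: a gaze-derived prior over
# action primitives is fused with the likelihood of the observed early
# trajectory under each candidate ProMP. All values are toy examples.
import numpy as np
from scipy.stats import multivariate_normal

def primitive_posterior(gaze_prior, promps, t_obs, y_obs, noise=1e-3):
    """p(k | gaze, y_obs) proportional to p(gaze | k) * p(y_obs | ProMP_k)."""
    log_post = np.log(gaze_prior)
    for k, (mu_w, Sigma_w, features) in enumerate(promps):
        Phi = features(t_obs)
        mean = Phi @ mu_w
        cov = Phi @ Sigma_w @ Phi.T + noise * np.eye(len(t_obs))
        log_post[k] += multivariate_normal.logpdf(y_obs, mean, cov)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Toy usage: two hypothetical primitives over a shared 5-basis representation.
def feats(t, n=5, w=0.05):
    c = np.linspace(0, 1, n)
    p = np.exp(-(np.asarray(t)[:, None] - c[None, :]) ** 2 / (2 * w))
    return p / p.sum(axis=1, keepdims=True)

promps = [(np.zeros(5), 0.1 * np.eye(5), feats),   # e.g. "reach left"
          (np.ones(5), 0.1 * np.eye(5), feats)]    # e.g. "reach right"
gaze_prior = np.array([0.8, 0.2])                  # gaze favors the first
t_obs = np.linspace(0, 0.2, 5)                     # early phase samples
y_obs = 0.1 * np.ones(5)                           # early observations
print(primitive_posterior(gaze_prior, promps, t_obs, y_obs))
```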

    Towards the Grounding of Abstract Categories in Cognitive Robots

    The grounding of language in humanoid robots is a fundamental problem, especially in social scenarios that involve the interaction of robots with human beings. Indeed, natural language represents the most natural interface for humans to interact and exchange information about concrete entities like KNIFE and HAMMER and abstract concepts such as MAKE and USE. This research domain is very important not only for the advances that it can produce in the design of human-robot communication systems, but also for the implications that it can have for cognitive science. Abstract words are used in daily conversations among people to describe events and situations that occur in the environment. Many scholars have suggested that the distinction between concrete and abstract words is a continuum along which all entities vary in their level of abstractness. The work presented herein aimed to ground abstract concepts, similarly to concrete ones, in perception and action systems. This made it possible to investigate how different behavioural and cognitive capabilities can be integrated in a humanoid robot in order to bootstrap the development of higher-order skills such as the acquisition of abstract words. To this end, three neuro-robotics models were implemented. The first neuro-robotics experiment consisted of training a humanoid robot to perform a set of motor primitives (e.g. PUSH, PULL, etc.) that, hierarchically combined, led to the acquisition of higher-order words (e.g. ACCEPT, REJECT). The implementation of this model, based on feed-forward artificial neural networks, permitted the assessment of the training methodology adopted for the grounding of language in humanoid robots. In the second experiment, the architecture used for the first study was reimplemented using recurrent artificial neural networks, which enabled the temporal specification of the action primitives to be executed by the robot. This increased the number of action combinations that can be taught to the robot for the generation of more complex movements. For the third experiment, a model based on recurrent neural networks that integrated multi-modal inputs (i.e. language, vision and proprioception) was implemented for the grounding of abstract action words (e.g. USE, MAKE). The abstract representations of actions ("one-hot" encoding) used in the other two experiments were replaced with joint values recorded from the iCub robot's sensors. Experimental results showed that motor primitives have different activation patterns according to the action sequence in which they are embedded. Furthermore, the simulations suggested that the acquisition of concepts related to abstract action words requires the reactivation of internal representations similar to those activated during the acquisition of the basic concepts, directly grounded in perceptual and sensorimotor knowledge, contained in the hierarchical structure of the words used to ground the abstract action words. This study was financed by the EU project RobotDoC (235065) from the Seventh Framework Programme (FP7), Marie Curie Actions Initial Training Network.
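
    To make the third experiment's setup concrete, the sketch below shows one plausible shape of a recurrent network that integrates language (one-hot), vision, and proprioception inputs and predicts joint values. Layer sizes, dimensions, and names are illustrative assumptions, not the architecture used in the thesis.

```python
# Illustrative sketch (not the thesis code): a simple recurrent network
# integrating language, vision, and proprioception, in the spirit of the
# third experiment. All sizes and names are assumptions.
import torch
import torch.nn as nn

class MultimodalGroundingRNN(nn.Module):
    def __init__(self, n_words=10, vision_dim=16, proprio_dim=7, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(n_words + vision_dim + proprio_dim, hidden,
                          batch_first=True)
        self.readout = nn.Linear(hidden, proprio_dim)   # predict next joints

    def forward(self, word, vision, proprio):
        x = torch.cat([word, vision, proprio], dim=-1)  # (B, T, D)
        h, _ = self.rnn(x)
        return self.readout(h)

# Toy usage: a batch of 4 sequences of length 20.
B, T = 4, 20
net = MultimodalGroundingRNN()
word = torch.zeros(B, T, 10); word[..., 3] = 1.0        # one-hot word, e.g. USE
vision = torch.randn(B, T, 16)                          # visual features
proprio = torch.randn(B, T, 7)                          # arm joint values
next_joints = net(word, vision, proprio)                # (B, T, 7)
```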

    Whole-Body Teleoperation of Humanoid Robots

    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial to send and control robots in environments that are dangerous or inaccessible for humans (e.g., disaster response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of these parameters.
    In this thesis, we proposed a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot's cameras to provide visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, by resorting to a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting into the teleoperation system, which allows the user to switch between the two modes.
    A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which the humanoid robot executes commands before it actually receives them, so that the visual feedback appears synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.

    Sensorimotor representation learning for an "active self" in robots: A model survey

    Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of underlying mechanisms of these abilities: The sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self; and we compare these models with the human counterparts. Finally, we analyse what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration

    Whole-body multi-contact motion in humans and humanoids: Advances of the CoDyCo European project

    Traditional industrial applications involve robots with limited mobility. Consequently, interaction (e.g. manipulation) was treated separately from whole-body posture (e.g. balancing), assuming the robot to be firmly connected to the ground. Foreseen applications involve robots with augmented autonomy and physical mobility. Within this novel context, physical interaction influences stability and balance. To allow robots to surpass barriers between interaction and posture control, forthcoming robotic research needs to investigate the principles governing whole-body motion and coordination with contact dynamics. There is a need to investigate the principles of motion and coordination of physical interaction, including the aspects related to unpredictability. Recent developments in compliant actuation and touch sensing allow safe and robust physical interaction under unexpected contact, including contact with humans. The next advancement for cognitive robots, however, is the ability not only to cope with unpredictable contact, but also to exploit predictable contact in ways that will assist in goal achievement. Last but not least, theoretical results need to be validated in real-world scenarios with humanoid robots engaged in whole-body goal-directed tasks. Robots should be capable of exploiting rigid supportive contacts, learning to compensate for compliant contacts, and utilising assistive physical interaction from humans. This paper presents the state of the art in these domains as well as some recent advances made within the framework of the CoDyCo European project

    Learning Intention Aware Online Adaptation of Movement Primitives

    In order to operate close to non-experts, future robots require both an intuitive form of instruction accessible to laymen and the ability to react appropriately to a human co-worker. Instruction by imitation learning with probabilistic movement primitives (ProMPs) allows capturing tasks by learning robot trajectories from demonstrations, including the motion variability. However, appropriate responses to human co-workers during the execution of the learned movements are crucial for fluent task execution, perceived safety, and subjective comfort. To facilitate such appropriate responsive behaviors in human-robot interaction, the robot needs to be able to react to its human workspace co-inhabitant online during the execution of the ProMPs. Thus, we learn a goal-based intention prediction model from human motions. Using this probabilistic model, we introduce intention-aware online adaptation of ProMPs. We compare two novel approaches: first, online spatial deformation, which avoids collisions by changing the shape of the ProMP trajectories dynamically during execution while staying close to the demonstrated motions; and second, online temporal scaling, which adapts the velocity profile of a ProMP to avoid time-dependent collisions. We evaluate both approaches in experiments with non-expert users. The subjects reported a higher level of perceived safety and felt less disturbed during intention-aware adaptation, in particular during spatial deformation, compared to the non-adaptive behavior of the robot
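
    Online temporal scaling, the second approach above, can be pictured as modulating the speed at which the ProMP's phase variable advances based on the predicted human motion. The sketch below is a toy illustration of that idea; the distance-based risk measure, safety margin, and names are assumptions, not the paper's method.

```python
# Minimal sketch of the online temporal-scaling idea: slow the ProMP's
# phase evolution when the predicted human position comes close to the
# robot's upcoming waypoint. Risk measure and gains are assumptions.
import numpy as np

def scaled_phase_step(phase, dt, robot_traj, human_pred, d_safe=0.3):
    """Advance the movement phase, scaled by proximity to the human."""
    idx = min(int(phase * (len(robot_traj) - 1)), len(robot_traj) - 1)
    dist = np.linalg.norm(robot_traj[idx] - human_pred)
    scale = np.clip(dist / d_safe, 0.0, 1.0)   # 0 = stop, 1 = nominal speed
    return min(phase + scale * dt, 1.0)

# Toy usage: a straight-line robot trajectory and a static human prediction;
# the robot slows down near the predicted human hand, then speeds up again.
traj = np.linspace([0.0, 0.0], [1.0, 0.0], 100)   # waypoints in the plane
human = np.array([0.5, 0.05])                      # predicted hand position
phase, steps = 0.0, 0
while phase < 1.0 and steps < 10000:
    phase = scaled_phase_step(phase, 0.01, traj, human)
    steps += 1   # the robot would execute traj at the current phase here
```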
