10 research outputs found

    Motion Planning for Multi-Contact Visual Servoing on Humanoid Robots

    Get PDF
    This paper describes the implementation of a canonical motion generation pipeline guided by vision for a TALOS humanoid robot. The proposed system uses a multicontact planner, a Differential Dynamic Programming (DDP) algorithm, and a stabilizer. The multicontact planner provides a set of contacts and dynamically consistent trajectories for the Center of Mass (CoM) and the Center of Pressure (CoP). It provides a structure to initialize a DDP algorithm which, in turn, provides a dynamically consistent trajectory for all the joints, as it integrates the full dynamics of the robot together with rigid contact models and the visual task. When tested in Gazebo, the resulting trajectory had to be stabilized with a state-of-the-art algorithm to be successful. In addition to testing the motion in simulation, from high-level specifications to the stabilized motion, we express visual features at the level of the whole-body generator, which is a DDP-formulated solver. It handles non-linearities such as the ones introduced by the projection of visual features, which are expressed and minimized in the image plane of the camera.
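
    As an illustration only (not the paper's exact formulation), the kind of image-plane cost such a DDP solver minimizes can be built from the pinhole projection of a 3D feature point; the symbols below are generic assumptions, not taken from the paper.

        % Pinhole projection of a feature point p = (X, Y, Z), expressed in the
        % camera frame through the configuration-dependent camera pose, and the
        % image-plane residual used as a running cost term in the DDP problem.
        \begin{align}
          {}^{c}p(q) &= \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
            = {}^{c}T_{w}(q)\; {}^{w}p, \qquad
          s(q) = \begin{pmatrix} X/Z \\ Y/Z \end{pmatrix} \\
          \ell_{\mathrm{vis}}(x, u) &= \tfrac{1}{2}\, \lVert s(q) - s^{*} \rVert_{W}^{2}
        \end{align}

    Here s* is the desired image-plane position of the feature and W a weighting matrix; the perspective division X/Z, Y/Z is what introduces the non-linearity the DDP solver has to handle.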

    An humanoid robot for inspections and cleaning tasks in nuclear glove box

    Get PDF
    This article presents an evaluation of the opportunity to use humanoid robots in a nuclear environment. The project used the DaRwIn-OP platform to assess and carry out the modifications needed for the robot to act as an intervention operator in a nuclear facility. The study followed two main lines: equipping the humanoid with a radiological measurement system and with an arm command system driven by a depth camera. The tests performed showed the robot's ability to take radiological measurements with the built-in detector and to collect swipe samples to assess the contamination of an object.

    Robot humanoïde d'inspection et d'assainissement en boîte à gants nucléaire

    Get PDF
    This work presents an evaluation of the opportunity to use humanoid robots in a nuclear environment. The project used the DaRwIn-OP platform and made the modifications needed to turn it into an intervention operator in a nuclear environment. The two lines of work consisted of equipping the humanoid with a radiological measurement sensor and with an arm command system driven by a depth camera. The tests performed show the ability to take radiological measurements with the built-in detector and to collect swipe samples to assess the contamination of an object.

    LOCOMOTION GENERALISEE REACTIVE BASEE VISION

    No full text
    Humanoid robots need exteroceptive sensors such as cameras to perceive their environment and fulfill tasks in it. This thesis deals with the integration of visual information for robot control. More specifically, in order to realize a behavior, visual data are needed to drive the robot's whole-body trajectory generator, either on flat ground or in multicontact. We first recall how a humanoid robot is controlled for a locomotion task, starting from the reference positions sent to the planner that computes the sequence of contacts used to generate the centroidal trajectory. This trajectory is injected into a whole-body trajectory generator that provides joint trajectories to be sent to the robot through a stabilizer. Depending on the type of data given by the vision algorithm (considered as an input during this thesis), visual loops can be closed at different levels of this pipeline. The objective was to use off-the-shelf vision outputs to provide experimental results based on the integration of the aforementioned blocks. We first treated motion capture data as high-level information, feeding them to a Pattern Generator (PG) in charge of computing steps for the robot. One goal was to realize integration tests for the European KoroiBot project by connecting motions created to pass obstacles such as stairs or a beam. Results on the robot were not satisfying due to poor motion repeatability, caused by the assumptions linking the model to the real robot and by external phenomena such as mechanical wear and stabilizer effects. To better quantify the repeatability and reliability of the walking algorithms on the HRP-2 robot, we carried out experiments in collaboration with the French National Laboratory of Metrology and Testing (LNE). Our collaborators provided test platforms such as a climatic room, an adjustable-angle slope and a horizontally oscillating floor to measure Key Performance Indicators (KPIs). Finally, to reach multicontact motions based on vision output, 2D features projected onto the camera image plane have been expressed in a promising optimal control solver called DDP (Differential Dynamic Programming). It takes into account the non-linearities of the feature projection directly in the whole-body trajectory generator. Simulations of multicontact locomotion using simulated visual features were carried out with the TALOS robot. The main remaining issue lies in the inequality constraints that are not yet implemented in the DDP solver core. In this last part, all the elements of the pipeline previously exposed are used together: from the pose specification to the motion played in simulation, which uses the stabilization module before being sent to the actuator commands.
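
    As a rough, purely illustrative sketch of the pipeline recalled above (reference poses, contact planner, centroidal trajectory, whole-body DDP, stabilizer), the skeleton below shows only the data flow between stages; every class and function name is a hypothetical placeholder and does not come from the thesis software.

        # Illustrative data flow of the locomotion pipeline described in the thesis.
        # All names are hypothetical placeholders, not actual software components.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ContactPhase:
            frame: str          # e.g. "left_foot" or "right_hand"
            placement: list     # 6D pose of the contact
            duration: float     # seconds

        @dataclass
        class CentroidalTrajectory:
            com: List[list]     # Center-of-Mass samples
            cop: List[list]     # Center-of-Pressure samples

        def plan_contacts(reference_poses) -> List[ContactPhase]:
            """Multicontact planner: reference poses in, contact sequence out."""
            raise NotImplementedError

        def generate_centroidal(contacts: List[ContactPhase]) -> CentroidalTrajectory:
            """Dynamically consistent CoM/CoP trajectories for the contact sequence."""
            raise NotImplementedError

        def whole_body_ddp(centroidal, contacts, visual_features):
            """DDP-based whole-body generator: joint trajectories minimizing tracking
            and image-plane visual costs under rigid contact models."""
            raise NotImplementedError

        def stabilize_and_send(joint_trajectories, robot):
            """Stabilizer closes the loop on the estimated state before actuation."""
            raise NotImplementedError

    Closing a visual loop at a given level of the pipeline then amounts to feeding vision outputs into one of these stages, as discussed in the thesis.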

    Motion Planning with Multi-Contact and Visual Servoing on Humanoid Robots

    Get PDF
    This paper describes the implementation of a canonical motion generation pipeline guided by vision for a TALOS humanoid robot. The proposed system uses a multicontact planner, a Differential Dynamic Programming (DDP) algorithm, and a stabilizer. The multicontact planner provides a set of contacts and dynamically consistent trajectories for the Center of Mass (CoM) and the Center of Pressure (CoP). It provides a structure to initialize a DDP algorithm which, in turn, provides a dynamically consistent trajectory for all the joints, as it integrates the full dynamics of the robot together with rigid contact models and the visual task. When tested in Gazebo, the resulting trajectory had to be stabilized with a state-of-the-art algorithm to be successful. In addition to testing the motion in simulation, from high-level specifications to the stabilized motion, we express visual features at the level of the whole-body generator, which is a DDP-formulated solver. It handles non-linearities such as the ones introduced by the projection of visual features, which are expressed and minimized in the image plane of the camera.

    Implementation, Identification and Control of an Efficient Electric Actuator for Humanoid Robots

    No full text
    Autonomous robots such as legged robots and mobile manipulators pose new challenges in the design and the control of their actuators. In particular, it is desirable that the actuators be back-drivable, efficient (low friction) and compact. In this paper, we report the complete implementation of an advanced actuator based on a screw, a nut and a cable. This actuator has been chosen for the humanoid robot Romeo, and a similar model of actuator has been used to control the humanoid robot Valkyrie. We present the design of this actuator and its Lagrangian model. Since the actuator is flexible, we propose a two-layer optimal control solver based on Differential Dynamic Programming. The actuator design, model identification and control are validated on a full actuator mounted on a workbench. The results show that this type of actuation is very suitable for legged robots and is a good candidate to replace strain wave gears.
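
    For context only, a flexible actuator of this kind is commonly described by a Lagrangian with two coupled inertias; the equations below are a generic two-inertia model with assumed symbols (gravity and damping omitted), not the identified model reported in the paper.

        % Generic two-inertia model of a flexible actuator: motor coordinate theta,
        % load coordinate q, transmission stiffness k, motor torque tau_m,
        % motor-side friction tau_f and external load torque tau_ext.
        \begin{align}
          L &= \tfrac{1}{2} I_m \dot{\theta}^2 + \tfrac{1}{2} I_\ell \dot{q}^2
               - \tfrac{1}{2} k\,(\theta - q)^2 \\
          I_m \ddot{\theta} + k\,(\theta - q) &= \tau_m - \tau_f(\dot{\theta}) \\
          I_\ell \ddot{q} - k\,(\theta - q) &= -\,\tau_{\mathrm{ext}}
        \end{align}

    Identification on a bench then amounts to estimating the inertias, the stiffness and the friction parameters from measured torques and positions; the flexibility (finite k) is what motivates an optimal control formulation of the actuator control.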

    Benchmarking the HRP-2 humanoid robot during locomotion

    Get PDF
    In this paper we report results from a measurement campaign in a laboratory that allowed us to place an HRP-2 humanoid robot in a controlled environment. We investigated the effect of temperature variations on the robot's walking capabilities. In order to benchmark various motion modalities and algorithms, we computed a set of performance indicators for bipedal locomotion. The scope of the motion generation algorithms evaluated here is rather large, spanning from analytical solutions to numerical optimization approaches able to realize real-time walking or multi-contact motions.
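
    The performance indicators themselves are not listed in this abstract; as a generic illustration of what locomotion KPIs can look like, the snippet below computes two common ones (mean forward velocity and step-length repeatability) from logged trajectories. The signal names and sampling choices are assumptions, not the paper's data format.

        # Generic bipedal-locomotion KPIs from logged trajectories (illustrative only).
        import numpy as np

        def mean_forward_velocity(com_xy: np.ndarray, dt: float) -> float:
            """Average speed from Center-of-Mass positions sampled every dt seconds."""
            distance = np.linalg.norm(com_xy[-1] - com_xy[0])
            return float(distance / (dt * (len(com_xy) - 1)))

        def step_length_repeatability(footstep_x: np.ndarray) -> float:
            """Standard deviation of consecutive step lengths (lower = more repeatable)."""
            return float(np.std(np.diff(footstep_x)))

        if __name__ == "__main__":
            # Synthetic example: a 10 s straight walk sampled at 100 Hz.
            dt = 0.01
            com_xy = np.column_stack([np.linspace(0.0, 3.0, 1000), np.zeros(1000)])
            footstep_x = np.arange(0.0, 3.0, 0.25) + np.random.normal(0.0, 0.005, 12)
            print("mean velocity [m/s]   :", mean_forward_velocity(com_xy, dt))
            print("step repeatability [m]:", step_length_repeatability(footstep_x))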

    TALOS: A new humanoid research platform targeted for industrial applications

    No full text
    The upcoming generation of humanoid robots will have to be equipped with state-of-the-art technical features along with high industrial quality, but they should also offer the prospect of effective physical human interaction. In this paper we introduce a new humanoid robot capable of interacting with a human environment and targeting a whole range of industrial applications. This robot is able to handle weights of 6 kg with an outstretched arm, and has powerful motors to carry out movements unavailable in previous generations of humanoid robots. Its kinematics has been specially designed for screwing and drilling motions. In order to make interaction with human operators possible, this robot is equipped with torque sensors to measure joint effort and high-resolution encoders to measure both motor and joint positions. The humanoid robotics field has reached a stage where robustness and repeatability are the next watershed. We believe that this robot has the potential to become a powerful tool for the research community to successfully navigate this turning point, as the humanoid robot HRP-2 was in its own time.

    Multi-contact Locomotion of Legged Robots in Complex Environments – The Loco3D project

    No full text
    Planning, adapting and executing multi-contact locomotion movements on legged robots in complex environments remains an open problem. In this proposal, we introduce a complete pipeline to address this issue in the context of humanoid robots inside industrial environments. This pipeline relies on a multi-stage approach in order to simplify the process flow and to best exploit state-of-the-art techniques in contact planning, whole-body control and perception. The main challenges lie in the choice of the different modules composing the pipeline as well as in their mutual interactions: e.g., at which rate must each module run to allow safe and robust locomotion, and which information must transit between the modules? We named this project Loco3D, standing for Locomotion in 3D, in contrast to classic locomotion on quasi-flat terrains, where the motion of the center of mass of the robot is mostly limited to a 2D plane.
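
    To make the module-interaction questions concrete, here is a purely hypothetical configuration sketch of such a multi-stage pipeline; the module names, rates and exchanged data are invented for illustration and do not come from the Loco3D project.

        # Hypothetical multi-stage locomotion pipeline: which module runs at which
        # rate and what it hands to the next stage (illustrative values only).
        from dataclasses import dataclass

        @dataclass
        class ModuleSpec:
            name: str
            rate_hz: float   # assumed update rate, not a Loco3D figure
            output: str      # data passed downstream

        PIPELINE = [
            ModuleSpec("perception",          30.0,  "terrain model, robot localization"),
            ModuleSpec("contact_planner",      0.5,  "sequence of contact placements"),
            ModuleSpec("centroidal_solver",   10.0,  "CoM/CoP reference trajectories"),
            ModuleSpec("whole_body_control", 1000.0, "joint position/torque commands"),
        ]

        if __name__ == "__main__":
            for stage in PIPELINE:
                print(f"{stage.name:>18} @ {stage.rate_hz:7.1f} Hz -> {stage.output}")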