10 research outputs found

    Review of Anthropomorphic Head Stabilisation and Verticality Estimation in Robots

    In many walking, running, flying, and swimming animals, including mammals, reptiles, and birds, the vestibular system plays a central role in verticality estimation and is often associated with a head-stabilisation (in rotation) behaviour. Head stabilisation, in turn, subserves gaze stabilisation, postural control, visual-vestibular information fusion, and spatial awareness via the active establishment of a quasi-inertial frame of reference. Head stabilisation helps animals cope with the computational consequences of angular movements that complicate the reliable estimation of the vertical direction. We suggest that this strategy could also benefit free-moving robotic systems, such as locomoting humanoid robots, which are typically equipped with inertial measurement units. Free-moving robotic systems could gain the full benefit of inertial measurements if the measurement units are placed on independently orientable platforms, such as human-like heads. We illustrate these benefits by analysing recent humanoid robot designs and control approaches.
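    To make the verticality-estimation problem discussed in this review concrete, below is a minimal sketch (not taken from the paper) of a complementary filter that tracks the upward vertical direction in the body frame from gyroscope and accelerometer readings; the function name and the gain value are illustrative assumptions.

```python
# Minimal complementary-filter sketch for verticality estimation (illustrative,
# not the method of the reviewed paper). The gyroscope propagates the estimate
# of the "up" direction expressed in the body frame; the accelerometer, which
# points roughly upward when the body is not accelerating, corrects gyro drift.
import numpy as np

def update_up_direction(up_est, gyro, accel, dt, alpha=0.02):
    """up_est: current unit estimate of the world vertical, in body coordinates.
    gyro: angular velocity [rad/s]; accel: specific force [m/s^2], both body frame.
    alpha: small accelerometer correction gain (assumed value)."""
    # A world-fixed vector v expressed in a rotating body frame evolves as
    # dv/dt = -omega x v; integrate that over one time step.
    up_pred = up_est - dt * np.cross(gyro, up_est)
    # Blend with the (noisy but drift-free) accelerometer direction.
    up_acc = accel / np.linalg.norm(accel)
    up_new = (1.0 - alpha) * up_pred + alpha * up_acc
    return up_new / np.linalg.norm(up_new)
```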

    Generation of whole-body motion for humanoid robots with the complete dynamics

    This thesis aims at providing a solution to the problem of motion generation for humanoid robots. The proposed framework generates whole-body motion using the complete robot dynamics in the task space while satisfying contact constraints, an approach known as operational-space inverse-dynamics control. Movements are specified through objectives in the task space, and the high redundancy of the system is handled with a prioritized stack of tasks, where lower-priority tasks are only achieved if they do not interfere with higher-priority ones. To this end, a hierarchical quadratic program is used, with the advantage of being able to specify tasks as equalities or inequalities at any level of the hierarchy. Motions where the robot sits down in an armchair and climbs a ladder show the capability to handle multiple non-coplanar contacts. The generic motion generation framework is then applied to case studies using HRP-2 and Romeo. Complex, human-like movements are achieved using human motion imitation, where the acquired motion passes through a kinematic and then a dynamic retargeting process. To deal with the instantaneous nature of inverse dynamics, a walking pattern generator is used as an input to the stack of tasks, which makes a local correction of the foot positions based on the contact points, allowing the robot to walk on non-planar surfaces. Visual feedback is also introduced to aid the walking process. Alternatively, for fast balance recovery, the capture point is introduced in the framework as a task and controlled within a desired region of space. Finally, motion generation is presented for CHIMP, a robot that requires particular treatment.
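    To illustrate the stack-of-tasks idea, here is a minimal sketch of strict task prioritization for equality tasks using null-space projection at the velocity level; the thesis itself uses a hierarchical quadratic program over the full dynamics that also handles inequality tasks, so this is only a simplified illustration with assumed names.

```python
# Minimal sketch of strict task prioritization via null-space projection
# (velocity-level, equality tasks only). A hierarchical QP generalizes this to
# inequality tasks and full dynamics; this version is a simplification.
import numpy as np

def solve_stack(tasks, n_dof):
    """tasks: list of (J, e) pairs, highest priority first, each asking J @ qdot = e.
    Returns qdot that satisfies each task as well as possible without disturbing
    the tasks above it."""
    qdot = np.zeros(n_dof)
    N = np.eye(n_dof)            # null-space projector of the tasks handled so far
    for J, e in tasks:
        JN = J @ N
        JN_pinv = np.linalg.pinv(JN)
        # Correct qdot inside the remaining null space only.
        qdot = qdot + N @ JN_pinv @ (e - J @ qdot)
        # Shrink the null space by this task's constraint.
        N = N @ (np.eye(n_dof) - JN_pinv @ JN)
    return qdot
```

    For example, with tasks = [(J_feet, np.zeros(6)), (J_hand, e_hand)], the feet-contact task is enforced first and the hand target is tracked only as far as the remaining redundancy allows.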

    Human Motion Transfer to a Humanoid Robot

    The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, the human motion recorded by a motion capture system is analyzed to extract salient features that are to be transferred to the humanoid robot. We introduce the humanoid normalized model as the set of motion properties. In the second part of this work, the robot motion that includes the human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined; each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework to transfer human motion online, as close as possible to the human performance, for the upper body. Finally, we study the problem of transferring feet motion. In this study, the motion of the feet is analyzed to extract Euclidean trajectories adapted to the robot. Moreover, the trajectory of the center of mass which ensures that the robot does not fall is calculated from the feet positions and the inverted pendulum model of the robot. Using this result, it is possible to achieve complete imitation including both upper-body and feet motion.
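    The center-of-mass trajectory mentioned above is typically derived from the linear inverted pendulum model; the sketch below (an assumption about the exact formulation, not taken from the thesis) integrates the LIPM dynamics with the pendulum pivot held at the current support-foot position.

```python
# Minimal linear-inverted-pendulum sketch (illustrative). With the pivot p at
# the support foot and constant CoM height z_c, the horizontal CoM obeys
# x_ddot = (g / z_c) * (x - p).
import numpy as np

G = 9.81  # gravity [m/s^2]

def integrate_lipm(x, xdot, p, z_c, dt, steps):
    """Propagate the horizontal CoM state (x, xdot) for `steps` samples of
    size dt, keeping the pendulum pivot at the support-foot position p."""
    x, xdot, p = np.asarray(x, float), np.asarray(xdot, float), np.asarray(p, float)
    omega2 = G / z_c
    traj = []
    for _ in range(steps):
        xddot = omega2 * (x - p)   # LIPM acceleration relative to the pivot
        xdot = xdot + dt * xddot   # explicit Euler integration
        x = x + dt * xdot
        traj.append(x.copy())
    return np.array(traj)
```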

    Humanizing robot dance movements

    Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    Human Motion Transfer on Humanoid Robot

    The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, the human motion recorded by a motion capture system is analyzed to extract salient features that are to be transferred to the humanoid robot. We introduce the humanoid normalized model as the set of motion properties. In the second part of this work, the robot motion that includes the human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined; each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework to transfer human motion online, as close as possible to the human performance, for the upper body. Finally, we study the problem of transferring feet motion. In this study, the motion of the feet is analyzed to extract Euclidean trajectories adapted to the robot. Moreover, the trajectory of the center of mass which ensures that the robot does not fall is calculated from the feet positions and the inverted pendulum model of the robot. Using this result, it is possible to achieve complete imitation including both upper-body and feet motion.
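    One simple way to express upper-body targets in a morphology-independent form, in the spirit of the normalized motion model described above (the exact definition used in the thesis may differ), is to scale end-effector positions by limb length; all names below are illustrative.

```python
# Illustrative retargeting sketch: express a human hand position relative to the
# shoulder, normalize it by the human arm length, then rescale it to the robot's
# arm length. This is only one plausible reading of a "normalized" motion model.
import numpy as np

def retarget_hand(hand_h, shoulder_h, arm_len_h, shoulder_r, arm_len_r):
    """hand_h, shoulder_h: human hand/shoulder positions from motion capture.
    shoulder_r: robot shoulder position; arm_len_*: total arm lengths [m]."""
    offset = np.asarray(hand_h, float) - np.asarray(shoulder_h, float)
    normalized = offset / arm_len_h          # dimensionless, morphology-free
    return np.asarray(shoulder_r, float) + normalized * arm_len_r
```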

    Design, modelling and control of a biped robot platform based on Poppy project

    Taking as a reference the open-source 3D-printed robot called "Poppy" (https://www.poppy-project.org/), this project aims to develop a new biped robot platform using standard-size servomotors. Accomplishing this requires designing a new structure able to host the chosen motors and adding a new degree of freedom to each leg, in order to obtain a 12-DoF robot. The next objective is modelling and simulation; for that purpose the Gazebo simulator will be used, providing the option of control through ROS. In addition, various sensors will be added to the model in order to obtain the feedback necessary for the control algorithm, which is the last objective of this project.
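    As a sketch of the kind of ROS control loop this setup implies, the snippet below publishes a position setpoint for one joint through a ros_control-style position controller; the topic and controller names are assumptions for illustration, not taken from the project.

```python
# Minimal rospy sketch (assumed names): send a position setpoint to one joint of
# the simulated biped. The topic "/biped/l_knee_position_controller/command" is
# a hypothetical example of a ros_control position-controller command topic.
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node("knee_command_demo")
    pub = rospy.Publisher("/biped/l_knee_position_controller/command",
                          Float64, queue_size=1)
    rate = rospy.Rate(50)      # 50 Hz command loop
    target = Float64(0.3)      # desired knee angle [rad]
    while not rospy.is_shutdown():
        pub.publish(target)
        rate.sleep()

if __name__ == "__main__":
    main()
```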

    Whole-Body Teleoperation of Humanoid Robots

    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial to send and control robots in environments that are dangerous or inaccessible for humans (e.g., disaster response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot together with the feedback from robot to human can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way.

    Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of the parameters.

    In this thesis, we proposed a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to get visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, resorting to a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in a few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting in the teleoperation system, which allows the user to switch between the two different modes.

    A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears to be synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
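    The delay-compensation idea can be sketched as follows: instead of executing the command received now (which is already several steps old), the robot executes a prediction of the command that far into the future, made from the recent command history. The predictor below is a deliberately simple constant-velocity extrapolation standing in for the learned model described in the thesis; all names are illustrative.

```python
# Illustrative delay-compensation sketch (not the thesis' learned predictor):
# keep a short history of received commands and extrapolate delay_steps ahead
# with a constant-velocity model, so execution appears synchronized to the operator.
from collections import deque
import numpy as np

class CommandPredictor:
    def __init__(self, delay_steps, history_len=10):
        self.delay_steps = delay_steps
        self.history = deque(maxlen=history_len)

    def push(self, command):
        """Store the latest received command (e.g., a joint-reference vector)."""
        self.history.append(np.asarray(command, float))

    def predict(self):
        """Extrapolate the command delay_steps into the future."""
        if not self.history:
            raise ValueError("no commands received yet")
        if len(self.history) < 2:
            return self.history[-1]          # not enough data: hold last command
        velocity = self.history[-1] - self.history[-2]
        return self.history[-1] + self.delay_steps * velocity
```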

    Metastable legged-robot locomotion

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 195-215). A variety of impressive approaches to legged locomotion exist; however, the science of legged robotics is still far from demonstrating a solution which performs with a level of flexibility, reliability and careful foot placement that would enable practical locomotion on the variety of rough and intermittent terrain humans negotiate with ease on a regular basis. In this thesis, we strive toward this particular goal by developing a methodology for designing control algorithms for moving a legged robot across such terrain in a qualitatively satisfying manner, without falling down very often. We feel the definition of a meaningful metric for legged locomotion is a useful goal in and of itself. Specifically, the mean first-passage time (MFPT), also called the mean time to failure (MTTF), is an intuitively practical cost function to optimize for a legged robot, and we present the reader with a systematic, mathematical process for obtaining estimates of this MFPT metric. Of particular significance, our models of walking on stochastically rough terrain generally result in dynamics with a fast mixing time, where initial conditions are largely "forgotten" within 1 to 3 steps. Additionally, we can often find a near-optimal solution for motion planning using only a short time-horizon look-ahead. Although we openly recognize that there are important classes of optimization problems for which long-term planning is required to avoid "running into a dead end" (or off of a cliff!), we demonstrate that many classes of rough terrain can in fact be successfully negotiated with a surprisingly high level of long-term reliability by selecting the short-sighted motion with the greatest probability of success. The methods used throughout have direct relevance to machine learning, providing a physics-based approach to reduce state space dimensionality and mathematical tools to obtain a scalar metric quantifying performance of the resulting reduced-order system. by Katie Byl. Ph.D.
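    For a walking model abstracted as a Markov chain over step-to-step states with one absorbing "fallen" state, the mean first-passage time to falling has a standard closed form, t = (I - Q)^{-1} 1, where Q is the transition matrix restricted to the non-fallen states. The sketch below applies this identity; the function name and the toy transition matrix are illustrative, not taken from the thesis.

```python
# Mean first-passage time (MFPT) to the absorbing "fallen" state of a Markov
# chain model of legged locomotion. Q[i, j] is the probability of going from
# non-fallen state i to non-fallen state j in one step; row sums below 1
# account for the probability of falling. Identity: t = (I - Q)^{-1} * 1.
import numpy as np

def mean_first_passage_times(Q):
    Q = np.asarray(Q, float)
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

# Toy two-state example (illustrative numbers): from either gait state the robot
# falls with probability 0.01 per step, otherwise it mixes between the two states.
Q = [[0.69, 0.30],
     [0.30, 0.69]]
print(mean_first_passage_times(Q))   # expected number of steps before falling
```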

    Advances in Robotics, Automation and Control

    Get PDF
    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, it presents topics related to control and robot design, and it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. The book also covers navigation and vision algorithms, automatic handwriting comprehension, and speech recognition systems that will be included in the next generation of production systems.