108 research outputs found

    Intelligent approaches in locomotion - a review


    Generating whole body movements for dynamics anthropomorphic systems under constraints

    This thesis studies the question of whole-body motion generation for anthropomorphic systems. Within this work, the problem of modeling and control is considered by addressing the difficult issue of generating human-like motion. First, a dynamic model of the humanoid robot HRP-2 is elaborated based on the recursive Newton-Euler algorithm for spatial vectors. A new dynamic control scheme is then developed, adopting a cascade of quadratic programs (QP) that optimize cost functions and compute the control torques while satisfying equality and inequality constraints. The cascade of quadratic programs is defined by a stack of tasks associated with a priority order. Next, we propose a unified formulation of the planar contact constraints and demonstrate that the proposed method allows multiple non-coplanar contacts to be taken into account and generalizes the common ZMP constraint used when only the feet are in contact with the ground. Then, we link the motion generation algorithms from robotics to human motion capture tools by developing an original method of motion generation aimed at imitating human motion. This method is based on reshaping the captured data and editing the motion using the hierarchical solver introduced previously and the definition of dynamic tasks and constraints. This original method allows a captured human motion to be adjusted so that it can be faithfully reproduced on a humanoid while respecting the robot's own dynamics. Finally, in order to simulate movements resembling those of humans, we develop an anthropomorphic model with a higher number of degrees of freedom than HRP-2. The generic solver is used to simulate motion on this new model. A sequence of tasks is defined to describe a scenario played by a human. Through a simple qualitative analysis of the motion, we demonstrate that taking the dynamics into account provides a natural way to generate human-like movements.
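
    The abstract describes a cascade of quadratic programs driven by a stack of prioritized tasks with equality and inequality constraints. As a rough illustration of the underlying prioritization idea only, the Python sketch below resolves a stack of equality tasks in strict priority order using damped pseudo-inverses and null-space projection; the task Jacobians and targets are made-up placeholders, and the inequality constraints and torque computation of the actual method are omitted.

        import numpy as np

        def prioritized_solution(tasks, n_dof, damping=1e-6):
            """Resolve a stack of tasks (J_i, e_i) in strict priority order.

            Each task asks for J_i @ q_dot = e_i; lower-priority tasks are only
            satisfied in the null space of the higher-priority ones.
            """
            q_dot = np.zeros(n_dof)
            N = np.eye(n_dof)  # null-space projector of the tasks solved so far
            for J, e in tasks:
                JN = J @ N
                # Damped pseudo-inverse for robustness near singular configurations.
                JN_pinv = JN.T @ np.linalg.inv(JN @ JN.T + damping * np.eye(J.shape[0]))
                q_dot = q_dot + N @ JN_pinv @ (e - J @ q_dot)
                N = N @ (np.eye(n_dof) - JN_pinv @ JN)
            return q_dot

        # Hypothetical 3-DoF example: track a scalar task (priority 1) while
        # biasing the posture toward zero (priority 2).
        J1 = np.array([[1.0, 0.5, 0.2]])   # assumed task Jacobian
        e1 = np.array([0.3])               # desired task velocity
        J2, e2 = np.eye(3), np.zeros(3)    # posture task
        print(prioritized_solution([(J1, e1), (J2, e2)], n_dof=3))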

    Humanoid Robots

    For many years, human beings have tried in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, thanks to growing technological advances grounded in theoretical and experimental research, it has become possible, to some extent, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, a large part of which are autonomous systems, but also to deepen our knowledge of the systems that form the human body, with possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, which analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision and locomotion.

    Humanoid robot control of complex postural tasks based on learning from demonstration

    This thesis addresses the problem of planning and controlling complex tasks in a humanoid robot from a postural point of view. It is motivated by the growth of robotics in our current society, where simple robots are already being integrated. Its objective is to advance the development of complex behaviors in humanoid robots, so that in the future they can share our environment. The work presents contributions in the areas of humanoid robot postural control, behavior planning, non-linear control, learning from demonstration and reinforcement learning. First, as an introduction to the thesis, a set of methods and mathematical formulations is presented, describing concepts such as humanoid robot modelling, generation of locomotion trajectories and generation of whole-body trajectories. Next, the process of human learning is studied in order to develop a novel method of postural task transfer between a human and a robot. It uses the demonstrated action goal as a metric of comparison, encoded through the reward associated with the task execution. As an evolution of the previous study, this process is generalized to a set of sequential behaviors, which are executed by the robot based on human demonstrations. Afterwards, the execution of postural movements using a robust control approach is proposed; this method allows the desired trajectory to be tracked even with mismatches in the robot model. Finally, an architecture that encompasses all of the postural planning and control methods is presented. It is complemented by an environment recognition module that identifies the free space in order to perform path planning and generate safe movements for the robot. The experimental validation of this thesis was carried out on the humanoid robot HOAP-3, on which tasks such as walking, standing up from a chair, dancing and opening a door were implemented using the techniques proposed in this work.
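
    The abstract states that the demonstrated action goal is used as the metric of comparison and is encoded through the reward associated with executing the task. The exact reward is not given, so the Python fragment below is only a hypothetical illustration of such a goal-based reward; the state variables, goal values and Gaussian shaping are all assumptions.

        import numpy as np

        def goal_reward(final_state, demo_goal, sigma=0.1):
            """Score how closely an execution reaches the demonstrated goal.
            1.0 means the goal state was reproduced exactly; the reward decays
            with the squared distance to the goal."""
            return float(np.exp(-np.sum((final_state - demo_goal) ** 2) / (2 * sigma ** 2)))

        # Hypothetical goal extracted from a demonstration (e.g. final CoM height
        # and torso pitch after standing up from a chair) and two candidate runs.
        demo_goal = np.array([0.55, 0.0])
        print(goal_reward(np.array([0.54, 0.02]), demo_goal))  # close to the goal -> high reward
        print(goal_reward(np.array([0.40, 0.20]), demo_goal))  # far from the goal  -> low reward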

    Real-time full body motion imitation on the COMAN humanoid robot

    On-line full-body imitation with a humanoid robot standing on its own two feet requires simultaneously maintaining balance and imitating the motion of the demonstrator. In this paper we present a method that allows real-time motion imitation while maintaining stability, based on prioritized task control. We also describe a modified prioritized kinematic control method that constrains the imitated motion to preserve stability only when the robot would otherwise tip over, and does not alter the motion otherwise. To cope with the passive compliance of the robot, we show how to estimate its center of mass using support vector machines. The paper gives a detailed description of all steps of the algorithm, essentially providing a tutorial on the implementation of kinematic stability control. We present results on a child-sized humanoid robot, the Compliant Humanoid Platform (COMAN). Our implementation shows reactive and stable on-line motion imitation on the humanoid robot.
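
    The paper models the center-of-mass estimate of the compliant robot with support vector machines; the actual inputs and training data are specific to COMAN and are not reproduced here. The snippet below is therefore only a generic sketch of support vector regression from joint configurations to a measured CoM offset, trained on synthetic placeholder data.

        import numpy as np
        from sklearn.svm import SVR

        # Synthetic stand-in data: joint configurations as inputs and the measured
        # CoM x-offset they produce as targets. On a real robot the targets would
        # come from force/torque or force-plate measurements.
        rng = np.random.default_rng(0)
        q = rng.uniform(-0.5, 0.5, size=(200, 12))   # 12 joint angles per sample (assumed)
        com_x = 0.3 * q[:, 0] - 0.2 * q[:, 3] + 0.05 * rng.standard_normal(200)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.005)
        model.fit(q, com_x)

        q_new = rng.uniform(-0.5, 0.5, size=(1, 12))
        print("predicted CoM x-offset:", model.predict(q_new)[0])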

    Modeling of human movement for the generation of humanoid robot motion

    Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms developed to derive their motion have also become progressively more complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods like local optimization and task-based inverse kinematics towards more realistic, human-like solutions. First, we look at dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to it. By allowing a user to steer the head of a humanoid, we develop a control method that generates deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between “invariants” in neuroscience and “kinematic tasks” in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized so that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans. The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is in showing that, while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates combining these methods with the systematic and formal observation of human behavior.
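
    The thesis normalizes and generalizes the recorded human characteristics so they can be regenerated for figures with different kinematic limits; the actual normalization is not described in the abstract. The sketch below is only a simplistic stand-in showing a per-joint linear remapping of a recorded trajectory into another figure's joint limits; all ranges and values are invented.

        import numpy as np

        def retarget_trajectory(human_traj, human_limits, robot_limits):
            """Linearly remap a recorded joint trajectory from the human range of
            motion to the target figure's joint limits, joint by joint."""
            h_lo, h_hi = human_limits
            r_lo, r_hi = robot_limits
            normalized = (human_traj - h_lo) / (h_hi - h_lo)   # 0..1 within the human range
            return r_lo + normalized * (r_hi - r_lo)

        # Hypothetical example: one knee joint over five samples (radians).
        human_traj = np.array([[0.1], [0.4], [0.9], [1.2], [0.8]])
        human_limits = (np.array([0.0]), np.array([1.5]))   # assumed human range
        robot_limits = (np.array([0.0]), np.array([2.0]))   # assumed robot limits
        print(retarget_trajectory(human_traj, human_limits, robot_limits))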

    Learning-based methods for planning and control of humanoid robots

    Humans and robots are increasingly likely to coexist as time goes by. The anthropomorphic nature of humanoid robots facilitates physical human-robot interaction and makes social human-robot interaction more natural. Moreover, it makes humanoids ideal candidates for many applications related to tasks and environments designed for humans. No matter the application, a ubiquitous requirement for the humanoid is to possess proper locomotion skills. Despite long-standing research, humanoid locomotion is still far from being a trivial task. A common approach to humanoid locomotion consists in decomposing its complexity by means of a model-based hierarchical control architecture. To cope with computational constraints, simplified models of the humanoid are employed in some of the architectural layers. At the same time, the redundancy of the humanoid with respect to the locomotion task, as well as the closeness of such a task to human locomotion, suggests a data-driven approach that learns it directly from experience. This thesis investigates the application of learning-based techniques to planning and control of humanoid locomotion. In particular, both deep reinforcement learning and deep supervised learning are considered to address humanoid locomotion tasks in a crescendo of complexity. First, we employ deep reinforcement learning to study the spontaneous emergence of balancing and push-recovery strategies for the humanoid, which represent essential prerequisites for more complex locomotion tasks. Then, making use of motion capture data collected from human subjects, we employ deep supervised learning to shape the robot's walking trajectories towards improved human-likeness. The proposed approaches are validated on real and simulated humanoid robots, specifically on two versions of the iCub humanoid: iCub v2.7 and iCub v3.
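
    The thesis uses deep supervised learning on motion capture data to shape the walking trajectories; the network architecture and input features are not given in the abstract. The following is only a generic sketch of the idea, regressing joint targets from the gait phase with a small multilayer perceptron trained on synthetic placeholder data rather than real motion capture.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-in for retargeted motion-capture data: gait phase in
        # [0, 1) as input, two joint angles of a cyclic walking pattern as output.
        phase = np.linspace(0.0, 1.0, 400, endpoint=False).reshape(-1, 1)
        joints = np.column_stack([
            0.4 * np.sin(2 * np.pi * phase[:, 0]),          # hip pitch (placeholder)
            0.6 * np.sin(2 * np.pi * phase[:, 0] + 0.5),    # knee pitch (placeholder)
        ])

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
        model.fit(phase, joints)

        print("joint targets at phase 0.25:", model.predict([[0.25]])[0])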

    Adaptive bipedal locomotion from a single demonstration using motion primitives

    This work addresses the problem of learning to imitate human locomotion actions through low-level trajectories encoded with motion primitives and generalizing them to new situations from a single demonstration. In this line of thought, the main objectives of this work are twofold. The first is to analyze, extract and encode human demonstrations taken from motion capture data in order to model biped locomotion tasks. However, transferring motion skills from humans to robots is not limited to simple reproduction; it requires the ability to adapt to new situations, as well as to deal with unexpected disturbances. Therefore, the second objective is to develop and evaluate a control framework for action shaping such that the single demonstration can be modulated to varying situations, taking into account the dynamics of the robot and its environment. The idea behind the approach is to address the problem of generalization from a single demonstration by combining two basic structures. The first structure is a pattern generator system consisting of movement primitives learned and modelled by dynamical systems (DS). This encoding approach possesses desirable properties that make it well suited for trajectory generation, namely the possibility of changing parameters online, such as the amplitude and frequency of the limit cycle, and intrinsic robustness against small perturbations. The second structure, which is embedded in the previous one, consists of coupled phase oscillators that organize actions into functional coordinated units. The changing contact conditions and the associated impacts with the ground lead to models with multiple phases. Instead of forcing the robot's motion into a predefined fixed timing, the proposed pattern generator explores transitions between phases that emerge from the interaction of the robot with the environment, triggered by sensor-driven events. The proposed approach is tested in a dynamics simulation framework, and several experiments are conducted to validate the methods and assess the performance of a humanoid robot.
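
    The abstract describes movement primitives modelled with dynamical systems whose limit-cycle amplitude and frequency can be changed online, coordinated by coupled phase oscillators. The exact dynamical system is not specified, so the sketch below only illustrates the limit-cycle idea with a single Hopf oscillator whose amplitude and frequency are modulated mid-run; the numbers are placeholders and the phase coupling between several oscillators is omitted.

        import numpy as np

        def hopf_step(x, y, mu, omega, dt):
            """One Euler step of a Hopf oscillator: the state converges to a limit
            cycle of amplitude sqrt(mu) and angular frequency omega, and recovers
            from small perturbations of the state."""
            r2 = x * x + y * y
            dx = (mu - r2) * x - omega * y
            dy = (mu - r2) * y + omega * x
            return x + dx * dt, y + dy * dt

        # Integrate the oscillator and change amplitude and frequency online.
        x, y, dt = 0.1, 0.0, 0.001
        traj = []
        for k in range(20000):
            mu = 1.0 if k < 10000 else 2.25                   # amplitude 1.0 -> 1.5
            omega = 2 * np.pi * (1.0 if k < 10000 else 1.5)   # frequency 1 Hz -> 1.5 Hz
            x, y = hopf_step(x, y, mu, omega, dt)
            traj.append(x)
        print("late-cycle amplitude ~", max(abs(v) for v in traj[-2000:]))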

    Negotiating Large Obstacles with a Humanoid Robot via Multi-Contact Motion Planning

    Incremental progress in humanoid robot locomotion over the years has achieved essential capabilities such as navigation over flat or uneven terrain, stepping over small obstacles and climbing stairs. However, locomotion research has mostly been limited to bipedal gait and foot contacts with the environment, using the upper body for balancing without considering additional external contacts. As a result, challenging locomotion tasks such as climbing over obstacles that are large relative to the size of the robot have remained unsolved. In this paper, we address this class of open problems with an approach based on multi-contact motion planning, guided by physical human demonstrations. Our goal is to make the humanoid locomotion problem more tractable by taking advantage of objects in the surrounding environment instead of avoiding them. We propose a multi-contact motion planning algorithm for humanoid robot locomotion which exploits contacts at both the upper- and lower-body limbs. We propose a contact stability measure, which simplifies the contact search from demonstration and the contact-transition motion generation for the multi-contact motion planning algorithm. The algorithm uses whole-body motions generated via Quadratic Programming (QP) based solver methods. The multi-contact motion planning algorithm is applied to the challenging task of climbing over an obstacle that is relatively large compared to the robot. We validate our planning approach with simulations and experiments for climbing over a large wooden obstacle with COMAN, a compliant humanoid robot with 23 degrees of freedom (DOF). We also propose a generalization method, the "Policy-Contraction Learning Method", to extend the algorithm so that the multi-contact motion planner can generate new multi-contact plans that adapt to changes in the environment. The method learns a general policy and the multi-contact behavior from the human demonstrations, generating new multi-contact plans for obstacle negotiation.
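
    The paper introduces a contact stability measure for multi-contact planning, but its definition is not given in this abstract. As a generic baseline only, and not the paper's measure, the snippet below implements the classic coplanar-contact check of whether the ground projection of the CoM lies inside the convex hull of the contact points; all contact coordinates are invented.

        import numpy as np
        from scipy.spatial import Delaunay

        def com_inside_support(contact_points_xy, com_xy):
            """Baseline static-stability check for coplanar contacts: is the
            ground projection of the CoM inside the convex hull of the contacts?"""
            hull = Delaunay(np.asarray(contact_points_xy))
            return hull.find_simplex(np.asarray(com_xy)) >= 0

        # Hypothetical foot and hand contact points on the ground plane (metres).
        contacts = [(0.00, 0.10), (0.00, -0.10), (0.25, 0.10), (0.25, -0.10), (0.60, 0.00)]
        print(com_inside_support(contacts, (0.20, 0.02)))   # True: CoM well inside
        print(com_inside_support(contacts, (0.80, 0.00)))   # False: CoM outside the hull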