    Gait generation via intrinsically stable MPC for a multi-mass humanoid model

    We consider the problem of generating a gait with no a priori assigned footsteps while taking into account the contribution of the swinging leg to the total Zero Moment Point (ZMP). This is achieved by considering a multi-mass model of the humanoid and distinguishing between secondary masses, with known predefined motion, and the remaining primary masses. In the case of a single primary mass with constant height, it is possible to transform the original gait generation problem for the multi-mass system into a single LIP-like problem. We can then take full advantage of an intrinsically stable MPC framework to generate a gait that accounts for the swinging leg motion.
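
    For reference, the multi-mass-to-LIP reduction mentioned above can be sketched as follows; this is a simplified reconstruction under stated assumptions (point masses, ground at z = 0, a single primary mass at constant height), not the paper's exact formulation.

        % ZMP of a system of point masses m_i at positions (x_i, z_i), ground at z = 0
        x_{zmp} = \frac{\sum_i m_i \left[ x_i (\ddot z_i + g) - z_i \ddot x_i \right]}{\sum_i m_i (\ddot z_i + g)}

        % With a single primary mass at constant height \bar z and secondary masses
        % following known, predefined trajectories, their contribution can be folded
        % into a modified ZMP \tilde x_{zmp}, leaving a LIP-like equation for the
        % primary mass:
        \ddot x_p = \omega^2 \left( x_p - \tilde x_{zmp} \right), \qquad \omega = \sqrt{g / \bar z}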

    Generation of dynamic motion for anthropomorphic systems under prioritized equality and inequality constraints

    In this paper, we propose a solution to compute full-dynamics motions for a humanoid robot, accounting for various kinds of constraints such as dynamic balance or joint limits. As a first step, we propose a unification of task-based control schemes, whether in inverse kinematics or inverse dynamics. Based on this unification, we generalize the cascade of quadratic programs previously developed for inverse kinematics only. We then apply the solution to generate, in simulation, whole-body motions for a humanoid robot in unilateral contact with the ground, while ensuring dynamic balance on a non-horizontal surface.
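
    To make the idea of a prioritized cascade of quadratic programs concrete, the following is a minimal two-level sketch in Python using cvxpy. The task matrices, the inequality constraints, and the simple lexicographic structure are generic illustrations, not taken from the paper.

        # Minimal two-level cascade of QPs (hierarchical QP): the high-priority
        # task residual is minimized first, then frozen while the low-priority
        # task is minimized. A1 x = b1 could encode, e.g., dynamic balance,
        # A2 x = b2 a posture task, and C x <= d joint limits (all placeholders).
        import numpy as np
        import cvxpy as cp

        def hierarchical_qp(A1, b1, A2, b2, C, d):
            n = A1.shape[1]

            # Level 1: best achievable high-priority residual under the inequalities.
            x1 = cp.Variable(n)
            cp.Problem(cp.Minimize(cp.sum_squares(A1 @ x1 - b1)), [C @ x1 <= d]).solve()
            r1 = A1 @ x1.value - b1

            # Level 2: improve the low-priority task without degrading level 1.
            x2 = cp.Variable(n)
            cp.Problem(cp.Minimize(cp.sum_squares(A2 @ x2 - b2)),
                       [C @ x2 <= d, A1 @ x2 - b1 == r1]).solve()
            return x2.value

        # Usage with random data, for illustration only.
        rng = np.random.default_rng(0)
        A1, b1 = rng.standard_normal((2, 5)), rng.standard_normal(2)
        A2, b2 = rng.standard_normal((3, 5)), rng.standard_normal(3)
        C, d = rng.standard_normal((4, 5)), np.ones(4)
        print(hierarchical_qp(A1, b1, A2, b2, C, d))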

    ZMP Constraint Restriction for Robust Gait Generation in Humanoids

    We present an extension of our previously proposed IS-MPC method for humanoid gait generation, aimed at obtaining robust performance in the presence of disturbances. The considered disturbance signals vary within a range of known amplitude around a mid-range value that can change at each sampling time, but whose current value is assumed to be available. The method consists of modifying the stability constraint at the core of IS-MPC by incorporating the current mid-range disturbance, and of appropriately restricting the ZMP constraint over the control horizon on the basis of the amplitude of the disturbance range. We derive explicit conditions for recursive feasibility and internal stability of the IS-MPC method with constraint modification. Finally, we illustrate its superior performance with respect to the nominal version through dynamic simulations on the NAO robot.
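
    The constraint restriction can be pictured as a tightening of the nominal ZMP bounds over the control horizon by a margin that grows with the amplitude of the disturbance range. The Python sketch below is an illustrative reconstruction of that idea only; the specific margin formula (a saturating accumulation through the LIP dynamics) is an assumption, not the paper's expression.

        # Tighten the nominal ZMP bounds [z_min, z_max] at each step of the
        # control horizon, with a margin driven by the disturbance amplitude
        # w_amp (illustrative formula only).
        import numpy as np

        def restricted_zmp_bounds(z_min, z_max, w_amp, omega, delta, horizon):
            lam = np.exp(-omega * delta)  # per-step contraction of the LIP dynamics
            margins = np.array([w_amp * (1.0 - lam ** (k + 1)) / omega
                                for k in range(horizon)])
            return z_min + margins, z_max - margins

        # Example: 5-step horizon, 0.1 s sampling, CoM height 0.8 m.
        lo, hi = restricted_zmp_bounds(-0.05, 0.05, 0.01, np.sqrt(9.81 / 0.8), 0.1, 5)
        print(lo, hi)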

    A framework for safe human-humanoid coexistence

    This work focuses on the development of a safety framework for human-humanoid coexistence, with emphasis on humanoid locomotion. After a brief introduction to the fundamental concepts of humanoid locomotion, the two most common approaches for gait generation are presented and extended with a stability condition that guarantees the boundedness of the generated trajectories. The safety framework is then presented, with the introduction of different safety behaviors meant to enhance the overall level of safety during any robot operation. Proactive behaviors enhance or adapt the current robot operation to reduce the risk of danger, while override behaviors stop the current robot activity in order to react to a particularly dangerous situation. A state machine is defined to control the transitions between the behaviors. The behaviors strictly related to locomotion are subsequently detailed, and an implementation is proposed and validated. A possible implementation of the remaining behaviors is proposed through a review of related works found in the literature.
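
    The state machine governing the safety behaviors can be pictured as a small set of modes with triggered transitions. The Python sketch below is a hypothetical illustration: the behavior names follow the description above, but the trigger (distance to the nearest human) and the thresholds are assumptions introduced here.

        # Illustrative behavior state machine: override dominates, proactive
        # adaptation applies in a warning zone, otherwise nominal operation.
        from enum import Enum, auto

        class Behavior(Enum):
            NOMINAL = auto()     # regular robot operation
            PROACTIVE = auto()   # adapt the current operation to reduce risk
            OVERRIDE = auto()    # stop the current activity and react to danger

        def next_behavior(human_distance, warning_dist=2.0, danger_dist=0.7):
            if human_distance < danger_dist:
                return Behavior.OVERRIDE
            if human_distance < warning_dist:
                return Behavior.PROACTIVE
            return Behavior.NOMINAL

        print(next_behavior(1.2))  # -> Behavior.PROACTIVE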

    Overcoming the reality gap: imitation and reinforcement learning algorithms for bipedal robot locomotion problems

    The thesis introduces a comprehensive robot training framework that uses artificial learning techniques to optimize robot performance in complex tasks. Motivated by recent impressive achievements in machine learning, particularly in games and virtual scenarios, the project aims to explore the potential of these techniques for improving robot capabilities beyond traditional human programming, despite the limitations imposed by the reality gap. The case study selected for this investigation is bipedal locomotion, as it allows the key challenges and advantages of using artificial learning methods for robot learning to be elucidated. The thesis identifies four primary challenges in this context: the variability of the results obtained from artificial learning algorithms, the high cost and risk associated with conducting experiments on real robots, the reality gap between simulation and real-world behavior, and the need to adapt human motion patterns to robotic systems.
    The proposed approach consists of three main modules that address these challenges: Non-linear Control Approaches, Imitation Learning, and Reinforcement Learning. The Non-linear Control module establishes a foundation by modeling the robots and employing well-established control techniques. The Imitation Learning module uses imitation to generate initial policies from reference motion-capture data or preliminary policy results, producing feasible, human-like gait patterns. The Reinforcement Learning module complements the process by iteratively improving parametric policies, primarily through simulation but with real-world performance as the ultimate goal. The thesis emphasizes the modularity of the approach, allowing the individual modules to be used separately or in combination to determine the most effective strategy for different robot training scenarios. By combining established control techniques, imitation learning, and reinforcement learning, the framework seeks to unlock the potential for robots to achieve optimized performance in complex tasks, contributing to the advancement of artificial intelligence in robotics not only in virtual systems but also in real ones.
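
    As a rough picture of how the imitation and reinforcement learning modules compose, the Python sketch below first fits a linear policy to reference gait data and then refines it with a simple random-search improvement loop. Both the linear policy and the search procedure are stand-ins chosen here for brevity, not the methods used in the thesis.

        # Stage 1 (imitation stand-in): fit a linear policy a = s @ K to
        # reference state/action pairs by least squares.
        # Stage 2 (RL stand-in): improve K by keeping random perturbations
        # that score better under an evaluation function (e.g. a simulated
        # walking episode returning a scalar reward).
        import numpy as np

        def imitation_init(ref_states, ref_actions):
            K, *_ = np.linalg.lstsq(ref_states, ref_actions, rcond=None)
            return K

        def refine(K, evaluate, iters=200, sigma=0.01, seed=0):
            rng = np.random.default_rng(seed)
            best = evaluate(K)
            for _ in range(iters):
                K_try = K + sigma * rng.standard_normal(K.shape)
                score = evaluate(K_try)
                if score > best:
                    K, best = K_try, score
            return K

        # Toy usage with synthetic reference data and a dummy score function.
        rng = np.random.default_rng(1)
        S, A = rng.standard_normal((50, 4)), rng.standard_normal((50, 2))
        K0 = imitation_init(S, A)
        K = refine(K0, evaluate=lambda K: -np.linalg.norm(S @ K - A))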