
    Analytic and Learned Footstep Control for Robust Bipedal Walking

    Bipedal walking is a complex, balance-critical whole-body motion with inherently unstable, inverted-pendulum-like dynamics. Strong disturbances must be countered quickly by altering the walking motion and placing the next step in the right place at the right time. Unfortunately, the high number of degrees of freedom of the humanoid body makes the fast computation of well-placed steps a particularly challenging task. Sensor noise, imprecise actuation, and latency in the sensorimotor feedback loop impose further challenges when controlling real hardware. This dissertation addresses these challenges and describes a method for generating a robust walking motion for bipedal robots. Fast modification of footstep placement and timing allows agile control of the walking velocity and the absorption of strong disturbances. In a divide-and-conquer manner, the concepts of motion and balance are solved separately from each other and consolidated such that a low-dimensional balance controller governs the timing and the footstep locations of a high-dimensional motion generator. Oscillatory motion signals produced by a Central Pattern Generator are used to synthesize an open-loop stable walk on flat ground which, lacking feedback, cannot respond to disturbances. The Central Pattern Generator exposes a low-dimensional parameter set that influences the timing and the landing coordinates of the swing foot. For balance control, a simple inverted-pendulum-based physical model is used to represent the principal dynamics of walking. The model is robust to disturbances in the sense that it returns to an ideal trajectory from a wide range of initial conditions by employing a combination of Zero Moment Point control, step timing, and foot placement strategies. The simulation of the model and its controller output are computed efficiently in closed form, supporting high-frequency balance control at negligible computational cost. Additionally, the sagittal step size produced by the controller can be trained online during walking with a novel, gradient-descent-based machine learning method. While the analytic controller forms the core of reliable walking, the trained sagittal step size complements it to improve the overall walking performance. The balanced whole-body walking motion arises from using the footstep coordinates and step timing predicted by the low-dimensional model as control input for the Central Pattern Generator. Real robot experiments are presented as evidence for disturbance-resistant, omnidirectional gait control, with arguably the strongest push-recovery capabilities to date.
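    To make the balance model concrete, the following minimal Python sketch propagates a one-dimensional Linear Inverted Pendulum in closed form and derives a capture-point-style foothold suggestion. It is an illustration of the general idea under simplifying assumptions (point mass, constant height, a single hypothetical desired_offset parameter), not the dissertation's actual controller, which also adjusts step timing and the Zero Moment Point.

```python
import math

G = 9.81                         # gravity [m/s^2]
Z_COM = 0.9                      # assumed constant center-of-mass height [m]
OMEGA = math.sqrt(G / Z_COM)     # natural frequency of the linear inverted pendulum


def lip_closed_form(x0, v0, t):
    """Propagate the 1D LIP state (x, v) relative to the pivot for time t.

    Closed-form solution of x_ddot = omega^2 * x, so no numerical
    integration is needed (cheap enough for high-frequency control).
    """
    c, s = math.cosh(OMEGA * t), math.sinh(OMEGA * t)
    x = x0 * c + (v0 / OMEGA) * s
    v = x0 * OMEGA * s + v0 * c
    return x, v


def capture_point_step(x, v, desired_offset=0.0):
    """Capture-point-style foot placement: stepping to x + v/omega brings the
    pendulum to rest over the new pivot; the offset encodes the desired gait."""
    return x + v / OMEGA + desired_offset


# Example: predict the state a quarter second ahead, then choose the next foothold.
x, v = lip_closed_form(x0=0.02, v0=0.3, t=0.25)
print("predicted CoM state:", x, v)
print("suggested foothold (relative to pivot):", capture_point_step(x, v))
```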

    Humanoid Robots

    For many years, humans have tried in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with increasing technological advances based on theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research aims not only at creating humanoid robots, a great part of them autonomous systems, but also at providing deeper knowledge of the systems that form the human body, with possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of research efforts inspired by this ideal, carried out by various researchers worldwide, which analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision, and locomotion.

    Humanoid robot control of complex postural tasks based on learning from demonstration

    This thesis addresses the problem of planning and controlling complex tasks in a humanoid robot from a postural point of view. It is motivated by the growth of robotics in our current society, where simple robots are already being integrated. Its objective is to advance the development of complex behaviors in humanoid robots so that, in the future, they can share our environment. The work presents contributions in the areas of humanoid robot postural control, behavior planning, non-linear control, learning from demonstration, and reinforcement learning. First, as an introduction to the thesis, a set of methods and mathematical formulations is presented, describing concepts such as humanoid robot modelling, generation of locomotion trajectories, and generation of whole-body trajectories. Next, the process of human learning is studied in order to develop a novel method for transferring a postural task from a human to a robot. It uses the demonstrated action goal as the metric of comparison, encoded through the reward associated with the task execution. As an evolution of the previous study, this process is generalized to a set of sequential behaviors, which are executed by the robot based on human demonstrations. Afterwards, a robust control approach for executing postural movements is proposed; it tracks the desired trajectory even in the presence of mismatches in the robot model. Finally, an architecture that encompasses all of the postural planning and control methods is presented. It is complemented by an environment recognition module that identifies the free space in order to perform path planning and generate safe movements for the robot. The experimental validation of this thesis was carried out on the humanoid robot HOAP-3, on which tasks such as walking, standing up from a chair, dancing, and opening a door have been implemented using the techniques proposed in this work.
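    As a rough illustration of the idea of using the demonstrated action goal as a reward-based comparison metric, the sketch below scores a hypothetical robot execution against a demonstrated goal posture and improves the policy parameters by simple hill climbing. The reward function, the toy execute() stand-in, and all parameters are assumptions made for the example; the thesis itself works with whole-body postures on HOAP-3 and a more elaborate learning scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_reward(final_pose, goal_pose):
    """Hypothetical reward encoding the action goal: higher when the
    final posture is closer to the goal demonstrated by the human."""
    return -np.linalg.norm(final_pose - goal_pose)

def execute(policy_params):
    """Stand-in for a robot execution: maps policy parameters to a final
    posture. A real system would run a whole-body controller instead."""
    return policy_params + rng.normal(0.0, 0.01, size=policy_params.shape)

# Goal extracted from the human demonstration (here: a 3-DoF toy posture).
demo_goal = np.array([0.1, -0.4, 0.25])
demo_reward = task_reward(demo_goal, demo_goal)   # reward the demonstration achieves

# Simple hill climbing: accept parameter perturbations whose execution
# reward moves closer to the reward of the demonstrated goal.
params = np.zeros(3)
best = task_reward(execute(params), demo_goal)
for _ in range(200):
    candidate = params + rng.normal(0.0, 0.05, size=3)
    r = task_reward(execute(candidate), demo_goal)
    if r > best:
        params, best = candidate, r

print("reward gap to demonstration:", demo_reward - best)
```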

    Human Behavior Understanding for Robotics

    Human behavior is complex but structured along individual and social lines. Robotic systems interacting with people in uncontrolled environments need capabilities to correctly interpret, predict, and respond to human behaviors. This paper discusses the scientific, technological, and application challenges that arise from the mutual interaction of robotics and computational human behavior understanding. We provide a short survey of the area as a contextual framework and describe the most recent research in the field.

    Towards Robust Bipedal Locomotion: From Simple Models to Full-Body Compliance

    Thanks to better actuator technologies and control algorithms, humanoid robots can nowadays perform a wide range of locomotion activities outside lab environments. These robots face various control challenges, such as high dimensionality, contact switches during locomotion, and a floating base that makes them prone to falling. A rich set of sensory inputs and high-bandwidth actuation are often needed to ensure fast and effective reactions to unforeseen conditions, e.g., terrain variations, external pushes, slippage, or unknown payloads. State-of-the-art technologies today seem to provide such valuable hardware components. Regarding software, however, there is plenty of room for improvement. Locomotion planning and control problems are often treated separately in conventional humanoid control algorithms; the control challenges mentioned above are probably the main reason for this separation. Here, planning refers to the process of finding consistent open-loop trajectories, which may take arbitrarily long offline computation. Control, on the other hand, must run very fast online to ensure stability. In this thesis, we want to link planning and control again and enable online trajectory modification in a meaningful way. First, we propose a new way of describing robot geometries, analogous to molecules, which reduces the complexity of conventional models. We use this technique to derive a planning algorithm that is fast enough to be used online for multi-contact motion planning. Similarly, we derive 3LP, a simplified linear three-mass model of bipedal walking, which offers orders-of-magnitude faster computation than full mechanical models. Next, we focus on walking and use the 3LP model to formulate online control algorithms based on a foot-stepping strategy. The method is based on model predictive control; however, we also propose a faster time-projection controller that achieves comparable performance without numerical optimization. We also deploy an efficient implementation of inverse dynamics together with advanced sensor fusion and actuator control algorithms to ensure precise and compliant tracking of the simplified 3LP trajectories. Extensive simulations and hardware experiments on the COMAN robot demonstrate the effectiveness and strengths of our method. This thesis goes beyond humanoid walking applications: we further use the developed modeling tools to analyze and understand principles of human locomotion. Our 3LP model can describe, to some extent, the exchange of energy between human limbs during walking. We use this property to propose a metabolic-cost model of human walking which successfully describes trends across various conditions. The intrinsic power of the 3LP model to generate walking gaits in all these conditions makes it a handy solution for walking control and gait analysis, despite being a simplified model. Finally, to fill the reality gap, we propose a kinematic conversion method that takes 3LP trajectories as input and generates more human-like postures. Using this method, the 3LP model, and the time-projecting controller, we introduce a graphical user interface to simulate periodic and transient human-like walking conditions. We hope to use this combination in the future to produce faster and more human-like walking gaits, possibly with more capable humanoid robots.
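    The following sketch illustrates the flavor of model-based footstep control described above, using the step-to-step map of a one-mass Linear Inverted Pendulum and a small least-squares horizon problem. It is not the 3LP model (which has three masses and swing-leg dynamics) nor the time-projection controller; the weights, horizon, and reference step plan are illustrative assumptions.

```python
import numpy as np

G, Z, T, N = 9.81, 0.9, 0.5, 4          # gravity, CoM height, step duration, horizon
W = np.sqrt(G / Z)
c, s = np.cosh(W * T), np.sinh(W * T)

# Step-to-step dynamics of a 1D Linear Inverted Pendulum in world coordinates:
# state X = [x, v], input p = foothold position held during the step.
A = np.array([[c, s / W],
              [W * s, c]])
B = np.array([[1.0 - c],
              [-W * s]])

def predict(X0, footholds):
    """Roll the step-to-step map forward over the given foothold sequence."""
    X, states = X0, []
    for p in footholds:
        X = A @ X + B.flatten() * p
        states.append(X)
    return np.array(states)

# Build the least-squares problem: footholds should stay near a nominal
# step plan, and the terminal velocity should match a desired walking speed.
p_ref = np.array([0.25 * (k + 1) for k in range(N)])   # nominal 0.25 m steps
v_des = 0.5                                            # desired terminal velocity [m/s]
X0 = np.array([0.0, 0.4])                              # current CoM position/velocity

# Linear map from footholds to the terminal state: X_N = A^N X0 + sum A^(N-1-k) B p_k
M = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
x_free = np.linalg.matrix_power(A, N) @ X0

w_track, w_vel = 1.0, 10.0
rows = np.vstack([w_track * np.eye(N), w_vel * M[1:2, :]])
rhs = np.concatenate([w_track * p_ref, [w_vel * (v_des - x_free[1])]])
footholds, *_ = np.linalg.lstsq(rows, rhs, rcond=None)

print("planned footholds:", np.round(footholds, 3))
print("predicted terminal state:", np.round(predict(X0, footholds)[-1], 3))
```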

    Dynamic Bipedal Locomotion: From Hybrid Zero Dynamics to Control Lyapunov Functions via Experimentally Realizable Methods

    Robotic bipedal locomotion has become a rapidly growing field of research as humans increasingly look to augment their natural environments with intelligent machines. In order for these robotic systems to navigate the often unstructured environments of the world and perform tasks, they must first have the capability to locomote dynamically, reliably, and efficiently. Due to the inherently hybrid and underactuated nature of dynamic bipedal walking, the greatest experimental successes in the field have often been achieved by considering all aspects of the problem, with explicit consideration of the interplay between modeling, trajectory planning, and feedback control. The methodology and developments presented in this thesis begin with the modeling and design of dynamic walking gaits on bipedal robots through hybrid zero dynamics (HZD), a mathematical framework that couples hybrid system models with nonlinear controllers to obtain stable locomotion. This forms the first half of the thesis and is used to develop a solid foundation of HZD trajectory optimization tools and algorithms for the efficient synthesis of accurate hybrid motion plans for locomotion on two underactuated and compliant 3D bipeds. While HZD and the associated trajectory optimization are an existing framework, the resulting behaviors shown in these preliminary experiments extend the limits of what HZD has so far been demonstrated to achieve in the literature. Specifically, the core results of this thesis demonstrate the first experimental multi-contact humanoid walking with HZD on the DURUS robot, followed by the first compliant HZD motion library for walking over a continuum of walking speeds on the Cassie robot. On the theoretical front, a novel formulation of an optimization-based control framework is introduced that couples convergence constraints from control Lyapunov functions (CLFs) with formulations that have proven successful in other areas of the bipedal locomotion field, such as inverse dynamics control and quadratic programming approaches. The theoretical analysis and experimental validation of this controller form the second half of the thesis. First, a theoretical analysis demonstrates several useful properties of the approach for tuning and implementation, and the stability of the controller for HZD locomotion is proven. This is then extended to a relaxed version of the CLF controller, which replaces the convergence inequality constraint with a conservative CLF cost within a quadratic program to achieve tracking. It is then explored how this new CLF formulation can fully leverage the planned HZD walking gaits to achieve the target performance on physical hardware. Towards this goal, an experimental implementation of the CLF controller is derived for the Cassie robot, with the resulting experiments demonstrating the first successful hardware realization of a CLF controller for a 3D biped in the literature. The accuracy of the robot model and the synthesized HZD motion library allow the real-time control implementation to regularize the CLF optimization cost about the nominal walking gait. This drives the controller to choose smooth input torques and anticipated spring torques, and to regulate an optimal distribution of feasible ground reaction forces on hardware while reliably tracking the planned virtual constraints. These final results demonstrate how the components of this thesis were brought together to form an effective end-to-end implementation of a nonlinear control framework for underactuated locomotion on a bipedal robot through modeling, trajectory optimization, and ultimately real-time control.
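    As a toy illustration of the CLF-based control idea, the sketch below builds a quadratic control Lyapunov function for a double integrator from a Riccati equation and applies the pointwise min-norm controller, i.e., the closed-form solution of a CLF quadratic program with a single constraint. The system, convergence rate, and weights are assumptions for the example and are unrelated to the controllers deployed on DURUS or Cassie.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator x_dot = A x + B u (a stand-in for the output dynamics
# that a CLF would stabilize on a real robot).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

# A CLF V(x) = x' P x obtained from the continuous-time Riccati equation.
P = solve_continuous_are(A, B, Q, R)
GAMMA = 1.0  # desired exponential convergence rate


def clf_min_norm(x):
    """Pointwise min-norm CLF controller: the smallest input (in norm) that
    satisfies V_dot(x, u) <= -GAMMA * V(x). A QP with a quadratic cost and a
    single affine constraint has this closed-form solution."""
    V = x @ P @ x
    LfV = 2.0 * x @ P @ (A @ x)        # drift term of V_dot
    LgV = 2.0 * x @ P @ B              # input term of V_dot (one input here)
    a = LfV + GAMMA * V
    if a <= 0.0:                       # constraint already satisfied: u = 0
        return np.zeros(1)
    return -a * LgV / float(LgV @ LgV)


# Simulate with explicit Euler to check that V decays as required.
x, dt = np.array([1.0, 0.0]), 0.005
for _ in range(2000):
    u = clf_min_norm(x)
    x = x + dt * (A @ x + B @ u)
print("final state:", np.round(x, 4), "V(x):", round(float(x @ P @ x), 6))
```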

    Interactive Imitation Learning in Robotics: A Survey

    Interactive Imitation Learning (IIL) is a branch of Imitation Learning (IL) in which human feedback is provided intermittently during robot execution, allowing online improvement of the robot's behavior. In recent years, IIL has increasingly started to carve out its own space as a promising data-driven alternative for solving complex robotic tasks. The advantages of IIL are its data efficiency, as the human feedback guides the robot directly towards improved behavior, and its robustness, as the distribution mismatch between teacher and learner trajectories is minimized by providing feedback directly on the learner's trajectories. Nevertheless, despite the opportunities that IIL presents, its terminology, structure, and applicability are neither clear nor unified in the literature, slowing down its development and, therefore, the research of innovative formulations and discoveries. In this article, we attempt to facilitate research in IIL and lower entry barriers for new practitioners by providing a survey that unifies and structures the field. In addition, we aim to raise awareness of its potential, of what has been accomplished, and of what open research questions remain. We organize the most relevant works in IIL in terms of human-robot interaction (i.e., types of feedback), interfaces (i.e., means of providing feedback), learning (i.e., models learned from feedback and function approximators), user experience (i.e., human perception of the learning process), applications, and benchmarks. Furthermore, we analyze similarities and differences between IIL and reinforcement learning (RL), discussing how the concepts of offline, online, off-policy, and on-policy learning should be transferred from the RL literature to IIL. We particularly focus on robotic applications in the real world and discuss their implications, limitations, and promising future areas of research.
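    One concrete instance of the IIL setting described above is a DAgger-style loop in which a hypothetical teacher labels only some of the states visited by the learner. The toy one-dimensional environment, the linear policy, and the labeling schedule below are all assumptions chosen to keep the sketch short; the survey covers many other feedback types and interfaces.

```python
import numpy as np

rng = np.random.default_rng(1)

def expert_action(state):
    """Stand-in for intermittent human feedback: a corrective action label."""
    return -0.8 * state                        # the 'teacher' drives the state to zero

def env_step(state, action):
    return state + 0.1 * action + rng.normal(0.0, 0.01)

# Linear policy a = w * s, refit after each episode on the aggregated dataset
# of states visited by the learner and the corresponding teacher labels.
w, data_s, data_a = 0.0, [], []
for episode in range(10):
    s = rng.normal(0.0, 1.0)
    for t in range(50):
        a = w * s                              # learner acts on its own trajectory
        if t % 5 == 0:                         # human labels only every few steps
            data_s.append(s)
            data_a.append(expert_action(s))
        s = env_step(s, a)
    S, Y = np.array(data_s), np.array(data_a)
    w = float(S @ Y / (S @ S + 1e-6))          # least-squares fit of the policy
print("learned gain:", round(w, 3), "(teacher gain is -0.8)")
```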

    Dynamic Balance and Gait Metrics for Robotic Bipeds

    For legged robots to be useful in the real world, they must be able to balance and walk reliably. Both of these abilities improve when a system is more effective at moving itself around relative to its contacts (i.e., its feet). Achieving this type of movement depends both on the controller used to perform the motion and on the physical properties of the system. Although much work has been done on the development of dynamic controllers for balance and gait, only limited research exists on how to quantify a system’s physical balance capabilities or how to modify the system to improve those capabilities. From the control perspective, there are three strategies for maintaining balance in bipeds: flexing, leaning, and stepping. Both stepping and leaning strategies typically depend on balance points (critical points used for maintaining or regaining balance) to determine whether a step is needed and, if so, where to step. Although several balance point estimators exist, the majority of these methods make undesirable assumptions, such as ignoring impact dynamics, assuming massless legs, or restricting motion to a plane. From the physical design perspective, one promising approach for analyzing system performance is a set of dynamic ratios called velocity and momentum gains, which depend only on the (scale-invariant) dynamic parameters and instantaneous configuration of a system, enabling entire classes of mechanisms to be analyzed at the same time. This thesis makes four key contributions towards improving biped balancing capabilities. First, a dynamic bipedal controller is proposed which uses a 3D balance point estimator both to respond to disturbances and to produce reliable stepping. Second, a novel balance point estimator is proposed that facilitates stepping while combining and expanding the features of existing 2D and 3D estimators to produce a generalized 3D formulation. Third, the momentum gain formulation is extended to general 2D and 3D systems; both gains are then compared to centroidal momentum via a spatial formulation and incorporated into a generalized gain definition. Finally, the gains are used as a metric in an optimization framework to design parameterized balancing mechanisms within a given configuration space. Effectively, this enables an optimization of how well a system could balance without the need to pre-specify or co-generate controllers and/or trajectories. To validate the control contributions, simulated bipeds are subjected to external disturbances while standing still and walking. For the gain contributions, the framework is used to compare gain-optimized mechanisms to those based on the cost-of-transport metric. Through the combination of gain-based physical design optimization and the use of predictive, real-time balance point estimators within dynamic controllers, bipeds and other legged systems will soon be able to achieve reliable balance and gait in the real world.
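    For readers unfamiliar with balance points, the sketch below computes the standard instantaneous capture point from a Linear Inverted Pendulum model, i.e., the ground point to step to in order to come to rest. It relies on exactly the simplifying assumptions (constant height, no impact dynamics, no angular momentum) that the thesis' generalized 3D estimator is designed to relax, and the numbers are illustrative.

```python
import numpy as np

G = 9.81

def instantaneous_capture_point(com_pos, com_vel):
    """Standard LIP-based balance point estimate: the ground point the robot
    should step to (instantaneously) in order to come to rest.

    com_pos, com_vel: 3D centre-of-mass position and velocity; the returned
    point lies in the ground plane (z = 0). Assumes a constant CoM height and
    ignores impact dynamics and angular momentum.
    """
    omega = np.sqrt(G / com_pos[2])            # natural frequency from CoM height
    icp_xy = com_pos[:2] + com_vel[:2] / omega
    return np.array([icp_xy[0], icp_xy[1], 0.0])

com = np.array([0.05, 0.0, 0.85])              # CoM slightly ahead of the stance foot
vel = np.array([0.40, 0.05, 0.0])
print("balance point estimate:", np.round(instantaneous_capture_point(com, vel), 3))
```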

    Human-Inspired Balancing and Recovery Stepping for Humanoid Robots

    Robustly maintaining balance on two legs is an important challenge for humanoid robots, and the work presented in this book contributes to this area. It investigates efficient methods for deciding, from internal sensor data, whether and where to step, presents several improvements to efficient whole-body postural balancing methods, and proposes and evaluates a novel method for efficient recovery step generation that leverages human examples and simulation-based reinforcement learning.
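    A minimal sketch of the "whether to step" decision mentioned above: trigger a recovery step when an estimated balance point leaves a (shrunk) rectangular support polygon. The foot bounds, margin, and balance-point input are illustrative assumptions, not the book's estimator or decision method.

```python
# Decide whether a recovery step is needed based on a balance point estimate
# and a rectangular support polygon. All values here are hypothetical.

FOOT_BOUNDS = {"x": (-0.05, 0.15), "y": (-0.07, 0.07)}   # support polygon [m]

def needs_recovery_step(balance_point_xy, bounds=FOOT_BOUNDS, margin=0.01):
    """Return True if the balance point lies outside the shrunk support polygon."""
    (xmin, xmax), (ymin, ymax) = bounds["x"], bounds["y"]
    x, y = balance_point_xy
    inside = (xmin + margin <= x <= xmax - margin and
              ymin + margin <= y <= ymax - margin)
    return not inside

print(needs_recovery_step((0.04, 0.00)))   # False: balance can be kept in place
print(needs_recovery_step((0.22, 0.03)))   # True: a recovery step is required
```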