
    Analytic and Learned Footstep Control for Robust Bipedal Walking

    Bipedal walking is a complex, balance-critical whole-body motion with inherently unstable, inverted-pendulum-like dynamics. Strong disturbances must be countered quickly by altering the walking motion and placing the next step in the right place at the right time. Unfortunately, the high number of degrees of freedom of the humanoid body makes the fast computation of well-placed steps a particularly challenging task. Sensor noise, imprecise actuation, and latency in the sensorimotor feedback loop impose further challenges when controlling real hardware. This dissertation addresses these challenges and describes a method for generating a robust walking motion for bipedal robots. Fast modification of footstep placement and timing allows agile control of the walking velocity and the absorption of strong disturbances. In a divide-and-conquer manner, motion and balance are solved separately and then consolidated such that a low-dimensional balance controller governs the timing and footstep locations of a high-dimensional motion generator. Oscillatory motion signals produced by a Central Pattern Generator are used to synthesize an open-loop stable walk on flat ground which, lacking feedback, cannot respond to disturbances. The Central Pattern Generator exposes a low-dimensional parameter set that influences the timing and the landing coordinates of the swing foot. For balance control, a simple inverted-pendulum-based physical model is used to represent the principal dynamics of walking. The model is robust to disturbances in the sense that it returns to an ideal trajectory from a wide range of initial conditions by employing a combination of Zero Moment Point control, step timing, and foot placement strategies. The simulation of the model and its controller output are computed efficiently in closed form, supporting high-frequency balance control at negligible computational cost. Additionally, the sagittal step size produced by the controller can be trained online during walking with a novel, gradient descent-based machine learning method. While the analytic controller forms the core of reliable walking, the trained sagittal step size complements it to improve the overall walking performance. The balanced whole-body walking motion arises from using the footstep coordinates and step timing predicted by the low-dimensional model as control input for the Central Pattern Generator. Real-robot experiments are presented as evidence of disturbance-resistant, omnidirectional gait control, with arguably the strongest push-recovery capabilities to date.
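
    As a rough illustration of the closed-form, inverted-pendulum-based balance control described above, the following Python sketch simulates a 1-D Linear Inverted Pendulum Model and places the next footstep with a capture-point-style rule. The constants, gains, and placement heuristic are illustrative assumptions and do not reproduce the dissertation's actual controller.

    import numpy as np

    # Minimal sketch of a 1-D Linear Inverted Pendulum Model (LIPM) whose state
    # evolves in closed form, plus a capture-point-style foot placement rule.
    # Constants and the placement heuristic are illustrative assumptions only.

    G = 9.81                      # gravity [m/s^2]
    Z_COM = 0.8                   # assumed constant CoM height [m]
    OMEGA = np.sqrt(G / Z_COM)    # natural frequency of the pendulum

    def lipm_closed_form(x0, v0, t):
        """Closed-form CoM position/velocity after time t (support foot at origin)."""
        c, s = np.cosh(OMEGA * t), np.sinh(OMEGA * t)
        return x0 * c + (v0 / OMEGA) * s, x0 * OMEGA * s + v0 * c

    def capture_point_step(x, v, offset=0.0):
        """Next footstep near the instantaneous capture point; a nonzero offset
        keeps the robot walking forward instead of coming to a stop."""
        return x + v / OMEGA - offset

    # Example: CoM state at the end of a 0.4 s support phase and the suggested step.
    x_T, v_T = lipm_closed_form(x0=0.02, v0=0.3, t=0.4)
    print("CoM at step time:", x_T, v_T)
    print("Suggested sagittal step location:", capture_point_step(x_T, v_T, 0.05))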

    Learning-based methods for planning and control of humanoid robots

    Humans and robots are increasingly likely to coexist. The anthropomorphic nature of humanoid robots facilitates physical human-robot interaction and makes social human-robot interaction more natural. Moreover, it makes humanoids ideal candidates for many applications involving tasks and environments designed for humans. Whatever the application, a ubiquitous requirement for the humanoid is to possess proper locomotion skills. Despite long-standing research, humanoid locomotion is still far from being a trivial task. A common approach to humanoid locomotion is to decompose its complexity by means of a model-based hierarchical control architecture. To cope with computational constraints, simplified models of the humanoid are employed in some of the architectural layers. At the same time, the redundancy of the humanoid with respect to the locomotion task, as well as the closeness of such a task to human locomotion, suggests a data-driven approach that learns it directly from experience. This thesis investigates the application of learning-based techniques to planning and control of humanoid locomotion. In particular, both deep reinforcement learning and deep supervised learning are considered to address humanoid locomotion tasks of increasing complexity. First, we employ deep reinforcement learning to study the spontaneous emergence of balancing and push-recovery strategies, which are essential prerequisites for more complex locomotion tasks. Then, using motion capture data collected from human subjects, we employ deep supervised learning to shape the robot walking trajectories toward improved human-likeness. The proposed approaches are validated on real and simulated humanoid robots, specifically on two versions of the iCub humanoid: iCub v2.7 and iCub v3.
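
    To make the push-recovery setting concrete, the sketch below shows one plausible way to shape a reward for a balancing/push-recovery reinforcement learning task. The state fields, weights, and thresholds are hypothetical and are not taken from the thesis.

    import numpy as np

    # Hypothetical reward shaping for a push-recovery RL task: stay upright with a
    # quiet centre of mass and low actuation effort. Weights and fields are assumed.

    def push_recovery_reward(com_pos, com_vel, joint_torques, fallen,
                             w_pos=1.0, w_vel=0.1, w_effort=1e-3):
        if fallen:
            return -10.0                                        # terminal fall penalty
        r_pos = -w_pos * float(np.linalg.norm(com_pos[:2]))     # horizontal CoM drift
        r_vel = -w_vel * float(np.linalg.norm(com_vel))         # CoM velocity magnitude
        r_effort = -w_effort * float(np.sum(np.square(joint_torques)))
        return 1.0 + r_pos + r_vel + r_effort                   # alive bonus + penalties

    # Example query with dummy values for a 23-joint humanoid.
    print(push_recovery_reward(np.array([0.03, -0.01, 0.55]),
                               np.array([0.10, 0.00, 0.00]),
                               np.zeros(23), fallen=False))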

    Thermal Recovery of Multi-Limbed Robots with Electric Actuators

    The problem of finding thermally minimizing configurations of a humanoid robot to recover its actuators from unsafe thermal states is addressed. A first-order, data-driven, effort-based thermal model of the robot's actuators is devised and used to predict future thermal states. Given this predictive capability, a map between configurations and future temperatures is formulated to determine which configurations, subject to valid contact constraints, can be adopted now to minimize future thermal states. Effectively, this approach is a realization of a contact-constrained thermal inverse-kinematics (IK) process. Experimental validation of the proposed approach is performed on the NASA Valkyrie robot hardware.
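
    The first-order, effort-based thermal model mentioned above can be pictured as Joule heating proportional to squared effort combined with first-order cooling toward ambient temperature. The sketch below forward-integrates such a model; the parameter names and values are assumptions for illustration only, not Valkyrie's identified model.

    import numpy as np

    # Sketch of a first-order, effort-based actuator thermal model: Joule heating
    # proportional to squared torque plus first-order cooling toward ambient.
    #     dT/dt = alpha * tau^2 - (T - T_amb) / time_constant
    # All parameter names and values below are illustrative assumptions.

    def predict_temperature(T0, effort_profile, dt, T_amb=25.0,
                            alpha=5e-4, time_constant=120.0):
        """Forward-integrate the winding temperature over a planned effort profile."""
        T = T0
        for tau in effort_profile:
            T += (alpha * tau**2 - (T - T_amb) / time_constant) * dt
        return T

    # A configuration that halves the required joint torque lets the actuator cool:
    hot = np.full(600, 40.0)     # 40 N*m held for 600 * 0.1 s = 60 s
    cool = np.full(600, 20.0)    # 20 N*m for the same duration
    print(predict_temperature(90.0, hot, dt=0.1))    # keeps heating toward a hotter steady state
    print(predict_temperature(90.0, cool, dt=0.1))   # relaxes toward a lower temperature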

    Hierarchical Control for Bipedal Locomotion using Central Pattern Generators and Neural Networks

    The complexity of bipedal locomotion may be attributed to the difficulty of synchronizing joint movements while simultaneously achieving high-level objectives such as walking in a particular direction. Artificial central pattern generators (CPGs) can produce synchronized joint movements and have been used in the past for bipedal locomotion. However, most existing CPG-based approaches do not address the problem of high-level control explicitly. We propose a novel hierarchical control mechanism for bipedal locomotion in which an optimized CPG network is used for joint control and a neural network acts as a high-level controller that modulates the CPG network. By separating motion generation from motion modulation, the high-level controller does not need to control individual joints directly but can instead pursue a higher goal using a low-dimensional control signal. The feasibility of the hierarchical controller is demonstrated through simulation experiments using the Neuro-Inspired Companion (NICO) robot. Experimental results demonstrate the controller's ability to function even without an exact robot model.
    Comment: In: Proceedings of the Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Oslo, Norway, Aug. 19-22, 201
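
    A minimal sketch of a CPG whose low-dimensional parameters (frequency, amplitude, phase lag) would be set by a higher-level controller is given below. The oscillator form and the mapping to NICO's joints are illustrative assumptions, not the optimized CPG network of the paper.

    import numpy as np

    # Sketch of a CPG as phase-coupled sinusoidal oscillators whose frequency,
    # amplitude, and phase lag form the low-dimensional command a high-level
    # controller would modulate. The oscillator form and joint mapping are assumed.

    def cpg_joint_trajectories(n_joints=4, duration=2.0, dt=0.01,
                               freq=1.0, amplitude=0.3, phase_lag=np.pi / 2):
        """Synchronized joint-angle signals with fixed phase offsets between joints."""
        t = np.arange(0.0, duration, dt)
        phase = 2.0 * np.pi * freq * t
        return np.stack([amplitude * np.sin(phase + i * phase_lag)
                         for i in range(n_joints)], axis=1)

    # A high-level controller would set freq/amplitude/phase_lag instead of a human.
    traj = cpg_joint_trajectories(freq=1.2, amplitude=0.25)
    print(traj.shape)   # (timesteps, joints): one column of targets per joint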

    Robust and Versatile Humanoid Locomotion Based on Analytic Control and Residual Physics

    Humanoid robots are made to resemble humans, but their locomotion abilities are far from ours in terms of agility and versatility. When humans walk on complex terrains or face external disturbances, they combine a set of strategies, unconsciously and efficiently, to regain stability. This thesis tackles the problem of developing a robust omnidirectional walking framework that can generate versatile and agile locomotion on complex terrains. We designed and developed model-based and model-free walk engines, formulated the controllers using different approaches, including classical and optimal control schemes, and validated their performance through simulations and experiments. These frameworks have hierarchical structures composed of several layers, each built from modules that are connected together to reduce complexity and increase the flexibility of the proposed frameworks. Additionally, they can be deployed easily and quickly on different platforms. We believe that using machine learning on top of analytical approaches is a key to opening the door for humanoid robots to step out of laboratories. We proposed a tight coupling between analytical control and deep reinforcement learning, augmenting our analytical controller with reinforcement learning modules that learn to regulate the walk engine parameters (planners and controllers) adaptively and to generate residuals that adjust the robot's target joint positions (residual physics). The effectiveness of the proposed frameworks was demonstrated and evaluated across a set of challenging simulation scenarios. The robot was able to generalize what it learned in one scenario, displaying human-like locomotion skills in unforeseen circumstances, even in the presence of noise and external pushes.
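
    The residual-physics coupling described above can be pictured as an analytic walk engine proposing joint targets and a learned policy adding small, bounded corrections. The sketch below shows only the data flow; the interfaces and scaling are placeholder assumptions, not the thesis' actual modules.

    import numpy as np

    # Sketch of the residual-physics data flow: an analytic walk engine proposes
    # joint targets and a learned policy adds small, bounded corrections.
    # walk_engine and policy below are placeholder stand-ins, not real modules.

    def blended_joint_targets(walk_engine, policy, observation,
                              residual_scale=0.05, joint_limits=(-2.0, 2.0)):
        q_analytic = walk_engine(observation)             # nominal joint positions
        residual = residual_scale * policy(observation)   # small learned correction
        return np.clip(q_analytic + residual, *joint_limits)

    walk_engine = lambda obs: np.zeros(12)                # e.g. CPG/LIPM-based targets
    policy = lambda obs: np.tanh(np.random.randn(12))     # e.g. RL output in [-1, 1]
    print(blended_joint_targets(walk_engine, policy, observation=np.zeros(30)))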

    Dynamic Walking: Toward Agile and Efficient Bipedal Robots

    Dynamic walking on bipedal robots has evolved from an idea in science fiction to a practical reality. This is due to continued progress in three key areas: a mathematical understanding of locomotion, the computational ability to encode this mathematics through optimization, and hardware capable of realizing this understanding in practice. In this context, this review article outlines the end-to-end process behind methods that have proven effective in the literature for achieving dynamic walking on bipedal robots. We begin by introducing mathematical models of locomotion, from reduced-order models that capture essential walking behaviors to hybrid dynamical systems that encode the full-order continuous dynamics along with discrete foot-strike dynamics. These models form the basis for gait generation via (nonlinear) optimization problems. Finally, models and their generated gaits merge in the context of real-time control, wherein walking behaviors are translated to hardware. The concepts presented are illustrated throughout in simulation, and experimental instantiations on multiple walking platforms are highlighted to demonstrate the ability to realize dynamic walking on bipedal robots that is agile and efficient.
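
    The hybrid-dynamical-systems view mentioned above alternates continuous dynamics with a discrete reset at foot strike. The toy sketch below integrates a one-dimensional flight phase until a guard triggers a lossy impact map; it illustrates the structure only, not a full biped model.

    import numpy as np

    # Toy hybrid system: integrate continuous dynamics until a guard (foot strike)
    # triggers a discrete reset map. Dynamics, guard, and reset are stand-ins.

    def simulate_hybrid(x, f, guard, reset, dt=1e-3, t_max=3.0):
        """Integrate x' = f(x); apply reset(x) whenever guard(x) reaches zero."""
        t, strikes = 0.0, 0
        while t < t_max:
            x = x + dt * f(x)             # explicit Euler, for illustration only
            if guard(x) <= 0.0:           # e.g. swing-foot height hits the ground
                x = reset(x)              # discrete impact map
                strikes += 1
            t += dt
        return x, strikes

    # 1-D "hopper": ballistic flight, lossy rebound at touchdown.
    f = lambda x: np.array([x[1], -9.81])            # [height, velocity] dynamics
    guard = lambda x: x[0]                           # touchdown when height <= 0
    reset = lambda x: np.array([0.0, -0.8 * x[1]])   # restitution coefficient 0.8
    print(simulate_hybrid(np.array([1.0, 0.0]), f, guard, reset))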

    Learning Terrain Dynamics: A Gaussian Process Modeling and Optimal Control Adaptation Framework Applied to Robotic Jumping

    The complex dynamics characterizing deformable terrain present significant impediments to the real-world viability of locomotive robotics, particularly for legged machines. We explore vertical robotic jumping as a model task for legged locomotion on presumed-uncharacterized, nonrigid terrain. By integrating Gaussian process (GP)-based regression and evaluation to estimate ground reaction forces as a function of the state, a 1-D jumper acquires the capability to learn the forcing profiles exerted by its environment while achieving its control objective. The GP-based dynamical model initially assumes a baseline rigid, noncompliant surface. As part of an iterative procedure, the optimizer employing this model generates an optimal control strategy to achieve a target jump height. Experiential data recovered from execution on the true surface are then used to train the GP, in turn providing the optimizer with a more richly informed dynamical model of the environment. The iterative control-learning procedure was rigorously evaluated in experiments over different surface types, in which a robotic hopper was challenged to jump to several different target heights. Each task was achieved within ten attempts, over which the terrain's dynamics were learned. With each iteration, GP predictions of ground forcing became incrementally refined, rapidly matching experimental force measurements. The few-iteration convergence demonstrates a fundamental capacity to both estimate and adapt to unknown terrain dynamics on application-realistic time scales, all with control tools amenable to robotic legged locomotion.
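
    The learning step can be pictured as fitting a Gaussian process that maps a terrain-interaction state to measured ground reaction force and then querying it inside the optimizer. The sketch below uses scikit-learn with synthetic data; the one-dimensional input, kernel, and force model are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Fit a GP mapping a toy 1-D interaction state (foot penetration depth) to
    # measured ground reaction force, then query it as an optimizer would.
    # The synthetic data, kernel, and force model are illustrative assumptions.

    rng = np.random.default_rng(0)
    depth = np.linspace(0.0, 0.05, 20).reshape(-1, 1)                 # penetration [m]
    force = 4000.0 * depth.ravel() ** 1.3 + rng.normal(0.0, 2.0, 20)  # noisy GRF [N]

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.01) + WhiteKernel(1.0),
                                  normalize_y=True)
    gp.fit(depth, force)

    # The optimizer would query the learned GRF mean (and its uncertainty).
    mean, std = gp.predict(np.array([[0.03]]), return_std=True)
    print(f"Predicted GRF at 3 cm depth: {mean[0]:.1f} N +/- {std[0]:.1f} N")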