106 research outputs found

    Simulation Study on Acquisition Process of Locomotion by Using an Infant Robot

    Locomotion is one of the basic functions of a mobile robot, and using legs is one strategy for accomplishing it: legs allow a robot to move over rough terrain. A considerable amount of research has therefore been conducted on the motion control of legged robots. This chapter treats the motion generation of an infant robot, with …

    Humanoid Robots

    For many years, humans have tried in many ways to recreate the complex mechanisms that make up the human body. This task is extremely complicated and the results are not entirely satisfactory. However, thanks to growing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, a great part of them autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with a view to possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.

    Streamlined sim-to-real transfer for deep-reinforcement learning in robotics locomotion

    Legged robots possess superior mobility compared to other machines, yet designing controllers for them can be challenging. Classic control methods require engineers to distill their knowledge into controllers, which is time-consuming and limiting when approaching dynamic tasks in unknown environments. Conversely, learning-based methods that gather knowledge from data can potentially unlock the versatility of legged systems. In this thesis, we propose a novel approach called CPG-Actor, which incorporates feedback into a fully differentiable Central Pattern Generator (CPG) formulation using neural networks and Deep Reinforcement Learning (RL). This approach achieves approximately twenty times better training performance than previous methods and provides insights into how training affects the distribution of parameters in both the CPGs and the MLP feedback network. Adopting Deep-RL to design controllers comes at the expense of gathering extensive data, typically done in simulation to reduce time. However, controllers trained with data collected in simulation often lose performance when deployed in the real world, a phenomenon referred to as the sim-to-real gap. To address this, we propose a new method called Extended Random Force Injection (ERFI), which randomizes only two parameters to allow sim-to-real transfer of locomotion controllers. ERFI demonstrated high robustness when the base mass was varied or a manipulator arm was attached to the robot during testing, and achieved performance comparable to standard randomization techniques. Furthermore, we propose a new method called Roll-Drop to enhance the robustness of Deep-RL policies to observation noise. Roll-Drop introduces dropout during rollout, achieving an 80% success rate when tested with up to 25% noise injected in the observations. Finally, we adopted model-free controllers to enable omni-directional bipedal locomotion on point feet with a quadruped robot, without any hardware modification or external support. Despite the limitations posed by the quadruped's hardware, the study considers this a perfect benchmark task to assess the shortcomings of sim-to-real techniques and unlock future avenues for the legged robotics community. Overall, this thesis demonstrates the potential of learning-based methods to design dynamic and robust controllers for legged robots while limiting the effort needed for sim-to-real transfer.
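    As a rough illustration of the ERFI idea summarized above, the sketch below injects random forces at the robot base during simulated training, governed by only two randomization parameters; the class, parameter names, and values are assumptions for illustration, not the thesis implementation.

        import numpy as np

        class RandomForceInjector:
            """Hypothetical sketch of per-episode and per-step random force injection."""

            def __init__(self, episode_amplitude=50.0, step_amplitude=5.0, seed=None):
                # Two randomization parameters (both in newtons): a constant
                # per-episode force offset and a per-step perturbation amplitude.
                self.episode_amplitude = episode_amplitude
                self.step_amplitude = step_amplitude
                self.rng = np.random.default_rng(seed)
                self.episode_offset = np.zeros(3)

            def reset(self):
                # Sample a constant force offset once at the start of every episode.
                self.episode_offset = self.rng.uniform(
                    -self.episode_amplitude, self.episode_amplitude, size=3)

            def sample_force(self):
                # Force vector to be applied to the robot base by the simulator.
                return self.episode_offset + self.rng.uniform(
                    -self.step_amplitude, self.step_amplitude, size=3)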

    Reinforcement Learning of Stable Trajectory for Quasi-Passive Dynamic Walking of an Unstable Biped Robot

    Biped walking is one of the major research targets in recent humanoid robotics, and many researchers are now interested in Passive Dynamic Walking (PDW) [McGeer (1990)] rather than walking based on the conventional Zero Moment Point (ZMP) criterion [Vukobratovic (1972)]. The ZMP criterion is usually used for planning a desired trajectory to be tracked by …
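    For readers unfamiliar with the ZMP criterion mentioned above, a minimal sketch of the standard zero-moment-point expression for a linear inverted-pendulum (cart-table) model is given below; this simplified form is a generic textbook assumption, not taken from the cited works.

        G = 9.81  # gravitational acceleration [m/s^2]

        def zmp(com_pos, com_acc, com_height):
            """Zero Moment Point along one horizontal axis for a linear
            inverted-pendulum model: p = x - (z_c / g) * x_ddot."""
            return com_pos - (com_height / G) * com_acc

        # A planned CoM trajectory is considered dynamically feasible when the
        # resulting ZMP stays inside the support polygon of the stance feet.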

    Basic Concepts of the Control and Learning Mechanism of Locomotion by the Central Pattern Generator

    Basic locomotor patterns of living bodies, such as walking and swimming, are produced by …
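    As a minimal, generic illustration of the central-pattern-generator concept named in the chapter title (not the chapter's own model), two coupled phase oscillators with anti-phase coupling can already produce the alternating rhythmic outputs typical of walking or swimming:

        import numpy as np

        def simulate_cpg(steps=2000, dt=0.001, omega=2.0 * np.pi, k=5.0):
            # Two oscillators locked roughly in anti-phase (e.g. left/right limbs).
            phases = np.array([0.0, 0.5])
            offset = np.pi
            outputs = []
            for _ in range(steps):
                d0 = omega + k * np.sin(phases[1] - phases[0] - offset)
                d1 = omega + k * np.sin(phases[0] - phases[1] + offset)
                phases = phases + dt * np.array([d0, d1])
                outputs.append(np.cos(phases))  # rhythmic drive signals
            return np.array(outputs)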

    Robust and versatile humanoid locomotion based on analytical control and residual physics

    Humanoid robots are made to resemble humans, but their locomotion abilities are far from ours in terms of agility and versatility. When humans walk on complex terrain or face external disturbances, they unconsciously and efficiently combine a set of strategies to regain stability. This thesis tackles the problem of developing a robust omnidirectional walking framework that is able to generate versatile and agile locomotion on complex terrain. We designed and developed model-based and model-free walk engines, formulated the controllers using different approaches including classical and optimal control schemes, and validated their performance through simulations and experiments. These frameworks have hierarchical structures composed of several layers, each made up of several modules connected together to reduce complexity and increase the flexibility of the proposed frameworks. Additionally, they can be easily and quickly deployed on different platforms. We believe that using machine learning on top of analytical approaches is a key to opening the door for humanoid robots to step out of laboratories. We therefore propose a tight coupling between analytical control and deep reinforcement learning: we augment our analytical controller with reinforcement learning modules that learn to regulate the walk-engine parameters (planners and controllers) adaptively and to generate residuals that adjust the robot's target joint positions (residual physics). The effectiveness of the proposed frameworks was demonstrated and evaluated across a set of challenging simulation scenarios. The robot was able to generalize what it learned in one scenario, displaying human-like locomotion skills in unforeseen circumstances, even in the presence of noise and external pushes.
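    The residual-physics coupling described above can be pictured with the following sketch, in which a learned policy adds a bounded correction to the joint targets of an analytical walk engine; the callables, shapes, and scaling factor are placeholders for illustration, not the thesis implementation.

        import numpy as np

        def control_step(analytic_targets, policy, obs, residual_scale=0.05):
            q_analytic = analytic_targets(obs)                 # analytical joint targets [rad]
            residual = residual_scale * np.tanh(policy(obs))   # learned, bounded correction
            return q_analytic + residual                       # targets sent to the joint controllers

        # Example with dummy stand-ins for the walk engine and the learned policy:
        obs = np.zeros(10)
        q_target = control_step(lambda o: np.zeros(12),
                                lambda o: np.random.randn(12), obs)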

    Globally stable control of a dynamic bipedal walker using adaptive frequency oscillators

    We present a control method for a simple limit-cycle bipedal walker that uses adaptive frequency oscillators (AFOs) to generate stable gaits. The existence of stable limit cycles is demonstrated with an inverted-pendulum model, which predicts a proportional relationship between hip torque amplitude and stride frequency. The closed-loop walking control incorporates adaptive Fourier analysis to generate a uniform oscillator phase. Gait solutions (fixed points) are predicted via linearization of the walker model and employed as initial conditions to generate exact solutions via simulation. Global stability is determined via a recursive algorithm that generates the approximate basin of attraction of a fixed point. We also present an initial study on the implementation of AFO-based control on a bipedal walker with realistic mass distribution and an articulated knee joint.
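    A minimal sketch of a phase-based adaptive frequency oscillator, of the general kind referred to above, is given below; the Euler integration, gain, and teaching signal are illustrative assumptions rather than the paper's controller.

        import numpy as np

        def run_afo(steps=200_000, dt=1e-4, k=20.0, input_freq=2.0 * np.pi * 1.5):
            phi, omega = 0.0, 2.0 * np.pi * 0.5    # initial phase and frequency estimate
            for i in range(steps):
                f = np.sin(input_freq * i * dt)    # periodic teaching signal
                coupling = -k * f * np.sin(phi)    # shared perturbation term
                phi += dt * (omega + coupling)
                omega += dt * coupling             # frequency adapts toward input_freq
            return omega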

    Rolling Optimization Method for Humanoid Robots
