6,287 research outputs found

    Walking Stabilization Using Step Timing and Location Adjustment on the Humanoid Robot, Atlas

    Full text link
    While humans are highly capable of recovering from external disturbances and uncertainties that result in large tracking errors, humanoid robots have yet to reliably mimic this level of robustness. Essential to this is the ability to combine traditional "ankle strategy" balancing with step timing and location adjustment techniques. In doing so, the robot is able to step quickly to the necessary location to continue walking. In this work, we present both a new swing speed-up algorithm to adjust the step timing, allowing the robot to set the foot down more quickly to recover from errors in the direction of the current capture point dynamics, and a new algorithm to adjust the desired footstep, expanding the base of support to utilize the center of pressure (CoP)-based ankle strategy for balance. We then utilize the desired centroidal moment pivot (CMP) to calculate the momentum rate of change for our inverse-dynamics-based whole-body controller. We present simulation and experimental results using this work, and discuss performance limitations and potential improvements.
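    As a rough illustration of the quantities this abstract manipulates (capture point, CoP-clipped ankle strategy, CMP offset, and step timing), the sketch below uses standard Linear Inverted Pendulum relations in Python. It is not the authors' controller; the gain, mass, CoM height, and box-shaped support polygon are all assumptions.

```python
import numpy as np

g = 9.81                      # gravity [m/s^2]
z_com = 0.9                   # assumed constant CoM height [m]
omega = np.sqrt(g / z_com)    # LIP natural frequency

def capture_point(x, xdot):
    """Instantaneous capture point (DCM): xi = x + xdot / omega."""
    return x + xdot / omega

def ankle_strategy_cop(xi, xi_ref, support_min, support_max, k=3.0):
    """Proportional CoP law on the capture-point error, clipped to the
    support polygon (modelled here as an axis-aligned box)."""
    cop_raw = xi + k * (xi - xi_ref)
    return np.clip(cop_raw, support_min, support_max), cop_raw

def cmp_moment(cop, cmp, mass=50.0):
    """A CoP-CMP offset corresponds to a horizontal moment, i.e. a desired
    rate of change of centroidal angular momentum: tau ~= m * g * (cop - cmp)."""
    return mass * g * (cop - cmp)

def remaining_swing_time(xi, xi_step_end, p_stance):
    """Under constant-CoP LIP dynamics, xi(t) = p + (xi - p) * exp(omega * t);
    solve for the time at which the DCM reaches xi_step_end (a swing speed-up
    rule shortens the step when this time shrinks)."""
    num = np.linalg.norm(xi_step_end - p_stance)
    den = max(np.linalg.norm(xi - p_stance), 1e-9)
    return max(np.log(num / den) / omega, 0.0)

# Example: a forward push has added CoM velocity.
x, xdot = np.array([0.0, 0.0]), np.array([0.4, 0.0])
xi = capture_point(x, xdot)
cop, cop_raw = ankle_strategy_cop(xi, np.zeros(2),
                                  support_min=np.array([-0.10, -0.05]),
                                  support_max=np.array([ 0.10,  0.05]))
tau = cmp_moment(cop, cop_raw)                       # unclipped CoP acts as the CMP
t_swing = remaining_swing_time(xi, np.array([0.35, 0.0]), np.zeros(2))
print(xi, cop, tau, t_swing)
```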

    A Reactive and Efficient Walking Pattern Generator for Robust Bipedal Locomotion

    Full text link
    The main means available to prevent a biped robot from falling in the presence of severe disturbances are Center of Pressure (CoP) modulation, step location and timing adjustment, and angular momentum regulation. In this paper, we aim to design a walking pattern generator that employs an optimal combination of these tools to generate robust gaits. In this approach, the next step location and timing are first decided, consistent with the commanded walking velocity and based on the Divergent Component of Motion (DCM) measurement. This stage, solved by a very small Quadratic Program (QP), uses the Linear Inverted Pendulum Model (LIPM) dynamics to adapt the switching contact location and time. Then, consistent with the first stage, the LIPM with flywheel dynamics is used to regenerate the DCM and angular momentum trajectories at each control cycle. This is done by modulating the CoP and Centroidal Momentum Pivot (CMP) to realize a desired DCM at the end of the current step. Simulation results show the merit of this reactive approach in generating robust and dynamically consistent walking patterns.
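    For reference, the simplified-model relations this kind of pattern generator builds on can be written as follows; the notation is mine, not the paper's, and the actual decision variables and constraints are specific to the paper.

```latex
% LIPM/DCM relations underlying the step-adjustment stage (notation assumed):
% x = horizontal CoM position, p = CoP, \xi = DCM, \omega = \sqrt{g / z_0}.
\begin{align}
  \xi &= x + \frac{\dot{x}}{\omega}, &
  \dot{\xi} &= \omega\,(\xi - p), &
  \ddot{x} &= \omega^{2}\,(x - p).
\end{align}
% With a constant CoP p over the remaining step duration T, the DCM at contact switching is
\begin{equation}
  \xi(T) = p + \bigl(\xi(0) - p\bigr)\, e^{\omega T},
\end{equation}
% so the next footstep location, the switching time T, and the end-of-step DCM can be
% traded off against the commanded walking velocity.
```

    A common device in this line of work (an assumption here, not a claim about this paper's exact formulation) is to treat e^{\omega T} rather than T itself as a decision variable, so that step-timing adaptation stays within a quadratic program.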

    A Benchmarking of DCM Based Architectures for Position and Velocity Controlled Walking of Humanoid Robots

    Full text link
    This paper contributes to the development and comparison of Divergent-Component-of-Motion (DCM) based control architectures for humanoid robot locomotion. More precisely, we present and compare several DCM based implementations of a three-layer control architecture. From top to bottom, these three layers are here called: trajectory optimization, simplified model control, and whole-body QP control. All layers use the DCM concept to generate references for the layer below. For the simplified model control layer, we present and compare both an instantaneous controller and a Receding Horizon Control (RHC) controller. For the whole-body QP control layer, we present and compare controllers for position-controlled and velocity-controlled robots. Experiments are carried out on the one-meter-tall iCub humanoid robot. We show which implementation of the above control architecture allows the robot to achieve a walking velocity of 0.41 meters per second. Comment: Submitted to Humanoids201
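    To give a flavor of what the "instantaneous" simplified-model-control layer computes, the following Python sketch implements the textbook DCM tracking law; the gain, CoM height, and example numbers are illustrative assumptions, not the paper's values or implementation.

```python
import numpy as np

def instantaneous_dcm_controller(xi, xi_ref, xi_dot_ref, omega, k_xi=1.5):
    """Choose the desired ZMP/CoP so the closed-loop DCM error decays:
        zmp = xi_ref - xi_dot_ref / omega + k_xi * (xi - xi_ref).
    Substituting into the DCM dynamics xi_dot = omega * (xi - zmp) gives
        d/dt (xi - xi_ref) = -omega * (k_xi - 1) * (xi - xi_ref),
    which is stable for k_xi > 1."""
    return xi_ref - xi_dot_ref / omega + k_xi * (xi - xi_ref)

# Usage with illustrative numbers (the CoM height here is only an assumption):
omega = np.sqrt(9.81 / 0.53)
zmp = instantaneous_dcm_controller(xi=np.array([0.05, 0.0]),
                                   xi_ref=np.array([0.02, 0.0]),
                                   xi_dot_ref=np.array([0.10, 0.0]),
                                   omega=omega)
print(zmp)
```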

    Robust and Versatile Humanoid Locomotion Based on Analytical Control and Residual Physics

    Get PDF
    Humanoid robots are made to resemble humans, but their locomotion abilities are far from ours in terms of agility and versatility. When humans walk on complex terrains or face external disturbances, they unconsciously and efficiently combine a set of strategies to regain stability. This thesis tackles the problem of developing a robust omnidirectional walking framework able to generate versatile and agile locomotion on complex terrains. We designed and developed model-based and model-free walk engines, formulated the controllers using different approaches, including classical and optimal control schemes, and validated their performance through simulations and real experiments. These frameworks have hierarchical structures composed of several layers, each consisting of interconnected modules, which reduces complexity and increases flexibility; they can also be deployed easily and quickly on different platforms. Moreover, we believe that using machine learning on top of analytical approaches is key to letting humanoid robots step out of the laboratory. We proposed a tight coupling between analytical control and deep reinforcement learning, augmenting our analytical controller with reinforcement learning modules that learn to adaptively regulate the walk engine parameters (planners and controllers) and to generate residuals that adjust the robot's target joint positions (residual physics). The effectiveness of the proposed frameworks was demonstrated and evaluated across a set of challenging simulation scenarios. The robot was able to generalize what it learned in one scenario, displaying human-like locomotion skills in unforeseen circumstances, even in the presence of noise and external pushes. (Programa Doutoral em Informática)
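    The "residual physics" idea mentioned in this abstract can be pictured as a learned correction added on top of the analytical walk engine's joint targets. The sketch below is only a schematic of that composition; the split of the policy output, the scaling, and all names are hypothetical, not the thesis implementation.

```python
import numpy as np

def blended_joint_targets(analytical_targets, policy_output,
                          n_params=4, residual_scale=0.05):
    """Split the policy output into (i) walk-engine parameter deltas and
    (ii) joint-space residuals, then add the scaled, clipped residuals to
    the analytical joint targets (residual-physics composition)."""
    param_deltas = policy_output[:n_params]
    residuals = residual_scale * np.clip(policy_output[n_params:], -1.0, 1.0)
    return analytical_targets + residuals, param_deltas

# Usage with dummy numbers: 12 leg joints, policy output = 4 parameter deltas + 12 residuals.
q_analytical = np.zeros(12)
policy_out = np.random.uniform(-1, 1, size=4 + 12)
q_target, walk_param_deltas = blended_joint_targets(q_analytical, policy_out)
print(q_target, walk_param_deltas)
```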