443 research outputs found

    Orbit Characterization, Stabilization and Composition on 3D Underactuated Bipedal Walking via Hybrid Passive Linear Inverted Pendulum Model

    A Hybrid passive Linear Inverted Pendulum (H-LIP) model is proposed for characterizing, stabilizing and composing periodic orbits for 3D underactuated bipedal walking. Specifically, Period-1 (P1) and Period-2 (P2) orbits are geometrically characterized in the state space of the H-LIP. Stepping controllers are designed for global stabilization of the orbits. Valid ranges of the gains and their optimality are derived. The optimal stepping controller is used to create and stabilize the walking of bipedal robots. An actuated Spring-loaded Inverted Pendulum (aSLIP) model and the underactuated robot Cassie are used for illustration. Both the aSLIP walking with P1 or P2 orbits and the Cassie walking with all 3D compositions of the P1 and P2 orbits can be smoothly generated and stabilized from a stepping-in-place motion. This approach provides a perspective and a methodology towards continuous gait generation and stabilization for 3D underactuated walking robots.

    Stability analysis and control for bipedal locomotion using energy methods

    In this thesis, we investigate the stability of limit cycles of passive dynamic walking. The formation process of the limit cycles is approached from the view of energy interaction. We introduce for the first time the notion of energy portrait analysis, which originates from the phase portrait. The energy plane is spanned by the total energy of the system and its derivative, and different energy trajectories represent the energy portrait in the plane. One advantage of this method is that the stability of the limit cycles can be easily shown in a 2D plane regardless of the dimension of the system. The energy portrait of passive dynamic walking reveals that the limit cycles are formed by the interaction between energy loss and energy gain during each cycle, and that the two are equal at equilibria in the energy plane. In addition, the energy portrait is exploited to examine the existence of semi-passive limit cycles generated using an energy supply only at the take-off phase. It is shown that the energy interaction at the ground contact compensates for the energy supply, which makes the total energy invariant, yielding limit cycles. This result means that new limit cycles can be generated according to the energy supply without changing the ground slope, and that level-ground walking, whose energy gain at the contact phase is always zero, can be achieved without actuation during the swing phase. We design multiple switching controllers by virtue of this property to increase the basin of attraction. Multiple limit cycles are linearized using the Poincaré map method, and the feedback gains are computed taking into account robustness and actuator saturation. Once a trajectory diverges from a basin of attraction, we switch the current controller to one that includes the trajectory in its basin of attraction. Numerical simulations confirm that a set of limit cycles can be used to increase the basin of attraction further by switching the controllers one after another.
To enhance our knowledge of the limit cycles, we performed extensive simulations and found all stable and unstable limit cycles over various ground slopes, not only for symmetric legs but also for unequal legs that cause gait asymmetries. As a result, we present a novel classification of the passive limit cycles into six distinct groups that are consecutive and cyclical.
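The Poincaré-map linearization the thesis uses can be sketched numerically. As a stand-in for the compass-gait walker (whose dynamics are not given in the abstract), the sketch below applies the same procedure to the Van der Pol oscillator, which has a well-known stable limit cycle: locate a fixed point of the return map, then estimate its derivative by finite differences; a magnitude below one indicates local orbital stability.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # Van der Pol damping parameter (stand-in system, not the walker)

def vdp(t, y):
    x, v = y
    return [v, MU * (1.0 - x**2) * v - x]

def crossing(t, y):              # Poincaré section: x = 0, crossed upward
    return y[0]
crossing.direction = 1.0

def poincare(v0):
    """Return map: velocity at the next upward crossing of x = 0."""
    sol = solve_ivp(vdp, (0.0, 20.0), [0.0, v0],
                    events=crossing, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][1][1]  # skip the t = 0 crossing we start on

# Find the limit cycle as a fixed point of the return map ...
v_star = 1.0
for _ in range(10):
    v_star = poincare(v_star)

# ... and linearize it: |dP/dv| < 1 means the orbit is locally stable.
eps = 1e-6
slope = (poincare(v_star + eps) - poincare(v_star - eps)) / (2.0 * eps)
```

For a walker the section would instead be the ground-contact event and the map would act on the full post-impact state, but the fixed-point-plus-Jacobian recipe is the same.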

    Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization

    We present a learning-based approach for minimizing the electric energy consumption during walking of a passively-compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory which efficiently exploits the robot's passive compliance. To do this, we propose a reinforcement learning method which evolves the policy parameterization dynamically during the learning process and thus manages to find better policies faster than a fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN, where it achieves significant energy reduction. © 2011 IEEE
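The core idea of evolving the policy parameterization can be illustrated on a function-approximation toy task like the one the abstract mentions. A minimal sketch, with an assumed (1+1) evolution-strategy update rather than the paper's actual RL algorithm: the policy is a piecewise-linear curve whose knot count is doubled during learning, carrying the learned shape over by interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 200)
target = np.sin(2.0 * np.pi * xs)            # toy function-approximation task

def rollout(knots):
    """Policy output: linear interpolation through the knot values."""
    return np.interp(xs, np.linspace(0.0, 1.0, len(knots)), knots)

def cost(knots):
    return np.mean((rollout(knots) - target) ** 2)

knots = np.zeros(3)                          # start with a coarse parameterization
for it in range(3000):
    if it in (1000, 2000):                   # evolve the parameterization: double
        t_old = np.linspace(0.0, 1.0, len(knots))        # the knots, carrying the
        t_new = np.linspace(0.0, 1.0, 2 * len(knots))    # learned shape over
        knots = np.interp(t_new, t_old, knots)
    cand = knots + rng.normal(0.0, 0.05, len(knots))     # (1+1)-ES perturbation
    if cost(cand) < cost(knots):
        knots = cand
```

Starting coarse keeps the search space small early on, while the later refinements add expressiveness without discarding what was already learned; this is the intuition behind finding better policies faster than a fixed fine parameterization.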