
    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
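
    As a rough illustration of the kind of online, plasticity-based adaptation the abstract refers to, the sketch below shows a correlation-driven weight update with slow decay. The learning rule, the signal names (sensor_signal, error_signal), and the gain values are illustrative assumptions, not the paper's actual controller.

        import numpy as np

        # Minimal sketch: one reflex-like synaptic weight from a ground-contact
        # sensor neuron to a motor neuron, adapted online during walking.
        # Rule and constants are assumptions for illustration only.

        def plasticity_step(w, sensor_signal, error_signal, lr=0.01, decay=1e-4):
            """One online weight update: correlation-driven growth plus slow decay."""
            dw = lr * sensor_signal * error_signal - decay * w
            return w + dw

        # Toy usage: adapt the reflex gain over a few simulated gait cycles.
        w = 0.5
        rng = np.random.default_rng(0)
        for step in range(100):
            sensor = rng.uniform(0.0, 1.0)               # e.g. filtered ground-contact signal
            error = 0.2 * sensor + 0.05 * rng.normal()   # mismatch between desired and actual timing
            w = plasticity_step(w, sensor, error)
        print(f"adapted synaptic weight: {w:.3f}")

    The point of such a rule is that the weight only grows while the sensory signal and the error are correlated, so adaptation stops once the gait no longer produces a systematic mismatch.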

    Review of Quadruped Robots for Dynamic Locomotion

    This review introduces quadruped robots for dynamic locomotion, including the MIT Cheetah, HyQ, ANYmal, and BigDog, and surveys their mechanical structure, actuation, and control.

    State Estimation for a Humanoid Robot

    This paper introduces a framework for state estimation on a humanoid robot platform using only common proprioceptive sensors and knowledge of leg kinematics. The presented approach extends that detailed in [1] on a quadruped platform by incorporating the rotational constraints imposed by the humanoid's flat feet. As in previous work, the proposed Extended Kalman Filter (EKF) accommodates contact switching and makes no assumptions about gait or terrain, making it applicable on any humanoid platform for use in any task. The filter employs a sensor-based prediction model which uses inertial data from an IMU and corrects for integrated error using a kinematics-based measurement model which relies on joint encoders and a kinematic model to determine the relative position and orientation of the feet. A nonlinear observability analysis is performed on both the original and updated filters and it is concluded that the new filter significantly simplifies singular cases and improves the observability characteristics of the system. Results on simulated walking and squatting datasets demonstrate the performance gain of the flat-foot filter as well as confirm the results of the presented observability analysis.
    Comment: IROS 2014 submission, IEEE/RSJ International Conference on Intelligent Robots and Systems (2014) 952-95
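
    The prediction/correction structure described in the abstract can be illustrated with a minimal sketch. The simplified state below (base position and velocity only), the function names, and the noise matrices are assumptions for illustration; the paper's actual filter also tracks orientation, foot poses, and IMU biases.

        import numpy as np

        def predict(x, P, accel, dt, Q):
            """Propagate the state with IMU acceleration (sensor-based prediction model)."""
            pos, vel = x[:3], x[3:]
            pos = pos + vel * dt + 0.5 * accel * dt**2
            vel = vel + accel * dt
            F = np.eye(6)
            F[:3, 3:] = np.eye(3) * dt          # Jacobian of the constant-acceleration motion model
            x = np.concatenate([pos, vel])
            P = F @ P @ F.T + Q
            return x, P

        def correct(x, P, foot_pos_kinematic, foot_pos_world, R):
            """Correct with leg kinematics: measured base-to-foot vector vs. predicted one."""
            H = np.zeros((3, 6))
            H[:, :3] = -np.eye(3)               # measurement = foot_world - base_position
            z_pred = foot_pos_world - x[:3]
            y = foot_pos_kinematic - z_pred     # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(6) - K @ H) @ P
            return x, P

        # Toy usage over one time step (values are placeholders, not from the paper)
        x = np.zeros(6); P = np.eye(6) * 0.01
        Q = np.eye(6) * 1e-4; R = np.eye(3) * 1e-3
        x, P = predict(x, P, accel=np.array([0.0, 0.0, 0.02]), dt=0.005, Q=Q)
        x, P = correct(x, P,
                       foot_pos_kinematic=np.array([0.1, 0.0, -0.8]),
                       foot_pos_world=np.array([0.1, 0.0, -0.8]),
                       R=R)
        print(x[:3])

    The key design choice mirrored here is that the process model integrates raw IMU data rather than assuming a gait, while the measurement model comes entirely from joint encoders and the kinematic chain to the stance foot.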