1,205 research outputs found

    Real-Time Planning with Primitives for Dynamic Walking over Uneven Terrain

    We present an algorithm for receding-horizon motion planning using a finite family of motion primitives for underactuated dynamic walking over uneven terrain. The motion primitives are defined as virtual holonomic constraints, and the special structure of underactuated mechanical systems operating subject to virtual constraints is used to construct closed-form solutions and a special binary search tree that dramatically speed up motion planning. We propose a greedy depth-first search and discuss improvements using energy-based heuristics. The resulting algorithm can plan several footsteps ahead in a fraction of a second for both the compass-gait walker and a planar 7-degree-of-freedom/five-link walker. (Comment: Conference submission)
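
    The abstract gives only a high-level description of the search. As a rough, hypothetical sketch of the idea, the code below runs a greedy depth-first search over a finite primitive library; the primitive step map, the feasibility test and the energy-based heuristic are placeholders rather than the authors' actual formulation, and the paper's binary-search-tree acceleration is omitted.

```python
# Illustrative sketch only: greedy depth-first search over a finite library of
# motion primitives, roughly in the spirit of the abstract. The primitive step
# map (`prim.apply`), feasibility test and energy-based heuristic are
# hypothetical placeholders, not the authors' implementation.

def plan_footsteps(state, primitives, terrain, horizon, step_cost, is_feasible):
    """Return a list of primitives covering `horizon` footsteps, or None if the
    search fails from this state."""
    if horizon == 0:
        return []
    # Greedy ordering: try primitives with the lowest heuristic cost first
    # (e.g. an energy-based estimate of the cost of taking this step).
    ranked = sorted(primitives, key=lambda p: step_cost(state, p, terrain))
    for prim in ranked:
        next_state = prim.apply(state)          # closed-form step map (assumed)
        if not is_feasible(next_state, terrain):
            continue
        tail = plan_footsteps(next_state, primitives, terrain,
                              horizon - 1, step_cost, is_feasible)
        if tail is not None:                    # depth-first: first success wins
            return [prim] + tail
    return None                                 # no feasible primitive: backtrack
```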

    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops, where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (> 3.0 leg lengths/s), self-adapting to minor disturbances and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understand and solve coordination problems in other complex motor tasks.
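
    The abstract does not specify the plasticity rule. As a minimal sketch, assuming a generic correlation-based (differential Hebbian) update in which a predictive sensor signal learns to pre-empt a reflexive error signal, one adaptation step might look like the following; all signal names and gains are hypothetical.

```python
# Minimal sketch of online synaptic adaptation inside a nested control loop,
# assuming a generic correlation-based (differential Hebbian) rule. The exact
# plasticity rule, signals and gains used by the authors may differ.

def differential_hebbian_update(w, x_predictive, x_reflex, x_reflex_prev, dt, rate=0.01):
    """Strengthen the weight of a predictive input when it correlates with a
    rising reflex (error) signal, so the response is triggered earlier next time."""
    d_reflex = (x_reflex - x_reflex_prev) / dt
    return w + rate * x_predictive * d_reflex * dt

def motor_command(w, x_predictive, x_reflex, reflex_gain=1.0):
    """Motor output combines the fixed reflex pathway with the learned
    predictive pathway."""
    return reflex_gain * x_reflex + w * x_predictive
```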

    Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain

    In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion, since diverse surfaces with varying friction abound in the real world, from wood to ceramic tiles, grass or ice, and may cause difficulties or huge energy costs for robot locomotion if not considered. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, and considers not only friction and its uncertainty, but also collision, stability and trajectory cost. We show promising friction prediction results on real images of outdoor scenarios, and planning experiments on a real robot facing surfaces with different friction.
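
    The abstract does not detail how the chance constraints are evaluated. The sketch below illustrates one plausible form, assuming (hypothetically) Gaussian per-material friction models and CNN class probabilities, and checking that the friction required by a footstep is available with a requested confidence; the material list and friction statistics are invented placeholders, not the authors' data.

```python
# Illustrative sketch: turning per-material CNN probabilities into a chance
# constraint on friction for a candidate footstep. Friction statistics and the
# Gaussian-mixture assumption are hypothetical.
import math

# Hypothetical per-material friction models: (mean, std) of the friction coefficient.
FRICTION_MODEL = {
    "wood":    (0.55, 0.10),
    "ceramic": (0.35, 0.08),
    "grass":   (0.45, 0.15),
    "ice":     (0.10, 0.05),
}

def prob_friction_at_least(material_probs, mu_required):
    """P(mu >= mu_required) under a Gaussian mixture weighted by the CNN's
    per-material class probabilities."""
    total = 0.0
    for material, p_class in material_probs.items():
        mean, std = FRICTION_MODEL[material]
        # Tail probability of one Gaussian component.
        p_ok = 0.5 * (1.0 - math.erf((mu_required - mean) / (std * math.sqrt(2.0))))
        total += p_class * p_ok
    return total

def footstep_is_admissible(material_probs, mu_required, confidence=0.95):
    """Chance constraint: accept the footstep only if the required friction is
    available with at least the requested confidence."""
    return prob_friction_at_least(material_probs, mu_required) >= confidence

# Example: a pixel region classified as mostly grass with some chance of ice.
print(footstep_is_admissible({"grass": 0.7, "ice": 0.3}, mu_required=0.3))
```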

    3LP: a linear 3D-walking model including torso and swing dynamics

    In this paper, we present a new model of biped locomotion which is composed of three linear pendulums (one per leg and one for the whole upper body) to describe stance, swing and torso dynamics. In addition to double support, this model has different actuation possibilities in the swing hip and stance ankle which can be used to produce different walking gaits. Without the need for numerical time integration, closed-form solutions help find periodic gaits, which can simply be scaled in certain dimensions to modulate the motion online. Thanks to its linearity, the proposed model can provide a computationally fast platform for model predictive controllers to predict the future and consider meaningful inequality constraints to ensure feasibility of the motion. This property comes from describing the dynamics directly in terms of joint torques, and therefore reflecting hardware limitations more precisely, even in the very abstract high-level template space. The proposed model produces human-like torque and ground reaction force profiles and is thus, compared to point-mass models, more promising for precise control of humanoid robots. Despite being linear and lacking many other features of human walking, like CoM excursion, knee flexion and ground clearance, we show that the proposed model can predict one of the main optimality trends in human walking, i.e., the nonlinear speed-frequency relationship. In this paper, we mainly focus on describing the model and its capabilities, comparing it with human data and calculating optimal human gait variables. Setting up control problems and advanced biomechanical analysis remain for future work. (Comment: Journal paper under review)
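
    The 3LP equations themselves are not given in the abstract. As a much simpler stand-in for the kind of closed-form linear dynamics the model exploits, the sketch below evaluates a single linear inverted pendulum analytically with hyperbolic functions instead of numerical time integration; the coupling of the three pendulums in 3LP is not reproduced here, and the parameter values are arbitrary.

```python
# Simplified analogue, not the 3LP model itself: closed-form solution of a single
# linear inverted pendulum, x_ddot = (g / z0) * x, which can be evaluated at any
# time without numerical integration.
import math

def lip_state(x0, xdot0, t, z0=0.9, g=9.81):
    """Closed-form CoM position and velocity of a linear inverted pendulum of
    constant height z0, evaluated a time t after the initial state (x0, xdot0)."""
    omega = math.sqrt(g / z0)                   # natural frequency
    x = x0 * math.cosh(omega * t) + (xdot0 / omega) * math.sinh(omega * t)
    xdot = x0 * omega * math.sinh(omega * t) + xdot0 * math.cosh(omega * t)
    return x, xdot

# Example: starting 5 cm behind the stance foot with 0.3 m/s forward velocity,
# predict the state 0.4 s later without time-stepping the dynamics.
print(lip_state(x0=-0.05, xdot0=0.3, t=0.4))
```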