3 research outputs found

    Hierarchical Experience-informed Navigation for Multi-modal Quadrupedal Rebar Grid Traversal

    This study focuses on a layered, experience-based, multi-modal contact planning framework for agile quadrupedal locomotion over a constrained rebar environment. To this end, our hierarchical planner incorporates locomotion-specific modules into the high-level contact sequence planner and solves a kinodynamically aware trajectory optimization as the low-level motion planner. Through quantitative analysis of the experience accumulation process and experimental validation of the kinodynamic feasibility of the generated locomotion trajectories, we demonstrate that the experience planning heuristic offers an effective way of providing candidate footholds for a legged contact planner. Additionally, we introduce a guiding torso-path heuristic at the global planning level to enhance the navigation success rate in the presence of environmental obstacles. Our results indicate that torso-path-guided experience accumulation requires significantly fewer offline trials to reach the goal than regular experience accumulation. Finally, our planning framework is validated in both dynamics simulations and real hardware implementations on a quadrupedal robot provided by Skymul Inc.
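    The abstract describes a two-level structure: an experience-informed high-level planner proposes candidate footholds along a guiding torso path, and a low-level optimizer checks kinodynamic feasibility. The sketch below is one way such a pipeline could be wired together; the names (ExperienceDB, plan_steps, optimize_trajectory) and the neighbourhood query are illustrative assumptions, not interfaces from the paper.

        # Illustrative sketch of the two-level idea described above. An experience
        # database stores footholds from successful offline trials; the high-level
        # planner queries it along the guiding torso path and the low-level
        # optimizer accepts the first kinodynamically feasible candidate.
        from dataclasses import dataclass, field

        @dataclass
        class ExperienceDB:
            entries: list = field(default_factory=list)  # (torso_xy, foothold_xy) pairs

            def add_trial(self, torso_xy, foothold_xy):
                self.entries.append((torso_xy, foothold_xy))

            def candidate_footholds(self, torso_xy, radius=0.3):
                # Footholds recorded near this torso pose in earlier successful trials.
                return [f for t, f in self.entries
                        if (t[0] - torso_xy[0]) ** 2 + (t[1] - torso_xy[1]) ** 2 < radius ** 2]

        def plan_steps(db, torso_path, optimize_trajectory):
            # optimize_trajectory is a placeholder for the low-level motion planner;
            # it should return (trajectory, feasible) for a torso pose / foothold pair.
            for torso_xy in torso_path:
                for foothold in db.candidate_footholds(torso_xy):
                    trajectory, feasible = optimize_trajectory(torso_xy, foothold)
                    if feasible:
                        yield trajectory
                        break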

    Deep Imitation Learning for Humanoid Loco-manipulation through Human Teleoperation

    We tackle the problem of developing humanoid loco-manipulation skills with deep imitation learning. Collecting task demonstrations and training policies for humanoids with many degrees of freedom presents substantial challenges. We introduce TRILL, a data-efficient framework for training humanoid loco-manipulation policies from human demonstrations. In this framework, we collect human demonstration data through an intuitive Virtual Reality (VR) interface. We employ the whole-body control formulation to transform task-space commands by human operators into the robot's joint-torque actuation while stabilizing its dynamics. By employing high-level action abstractions tailored for humanoid loco-manipulation, our method can efficiently learn complex sensorimotor skills. We demonstrate the effectiveness of TRILL in simulation and on a real-world robot for performing various loco-manipulation tasks. Videos and additional materials can be found on the project page: https://ut-austin-rpl.github.io/TRILL. (Comment: Submitted to Humanoids 2023.)
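    As a concrete reading of the training step: the abstract describes learning high-level, task-space action abstractions from VR demonstrations, with a whole-body controller mapping those commands to joint torques on the robot. The sketch below shows a generic behaviour-cloning setup consistent with that description; the network architecture, dimensions, and data format are assumptions rather than details from the TRILL paper.

        # Hedged sketch: behaviour cloning on task-space commands collected from
        # teleoperation. A separate whole-body controller (not shown) would map the
        # predicted commands to joint torques at deployment time.
        import torch
        import torch.nn as nn

        class TaskSpacePolicy(nn.Module):
            def __init__(self, obs_dim=64, act_dim=12):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(obs_dim, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, act_dim),  # high-level task-space command
                )

            def forward(self, obs):
                return self.net(obs)

        def behaviour_clone(policy, demos, epochs=10, lr=1e-3):
            # demos: iterable of (observation, operator_command) tensor pairs.
            optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                for obs, command in demos:
                    optimizer.zero_grad()
                    loss = loss_fn(policy(obs), command)
                    loss.backward()
                    optimizer.step()
            return policy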

    Understanding the fundamentals of bipedal locomotion in humans and robots

    Walking is a robust and efficient way of moving through the world and would greatly enhance the capabilities of humanoid robots, yet robots cannot match the performance of their biological counterparts. The highly nonlinear dynamics of locomotion create a vast state-action space, which makes model-based control difficult, yet biological humans are highly proficient and robust in their motion while operating under similar constraints. This disparity in performance naturally leads to the question: what can we learn about locomotion control by observing humans, and how can this be used to develop bio-inspired locomotion control in mechatronic humanoids? This thesis investigates bio-inspired locomotion control, but also explores the limitations of this approach and how robotic platforms can be used to move towards a better understanding of locomotion.

    We first present a methodology for measuring and analysing human locomotion behaviour, specifically disturbance recovery, and fit models to this complex behaviour that represent it as simply as possible, so that it can be translated into a simple controller for reactive motion. A single minimum-jerk Model Predictive Control algorithm acting on the Centre of Mass (CoM) best captured human motion across multiple recovery strategies, rather than one controller per strategy, as is common in this area. Capturing this simple CoM model of complex human behaviour shows that bio-inspiration can be an important tool for controller development, but behaviour varies between and even within individuals given similar initial conditions, which manifests as stochasticity. Coupled with the fact that only expressed behaviours can be measured, rather than the underlying control policies, this stochasticity places a fundamental limit on using bio-inspiration for control, as only indirect inferences can be made about a complex, stochastic system.

    To overcome these barriers, we investigate mechatronic humanoid robots as a means of exploring invariant aspects of the vast dynamic state-space of locomotion: aspects that are described by physical laws, apply to both biological and mechatronic humanoid forms, and are therefore not subject to the stochastic behaviour of individual humans. We present a pipeline to explore the invariant energetics of humanoid robots during stepping for push recovery, in which the most efficient stepping parameters are identified for a given initial CoM velocity and desired step length. Using this pipeline to explore the stepping state-space, our analysis finds a region of attraction relating disturbance magnitude to optimal step length, surrounded by a region of similarly efficient alternatives that corresponds to the stochastic behaviour observed in humans during push recovery. Identifying this structure requires reproducibility, direct access to internal measurements, and known full-body dynamics, none of which are available in humans. We expand this paradigm to investigate the invariant energetics of continuous walking on a full-body humanoid, exploring the state-space of step length and step timing to identify the parameter sub-spaces that describe the most efficient way to walk. Through analysis of this state-space, we provide evidence that the humanoid morphology exhibits a passive tendency towards energy-optimal motion and that its dynamics follow a region of attraction towards Cost-of-Transport-optimal motion.

    Overall, these findings demonstrate the utility of robotics as a tool for exploring certain aspects of legged locomotion. The results of our methodology suggest that humans do not need to explore a vast state-action space to learn to walk: they need only internalise simple heuristics for the natural dynamics of stepping, which are easy to learn and can produce rapid, reactive and efficient stepping without costly decision-making processes.
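    The energetics sweep described above amounts to evaluating a Cost of Transport (CoT = energy used / (mass * g * distance travelled)) over a grid of step lengths and step timings and keeping the most efficient region. The sketch below shows one way such a sweep could be organised; the simulate_step callable is a placeholder for a dynamics simulation, and the grid handling is an assumption rather than the thesis's actual pipeline.

        # Minimal sketch, assuming a simulate_step(length, duration) callable that
        # returns (energy_joules, distance_metres, success) for one stepping trial.
        import itertools

        def cost_of_transport(energy_j, mass_kg, distance_m, g=9.81):
            # Dimensionless energetic cost: energy per unit weight per unit distance.
            return energy_j / (mass_kg * g * distance_m)

        def sweep_stepping_parameters(simulate_step, mass_kg, step_lengths, step_times):
            # Evaluate every (step length, step time) pair and rank by efficiency.
            results = []
            for length, duration in itertools.product(step_lengths, step_times):
                energy, distance, success = simulate_step(length, duration)
                if success and distance > 0:
                    results.append((length, duration,
                                    cost_of_transport(energy, mass_kg, distance)))
            return sorted(results, key=lambda r: r[2])  # most efficient first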