Keep Rollin' - Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots
We show dynamic locomotion strategies for wheeled quadrupedal robots, which
combine the advantages of both walking and driving. The developed optimization
framework tightly integrates the additional degrees of freedom introduced by
the wheels. Our approach relies on a zero-moment point based motion
optimization which continuously updates reference trajectories. The reference
motions are tracked by a hierarchical whole-body controller which computes
optimal generalized accelerations and contact forces by solving a sequence of
prioritized tasks including the nonholonomic rolling constraints. Our approach
has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled
including the non-steerable wheels attached to its legs. We conducted
experiments on flat and inclined terrains as well as over steps, whereby we
show that integrating the wheels into the motion control and planning framework
results in intuitive motion trajectories, which enable more robust and dynamic
locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4 m/s and a reduction of the cost of transport by 83%, we demonstrate the superiority of wheeled-legged robots over their legged counterparts.
Comment: IEEE Robotics and Automation Letters
Whole-Body MPC and Online Gait Sequence Generation for Wheeled-Legged Robots
Our paper proposes a model predictive controller as a single-task formulation
that simultaneously optimizes wheel and torso motions. This online joint
velocity and ground reaction force optimization integrates a kinodynamic model
of a wheeled quadrupedal robot. It defines the single rigid body dynamics along
with the robot's kinematics while treating the wheels as moving ground
contacts. With this approach, we can accurately capture the robot's rolling
constraint and dynamics, enabling automatic discovery of hybrid maneuvers
without needless motion heuristics. The formulation's generality through the
simultaneous optimization over the robot's whole-body variables allows for a
single set of parameters and makes online gait sequence adaptation possible.
Aperiodic gait sequences are automatically found through kinematic leg
utilities without the need for predefined contact and lift-off timings,
reducing the cost of transport by up to 85%. Our experiments demonstrate
dynamic motions on a quadrupedal robot with non-steerable wheels in challenging
indoor and outdoor environments. The paper's findings contribute to evaluating a decomposed approach, i.e., sequential optimization of wheel and torso motions, against a single-task motion planner using a novel quantity, the prediction error, which describes how well a receding-horizon planner can predict the robot's future state. To this end, we report an improvement of up to 71% using our proposed single-task approach, making fast locomotion feasible and revealing the full potential of wheeled-legged robots.
Comment: 8 pages, 6 figures, 1 table, 52 references, 9 equations
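The prediction error above quantifies how well a receding-horizon planner anticipates the robot's future state. A plausible minimal formulation, with an averaging scheme of our choosing rather than the paper's exact definition, is:

```python
import numpy as np

def prediction_error(predicted, measured):
    """Mean state-prediction error over a receding horizon: the average
    norm between the states a planner predicted at time t and the states
    actually measured when those times arrived.
    Illustrative definition; the paper's exact metric may differ."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.linalg.norm(predicted - measured, axis=-1)))

# toy usage: a perfect prediction yields zero error
pred = np.array([[0.0, 0.0], [1.0, 0.5]])
meas = np.array([[0.0, 0.0], [1.0, 0.5]])
print(prediction_error(pred, meas))  # 0.0
```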
Towards Bipedal Behavior on a Quadrupedal Platform Using Optimal Control
This paper explores the applicability of a Linear Quadratic Regulator (LQR) controller design to the problem of bipedal stance on the Minitaur [1] quadrupedal robot. Restricted to the sagittal plane, this behavior exposes a three degree-of-freedom (DOF) double inverted pendulum with extensible length that can be projected onto the familiar underactuated revolute-revolute “Acrobot” model by assuming a locked prismatic DOF and a pinned toe. While previous work has documented the successful use of local LQR control to stabilize a physical Acrobot, simulations reveal that a design very similar to those discussed in the past literature cannot achieve an empirically viable controller for our physical plant. Experiments with a series of increasingly close physical facsimiles, leading to the actual Minitaur platform itself, corroborate and underscore the implications of the simulation study. We conclude that local LQR-based linearized controller designs are too fragile to stabilize the physical Minitaur platform around its vertically erect equilibrium, and we end with a brief assessment of a variety of more sophisticated nonlinear control approaches whose pursuit is now in progress.
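The LQR designs discussed above follow a standard recipe: linearize the plant about the upright equilibrium and solve a continuous-time algebraic Riccati equation for the feedback gain. A generic SciPy sketch of that recipe, using a double integrator as a stand-in plant rather than the actual Acrobot model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """Continuous-time LQR gain K for xdot = A x + B u, minimizing the
    integral of x'Qx + u'Ru. Generic recipe of the kind used to stabilize
    a linearized Acrobot; the paper's model and weights are not
    reproduced here."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# toy example: double integrator (a stand-in for any linearized plant)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr(A, B, np.eye(2), np.eye(1))
# closed-loop poles of A - B K must have negative real parts
print(np.linalg.eigvals(A - B @ K).real)
```

Fragility of the kind the paper reports shows up when the true plant leaves the neighborhood where this linearization is valid; the gain K itself is only as good as the model it was derived from.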
Whole-Body MPC for a Dynamically Stable Mobile Manipulator
Autonomous mobile manipulation offers a dual advantage of mobility provided
by a mobile platform and dexterity afforded by the manipulator. In this paper,
we present a whole-body optimal control framework to jointly solve the problems
of manipulation, balancing and interaction as one optimization problem for an
inherently unstable robot. The optimization is performed using a Model
Predictive Control (MPC) approach; the optimal control problem is transcribed in end-effector space, treating the position and orientation tasks in the MPC planner and planning for end-effector contact forces. The
proposed formulation evaluates how the control decisions aimed at end-effector
tracking and environment interaction will affect the balance of the system in
the future. We showcase the advantages of the proposed MPC approach on the
example of a ball-balancing robot with a robotic manipulator and validate our
controller in hardware experiments for tasks such as end-effector pose tracking
and door opening.
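The MPC scheme described above re-solves a finite-horizon problem at every control step and applies only the first input, so that future balance consequences of each decision are always re-evaluated. A generic receding-horizon skeleton, where `solve_ocp` is a hypothetical stand-in for the paper's transcription in end-effector space:

```python
def mpc_step(solve_ocp, x, horizon):
    """One receding-horizon step: solve a finite-horizon optimal control
    problem from the current state, apply only the first input, then
    re-plan from the next measured state. Generic MPC skeleton, not the
    paper's formulation."""
    u_plan = solve_ocp(x, horizon)  # returns a planned input sequence
    return u_plan[0]

# toy plant: scalar integrator x+ = x + u, with a trivial "solver"
# (hypothetical, for illustration) that steers the state toward zero
def toy_solver(x, horizon):
    return [-0.5 * x] * horizon

x = 4.0
for _ in range(20):
    x = x + mpc_step(toy_solver, x, horizon=10)
print(x)  # converges toward 0
```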
RL + Model-based Control: Using On-demand Optimal Control to Learn Versatile Legged Locomotion
This letter presents a versatile control method for dynamic and robust legged
locomotion that integrates model-based optimal control with reinforcement
learning (RL). Our approach involves training an RL policy to imitate reference
motions generated on-demand through solving a finite-horizon optimal control
problem. This integration enables the policy to leverage human expertise in
generating motions to imitate while also allowing it to generalize to more
complex scenarios that require a more complex dynamics model. Our method
successfully learns control policies capable of generating diverse quadrupedal
gait patterns and maintaining stability against unexpected external
perturbations in both simulation and hardware experiments. Furthermore, we
demonstrate the adaptability of our method to more complex locomotion tasks on
uneven terrain without the need for excessive reward shaping or hyperparameter
tuning.
Comment: 8 pages, 8 figures. The supplementary video is available at https://youtu.be/gXDP87yVq4
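Training a policy to imitate on-demand optimal-control references is commonly done with a tracking reward that peaks when the policy matches the reference motion. A minimal sketch, where the Gaussian kernel, its width, and the use of joint positions only are illustrative choices rather than the paper's actual reward terms:

```python
import numpy as np

def imitation_reward(q, q_ref, sigma=0.5):
    """Gaussian-kernel tracking reward for imitating a reference motion
    (here, one produced by a finite-horizon optimal-control solve).
    Kernel width sigma and joint-position-only error are illustrative
    assumptions, not the paper's design."""
    err = np.linalg.norm(np.asarray(q) - np.asarray(q_ref))
    return float(np.exp(-(err / sigma) ** 2))

print(imitation_reward([0.1, -0.2], [0.1, -0.2]))  # 1.0 at perfect tracking
```

The reward decays smoothly with tracking error, so the policy is rewarded for staying near the reference while still being free to deviate when perturbations demand it.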
RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control
We present a unified model-based and data-driven approach for quadrupedal
planning and control to achieve dynamic locomotion over uneven terrain. We
utilize on-board proprioceptive and exteroceptive feedback to map sensory
information and desired base velocity commands into footstep plans using a
reinforcement learning (RL) policy trained in simulation over a wide range of
procedurally generated terrains. When run online, the system tracks the
generated footstep plans using a model-based controller. We evaluate the
robustness of our method over a wide variety of complex terrains. It exhibits
behaviors which prioritize stability over aggressive locomotion. Additionally,
we introduce two ancillary RL policies for corrective whole-body motion
tracking and recovery control. These policies account for changes in physical
parameters and external perturbations. We train and evaluate our framework on a
complex quadrupedal system, ANYmal version B, and demonstrate transferability
to a larger and heavier robot, ANYmal C, without requiring retraining.
Comment: 19 pages, 15 figures, 6 tables, 1 algorithm; submitted to T-RO, under review
An adaptable approach to learn realistic legged locomotion without examples
© 2022 IEEE. Learning controllers that reproduce legged locomotion in nature has been a long-time goal in robotics and computer graphics. While yielding promising results, recent approaches are not yet flexible enough to be applicable to legged systems of different morphologies. This is partly because they often rely on precise motion capture references or elaborate learning environments that ensure the naturalness of the emergent locomotion gaits but prevent generalization. This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference. Leveraging the exploration capacities of Reinforcement Learning (RL), we learn a control policy that fills in the information gap between the template model and the full-body dynamics required to maintain stable and periodic locomotion. The proposed approach can be applied to robots of different sizes and morphologies and adapted to any RL technique and control architecture. We present experimental results showing that, even in a model-free setup and with a simple reactive control architecture, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot. Most importantly, this is achieved without using motion capture, strong constraints on the dynamics or kinematics of the robot, or prescribed limb coordination. We provide supplemental videos for qualitative analysis of the naturalness of the learned gaits.
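The spring-loaded inverted pendulum (SLIP) template used as a reference above reduces the robot to a point mass on a massless spring leg pinned at the foot. A minimal stance-phase integration sketch, with illustrative parameter values that are not tuned to any particular robot:

```python
import numpy as np

def slip_stance_step(p, v, dt, k=2000.0, l0=1.0, m=30.0, g=9.81):
    """One explicit-Euler step of SLIP stance dynamics: a point mass m
    on a massless spring leg (stiffness k, rest length l0) pinned at
    the origin, under gravity. Parameters are illustrative only."""
    p = np.asarray(p, dtype=float)
    v = np.asarray(v, dtype=float)
    l = np.linalg.norm(p)                 # current leg length
    spring = k * (l0 - l) * (p / l)       # spring force along the leg
    a = spring / m + np.array([0.0, -g])  # total acceleration on the mass
    return p + dt * v, v + dt * a

# toy usage: mass above the foot with a compressed leg, moving forward
p, v = np.array([0.0, 0.9]), np.array([0.5, 0.0])
for _ in range(100):
    p, v = slip_stance_step(p, v, dt=1e-3)
```

In the SLIP-guided learning setting, trajectories of this template supply the reference that the full-body policy is rewarded for reproducing, which is what removes the need for motion-capture examples.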