
    Legged Robots for Object Manipulation: A Review

    Legged robots can have a unique role in manipulating objects in dynamic, human-centric, or otherwise inaccessible environments. Although most legged robotics research to date focuses on traversing these challenging environments, many legged platform demonstrations have also included "moving an object" as a way of doing tangible work. Legged robots can be designed to manipulate a particular type of object (e.g., a cardboard box, a soccer ball, or a larger piece of furniture), by themselves or collaboratively. The objective of this review is to collect and learn from these examples, to both organize the work done so far in the community and highlight interesting open avenues for future work. This review categorizes existing works into four main manipulation methods: object interactions without grasping, manipulation with walking legs, dedicated non-locomotive arms, and legged teams. Each method has different design and autonomy features, which are illustrated by available examples in the literature. Based on a few simplifying assumptions, we further provide quantitative comparisons for the range of possible relative sizes of the manipulated object with respect to the robot. Taken together, these examples suggest new directions for research in legged robot manipulation, such as multifunctional limbs, terrain modeling, or learning-based control, to support a number of new deployments in challenging indoor/outdoor scenarios in warehouses/construction sites, preserved natural areas, and especially for home robotics. Comment: Preprint of the paper submitted to Frontiers in Mechanical Engineering.

    Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers

    We propose to address quadrupedal locomotion tasks using Reinforcement Learning (RL) with a Transformer-based model that learns to combine proprioceptive information and high-dimensional depth sensor inputs. While learning-based locomotion has made great advances using RL, most methods still rely on domain randomization for training blind agents that generalize to challenging terrains. Our key insight is that proprioceptive states only offer contact measurements for immediate reaction, whereas an agent equipped with visual sensory observations can learn to proactively maneuver through environments with obstacles and uneven terrain by anticipating changes many steps ahead. In this paper, we introduce LocoTransformer, an end-to-end RL method for quadrupedal locomotion that leverages a Transformer-based model to fuse proprioceptive states and visual observations. We evaluate our method in challenging simulated environments with different obstacles and uneven terrain. We show that our method obtains significant improvements over policies with only proprioceptive state inputs, and that Transformer-based models further improve generalization across environments. Our project page with videos is at https://RchalYang.github.io/LocoTransformer.
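    As an illustration of the cross-modal fusion described above, the sketch below (assumed, not the authors' released code) turns a depth image into a small grid of visual tokens with a CNN, encodes the proprioceptive state as one extra token, and fuses everything with a Transformer encoder before an action head. The dimensions (48-D proprioception, 12 actions, 64x64 depth) are illustrative assumptions.

```python
# Minimal sketch of Transformer-based fusion of proprioception and depth,
# in the spirit of LocoTransformer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class CrossModalPolicy(nn.Module):
    def __init__(self, proprio_dim=48, action_dim=12, d_model=128):
        super().__init__()
        # Depth image -> a 4x4 grid of visual tokens via a small CNN.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, d_model, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Proprioceptive state -> a single token.
        self.proprio_encoder = nn.Linear(proprio_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, depth, proprio):
        vis = self.visual_encoder(depth)               # (B, d_model, 4, 4)
        vis = vis.flatten(2).transpose(1, 2)           # (B, 16, d_model)
        prop = self.proprio_encoder(proprio)[:, None]  # (B, 1, d_model)
        tokens = self.fusion(torch.cat([prop, vis], dim=1))
        return self.action_head(tokens[:, 0])          # read out the proprio token

policy = CrossModalPolicy()
action = policy(torch.randn(2, 1, 64, 64), torch.randn(2, 48))  # (2, 12)
```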

    Interacting with Intelligent Characters in AR

    In this paper, we explore interactions between virtual characters and real-world environments in AR. Our vision is that virtual characters will be able to understand the real-world environment and interact with it in an intelligent and realistic manner. For example, a character can walk around uneven stairs and slopes, or be pushed away by collisions with real-world objects like a ball. We describe how to automatically animate a new character, and imbue its motion with adaptation to the environment and reactions to perturbations from the real world.

    Motion Planning and Control of Dynamic Humanoid Locomotion

    Inspired by humans, humanoid robots have the potential to become general-purpose platforms that live and work alongside people. Technological advances in many fields, such as actuation, sensing, control, and intelligence, are finally enabling humanoid robots to possess capabilities comparable to those of humans. However, humanoid locomotion remains a challenging research field. The large number of degrees of freedom makes the system difficult to coordinate online, and the presence of various contact constraints together with the hybrid nature of locomotion tasks makes planning a harder problem to solve. A template-model anchoring approach has been adopted to bridge the gap between simple model behavior and the whole-body motion of the humanoid robot: control policies are first developed for simple template models such as the Linear Inverted Pendulum Model (LIPM) or the Spring Loaded Inverted Pendulum (SLIP), and the resulting controlled behaviors are then mapped to the whole-body motion of the humanoid through optimization-based task-space control strategies. The whole-body humanoid control framework has been verified in various contact situations, including unknown uneven terrain, multi-contact scenarios, and moving platforms, demonstrating its generality and versatility. For walking, an existing Model Predictive Control approach based on the LIPM has been extended to enable the robot to walk without any reference foot placement anchoring, a kind of discrete version of "walking without thinking". As a result, the robot achieves versatile locomotion modes such as automatic foot placement from a single reference velocity command, reactive stepping under large external disturbances, guided walking under small constant external pushing forces, robust walking on unknown uneven terrain, and reactive stepping in place when blocked by an external barrier. As an extension of this framework, and to increase the push-recovery capability of the humanoid robot, two new configurations have been proposed to enable the robot to perform cross-step motions. For more dynamic hopping and running motions, the SLIP model has been chosen as the template model. Departing from the traditional model-based analytical approach, a data-driven approach has been proposed to encode the dynamics of this model: a deep neural network is trained offline with a large amount of simulation data based on the SLIP model to learn its dynamics, and the trained network is applied online to generate reference foot placements for the humanoid robot. Simulations have been performed to evaluate the effectiveness of the proposed approach in generating bio-inspired and robust running motions. The method based on the 2D SLIP model can be generalized to the 3D SLIP model, and this extension is briefly discussed at the end.
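    The "automatic foot placement from a single reference velocity command" can be pictured with the standard LIPM capture-point relation. The sketch below is a generic textbook-style example under the usual constant-CoM-height assumption, not the controller developed in the thesis; the 0.8 m CoM height and the step timing are assumed values.

```python
# Linear Inverted Pendulum Model (LIPM) integration and capture-point-style
# foot placement (illustrative sketch, not the thesis implementation).
import numpy as np

G, Z0 = 9.81, 0.8                 # gravity, assumed constant CoM height [m]
OMEGA = np.sqrt(G / Z0)           # LIPM natural frequency

def lipm_step(x, xd, p, dt):
    """Integrate CoM position x and velocity xd with the foot (CoP) at p."""
    xdd = OMEGA**2 * (x - p)
    return x + xd * dt, xd + xdd * dt

def next_foothold(x, xd, v_des, t_step):
    """Foothold from the divergent component of motion (capture point)."""
    dcm = x + xd / OMEGA                       # instantaneous capture point
    step_len = v_des * t_step
    # Steady-gait offset: keeps the DCM a fixed distance ahead of the foot,
    # so the average walking speed converges to v_des.
    offset = step_len / (np.exp(OMEGA * t_step) - 1.0)
    return dcm - offset

# With v_des = 0 the rule reduces to stepping onto the capture point,
# i.e. reactive stepping under a push.
print(next_foothold(x=0.0, xd=0.5, v_des=0.0, t_step=0.4))
```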

    Keep Rollin' - Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots

    We show dynamic locomotion strategies for wheeled quadrupedal robots, which combine the advantages of both walking and driving. The developed optimization framework tightly integrates the additional degrees of freedom introduced by the wheels. Our approach relies on a zero-moment point based motion optimization which continuously updates reference trajectories. The reference motions are tracked by a hierarchical whole-body controller which computes optimal generalized accelerations and contact forces by solving a sequence of prioritized tasks, including the nonholonomic rolling constraints. Our approach has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled, including the non-steerable wheels attached to its legs. We conducted experiments on flat and inclined terrain as well as over steps, and show that integrating the wheels into the motion control and planning framework results in intuitive motion trajectories that enable more robust and dynamic locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4 m/s and a reduction of the cost of transport by 83%, we demonstrate the superiority of wheeled-legged robots compared to their legged counterparts. Comment: IEEE Robotics and Automation Letters.
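    Two quantities central to this abstract, the zero-moment point used in the motion optimization and the cost of transport used to quantify efficiency, can be written down compactly. The sketch below uses the flat-ground cart-table approximation; the 30 kg mass and 300 W power draw are illustrative assumptions, not values from the paper.

```python
# Cart-table ZMP and dimensionless cost of transport (illustrative sketch).
import numpy as np

def zmp(com_xy, com_acc_xy, com_height, g=9.81):
    """Flat-ground cart-table model: p_zmp = c - (z_c / g) * c_ddot."""
    return com_xy - (com_height / g) * com_acc_xy

def cost_of_transport(power, mass, speed, g=9.81):
    """CoT = P / (m * g * v); lower means cheaper locomotion."""
    return power / (mass * g * speed)

print(zmp(np.array([0.10, 0.0]), np.array([0.5, 0.0]), com_height=0.5))
print(cost_of_transport(power=300.0, mass=30.0, speed=4.0))   # ~0.25
```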

    Receding-horizon motion planning of quadrupedal robot locomotion

    Quadrupedal robots are designed to offer efficient and robust mobility on uneven terrain. This thesis investigates combining numerical optimization and machine learning methods to achieve interpretable kinodynamic planning of natural and agile locomotion. The proposed algorithm, called Receding-Horizon Experience-Controlled Adaptive Legged Locomotion (RHECALL), uses nonlinear programming (NLP) with learned initialization to produce long-horizon, high-fidelity, terrain-aware, whole-body trajectories. RHECALL has been implemented and validated on the ANYbotics ANYmal B and C quadrupeds on complex terrain. The proposed optimal control problem formulation uses the single-rigid-body dynamics (SRBD) model and adopts a direct collocation transcription method which enables the discovery of aperiodic contact sequences. To generate reliable trajectories, we propose fast-to-compute analytical costs that leverage the discretization and terrain-dependent kinematic constraints. To extend the formulation to receding-horizon planning, we propose a segmentation approach with asynchronous centre-of-mass (COM) and end-effector timings, and a heuristic initialization scheme which reuses the previous solution. We integrate real-time 2.5D perception data for online foothold selection. Additionally, we demonstrate that a learned stability criterion can be incorporated into the planning framework. To accelerate the convergence of the NLP solver to locally optimal solutions, we propose data-driven initialization schemes trained using supervised and unsupervised behaviour cloning. We demonstrate the computational advantage of the schemes and the ability to leverage the latent space to reconstruct dynamic segments of plans which are several seconds long. Finally, in order to apply RHECALL to quadrupeds with significant leg inertias, we derive the more accurate lumped-leg single-rigid-body dynamics (LL-SRBD) and centroidal dynamics (CD) models and their first-order partial derivatives. To facilitate intuitive usage of costs, constraints and initializations, we parameterize these models by Euclidean-space variables. We show that the models have the ability to shape the rotational inertia of the robot, which offers the potential to further improve agility.
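    The single-rigid-body dynamics (SRBD) model mentioned above treats the robot as one rigid body driven by the contact forces at the stance feet; in a direct collocation transcription, these equations are imposed as equality constraints at the collocation points. The function below is a generic SRBD sketch, not the RHECALL code, with names, shapes, and example numbers chosen purely for illustration.

```python
# Single-rigid-body dynamics: CoM acceleration and rate of change of angular
# momentum from stance-foot contact forces (illustrative sketch).
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def srbd_dynamics(com, foot_positions, foot_forces, mass):
    """com            : (3,) centre-of-mass position
    foot_positions : (n, 3) stance-foot positions
    foot_forces    : (n, 3) contact forces applied at those feet
    Returns (com_acc, angular_momentum_rate), both (3,)."""
    com_acc = foot_forces.sum(axis=0) / mass + GRAVITY
    # Moments of the contact forces about the CoM drive the angular momentum.
    ang_mom_rate = np.cross(foot_positions - com, foot_forces).sum(axis=0)
    return com_acc, ang_mom_rate

# Example: two stance feet supporting a 30 kg body (assumed numbers).
com = np.array([0.0, 0.0, 0.45])
feet = np.array([[0.3, 0.2, 0.0], [-0.3, -0.2, 0.0]])
forces = np.array([[0.0, 0.0, 150.0], [0.0, 0.0, 150.0]])
print(srbd_dynamics(com, feet, forces, mass=30.0))
```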

    Neural Volumetric Memory for Visual Locomotion Control

    Legged robots have the potential to expand the reach of autonomy beyond paved roads. In this work, we consider the difficult problem of locomotion on challenging terrains using a single forward-facing depth camera. Due to the partial observability of the problem, the robot has to rely on past observations to infer the terrain currently beneath it. To solve this problem, we follow the paradigm in computer vision that explicitly models the 3D geometry of the scene and propose Neural Volumetric Memory (NVM), a geometric memory architecture that explicitly accounts for the SE(3) equivariance of the 3D world. NVM aggregates feature volumes from multiple camera views by first bringing them back to the ego-centric frame of the robot. We test the learned visual-locomotion policy on a physical robot and show that our approach, which explicitly introduces geometric priors during training, offers superior performance compared to more naïve methods. We also include ablation studies and show that the representations stored in the neural volumetric memory capture sufficient geometric information to reconstruct the scene. Our project page with videos is at https://rchalyang.github.io/NVM. Comment: CVPR 2023 Highlight.
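    The step of "bringing feature volumes back to the ego-centric frame" can be pictured as warping a voxel grid of features with the relative SE(3) pose between the frame in which it was built and the current body frame. The sketch below is an assumed nearest-neighbour version for illustration only (the paper's memory is learned end-to-end), and it further assumes that both grids share the same origin and voxel size.

```python
# Warp a past feature volume into the current ego-centric frame with a
# relative SE(3) transform (illustrative nearest-neighbour sketch).
import numpy as np

def warp_volume(past_feats, T_cur_to_past, origin, voxel_size):
    """past_feats   : (C, D, H, W) features on a voxel grid in the past frame
    T_cur_to_past: (4, 4) SE(3) transform from current to past frame
    origin       : (3,) position of voxel (0, 0, 0); voxel_size in metres."""
    C, D, H, W = past_feats.shape
    # Voxel-centre coordinates of the current frame's grid.
    idx = np.stack(np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                               indexing="ij"), axis=-1)            # (D, H, W, 3)
    centres = origin + (idx + 0.5) * voxel_size
    # Express those centres in the past frame.
    homog = np.concatenate([centres, np.ones((D, H, W, 1))], axis=-1)
    past_pts = homog @ T_cur_to_past.T
    # Nearest voxel in the past grid; out-of-range voxels keep zero features.
    j = np.floor((past_pts[..., :3] - origin) / voxel_size).astype(int)
    valid = np.all((j >= 0) & (j < np.array([D, H, W])), axis=-1)
    out = np.zeros_like(past_feats)
    jd, jh, jw = j[valid].T
    out[:, valid] = past_feats[:, jd, jh, jw]
    return out
```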

    Laboratory on Legs: An Architecture for Adjustable Morphology with Legged Robots

    For mobile robots, the essential units of actuation, computation, and sensing must be designed to fit within the body of the robot. Additional capabilities will largely depend upon a given activity, and should be easily reconfigurable to maximize the diversity of applications and experiments. To address this issue, we introduce a modular architecture, originally developed and tested in the design and implementation of the X-RHex hexapod, that allows the robot to operate as a mobile laboratory on legs. In the present paper we introduce the specification, design, and very earliest operational data of Canid, an actively driven compliant-spined quadruped whose completely different morphology and intended dynamical operating point are nevertheless built around exactly the same “Lab on Legs” actuation, computation, and sensing infrastructure. We also review, more briefly, a second RHex variation, the XRL platform, built using the same components. For more information: Kod*Lab.