
    Whole-Body MPC and Online Gait Sequence Generation for Wheeled-Legged Robots

    Our paper proposes a model predictive controller as a single-task formulation that simultaneously optimizes wheel and torso motions. This online joint velocity and ground reaction force optimization integrates a kinodynamic model of a wheeled quadrupedal robot: it combines the single rigid body dynamics with the robot's kinematics while treating the wheels as moving ground contacts. With this approach, we accurately capture the robot's rolling constraint and dynamics, enabling the automatic discovery of hybrid maneuvers without needless motion heuristics. Because the formulation optimizes over the robot's whole-body variables simultaneously, it runs with a single set of parameters and makes online gait sequence adaptation possible. Aperiodic gait sequences are found automatically through kinematic leg utilities, without predefined contact and lift-off timings, reducing the cost of transport by up to 85%. Our experiments demonstrate dynamic motions on a quadrupedal robot with non-steerable wheels in challenging indoor and outdoor environments. The paper's findings also contribute to the evaluation of a decomposed motion planner, i.e., a sequential optimization of wheel and torso motion, against the single-task planner using a novel quantity, the prediction error, which describes how well a receding horizon planner can predict the robot's future state. By this measure, our single-task approach improves the prediction error by up to 71%, making fast locomotion feasible and revealing the full potential of wheeled-legged robots.
    Comment: 8 pages, 6 figures, 1 table, 52 references, 9 equations
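    As a worked sketch of the kind of formulation the abstract describes (the notation here is illustrative, not the paper's), the kinodynamic model pairs single rigid body dynamics driven by the ground reaction forces with a rolling constraint that lets each wheel contact move along its rolling direction:

        % Single rigid body dynamics: n_c contacts at positions r_i carry forces f_i
        m \ddot{p} = \sum_{i=1}^{n_c} f_i + m g
        \frac{d}{dt} ( I \omega ) = \sum_{i=1}^{n_c} (r_i - p) \times f_i
        % Rolling constraint: the contact point of wheel i moves only along its
        % rolling direction \hat{t}_i and cannot slip laterally along \hat{n}_i
        \dot{r}_i = v_{\mathrm{wheel},i} \, \hat{t}_i , \qquad \hat{n}_i^\top \dot{r}_i = 0

    Unlike a point foot, whose stance contact velocity is zero, the wheel contact is a moving ground contact; it is this constraint that lets hybrid driving-stepping maneuvers emerge from the optimization rather than from hand-designed heuristics.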

    RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control

    We present a unified model-based and data-driven approach to quadrupedal planning and control that achieves dynamic locomotion over uneven terrain. We use on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans with a reinforcement learning (RL) policy trained in simulation over a wide range of procedurally generated terrains. When run online, the system tracks the generated footstep plans using a model-based controller. We evaluate the robustness of our method over a wide variety of complex terrains; it exhibits behaviors that prioritize stability over aggressive locomotion. Additionally, we introduce two ancillary RL policies for corrective whole-body motion tracking and recovery control. These policies account for changes in physical parameters and external perturbations. We train and evaluate our framework on a complex quadrupedal system, ANYmal version B, and demonstrate transferability to a larger and heavier robot, ANYmal C, without retraining.
    Comment: 19 pages, 15 figures, 6 tables, 1 algorithm; submitted to T-RO, under review
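    The architecture the abstract outlines separates planning from tracking; a minimal sketch of one control step is given below, with hypothetical module names and signatures (the actual RLOC interfaces are not specified in the abstract):

        # Hypothetical one-step control loop for an RLOC-style pipeline.
        def control_step(robot, footstep_policy, tracking_controller):
            # Proprioception (joint states) and exteroception (local terrain map)
            obs = {
                "proprio": robot.joint_states(),
                "terrain": robot.local_height_map(),
                "command": robot.desired_base_velocity(),
            }
            # RL policy trained in simulation maps observations to a footstep plan
            footstep_plan = footstep_policy(obs)
            # A model-based controller tracks the planned footsteps online
            torques = tracking_controller.track(footstep_plan, obs["proprio"])
            robot.apply_torques(torques)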

    Les Trois Mousquetaires Team Description

    This paper presents the composition of the French team and describes its research objectives for participation in the 2009 RoboCup event. This will be the first year the French team takes part in the Standard Platform League, with four NAO humanoid robots.

    Plan-Guided Reinforcement Learning for Whole-Body Manipulation

    Synthesizing complex whole-body manipulation behaviors poses fundamental challenges due to the rapidly growing combinatorics inherent in contact interaction planning. While model-based methods have shown promising results on long-horizon manipulation tasks, they often work under strict assumptions, such as known model parameters, oracular observation of the environment state, and simplified dynamics, resulting in plans that do not transfer easily to hardware. Learning-based approaches, such as imitation learning (IL) and reinforcement learning (RL), have been shown to be robust when operating over in-distribution states; however, they require heavy human supervision. Specifically, model-free RL requires a tedious reward-shaping process, while IL methods rely on human demonstrations that involve advanced teleoperation methods. In this work, we propose a plan-guided reinforcement learning (PGRL) framework that combines the advantages of model-based planning and reinforcement learning. Our method requires minimal human supervision because it relies on plans generated by model-based planners to guide exploration in RL. In exchange, RL derives a more robust policy thanks to domain randomization. We test this approach on a whole-body manipulation task with Punyo, an upper-body humanoid robot with compliant, air-filled arm coverings, pivoting and lifting a large box. Our preliminary results indicate that the proposed methodology is promising for addressing challenges that remain difficult for either model-based or learning-based strategies alone.
    Comment: 4 pages, 4 figures
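    One way to read "plans ... guide the exploration in RL" is a reward that pays the agent for staying close to a model-based reference while still optimizing the task objective. The shaping term below is a hedged illustration under that assumption, not the paper's actual reward:

        import numpy as np

        def pgrl_reward(state, plan_state, task_reward, w_plan=0.5):
            """Hypothetical plan-guided reward: the task reward plus a bonus
            that attracts the policy toward the planner's reference state."""
            plan_tracking = np.exp(-np.linalg.norm(state - plan_state) ** 2)
            return task_reward + w_plan * plan_tracking

    Under domain randomization, the planner's reference keeps exploration focused on useful contact sequences while RL absorbs model mismatch.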

    Reconfigurable and Agile Legged-Wheeled Robot Navigation in Cluttered Environments with Movable Obstacles

    Legged and wheeled locomotion are two standard methods robots use to navigate. Combining them into hybrid legged-wheeled locomotion gives a robot increased speed, agility, and reconfigurability, allowing it to traverse a multitude of environments. The CENTAURO robot has these advantages, but they come with a higher-dimensional search space for formulating economical autonomous motion plans, especially in cluttered environments. In this article, we first review our previously presented legged-wheeled, footprint-reconfiguring global planner. We describe the two incremental prototypes, whose primary goal is to reduce the search space of possible footprints so that plans that widen the robot's footprint over low-lying, wide obstacles or narrow it for tight passages can be computed quickly and efficiently. The planner also weighs the cost of avoiding obstacles against negotiating them by expanding over them. The second part of this article presents our new work on local obstacle pushing, which further increases the number of tight scenarios the planner can solve. The goal of the new local push-planner is to move any movable obstacle of unknown mass and inertial properties that obstructs the trajectory planned by the global planner to a location free of obstruction. This is done while minimising the distance traveled by the robot, the distance the object is pushed, and the object's rotation caused by the push, a cost of the form sketched below. Together, the local and global planners form a major part of the agile, reconfigurable navigation suite for the legged-wheeled hybrid CENTAURO robot.
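    The push-planning objective can be read as a weighted sum of the three quantities the abstract names; a hedged sketch (the weights w are assumptions):

        % J trades off robot travel, push distance, and induced object rotation
        J = w_r \, d_{\mathrm{robot}} + w_o \, d_{\mathrm{object}} + w_\theta \, | \Delta\theta_{\mathrm{object}} |

    where d_robot is the distance traveled by the robot, d_object the distance the object is pushed, and Δθ_object the rotation caused by the push; the planner chooses push poses and target locations that minimize J.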

    Migration from Teleoperation to Autonomy via Modular Sensor and Mobility Bricks

    In this thesis, the teleoperated communications of a Remotec ANDROS robot have been reverse engineered. This research uses the information acquired through the reverse engineering process to enhance the teleoperation and add intelligence to the initially teleoperated robot. The main contribution of this thesis is the implementation of the mobility brick paradigm, which enables autonomous operations using the commercial teleoperated ANDROS platform. The brick paradigm is a generalized architecture for a modular approach to robotics. This architecture, and the contribution of this thesis, is a paradigm shift from the proprietary commercial models that exist today. The modular system of sensor bricks integrates the transformed mobility platform and defines it as a mobility brick. In the wall-following application implemented in this work, the mobile robotic system acquires intelligence using the range sensor brick. This application illustrates a way to alleviate the burden on the human operator and delegate certain tasks to the robot. Wall following is one among several examples of giving a degree of autonomy to an essentially teleoperated robot through the Sensor Brick System; indeed, once the proprietary robot has been converted into a mobility brick, the possibilities for autonomy are numerous and vary with different sensor bricks. The autonomous system implemented is not a fixed-application robot but rather a platform capable of non-specific autonomy. Meanwhile, the native controller and the computer-interfaced teleoperation remain available when necessary. Rather than trading one mode for the other, this system provides the flexibility to switch between teleoperation and autonomy at the operator's command. The contributions of this thesis reside in the reverse engineering of the original robot, its upgrade to a computer-interfaced teleoperated system, the mobility brick paradigm, and the addition of autonomy capabilities. The application of a robot autonomously following a wall is subsequently implemented, tested, and analyzed in this work; the analysis provides the programmer with information on controlling the robot and launching the autonomous function. The results are conclusive and open up the possibilities for a variety of autonomous applications for mobility platforms using modular sensor bricks.
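    As an illustration of the kind of behavior delegated to the range sensor brick, a minimal wall-following step might look as follows (the gains, setpoint, and interface are assumptions, not the thesis implementation):

        # Minimal wall-following sketch: hold a fixed lateral distance to a
        # wall on the robot's right using a single range reading.
        DESIRED_DIST = 0.5   # target distance to the wall, meters
        KP = 1.2             # proportional gain on the distance error

        def wall_follow_step(range_to_wall, forward_speed=0.3):
            """Return (linear, angular) velocity commands."""
            error = DESIRED_DIST - range_to_wall
            angular = KP * error   # steer toward or away from the wall
            return forward_speed, angular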

    Multimodal Imitation using Self-learned Sensorimotor Representations

    Although many tasks intrinsically involve multiple modalities, often only data from a single modality are used when teaching complex robots new skills. We present a method that equips robots with multimodal learning skills, achieving on-the-fly multimodal imitation across multiple concurrent task spaces, including vision, touch, and proprioception, using only self-learned multimodal sensorimotor relations, without solving inverse kinematics problems or formulating explicit analytical models. We evaluate the proposed method on a humanoid iCub robot learning to interact with a piano keyboard while imitating a human demonstration. Since no assumptions are made about the kinematic structure of the robot, the method can also be applied to other robotic platforms.
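    A minimal sketch of how self-learned sensorimotor relations can stand in for analytical kinematics: stack the task-space errors from each modality and map them to joint motion through learned Jacobians (all names and the pseudo-inverse resolution are illustrative assumptions, not the paper's method):

        import numpy as np

        def imitation_step(errors, learned_jacobians):
            """errors: dict modality -> task-space error vector;
            learned_jacobians: dict modality -> learned d(task)/d(joints) matrix."""
            keys = sorted(errors)  # fixed modality order (proprio, touch, vision)
            J = np.vstack([learned_jacobians[m] for m in keys])
            e = np.concatenate([errors[m] for m in keys])
            # Pseudo-inverse maps the stacked multimodal error to joint velocities
            return np.linalg.pinv(J) @ e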

    Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction

    Meyer zu Borgsen S. Nonverbal Communication During Human-Robot Object Handover: Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction. Bielefeld: Universität Bielefeld; 2020.
    This doctoral thesis investigates the influence of nonverbal communication on human-robot object handover. Handing objects to one another is an everyday activity in which two individuals cooperatively interact. Such close interactions incorporate a great deal of nonverbal communication to create alignment in space and time. Understanding these communication cues and transferring them to robots is becoming increasingly important as, for example, service robots are expected to interact closely with humans in the near future. Their tasks often include delivering and taking objects; thus, handover scenarios play an important role in human-robot interaction. Much work in this field focuses on the speed, accuracy, and predictability of the robot's movement during object handover, yet robots need to be able to interact closely with naive users, not only experts. In this work I present how nonverbal communication can be implemented in robots to facilitate smooth handovers. I conducted a study in which people with different levels of experience exchanged objects with a humanoid robot. It became clear that users with little experience of interacting with robots, in particular, rely heavily on the communication cues they know from previous interactions with humans. I added different gestures with the second arm, not directly involved in the transfer, to analyze their influence on synchronization, predictability, and human acceptance. Handing over an object follows a distinctive movement trajectory that serves not only to bring the object or hand to the position of exchange but also to socially signal the intention to exchange an object. Another common type of nonverbal communication is gaze: it allows one to guess an interaction partner's focus of attention and thus helps to predict the next action. To evaluate handover interaction performance between human and robot, I applied the developed concepts to the humanoid robot Meka M1. By adding the humanoid robot head Floka Head to the system, I created the Floka humanoid and implemented gaze strategies that aim to increase predictability and user comfort. This thesis contributes to the field of human-robot object handover by presenting study outcomes and concepts, along with an implementation of improved software modules, resulting in a fully functional object-handing humanoid robot, from perception and prediction capabilities to behaviors enhanced by nonverbal communication.
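    As a hedged illustration of a phase-dependent gaze strategy of the kind the thesis evaluates (phase names and targets are assumptions, not the implemented Floka behavior):

        # Look where the interaction is about to happen, so the human
        # can predict the robot's next action.
        GAZE_TARGETS = {
            "approach": "object",          # signal which object is to be handed over
            "reach":    "handover_point",  # announce the intended exchange location
            "transfer": "partner_face",    # establish mutual attention at the exchange
            "retract":  "partner_face",    # confirm completion
        }

        def gaze_target(phase):
            return GAZE_TARGETS.get(phase, "partner_face")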