3 research outputs found

    Terrain Segmentation and Roughness Estimation using RGB Data: Path Planning Application on the CENTAURO Robot

    Robots operating in real-world environments require a high-level perceptual understanding of the chief physical properties of the terrain they are traversing. In unknown environments, roughness is one such important terrain property that can play a key role in devising robot control and planning strategies. In this paper, we present a fast method for predicting pixel-wise terrain labels (stone, sand, road/sidewalk, wood, grass, metal) and estimating roughness, using a single RGB-based deep neural network. Real-world RGB images are used to experimentally validate the presented approach. Furthermore, we demonstrate an application of our proposed method on the centaur-like wheeled-legged robot CENTAURO, by integrating it with a navigation planner that is capable of re-configuring the leg joints to modify the robot footprint polygon for stability purposes or for safe traversal among obstacles.
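    The abstract describes a single RGB network that produces both pixel-wise terrain labels and a roughness estimate. Below is a minimal, illustrative PyTorch sketch of that idea: a shared encoder-decoder with two dense prediction heads. The architecture, layer sizes, and roughness range are assumptions for illustration, not the paper's network.

    ```python
    # Illustrative sketch (assumed architecture): one RGB encoder-decoder with
    # a 6-class terrain segmentation head and a per-pixel roughness head.
    import torch
    import torch.nn as nn

    TERRAIN_CLASSES = ["stone", "sand", "road/sidewalk", "wood", "grass", "metal"]

    class TerrainNet(nn.Module):
        def __init__(self, num_classes: int = len(TERRAIN_CLASSES)):
            super().__init__()
            # Shared encoder: strided convolutions downsample the RGB input.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            # Decoder restores full resolution for dense (pixel-wise) prediction.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.seg_head = nn.Conv2d(32, num_classes, 1)  # per-pixel class logits
            self.rough_head = nn.Conv2d(32, 1, 1)          # per-pixel roughness, squashed to [0, 1]

        def forward(self, rgb: torch.Tensor):
            features = self.decoder(self.encoder(rgb))
            seg_logits = self.seg_head(features)
            roughness = torch.sigmoid(self.rough_head(features))
            return seg_logits, roughness

    # Usage: a 480x640 RGB image yields per-pixel labels and a roughness map.
    net = TerrainNet()
    seg_logits, roughness = net(torch.rand(1, 3, 480, 640))
    labels = seg_logits.argmax(dim=1)  # (1, 480, 640) terrain class indices
    ```

    Sharing one encoder for both heads is what keeps a dual-output network fast enough for on-robot use, which is the property the abstract emphasises.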

    Reconfigurable and Agile Legged-Wheeled Robot Navigation in Cluttered Environments with Movable Obstacles

    Legged and wheeled locomotion are two standard methods used by robots to perform navigation. Combining them into a hybrid legged-wheeled locomotion yields increased speed, agility, and reconfigurability, allowing the robot to traverse a multitude of environments. The CENTAURO robot has these advantages, but they come with a higher-dimensional search space for formulating autonomous, economical motion plans, especially in cluttered environments. In this article, we first review our previously presented legged-wheeled footprint-reconfiguring global planner. We describe its two incremental prototypes, whose primary goal is to reduce the search space of possible footprints so that plans that expand the robot footprint over low-lying wide obstacles, or narrow it to fit through passages, can be computed quickly and efficiently. The planner also weighs the cost of avoiding obstacles against negotiating them by expanding over them. The second part of this article presents our new work on local obstacle pushing, which further increases the number of tight scenarios the planner can solve. The goal of the new local push-planner is to move any movable obstacle of unknown mass and inertial properties that obstructs the trajectory planned by our global planner to a location devoid of obstruction. This is done while minimising the distance traveled by the robot, the distance the object is pushed, and the rotation caused by the push. Together, the local and global planners form a major part of the agile reconfigurable navigation suite for the legged-wheeled hybrid CENTAURO robot. A small cost-function sketch for the push-planner's trade-off is given below.
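    The push-planner is described as minimising three quantities: robot travel distance, object push distance, and induced object rotation. The Python sketch below shows one plausible way to score candidate pushes against those terms. The weighted-sum form, the weights, and the candidate fields are assumptions for illustration, not the paper's formulation.

    ```python
    # Illustrative cost sketch (weights and fields are assumptions): score a
    # candidate push of a movable obstacle by the three minimised quantities.
    import math
    from dataclasses import dataclass

    @dataclass
    class PushCandidate:
        robot_travel: float   # metres the robot drives to reach and execute the push
        push_distance: float  # metres the obstacle is displaced
        rotation: float       # radians of obstacle rotation induced by the push

    def push_cost(c: PushCandidate,
                  w_travel: float = 1.0,
                  w_push: float = 1.0,
                  w_rot: float = 0.5) -> float:
        """Weighted sum of robot travel, push distance, and push-induced rotation."""
        return (w_travel * c.robot_travel
                + w_push * c.push_distance
                + w_rot * abs(c.rotation))

    # Pick the cheapest push among sampled candidates that would clear the global path.
    candidates = [
        PushCandidate(robot_travel=1.2, push_distance=0.6, rotation=math.radians(10)),
        PushCandidate(robot_travel=0.8, push_distance=0.9, rotation=math.radians(35)),
    ]
    best = min(candidates, key=push_cost)
    ```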

    A Study on Low-Drift State Estimation for Humanoid Locomotion, Using LiDAR and Kinematic-Inertial Data Fusion

    Humanoid robots will need to navigate in unsafe and unstructured environments, such as those left after a disaster, for human assistance and support. To achieve this, humanoids need to construct accurate maps of the environment in real time and localize in them by estimating their base/pelvis state without drift, using computationally efficient mapping and state estimation algorithms. While a multitude of Simultaneous Localization and Mapping (SLAM) algorithms exist, their localization relies on the existence of repeatable landmarks, which might not always be available in unstructured environments. Several studies also use stop-and-map procedures to map the environment before traversal, but this is not ideal for scenarios where the robot needs to keep moving, for instance to keep the task completion time short. In this paper, we present a novel combination of state-of-the-art odometry and mapping based on LiDAR data with state estimation based on the kinematic-inertial data of the humanoid. We present an experimental evaluation of the introduced state estimation on the full-size humanoid robot WALK-MAN while performing locomotion tasks. Through this combination, we show that it is possible to obtain low-error, high-frequency estimates of the state of the robot while moving and mapping the environment on the go.
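    The abstract combines a high-rate kinematic-inertial estimate of the pelvis state with lower-rate, drift-limiting LiDAR odometry and mapping. The sketch below is a minimal, assumed illustration of that predict/correct structure in Python; the blending scheme, rates, and class names are hypothetical and are not the paper's estimator.

    ```python
    # Minimal fusion sketch (assumed structure): a high-rate kinematic-inertial
    # pelvis estimate is corrected whenever a lower-rate LiDAR odometry/mapping
    # pose arrives, limiting long-term drift.
    import numpy as np

    class PelvisStateFusion:
        def __init__(self, alpha: float = 0.9):
            # alpha weights the LiDAR correction when a scan-matched pose is available.
            self.alpha = alpha
            self.position = np.zeros(3)  # estimated pelvis position (x, y, z)

        def predict(self, kinematic_inertial_delta: np.ndarray):
            # High-frequency update: integrate the leg-odometry/IMU position increment.
            self.position += kinematic_inertial_delta

        def correct(self, lidar_position: np.ndarray):
            # Low-frequency update: blend toward the drift-limited LiDAR mapping pose.
            self.position = (1 - self.alpha) * self.position + self.alpha * lidar_position

    fusion = PelvisStateFusion()
    fusion.predict(np.array([0.02, 0.0, 0.0]))     # e.g. a high-rate kinematic-inertial step
    fusion.correct(np.array([0.019, 0.001, 0.0]))  # e.g. an occasional LiDAR odometry pose
    ```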