
    Learning Ground Traversability from Simulations

    Mobile ground robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse that patch from left to right. The classifier is trained for a specific robot model (wheeled, tracked, legged, snake-like) using simulation data on procedurally generated training terrains; the trained classifier can be applied to unseen large heightmaps to yield oriented traversability maps, which are then used to plan traversable paths. We extensively evaluate the approach in simulation on six real-world elevation datasets, and run a real-robot validation in one indoor and one outdoor environment. Comment: Webpage: http://romarcg.xyz/traversability_estimation
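
    A minimal sketch of the kind of patch classifier described above, assuming a PyTorch implementation; the network name `TraversabilityCNN`, the layer sizes, and the 60x60 patch size are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a heightmap-patch traversability classifier (PyTorch).
# Input: a single-channel heightmap patch; output: probability that the robot
# can cross the patch from left to right.
import torch
import torch.nn as nn

class TraversabilityCNN(nn.Module):
    def __init__(self, patch_size: int = 60):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (patch_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for "traversable left to right"
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(patch))

# To build an oriented traversability map, each patch would be rotated so the
# query direction becomes "left to right" before classification.
model = TraversabilityCNN()
patch = torch.randn(1, 1, 60, 60)            # one fake heightmap patch
p_traversable = torch.sigmoid(model(patch))  # probability in [0, 1]
```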

    A comprehensive survey of unmanned ground vehicle terrain traversability for unstructured environments and sensor technology insights

    This article provides a detailed analysis of the assessment of unmanned ground vehicle terrain traversability. The analysis is categorized into terrain classification, terrain mapping, and cost-based traversability, with subcategories of appearance-based, geometry-based, and mixed-based methods. The article also explores the use of machine learning (ML), deep learning (DL), reinforcement learning (RL), and other end-to-end methods as crucial components of advanced terrain traversability analysis. The investigation indicates that a mixed approach, incorporating both exteroceptive and proprioceptive sensors, is more effective, optimized, and reliable for traversability analysis. Additionally, the article discusses the vehicle platforms and sensor technologies used in traversability analysis, making it a valuable resource for researchers in the field. Overall, this paper contributes significantly to the current understanding of traversability analysis in unstructured environments and provides insights for future sensor-based research on advanced traversability analysis.
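
    To make the cost-based and mixed categories above concrete, here is a rough sketch that fuses a geometry-based cost (slope from an elevation grid) with an appearance-based cost into a single traversability cost map; the weights, grid layout, and function name are assumptions for illustration, not something defined in the survey.

```python
# Illustrative cost-based traversability map: fuse a geometry-derived cost
# (terrain slope) with an appearance-derived cost (e.g., from a classifier).
import numpy as np

def traversability_cost(elevation: np.ndarray,
                        appearance_cost: np.ndarray,
                        cell_size: float = 0.1,
                        w_geom: float = 0.6,
                        w_app: float = 0.4) -> np.ndarray:
    # Geometry-based term: slope magnitude from the elevation gradient.
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    slope = np.hypot(dz_dx, dz_dy)
    geom_cost = slope / slope.max() if slope.max() > 0 else slope
    # Mixed approach: weighted sum of geometry- and appearance-based costs.
    return w_geom * geom_cost + w_app * np.clip(appearance_cost, 0.0, 1.0)

elevation = np.random.rand(100, 100)    # fake 10 m x 10 m elevation grid
appearance = np.random.rand(100, 100)   # fake per-cell appearance cost in [0, 1]
cost_map = traversability_cost(elevation, appearance)
```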

    AdVENTR: Autonomous Robot Navigation in Complex Outdoor Environments

    We present a novel system, AdVENTR, for autonomous robot navigation in unstructured outdoor environments that consist of uneven and vegetated terrains. Our approach is general and enables both wheeled and legged robots to handle outdoor terrain complexity, including unevenness and surface properties such as poor traction, granularity, and obstacle stiffness. We combine data from sensors including RGB cameras, 3D LiDAR, IMU, robot odometry, and pose information with efficient learning-based perception and planning algorithms that can execute on edge computing hardware. Our system uses a scene-aware switching method to perceive the environment for navigation at any time instant and dynamically switches between multiple perception algorithms. We test our system in a variety of sloped, rocky, muddy, and densely vegetated terrains and demonstrate its performance on Husky and Spot robots.
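
    The abstract does not give implementation details; the sketch below shows one plausible shape for such a scene-aware switching layer, dispatching to different perception modules based on a scene label. The module names and labels are hypothetical.

```python
# Hypothetical scene-aware switching between perception modules, in the spirit
# of the system described above; module names and scene labels are illustrative.
from typing import Any, Callable, Dict

PerceptionFn = Callable[[Dict[str, Any]], Dict[str, Any]]

def vegetation_perception(sensors: Dict[str, Any]) -> Dict[str, Any]:
    # Placeholder: e.g., camera/LiDAR fusion tuned for dense vegetation.
    return {"costmap": "vegetation-aware", "inputs": list(sensors)}

def uneven_terrain_perception(sensors: Dict[str, Any]) -> Dict[str, Any]:
    # Placeholder: e.g., elevation-based analysis for rocky or sloped ground.
    return {"costmap": "elevation-aware", "inputs": list(sensors)}

def default_perception(sensors: Dict[str, Any]) -> Dict[str, Any]:
    return {"costmap": "generic", "inputs": list(sensors)}

MODULES: Dict[str, PerceptionFn] = {
    "vegetated": vegetation_perception,
    "uneven": uneven_terrain_perception,
}

def perceive(scene_label: str, sensors: Dict[str, Any]) -> Dict[str, Any]:
    """Dynamically select the perception algorithm for the current scene."""
    return MODULES.get(scene_label, default_perception)(sensors)

# Example: switch perception based on the current scene classification.
result = perceive("vegetated", {"rgb": None, "lidar": None, "imu": None})
```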

    Deep Reinforcement Learning-based Multi-objective Path Planning on the Off-road Terrain Environment for Ground Vehicles

    Because the energy consumption of up-slope and down-slope driving differs greatly, the shortest path on a complex off-road terrain environment (2.5D map) is not always the path with the least energy consumption. For energy-sensitive vehicles, achieving a good trade-off between distance and energy consumption in 2.5D path planning is therefore highly valuable. In this paper, a deep reinforcement learning-based 2.5D multi-objective path planning method (DMOP) is proposed. The DMOP finds the desired path in three steps: (1) transform the high-resolution 2.5D map into a small-size map; (2) use a trained deep Q network (DQN) to find the desired path on the small-size map; (3) map the planned path back onto the original high-resolution map using a path-enhancement method. In addition, imitation learning and reward shaping are applied to train the DQN. The reward function is constructed from terrain, distance, and border information. Simulations show that the proposed method can complete the multi-objective 2.5D path planning task, and that it has powerful reasoning capability enabling it to perform arbitrary untrained planning tasks on the same map.
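
    A minimal sketch of the kind of reward shaping the abstract mentions, combining terrain (asymmetric up/down-slope energy), distance progress, and border terms; the weights and exact functional forms are assumptions, since the paper only names the three components.

```python
# Illustrative 2.5D reward shaping with terrain, distance, and border terms.
# Weights and functional forms are assumed, not taken from the paper.
def step_reward(h_from: float, h_to: float, step_len: float,
                dist_to_goal_before: float, dist_to_goal_after: float,
                off_map: bool,
                w_energy: float = 1.0, w_dist: float = 1.0,
                border_penalty: float = 10.0) -> float:
    dh = h_to - h_from
    # Terrain term: up-slope costs more energy than down-slope (asymmetric).
    energy = step_len + (3.0 * dh if dh > 0 else 0.5 * abs(dh))
    # Distance term: reward progress toward the goal.
    progress = dist_to_goal_before - dist_to_goal_after
    reward = w_dist * progress - w_energy * energy
    # Border term: penalize stepping outside the map.
    if off_map:
        reward -= border_penalty
    return reward

# Example: a short step slightly uphill that moves the agent closer to the goal.
r = step_reward(h_from=1.0, h_to=1.2, step_len=1.0,
                dist_to_goal_before=10.0, dist_to_goal_after=9.0, off_map=False)
```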

    Path Generation for Wheeled Robots Autonomous Navigation on Vegetated Terrain

    Wheeled robot navigation has been widely used in urban environments, but little research has been conducted on navigation in wild vegetation. External sensors (LiDAR, camera, etc.) are often used to construct a point cloud map of the surrounding environment; however, the rigid supporting ground used for traveling cannot be detected because it is occluded by vegetation. This often leads to unsafe or non-smooth paths during planning. To address this drawback, we propose the PE-RRT* algorithm, which effectively combines a novel support plane estimation method and a sampling algorithm to generate feasible and safe paths in real time in vegetated environments. To accurately estimate the support plane, we combine external perception and proprioception, and use Multivariate Gaussian Process Regression (MV-GPR) to estimate the terrain at the sampling nodes. We build a physical experimental platform and conduct experiments in different outdoor environments. Experimental results show that our method has high safety, robustness, and generalization.
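
    The paper's MV-GPR formulation is not reproduced here; as a rough illustration of the idea, the sketch below fits an ordinary single-output Gaussian process (scikit-learn) to sparse ground observations and queries the supporting-ground height and its uncertainty at a candidate sample node. The data, kernel, and acceptance threshold are assumptions.

```python
# Simplified stand-in for support-plane estimation at a sampled node: fit a GP
# to sparse ground-contact observations (x, y) -> z and query the height and
# uncertainty at a new sample. A single-output GP is used here instead of the
# paper's multivariate GPR (MV-GPR), purely for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

ground_xy = np.random.rand(50, 2) * 5.0      # fake (x, y) ground observations
ground_z = 0.1 * np.sin(ground_xy[:, 0]) + 0.02 * np.random.randn(50)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(ground_xy, ground_z)

sample_xy = np.array([[2.0, 3.0]])           # candidate sampling node
z_mean, z_std = gp.predict(sample_xy, return_std=True)
# A node could be accepted only if the predicted support height is confident.
is_reliable = z_std[0] < 0.05
```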

    Legged Robots for Object Manipulation: A Review

    Legged robots can have a unique role in manipulating objects in dynamic, human-centric, or otherwise inaccessible environments. Although most legged robotics research to date typically focuses on traversing these challenging environments, many legged platform demonstrations have also included "moving an object" as a way of doing tangible work. Legged robots can be designed to manipulate a particular type of object (e.g., a cardboard box, a soccer ball, or a larger piece of furniture), by themselves or collaboratively. The objective of this review is to collect and learn from these examples, to both organize the work done so far in the community and highlight interesting open avenues for future work. This review categorizes existing works into four main manipulation methods: object interactions without grasping, manipulation with walking legs, dedicated non-locomotive arms, and legged teams. Each method has different design and autonomy features, which are illustrated by available examples in the literature. Based on a few simplifying assumptions, we further provide quantitative comparisons for the range of possible relative sizes of the manipulated object with respect to the robot. Taken together, these examples suggest new directions for research in legged robot manipulation, such as multifunctional limbs, terrain modeling, or learning-based control, to support a number of new deployments in challenging indoor/outdoor scenarios in warehouses/construction sites, preserved natural areas, and especially for home robotics. Comment: Preprint of the paper submitted to Frontiers in Mechanical Engineering

    Learning to Model and Plan for Wheeled Mobility on Vertically Challenging Terrain

    Most autonomous navigation systems assume wheeled robots are rigid bodies and that their 2D planar workspaces can be divided into free spaces and obstacles. However, recent wheeled mobility research, showing that wheeled platforms can move over vertically challenging terrain (e.g., rocky outcroppings, rugged boulders, and fallen tree trunks), invalidates both assumptions. Navigating off-road vehicle chassis with long suspension travel and low tire pressure in places where the boundary between obstacles and free spaces is blurry requires precise 3D modeling of the interaction between the chassis and the terrain, which is complicated by suspension and tire deformation, varying tire-terrain friction, vehicle weight distribution and momentum, etc. In this paper, we present a learning approach that models wheeled mobility in terms of vehicle-terrain forward dynamics and plans feasible, stable, and efficient motion to drive over vertically challenging terrain without rolling over or getting stuck. We present physical experiments on two wheeled robots and show that planning with our learned model achieves up to a 60% improvement in navigation success rate and a 46% reduction in unstable chassis roll and pitch angles. Comment: https://www.youtube.com/watch?v=VzpRoEZeyWk https://cs.gmu.edu/~xiao/Research/Verti-Wheelers
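
    One plausible shape for the approach described above is sketched below: a small network predicting the pose change from state, action, and a local terrain patch, plus a naive sampling planner that rejects candidates with excessive predicted roll or pitch. The architecture, dimensions, and thresholds are assumptions, not the paper's model.

```python
# Hypothetical learned forward-dynamics model and a naive sampling planner that
# penalizes predicted roll/pitch, in the spirit of the approach described above.
import torch
import torch.nn as nn

class ForwardDynamics(nn.Module):
    """Predicts the next pose delta (x, y, yaw, roll, pitch) from the current
    state, the commanded action, and a flattened local terrain patch."""
    def __init__(self, patch_cells: int = 64, state_dim: int = 5, action_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + patch_cells, 128), nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, state, action, patch):
        return self.net(torch.cat([state, action, patch], dim=-1))

def pick_action(model, state, patch, n_samples: int = 64, max_tilt: float = 0.35):
    """Sample random (throttle, steer) actions, reject unstable candidates, and
    pick the one with the largest predicted forward progress."""
    actions = torch.rand(n_samples, 2) * 2 - 1
    states = state.expand(n_samples, -1)
    patches = patch.expand(n_samples, -1)
    with torch.no_grad():
        pred = model(states, actions, patches)
    roll, pitch = pred[:, 3], pred[:, 4]
    stable = (roll.abs() < max_tilt) & (pitch.abs() < max_tilt)
    scores = pred[:, 0].masked_fill(~stable, float("-inf"))  # forward progress
    return actions[scores.argmax()]

model = ForwardDynamics()
best_action = pick_action(model, torch.zeros(1, 5), torch.zeros(1, 64))
```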

    Inclined Surface Locomotion Strategies for Spherical Tensegrity Robots

    This paper presents a new teleoperated spherical tensegrity robot capable of locomotion on steep inclined surfaces. With a novel control scheme centered on the simultaneous actuation of multiple cables, the robot demonstrates robust climbing on inclined surfaces in hardware experiments and achieves speeds significantly faster than previous spherical tensegrity models. This robot is an improvement over other iterations in the TT-series and the first tensegrity to achieve reliable locomotion on inclined surfaces of up to 24°. We analyze locomotion in simulation and hardware under single- and multi-cable actuation, and introduce two novel multi-cable actuation policies, suited for steep incline climbing and speed, respectively. We propose compelling justifications for the robot's increased dynamic ability and motivate the development of optimization algorithms able to take advantage of the robot's increased control authority. Comment: 6 pages, 11 figures, IROS 201