
    Learning for Ground Robot Navigation with Autonomous Data Collection

    Robot navigation using vision is a classic scene understanding problem. We describe a novel approach to estimating the traversability of an unknown environment based on modern object recognition methods. Traversability is an example of an affordance jointly determined by the environment and the physical characteristics of a robot vehicle, and its definition is clear in context. It is nevertheless extremely difficult to estimate the traversability of a given terrain structure in general, or to find rules that work for a wide variety of terrain types. However, by learning to recognize similar terrain structures, it is possible to leverage a limited amount of interaction between the robot and its environment into global statements about the traversability of the scene. We describe a novel on-line learning algorithm that learns to recognize terrain features from images and aggregates the traversability information acquired by a navigating robot. An important property of our method, desirable for any learning-based approach to object recognition, is its ability to autonomously acquire arbitrary amounts of training data as needed, without any human intervention. Tests of our algorithm on a real robot in complicated unknown natural environments suggest that it is both robust and efficient.
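    The abstract gives no implementation, but the core self-supervised idea can be sketched: the robot labels terrain patches by the outcome of its own interaction (traversed or blocked), trains an online classifier, and generalizes to similar-looking terrain it has not yet visited. The feature descriptor and the learner (scikit-learn's SGDClassifier) below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of self-supervised online traversability learning.
# Assumptions (not from the paper): histogram features, SGD logistic regression.
import numpy as np
from sklearn.linear_model import SGDClassifier

class OnlineTraversability:
    def __init__(self, n_features=64):
        self.clf = SGDClassifier(loss="log_loss")  # online logistic regression
        self.classes = np.array([0, 1])            # 0 = blocked, 1 = traversable
        self.n_features = n_features

    def features(self, patch):
        """Hypothetical descriptor: a coarse intensity histogram of the patch."""
        hist, _ = np.histogram(patch, bins=self.n_features, range=(0.0, 1.0))
        return (hist / max(hist.sum(), 1)).reshape(1, -1)

    def record_interaction(self, patch, traversed):
        """Self-supervised update: the label comes from the robot's own outcome."""
        self.clf.partial_fit(self.features(patch),
                             [int(traversed)], classes=self.classes)

    def predict(self, patch):
        """Generalize local experience to unvisited, similar-looking terrain."""
        return self.clf.predict_proba(self.features(patch))[0, 1]

# Usage: patches are normalized grayscale crops of terrain cells.
model = OnlineTraversability()
rng = np.random.default_rng(0)
for _ in range(20):
    patch = rng.random((32, 32))
    model.record_interaction(patch, traversed=patch.mean() > 0.5)
print(model.predict(rng.random((32, 32))))
```

    Because the labels come from interaction outcomes rather than annotation, a scheme like this can accumulate arbitrary amounts of training data without human input, which is the property the abstract emphasizes.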

    DeepNav: Learning to Navigate Large Cities

    We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR) [19]. Comment: CVPR 2017 camera-ready version.
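    As a rough illustration of the supervision scheme the abstract describes, the sketch below runs A* over a toy street graph and labels every node with the direction of its first optimal edge toward the destination; labels of this kind are what a CNN would then be trained to predict from street-view images. The graph, coordinates, and Manhattan heuristic are invented for the example and are not DeepNav's actual data.

```python
# Minimal sketch: A* over a street graph auto-generates per-node direction labels.
import heapq

def astar(graph, pos, start, goal):
    """graph: node -> {neighbor: direction}; pos: node -> (x, y) for the heuristic."""
    h = lambda n: abs(pos[n][0] - pos[goal][0]) + abs(pos[n][1] - pos[goal][1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr in graph[node]:
            if nbr not in seen:
                heapq.heappush(frontier, (g + 1 + h(nbr), g + 1, nbr, path + [nbr]))
    return None

def make_labels(graph, pos, goal):
    """Supervision: for every node, the direction of the first edge on an optimal path."""
    labels = {}
    for node in graph:
        if node == goal:
            continue
        path = astar(graph, pos, node, goal)
        if path and len(path) > 1:
            labels[node] = graph[node][path[1]]   # e.g. 'N', 'E', 'S', 'W'
    return labels

# Toy 2x2 city block; edges annotated with compass directions.
pos = {'A': (0, 0), 'B': (1, 0), 'C': (0, 1), 'D': (1, 1)}
graph = {'A': {'B': 'E', 'C': 'N'}, 'B': {'A': 'W', 'D': 'N'},
         'C': {'A': 'S', 'D': 'E'}, 'D': {'B': 'S', 'C': 'W'}}
print(make_labels(graph, pos, goal='D'))  # every node labeled with its best turn
```

    Since optimal paths are computed purely from the graph, this labeling step needs no human annotation, matching the fully automated process the abstract describes.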

    Efficient Autonomous Navigation for Planetary Rovers with Limited Resources

    Rovers operating on Mars need increasingly autonomous features to fulfill their challenging mission requirements. However, the inherent constraints of space systems make the implementation of complex algorithms an expensive and difficult task. In this paper we propose a control architecture for autonomous navigation. Efficient implementations of autonomous features are built on top of the current ExoMars navigation method, enhancing the safety and traversing capabilities of the rover. These features allow the rover to detect and avoid hazards and to perform long traverses by following a roughly safe path planned by operators on the ground. The control architecture implementing the proposed navigation mode was tested during a field campaign on a planetary analogue terrain, where the rover autonomously completed two long traverses while avoiding hazards. The approach relies only on the optical Localization Cameras stereobench, a sensor found on all rovers launched so far, and potentially allows for computationally inexpensive long-range autonomous navigation in terrains of medium difficulty.
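    The abstract specifies only stereo-based hazard detection along a ground-planned path, but a minimal sketch of that general pattern might look as follows: scan a stereo-derived local elevation grid for step discontinuities exceeding the rover's clearance, and laterally shift waypoints that fall on hazardous cells. The grid format, clearance threshold, and sidestep rule are all assumptions for illustration, not the ExoMars navigation algorithm.

```python
# Minimal sketch of a low-cost hazard check and local path adjustment.
import numpy as np

def hazard_map(elevation, clearance=0.15):
    """Mark cells whose height step to any 4-neighbor exceeds the clearance (m)."""
    h = elevation
    step = np.zeros_like(h)
    step[:-1, :] = np.maximum(step[:-1, :], np.abs(h[1:, :] - h[:-1, :]))
    step[1:, :]  = np.maximum(step[1:, :],  np.abs(h[1:, :] - h[:-1, :]))
    step[:, :-1] = np.maximum(step[:, :-1], np.abs(h[:, 1:] - h[:, :-1]))
    step[:, 1:]  = np.maximum(step[:, 1:],  np.abs(h[:, 1:] - h[:, :-1]))
    return step > clearance

def adjust_path(path, hazards):
    """Shift each hazardous waypoint to the nearest safe cell in the same row."""
    safe_path = []
    for r, c in path:
        if not hazards[r, c]:
            safe_path.append((r, c))
            continue
        for d in range(1, hazards.shape[1]):
            for cc in (c - d, c + d):
                if 0 <= cc < hazards.shape[1] and not hazards[r, cc]:
                    safe_path.append((r, cc))
                    break
            else:
                continue
            break
    return safe_path

# Usage: 10x10 local elevation grid (m) with a 0.5 m rock on the nominal path.
grid = np.zeros((10, 10)); grid[5, 4] = 0.5
nominal = [(r, 4) for r in range(10)]       # straight path planned on the ground
print(adjust_path(nominal, hazard_map(grid)))
```

    A per-cell check like this costs a handful of array passes per local map, which is in the spirit of the computationally inexpensive navigation the abstract targets.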