
    Pose-based slam with probabilistic scan matching algorithm using a mechanical scanned imaging sonar

    This paper proposes a pose-based algorithm to solve the full SLAM problem for an Autonomous Underwater Vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a Mechanically Scanned Imaging Sonar (MSIS) and the robot's dead-reckoning displacements estimated from a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The raw data from the sensors are processed and fused online. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach. Peer Reviewed
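
    The abstract names the ingredients but gives no code. Below is a minimal sketch of one plausible pose-based skeleton, assuming a simple 2D pose graph; the names (PoseGraph, compound, add_scan_match) are illustrative, and the paper's probabilistic scan matcher and estimator are only stubbed as externally supplied relative-pose constraints, not reproduced.

    import numpy as np

    def compound(p, d):
        # Compound a 2D pose p = [x, y, theta] with a relative displacement d
        # expressed in p's own frame (standard planar pose composition).
        x, y, th = p
        dx, dy, dth = d
        return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                         y + dx * np.sin(th) + dy * np.cos(th),
                         th + dth])

    class PoseGraph:
        # Dead-reckoning constraints would come from the DVL/MRU displacement
        # estimates; scan-matching constraints from registering MSIS scans.
        def __init__(self):
            self.poses = [np.zeros(3)]   # trajectory starts at the origin
            self.constraints = []        # (i, j, relative_pose, covariance)

        def add_odometry(self, displacement, cov):
            # Extend the trajectory with a dead-reckoned displacement.
            i = len(self.poses) - 1
            self.poses.append(compound(self.poses[i], np.asarray(displacement)))
            self.constraints.append((i, i + 1, np.asarray(displacement), cov))

        def add_scan_match(self, i, j, relative_pose, cov):
            # Loop-closure constraint, e.g. from a probabilistic scan matcher.
            self.constraints.append((i, j, np.asarray(relative_pose), cov))

    g = PoseGraph()
    g.add_odometry([1.0, 0.0, 0.05], np.diag([0.01, 0.01, 0.001]))

    A back-end optimizer or filter would then refine the stored poses against the accumulated constraints; which estimator the paper actually uses is not stated in the abstract.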

    Learning for Ground Robot Navigation with Autonomous Data Collection

    Robot navigation using vision is a classic example of a scene understanding problem. We describe a novel approach to estimating the traversability of an unknown environment based on modern object recognition methods. Traversability is an example of an affordance jointly determined by the environment and the physical characteristics of the robot vehicle, and its definition is clear in context. However, it is extremely difficult to estimate the traversability of a given terrain structure in general, or to find rules that work for a wide variety of terrain types. By learning to recognize similar terrain structures, however, it is possible to leverage a limited amount of interaction between the robot and its environment into global statements about the traversability of the scene. We describe a novel online learning algorithm that learns to recognize terrain features from images and aggregates the traversability information acquired by a navigating robot. An important property of our method, desirable for any learning-based approach to object recognition, is the ability to autonomously acquire arbitrary amounts of training data as needed, without any human intervention. Tests of our algorithm on a real robot in complicated, unknown natural environments suggest that it is both robust and efficient.
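
    The self-supervised loop the abstract describes can be made concrete. The sketch below is a hypothetical illustration, assuming scikit-learn's SGDClassifier as the incremental learner; the TraversabilityLearner class, its method names, and the use of traversal outcomes as labels for visual patch features are assumptions drawn from the abstract, not the paper's actual implementation.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    class TraversabilityLearner:
        # Online terrain-appearance classifier trained from the robot's own
        # traversal outcomes rather than human-provided labels.
        def __init__(self):
            self.clf = SGDClassifier(loss="log_loss")
            self.classes = np.array([0, 1])  # 0 = blocked, 1 = traversable

        def update(self, patch_features, traversed_ok):
            # Each time the robot drives over (or fails on) a terrain patch,
            # that patch's visual features become a new training example.
            X = np.atleast_2d(patch_features)
            y = np.full(len(X), int(traversed_ok))
            self.clf.partial_fit(X, y, classes=self.classes)

        def predict(self, patch_features):
            # Probability that unseen terrain with these features is passable.
            return self.clf.predict_proba(np.atleast_2d(patch_features))[:, 1]

    learner = TraversabilityLearner()
    learner.update(np.random.rand(16), traversed_ok=True)   # from interaction
    learner.update(np.random.rand(16), traversed_ok=False)
    print(learner.predict(np.random.rand(16)))

    Because the labels come from the robot's own interaction, the training set grows without human intervention, which is exactly the property the abstract highlights.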

    DeepNav: Learning to Navigate Large Cities

    We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR) [19]. Comment: CVPR 2017 camera-ready version
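
    The supervision-generation step is the mechanical part of this pipeline. The sketch below is an illustrative stand-in: where the paper runs A* in the city graph, it uses Dijkstra (equivalently, A* with a zero heuristic) to label every intersection with the neighbor that lies on a shortest path to the destination; the toy graph and function names are hypothetical.

    import heapq

    def shortest_distances(graph, goal):
        # Dijkstra from the goal; graph[node] -> list of (neighbor, length).
        dist = {goal: 0.0}
        frontier = [(0.0, goal)]
        while frontier:
            d, node = heapq.heappop(frontier)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry
            for nbr, length in graph[node]:
                nd = d + length
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(frontier, (nd, nbr))
        return dist

    def supervision_labels(graph, goal):
        # Label each intersection with the neighbor that minimizes
        # edge length plus remaining distance to the goal; these labels
        # supervise the per-intersection decision classifier.
        dist = shortest_distances(graph, goal)
        labels = {}
        for node, edges in graph.items():
            if node == goal or node not in dist:
                continue
            labels[node] = min(
                edges, key=lambda e: e[1] + dist.get(e[0], float("inf")))[0]
        return labels

    # Toy 4-node street graph: A-B-D and A-C-D.
    graph = {
        "A": [("B", 1.0), ("C", 2.0)],
        "B": [("A", 1.0), ("D", 1.0)],
        "C": [("A", 2.0), ("D", 1.0)],
        "D": [("B", 1.0), ("C", 1.0)],
    }
    print(supervision_labels(graph, "D"))  # {'A': 'B', 'B': 'D', 'C': 'D'}

    Since the street graph and destinations come from public mapping services, this labeling needs no human input, matching the fully automated annotation process the abstract describes.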