
    Localization from semantic observations via the matrix permanent

    Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model that encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer's trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with traditional lidar-based geometric Monte Carlo localization.
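
    The reduction to a matrix permanent is the computational core here: with detections as rows and landmarks as columns of a likelihood matrix, summing over all one-to-one data associations is exactly a permanent. A minimal sketch of that quantity, using Ryser's exact formula on a small square matrix (the paper itself uses a polynomial-time approximation; the matrix values below are invented):

```python
from itertools import combinations

import numpy as np

def permanent(A):
    """Exact permanent via Ryser's formula; fine for small n."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # product over rows of the sum of the chosen columns
            total += (-1) ** (n - k) * np.prod(A[:, list(cols)].sum(axis=1))
    return total

# Toy likelihood matrix: entry (i, j) is the likelihood that detection i
# was generated by landmark j; the permanent sums the joint likelihood
# over all one-to-one data associations.
L = np.array([[0.8, 0.1],
              [0.2, 0.7]])
print(permanent(L))  # 0.8*0.7 + 0.1*0.2 = 0.58
```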

    Probabilistic Robot Localization using Visual Landmarks

    Effective robot navigation and route planning is impossible unless the position of the robot within its environment is known. Motion sensors that track the relative movement of a robot are inherently unreliable, so it is necessary to use cues from the external environment to periodically localize the robot. There are many methods for accomplishing this, most of which either probabilistically estimate the robot's movement based on range sensors or require enough unique visual landmarks to geometrically calculate the robot's position at any time. In this project I examined the feasibility of using the probabilistic Monte Carlo localization algorithm to estimate a robot's location from occasional visual landmark cues. Using visual landmarks has several advantages over using range sensor data: landmark readings are less affected by unexpected objects and can be used for fast global localization. To test this system I designed a robot capable of navigating Olin-Rice by observing pieces of colored paper placed at regular intervals along the halls, as an extension of my summer 2005 research on RUPART. The localization system could not localize the robot in many situations due to the sparse nature of the landmarks, but results suggest that with minor modifications the system could become a reliable localization scheme.
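
    As a rough illustration of the algorithm named above, here is a generic Monte Carlo localization step, not the project's code; the noise parameters and the range-style landmark measurement are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, control, landmark_obs, landmarks, sigma=0.3):
    """One predict/update/resample cycle of Monte Carlo localization.

    particles: (N, 3) array of (x, y, theta) pose hypotheses.
    landmark_obs: (landmark_id, measured_range) from a visual cue.
    """
    # Predict: apply odometry (dx, dy, dtheta in the robot frame) plus noise.
    dx, dy, dth = control
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * dx - s * dy + rng.normal(0, 0.05, len(particles))
    particles[:, 1] += s * dx + c * dy + rng.normal(0, 0.05, len(particles))
    particles[:, 2] += dth + rng.normal(0, 0.02, len(particles))

    # Update: reweight by how well the observed range to the recognized
    # landmark matches each particle's expected range.
    lm_id, measured = landmark_obs
    lx, ly = landmarks[lm_id]
    expected = np.hypot(particles[:, 0] - lx, particles[:, 1] - ly)
    weights *= np.exp(-0.5 * ((measured - expected) / sigma) ** 2)
    weights /= weights.sum()

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```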

    Galois lattice theory for probabilistic visual landmarks

    This paper presents an original application of Galois lattice theory: the selection of visual landmarks for topological localization of an autonomous mobile robot equipped with a color camera. First, visual landmarks have to be selected in order to characterize a structured environment. Second, such landmarks have to be detected and updated during localization. These landmarks are combinations of attributes, and the selection process is carried out through a Galois lattice. This paper describes the landmark selection process and focuses on probabilistic landmarks, which give the robot richer information about how to locate itself. As a result, landmarks are no longer binary but probabilistic. The full process of using such landmarks is described and validated through a robotics experiment.
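
    To make the Galois connection concrete, here is a toy formal-concept closure in Python; the context and attribute names are invented for illustration. An attribute set maps to the objects sharing it and back to the attributes those objects have in common; the closed sets are the lattice's concepts, i.e. candidate landmark descriptions:

```python
# Toy attribute context: each candidate landmark and its visual attributes.
context = {
    "lm1": {"red", "vertical", "textured"},
    "lm2": {"red", "vertical"},
    "lm3": {"blue", "vertical"},
}
all_attrs = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs (one side of the connection)."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs (the other side)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_attrs)

def closure(attrs):
    """Galois closure: the closed attribute sets are the lattice's concepts."""
    return intent(extent(attrs))

# Every red candidate here is also vertical, so {"red"} closes to a richer
# attribute combination -- a candidate landmark description.
print(closure({"red"}))  # {'red', 'vertical'}
```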

    Simple yet stable bearing-only navigation

    This article describes a simple monocular navigation system for a mobile robot based on the map-and-replay technique. The presented method is robust, easy to implement, requires neither sensor calibration nor a structured environment, and its computational complexity is independent of the environment size. The method can navigate a robot while sensing only one landmark at a time, making it more robust than other monocular approaches. These properties allow even low-cost robots to act effectively in large outdoor and indoor environments with natural landmarks only. The basic idea is to use monocular vision to correct only the robot's heading, leaving distance measurements to the odometry. The heading correction alone can suppress the odometric error and prevent the overall position error from diverging. The influence of map-based heading estimation and odometric errors on the overall position uncertainty is examined, and we claim that for closed polygonal trajectories the position error of this type of navigation does not diverge. The claim is defended mathematically and experimentally. The method has been tested in a set of indoor and outdoor experiments, during which the average position error remained below 0.3 m for paths over 1 km long.
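
    The basic idea above admits a compact sketch: during replay, steer so that the currently tracked landmark drifts toward the horizontal image position recorded during mapping, and leave traveled distance to odometry. This is an illustrative reconstruction, not the authors' code; the field of view, image width, and gain are placeholder values:

```python
def steering_command(mapped_u, seen_u, img_width=640, fov_rad=1.0, gain=0.5):
    """Turn-rate command that pushes the tracked landmark toward the
    horizontal image position recorded during the mapping run; distance
    along the segment is left entirely to odometry."""
    pixel_error = seen_u - mapped_u                    # horizontal pixel offset
    bearing_error = pixel_error * fov_rad / img_width  # small-angle approximation
    return -gain * bearing_error                       # rad/s, steers back toward the map

# e.g. the landmark sat at pixel column 320 during mapping but is seen
# at 300 now, so the robot turns slightly toward it:
print(steering_command(320, 300))
```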

    Interest point detectors for visual SLAM

    In this paper we present several interest point detectors and analyze their suitability as landmark extractors for vision-based simultaneous localization and mapping (vSLAM). For this purpose, we evaluate the detectors according to their repeatability under changes in viewpoint and scale, the desired requirements for visual landmarks. Several experiments were carried out using sequences of images captured with high precision. The sequences contain planar objects as well as 3D scenes.
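
    Repeatability, the criterion used in this evaluation, can be computed directly when the ground-truth homography between two views is known: project the detections from one image into the other and count how many land near a detection there. A minimal NumPy sketch (the 2-pixel tolerance and array shapes are assumptions):

```python
import numpy as np

def repeatability(kps_a, kps_b, H, tol=2.0):
    """Fraction of detections in image A that reappear within tol pixels
    in image B, after mapping A's detections through the ground-truth
    homography H. kps_a: (N, 2), kps_b: (M, 2), H: (3, 3)."""
    pts = np.hstack([kps_a, np.ones((len(kps_a), 1))]) @ H.T
    pts = pts[:, :2] / pts[:, 2:3]                   # A's detections seen in B
    d = np.linalg.norm(pts[:, None, :] - kps_b[None, :, :], axis=2)
    return float((d.min(axis=1) < tol).mean())
```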

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide focused measurement capability over a wide field of view, allowing correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision and permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, and address issues such as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
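
    Uncertainty-based measurement selection, one of the issues addressed above, admits a short sketch in the EKF setting: fixate the feature whose predicted measurement is currently most uncertain, since measuring it is expected to reduce the state uncertainty the most. This is a generic criterion in the spirit of the paper, not its exact implementation:

```python
import numpy as np

def pick_feature(P, Hs, R):
    """Index of the feature to fixate next: the one with the largest
    innovation-covariance volume, i.e. the most uncertain predicted
    measurement, so observing it is expected to shrink the state
    uncertainty the most.

    P: state covariance; Hs: per-feature measurement Jacobians; R: sensor noise.
    """
    scores = [np.linalg.det(H @ P @ H.T + R) for H in Hs]
    return int(np.argmax(scores))
```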