    LSH-RANSAC: An Incremental Scheme for Scalable Localization

    This paper addresses the problem of feature-based robot localization in large-size environments. With recent progress in SLAM techniques, it has become crucial for a robot to estimate its self-position in real-time with respect to a large-size map that can be incrementally built by other mapper robots. Self-localization using large-size maps has been studied in the literature, but most existing work assumes that a complete map is given prior to the self-localization task. In this paper, we present a novel scheme for robot localization, as well as a map representation, that can successfully work with large-size and incremental maps. This work combines our two previous works on incremental methods, iLSH and iRANSAC, for appearance-based and position-based localization.
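
    The abstract does not spell out the iLSH or iRANSAC internals; as a rough illustration of the position-based half only, the sketch below estimates a 2D rigid pose by RANSAC from putative map-to-observation point correspondences. All names, parameters, and thresholds are invented for this example and are not taken from the paper.

        import numpy as np

        def ransac_rigid_2d(map_pts, obs_pts, iters=200, tol=0.1):
            # Estimate a 2D rigid transform (R, t) mapping map points onto
            # observed points from putative correspondences, RANSAC-style.
            # map_pts, obs_pts: (N, 2) arrays of matched points.
            best_inliers, best_model = 0, None
            n = len(map_pts)
            rng = np.random.default_rng(0)
            for _ in range(iters):
                i, j = rng.choice(n, size=2, replace=False)
                # A two-point minimal sample fixes rotation and translation.
                d_map = map_pts[j] - map_pts[i]
                d_obs = obs_pts[j] - obs_pts[i]
                ang = np.arctan2(d_obs[1], d_obs[0]) - np.arctan2(d_map[1], d_map[0])
                c, s = np.cos(ang), np.sin(ang)
                R = np.array([[c, -s], [s, c]])
                t = obs_pts[i] - R @ map_pts[i]
                # Count correspondences consistent with this pose hypothesis.
                resid = np.linalg.norm(obs_pts - (map_pts @ R.T + t), axis=1)
                inliers = int((resid < tol).sum())
                if inliers > best_inliers:
                    best_inliers, best_model = inliers, (R, t)
            return best_model, best_inliers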

    Localization from semantic observations via the matrix permanent

    Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model that encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with traditional lidar-based geometric Monte Carlo localization.
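
    The central reduction here is from a set-valued observation likelihood to a matrix permanent. The paper itself uses a polynomial-time approximation; purely as an illustration of the quantity being computed, the sketch below evaluates the exact permanent of a small square association matrix with Ryser's inclusion-exclusion formula, which is exponential in n and so only viable for a handful of detections.

        import itertools
        import numpy as np

        def permanent(A):
            # Matrix permanent via Ryser's inclusion-exclusion formula,
            # O(2^n * n); fine for the small association matrices that
            # arise when only a few object detections are in view.
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            total = 0.0
            for r in range(1, n + 1):
                for cols in itertools.combinations(range(n), r):
                    row_sums = A[:, list(cols)].sum(axis=1)
                    total += (-1) ** r * np.prod(row_sums)
            return (-1) ** n * total

        # Sanity check: permanent(np.ones((3, 3))) == 6.0, i.e. the
        # 3! = 6 possible detection-to-landmark assignments.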

    Find your Way by Observing the Sun and Other Semantic Cues

    In this paper we present a robust, efficient and affordable approach to self-localization that requires neither GPS nor knowledge about the appearance of the world. Towards this goal, we utilize freely available cartographic maps and derive a probabilistic model that exploits semantic cues in the form of sun direction, presence of an intersection, road type, speed limit, as well as the ego-car trajectory, in order to produce very reliable localization results. Our experimental evaluation shows that our approach can localize much faster (in terms of driving time), with less computation, and more robustly than competing approaches that ignore semantic information.
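
    As a loose illustration of how such semantic cues could be folded into a probabilistic localizer, the sketch below performs one discrete Bayes update over candidate map positions. The cue names and confusion rates are invented for the example and are not the paper's model.

        import numpy as np

        # Hypothetical per-cue accuracies: probability the car's detector
        # reports each semantic cue correctly (invented for this sketch).
        CUE_ACCURACY = {'at_intersection': 0.9, 'road_type': 0.8, 'speed_limit': 0.85}

        def bayes_update(prior, candidates, observation):
            # One discrete Bayes-filter update over K candidate map positions.
            # prior: (K,) array; candidates: list of K dicts of map attributes;
            # observation: dict with the same cue names, as sensed by the car.
            post = prior.copy()
            for k, cand in enumerate(candidates):
                for cue, obs_val in observation.items():
                    acc = CUE_ACCURACY[cue]
                    post[k] *= acc if cand[cue] == obs_val else 1.0 - acc
            s = post.sum()
            return post / s if s > 0 else np.full(len(prior), 1.0 / len(prior))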

    Dual-sensor fusion for indoor user localisation

    In this paper we address the automatic identification of indoor locations using a combination of WLAN and image sensing. Our motivation is the increasing prevalence of wearable cameras, some of which can also capture WLAN data. We propose to use image-based and WLAN-based localisation individually and then fuse the results to obtain better overall performance. We demonstrate the effectiveness of our fusion algorithm for localisation to within an 8.9 m² room on very challenging data for both the WLAN and image-based algorithms. We envisage the potential usefulness of our approach in a range of ambient assisted living applications.
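
    The abstract does not reproduce the fusion algorithm; one common way to combine two independent localisers is a log-linear (weighted product) fusion of their per-room posteriors, sketched below under that assumption. The function name and weight are illustrative only.

        import numpy as np

        def fuse_room_posteriors(p_wlan, p_image, w=0.5):
            # Log-linear (weighted product) fusion of two per-room posterior
            # distributions; w trades off trust in the WLAN localiser versus
            # the image-based localiser.
            fused = (p_wlan ** w) * (p_image ** (1.0 - w))
            return fused / fused.sum()

        # Usage: pick the most likely room after fusing both localisers.
        # room = np.argmax(fuse_room_posteriors(p_wlan, p_image, w=0.6))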

    Localization for mobile robots using panoramic vision, local features and particle filter

    In this paper we present a vision-based approach to self-localization that uses a novel scheme to integrate feature-based matching of panoramic images with Monte Carlo localization. A specially modified version of Lowe’s SIFT algorithm is used to match features extracted from local interest points in the image, rather than using global features calculated from the whole image. Experiments conducted in a large, populated indoor environment (up to 5 persons visible) over a period of several months demonstrate the robustness of the approach, including kidnapping and occlusion of up to 90% of the robot’s field of view.
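
    Below is a minimal sketch of the Monte Carlo localization loop the abstract describes, with the measurement weight standing in for a count of matched local features; match_score and the noise model are placeholders, not the paper's modified SIFT pipeline.

        import numpy as np

        def mcl_step(particles, odom, match_score, motion_noise=0.05):
            # One Monte Carlo localization step: propagate particles by
            # odometry, reweight by an image-matching score, then resample.
            # particles: (N, 3) array of (x, y, theta) pose hypotheses.
            # match_score(pose): number of local features matched between the
            # current panoramic image and the reference image nearest to pose.
            n = len(particles)
            rng = np.random.default_rng()
            # Motion update: apply odometry with additive Gaussian noise.
            particles = particles + odom + rng.normal(0.0, motion_noise, particles.shape)
            # Measurement update: more feature matches -> higher weight.
            w = np.array([match_score(p) for p in particles], dtype=float)
            w = w / w.sum() if w.sum() > 0 else np.full(n, 1.0 / n)
            # Resampling concentrates particles on well-matching poses, which
            # is what recovers the robot after kidnapping or heavy occlusion.
            idx = rng.choice(n, size=n, p=w)
            return particles[idx]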