    Vision-based indoor localization via a visual slam approach

    With increasing interest in indoor location-based services, vision-based indoor localization techniques have attracted considerable attention from both academia and industry. Inspired by the development of the simultaneous localization and mapping (SLAM) technique, we present a visual SLAM-based approach that estimates a 6-degrees-of-freedom (DoF) pose in indoor environments. First, the indoor scene is explored with a keyframe-based global mapping technique, which generates a database from a sequence of images covering the entire scene. After the exploration, a feature vocabulary tree is trained to accelerate feature matching in the image-retrieval phase, and the spatial structures obtained from the keyframes are stored. Instead of querying with a single image, a short sequence of images captured at the query site is used to extract both features and their relative poses, in a local visual SLAM procedure. The relative poses of the query images provide a pose-graph-based geometric constraint that is used to assess the validity of the image-retrieval results. The final positioning result is obtained by selecting the pose of the first correct corresponding image. © Authors 2019
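The sequence-based validity check lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of validating retrieved keyframe poses against the relative motion recovered by the local SLAM run on the query sequence; the planar (x, y, theta) pose simplification and both tolerances are illustrative assumptions.

```python
import numpy as np

def relative_pose(p_a, p_b):
    """Relative planar pose (dx, dy, dtheta) of b expressed in a's frame."""
    c, s = np.cos(-p_a[2]), np.sin(-p_a[2])
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    return np.array([c * dx - s * dy, s * dx + c * dy, p_b[2] - p_a[2]])

def retrieval_consistent(query_poses, retrieved_poses, tol_t=0.5, tol_r=0.2):
    """Accept the retrieval only if consecutive retrieved keyframes
    reproduce the relative motion estimated on the query sequence."""
    for i in range(len(query_poses) - 1):
        dq = relative_pose(query_poses[i], query_poses[i + 1])
        dr = relative_pose(retrieved_poses[i], retrieved_poses[i + 1])
        if np.linalg.norm(dq[:2] - dr[:2]) > tol_t or abs(dq[2] - dr[2]) > tol_r:
            return False
    return True

# The query moved ~1 m forward; the candidate keyframes agree, so accept.
query = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
cand = [np.array([4.0, 2.0, 0.0]), np.array([5.0, 2.1, 0.05])]
print(retrieval_consistent(query, cand))   # True: within tolerance
```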

    Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data

    Localization is a key requirement for mobile robot autonomy and human-robot interaction. Vision-based localization is accurate and flexible; however, it incurs a high computational burden that limits its application on many resource-constrained platforms. In this paper, we address the problem of performing real-time localization in large-scale 3D point cloud maps of ever-growing size. While most systems using multi-modal information reduce localization time by employing side-channel information in a coarse manner (e.g. WiFi for a rough prior position estimate), we propose to inter-weave the map with rich sensory data. This multi-modal approach achieves two key goals simultaneously. First, it enables us to harness additional sensory data to localise against a map covering a vast area in real time; second, it also allows us to roughly localise devices that are not equipped with a camera. The key to our approach is a localization policy based on a sequential Monte Carlo estimator. The localiser uses this policy to attempt point-matching only in nodes where it is likely to succeed, significantly increasing the efficiency of the localization process. The proposed multi-modal localization system is evaluated extensively in a large museum building. The results show that our multi-modal approach not only increases localization accuracy but also significantly reduces computation time.
    Comment: Presented at IEEE-RAS International Conference on Humanoid Robots (Humanoids) 201
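The abstract's core idea, a sequential Monte Carlo estimator whose policy triggers expensive matching only where it is likely to succeed, can be sketched on a toy map. Everything below (the corridor map, the random-walk motion model, the WiFi-like cue, and the stub matcher) is an illustrative stand-in, not the paper's system.

```python
import random

# Toy corridor map: 20 nodes; "expensive" point-matching succeeds only
# at the true node. All numbers here are illustrative.
TRUE_NODE = 7
NODES = list(range(20))

def motion_model(node):
    """Random-walk prediction over adjacent nodes."""
    return min(max(node + random.choice([-1, 0, 1]), 0), len(NODES) - 1)

def side_channel_likelihood(node, wifi_estimate):
    """Cheap cue (e.g. WiFi) peaked near its rough position estimate."""
    return 1.0 / (1.0 + abs(node - wifi_estimate))

def try_point_match(node):
    """Stand-in for costly visual point-matching against the 3D map."""
    return ("matched pose at node", node) if node == TRUE_NODE else None

def smc_step(particles, wifi_estimate, match_threshold=0.2):
    particles = [motion_model(p) for p in particles]
    weights = [side_channel_likelihood(p, wifi_estimate) for p in particles]
    particles = random.choices(particles, weights=weights, k=len(particles))
    # Policy: attempt matching only in nodes holding enough particle mass.
    mass = {}
    for p in particles:
        mass[p] = mass.get(p, 0.0) + 1.0 / len(particles)
    for node, m in sorted(mass.items(), key=lambda kv: -kv[1]):
        if m < match_threshold:
            break
        result = try_point_match(node)
        if result is not None:
            return result, particles
    return None, particles

particles = random.choices(NODES, k=200)
for _ in range(30):
    result, particles = smc_step(particles, wifi_estimate=TRUE_NODE)
    if result is not None:
        print(*result)
        break
```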

    FLAT2D: Fast localization from approximate transformation into 2D

    Many autonomous vehicles require precise localization against a prior map in order to support planning and to leverage semantic information within those maps (e.g. that the right lane is a turn-only lane). A popular approach in automotive systems is to localize using infrared intensity maps of the ground surface, which makes those systems susceptible to failure when the surface is obscured by snow or when the road is repainted. An emerging alternative is to localize based on the 3D structure around the vehicle; these methods are robust to such changes, but the maps are costly both in terms of storage and the computational cost of matching. In this paper, we propose a fast method for localizing based on the 3D structure around the vehicle, using a 2D representation. This representation retains many of the advantages of "full" matching in 3D but comes with dramatically lower space and computational requirements. We also introduce a variation of Graph-SLAM tailored to support localization, allowing us to make use of graph-based error-recovery techniques in our localization estimate. Finally, we present real-world localization results for both an indoor mobile robotic platform and an autonomous golf cart, demonstrating that autonomous vehicles do not need full 3D matching to localize accurately in the environment.
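The space saving comes from collapsing the vertical dimension. Below is a minimal sketch of that idea, projecting local 3D structure into a 2D occupancy grid and scoring overlap; the cell size, extent, and normalized-overlap score are illustrative assumptions, not the FLAT2D implementation.

```python
import numpy as np

def flatten_to_grid(points, cell=0.25, extent=20.0):
    """Project 3D points (N x 3, metres) into a 2D occupancy grid."""
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.float32)
    ij = ((points[:, :2] + extent) / cell).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    grid[ij[ok, 0], ij[ok, 1]] = 1.0
    return grid

def match_score(query_grid, map_grid):
    """Normalized overlap between two grids; higher is better."""
    denom = np.sqrt(query_grid.sum() * map_grid.sum()) or 1.0
    return float((query_grid * map_grid).sum() / denom)

pts = np.random.rand(1000, 3) * 10 - 5          # synthetic local structure
g1 = flatten_to_grid(pts)
g2 = flatten_to_grid(pts + np.array([0.1, 0.0, 0.0]))
print(match_score(g1, g2))                      # high: nearly the same place
```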

    Multiverse: Mobility pattern understanding improves localization accuracy

    This paper presents the design and implementation of Multiverse, a practical indoor localization system that can be deployed on top of existing WiFi infrastructure. Although existing WiFi-based positioning techniques achieve acceptable accuracy, we find that they are not practical for use in buildings because they require installing sophisticated access point (AP) hardware or a special application on client devices to supply the system with extra information. Multiverse achieves sub-room-precision estimates while utilizing only the received signal strength indication (RSSI) readings available in most of today's buildings through their installed APs, along with the assumption that most users walk at a normal speed. This level of simplicity should promote the ubiquity of indoor localization in the era of smartphones.
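The two ingredients the abstract names, RSSI readings matched against per-room fingerprints and a normal-walking-speed assumption that prunes implausible jumps, can be sketched together. The fingerprints, room coordinates, and speed bound below are all illustrative assumptions, not Multiverse's actual model.

```python
import math

FINGERPRINTS = {            # room -> mean RSSI per AP (dBm), illustrative
    "A101": {"ap1": -45, "ap2": -70},
    "A102": {"ap1": -60, "ap2": -55},
    "A103": {"ap1": -75, "ap2": -42},
}
ROOM_POS = {"A101": (0.0, 0.0), "A102": (8.0, 0.0), "A103": (16.0, 0.0)}
MAX_SPEED = 1.5             # m/s: normal walking speed assumption

def rssi_distance(sample, fingerprint):
    """Euclidean distance in RSSI space; missing APs read as -100 dBm."""
    return math.sqrt(sum((sample.get(ap, -100) - v) ** 2
                         for ap, v in fingerprint.items()))

def estimate(sample, prev_room=None, dt=1.0):
    """Best-matching room, restricted to rooms reachable within dt."""
    candidates = []
    for room, fp in FINGERPRINTS.items():
        if prev_room is not None:
            (x0, y0), (x1, y1) = ROOM_POS[prev_room], ROOM_POS[room]
            if math.hypot(x1 - x0, y1 - y0) > MAX_SPEED * dt:
                continue    # unreachable at walking speed: prune
        candidates.append((rssi_distance(sample, fp), room))
    return min(candidates)[1] if candidates else prev_room

print(estimate({"ap1": -47, "ap2": -68}))                    # 'A101'
# 'A101' again: A103 looks closer in RSSI, but it is 16 m away and
# cannot be reached in one second at walking speed.
print(estimate({"ap1": -74, "ap2": -43}, prev_room="A101"))
```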

    Mapping, Localization and Path Planning for Image-based Navigation using Visual Features and Map

    Building on progress in feature representations for image retrieval, image-based localization has seen a surge of research interest. Image-based localization has the advantage of being inexpensive and efficient, often avoiding the use of 3D metric maps altogether. That said, the need to maintain a large number of reference images as effective support for localization in a scene nonetheless calls for them to be organized in a map structure of some kind. The problem of localization often arises as part of a navigation process. We are therefore interested in summarizing the reference images as a set of landmarks that meet the requirements of image-based navigation. A contribution of this paper is to formulate such a set of requirements for the two sub-tasks involved: map construction and self-localization. These requirements are then exploited for compact map representation and accurate self-localization, using the framework of a network-flow problem. In this process, we formulate the map construction and self-localization problems as convex quadratic and second-order cone programs, respectively. We evaluate our methods on publicly available indoor and outdoor datasets, where they outperform existing methods significantly.
    Comment: CVPR 2019, for implementation see https://github.com/janinethom
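The summarization goal, keeping few reference images while still covering every place in the scene, can be illustrated compactly. The paper solves this via a network-flow formulation with convex programs; the greedy set cover below is a deliberately simplified stand-in for that objective, with made-up coverage data.

```python
def summarize_map(coverage):
    """coverage: {image_id: set(place_ids)}. Returns a small landmark set."""
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        # Pick the image covering the most not-yet-covered places.
        best = max(coverage, key=lambda i: len(coverage[i] & remaining))
        if not coverage[best] & remaining:
            break                      # nothing left to gain
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

# Example: 5 reference images covering 6 places along a corridor.
cov = {"img0": {0, 1}, "img1": {1, 2, 3}, "img2": {2, 3},
       "img3": {3, 4, 5}, "img4": {5}}
print(summarize_map(cov))   # ['img1', 'img3', 'img0']: 3 of 5 images suffice
```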

    A minimalistic approach to appearance-based visual SLAM

    This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects. The links describe the likelihood distribution of the relative pose as a Gaussian distribution. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This new method does not require determining distances to image features using, for example, multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments (without modification of the parameters), that it can cope with violations of the "flat floor assumption" to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
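A relaxation over a pose graph with Gaussian links is simple enough to sketch end to end. Below, each node's pose is repeatedly re-estimated as the information-weighted mean of the predictions from its incident links; the 1D poses, the Gauss-Seidel-style sweep, and all numbers are illustrative simplifications of the paper's method.

```python
def relax(poses, links, iters=100):
    """poses: list of floats. links: (i, j, measured_j_minus_i, variance)."""
    for _ in range(iters):
        for k in range(1, len(poses)):          # node 0 anchors the map
            num, den = 0.0, 0.0
            for i, j, d, var in links:
                w = 1.0 / var                   # information weight
                if j == k:
                    num += w * (poses[i] + d)   # prediction from node i
                    den += w
                elif i == k:
                    num += w * (poses[j] - d)   # prediction from node j
                    den += w
            if den:
                poses[k] = num / den
    return poses

# Odometry chain 0-1-2-3 plus a confident loop closure 0-3 that
# pulls the accumulated drift out of the chain.
links = [(0, 1, 1.0, 0.1), (1, 2, 1.0, 0.1), (2, 3, 1.2, 0.1),
         (0, 3, 3.0, 0.01)]
print(relax([0.0, 1.0, 2.0, 3.2], links))
```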

    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image-matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or the absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach is demonstrated experimentally.
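The Markov-localization idea of resolving perceptually aliased places over time fits in a few lines. Below is a minimal sketch over discrete places: a belief vector is propagated with a transition model and reweighted by an appearance-matching likelihood; the transition matrix and likelihoods are illustrative, not the paper's models.

```python
import numpy as np

def markov_step(belief, transition, likelihood):
    """belief: (N,); transition: (N, N), rows sum to 1; likelihood: (N,)."""
    predicted = transition.T @ belief        # motion (odometry) update
    posterior = predicted * likelihood       # appearance update
    return posterior / posterior.sum()

# Three places; places 0 and 2 look alike (perceptual aliasing).
T = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.7, 0.3],
              [0.3, 0.0, 0.7]])
belief = np.full(3, 1 / 3)
for like in ([0.45, 0.1, 0.45], [0.1, 0.8, 0.1], [0.45, 0.1, 0.45]):
    belief = markov_step(belief, T, np.array(like))
print(belief.argmax())       # 2: the motion history breaks the tie
```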