
    Localization and Mapping from Shore Contours and Depth

    This work examines the problem of solving SLAM in aquatic environments with an unmanned surface vessel under conditions that preclude global knowledge of the robot's pose: the absence of a global positioning system to estimate position, a poor vehicle motion model, and the absence of a stable magnetic field from which to estimate absolute heading. Such conditions arise in terrestrial environments where GPS satellite reception is occluded by surrounding structures and magnetic interference corrupts compass measurements. Similar conditions are anticipated in extraterrestrial environments such as Titan, which lacks the infrastructure necessary for traditional positioning sensors and whose unstable magnetic core renders compasses useless. This work develops a solution to the SLAM problem that couples shore features with information about the depth of the water column. The approach is validated experimentally on an autonomous surface vehicle equipped with omnidirectional video and sonar, and the results are compared against GPS ground truth.
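    The depth-of-water-column cue described above can be sketched as a particle-filter reweighting step: each pose hypothesis is scored by how well a sonar depth reading matches a prior bathymetric grid. This is a minimal illustrative sketch; the grid values, noise model, and function names are our assumptions, not the paper's implementation.

```python
import math

# Hypothetical bathymetric grid: depth_map[iy][ix] is water depth (m) at a
# 1 m grid cell. In practice this would come from a chart or be built online.
depth_map = [
    [2.0, 2.5, 3.0, 3.5],
    [2.5, 3.5, 4.5, 5.0],
    [3.0, 4.5, 6.0, 6.5],
    [3.5, 5.0, 6.5, 7.0],
]

def depth_at(x, y):
    """Nearest-cell lookup of the prior depth map."""
    ix = min(max(int(round(x)), 0), len(depth_map[0]) - 1)
    iy = min(max(int(round(y)), 0), len(depth_map) - 1)
    return depth_map[iy][ix]

def reweight(particles, sonar_depth, sigma=0.3):
    """Weight each (x, y, w) particle by how well the measured depth
    matches the prior map at that hypothesised position."""
    out = []
    for x, y, w in particles:
        err = sonar_depth - depth_at(x, y)
        out.append((x, y, w * math.exp(-0.5 * (err / sigma) ** 2)))
    total = sum(w for _, _, w in out) or 1.0
    return [(x, y, w / total) for x, y, w in out]

particles = [(0.0, 0.0, 0.25), (1.0, 1.0, 0.25), (2.0, 2.0, 0.25), (3.0, 3.0, 0.25)]
particles = reweight(particles, sonar_depth=6.0)
best = max(particles, key=lambda p: p[2])
print(best[:2])  # the particle over the 6 m cell dominates: (2.0, 2.0)
```

    In a full system this depth likelihood would be combined with the shore-contour bearing likelihood before resampling.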

    A modular hybrid SLAM for the 3D mapping of large scale environments

    Underground mining environments pose many unique challenges to the task of creating extensive, survey-quality 3D maps. The extreme characteristics of such environments demand a modular mapping solution with no dependency on Global Positioning Systems (GPS), physical odometry, a priori information or motion-model simplification. These restrictions rule out many existing 3D mapping approaches. This work examines a hybrid approach to mapping, fusing omnidirectional vision and 3D range data to produce an automatically registered, accurate and dense 3D map. A series of discrete 3D laser scans is registered through a combination of vision-based bearing-only localization and scan matching with the Iterative Closest Point (ICP) algorithm. Depth information provided by the laser scans is used to correctly scale the bearing-only feature map, which in turn supplies an initial pose estimate for a registration algorithm to build the 3D map and correct localization drift. The resulting extensive maps require no external instrumentation or a priori information. Preliminary testing demonstrated the ability of the hybrid system to produce a highly accurate 3D map of an extensive indoor space.
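    The ICP registration step can be illustrated with a single point-to-point iteration: match each source point to its nearest destination point, then solve for the rigid transform in closed form. This is a deliberately simplified 2D pure-Python sketch under our own assumptions (brute-force matching, 2D closed-form rotation), not the authors' 3D pipeline.

```python
import math

def icp_step(src, dst):
    """One point-to-point ICP iteration in 2D.
    src, dst: lists of (x, y). Returns (theta, tx, ty) aligning src to dst."""
    # 1. Nearest-neighbour correspondences (brute force).
    pairs = [(p, min(dst, key=lambda d: (d[0]-p[0])**2 + (d[1]-p[1])**2))
             for p in src]
    n = len(pairs)
    # 2. Centroids of the matched sets.
    cx_s = sum(p[0] for p, _ in pairs) / n
    cy_s = sum(p[1] for p, _ in pairs) / n
    cx_d = sum(q[0] for _, q in pairs) / n
    cy_d = sum(q[1] for _, q in pairs) / n
    # 3. Closed-form 2D rotation from the cross-covariance terms.
    sxx = sum((p[0]-cx_s)*(q[0]-cx_d) + (p[1]-cy_s)*(q[1]-cy_d) for p, q in pairs)
    sxy = sum((p[0]-cx_s)*(q[1]-cy_d) - (p[1]-cy_s)*(q[0]-cx_d) for p, q in pairs)
    theta = math.atan2(sxy, sxx)
    # 4. Translation mapping the source centroid onto the destination one.
    tx = cx_d - (math.cos(theta)*cx_s - math.sin(theta)*cy_s)
    ty = cy_d - (math.sin(theta)*cx_s + math.cos(theta)*cy_s)
    return theta, tx, ty

dst = [(0, 0), (1, 0), (0, 1)]
src = [(x + 0.5, y - 0.2) for x, y in dst]   # source scan offset by (0.5, -0.2)
theta, tx, ty = icp_step(src, dst)
print(round(theta, 3), round(tx, 3), round(ty, 3))  # 0.0 -0.5 0.2
```

    Real systems iterate this step to convergence, which is exactly why the vision-based pose estimate matters: ICP only converges to the correct registration when seeded close to it.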

    A comparative evaluation of interest point detectors and local descriptors for visual SLAM

    In this paper we compare the behavior of different interest point detectors and descriptors under the conditions required for use as landmarks in vision-based simultaneous localization and mapping (SLAM). We evaluate the repeatability of the detectors, as well as the invariance and distinctiveness of the descriptors, under different perceptual conditions, using sequences of images representing planar objects as well as 3D scenes. We believe this information will be useful when selecting an appropriate detector and descriptor for a visual SLAM application.
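    The repeatability criterion evaluated here has a simple concrete form: the fraction of keypoints from one image whose position, warped by the known ground-truth transform, falls near a keypoint detected in the other image. A minimal sketch (tolerance and toy data are our illustrative choices):

```python
def repeatability(kps_a, kps_b, warp, tol=2.0):
    """Fraction of keypoints from image A whose warped position lies within
    `tol` pixels of some keypoint detected in image B.
    `warp` is the known ground-truth mapping from A's pixels to B's."""
    if not kps_a:
        return 0.0
    hits = 0
    for x, y in kps_a:
        wx, wy = warp(x, y)
        if any((wx-bx)**2 + (wy-by)**2 <= tol**2 for bx, by in kps_b):
            hits += 1
    return hits / len(kps_a)

# Toy example: image B is image A shifted 10 px right; one detection is lost.
kps_a = [(5, 5), (20, 7), (40, 12)]
kps_b = [(15, 5), (30, 8)]          # the point at (40, 12) was not re-detected
score = repeatability(kps_a, kps_b, lambda x, y: (x + 10, y))
print(score)  # 2 of 3 keypoints re-detected
```

    For planar scenes the warp is a homography; for 3D scenes it must come from known camera geometry and depth.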

    Vision based estimation, localization, and mapping for autonomous vehicles

    In this dissertation, we focus on developing simultaneous localization and mapping (SLAM) algorithms with a robot-centric estimation framework, primarily using monocular vision sensors. A primary contribution of this work is the use of a robot-centric mapping framework concurrently with a world-centric localization method. We exploit the differential equation of motion of the normalized pixel coordinates of each point feature in the robot body frame. Another contribution is a multiple-view geometry formulation using the initial and current view projections of point features. We extract features from objects surrounding the river and from their reflections, and use the feature correspondences together with the attitude and altitude information of the robot. We demonstrate that the observability of the estimation system is improved by applying our robot-centric mapping framework and multiple-view measurements. Using this framework and multiple-view measurements, including reflections of features, we present a vision-based localization and mapping algorithm developed for an unmanned aerial vehicle (UAV) flying in a riverine environment. Our algorithm estimates the 3D positions of point features along a river and the pose of the UAV. Our UAV is equipped with a lightweight monocular camera, an inertial measurement unit (IMU), a magnetometer, an altimeter, and an onboard computer. To our knowledge, we report the first result that exploits the reflections of features in a riverine environment for localization and mapping. We also present an omnidirectional vision-based localization and mapping system for a lawn mowing robot. Our algorithm can detect whether the robotic mower is contained within a permitted area. Our robotic mower is modified with an omnidirectional camera, an IMU, a magnetometer, and a vehicle speed sensor. Here, we also exploit the robot-centric mapping framework. The estimator in our system generates a 3D point-based map with landmarks. Concurrently, the estimator defines a boundary of the mowing area using the estimated trajectory of the mower. The estimated boundary and the landmark map are provided for estimating the mowing location and for containment detection. First, we derive a nonlinear observer with contraction analysis and pseudo-measurements of the depth of each landmark to prevent the map estimator from diverging. Of particular interest for this work is ensuring that the estimator for localization and mapping will not fail due to the nonlinearity of the system model. For batch estimation, we design a hybrid extended Kalman smoother for our localization and robot-centric mapping model. Finally, we present a single-camera SLAM algorithm using a convex optimization based nonlinear estimator. We validate the effectiveness of our algorithms through numerical simulations and outdoor experiments.
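    The robot-centric idea above can be made concrete with a toy planar example: when landmarks are stored in the robot body frame, the robot's own motion moves every landmark the opposite way in that frame. The planar simplification and all names below are ours, not the dissertation's formulation.

```python
import math

def propagate_robot_centric(landmarks_body, v, omega, dt):
    """Propagate body-frame landmark coordinates through one odometry step
    (forward speed v, yaw rate omega) in 2D: undo the robot's translation,
    then its rotation."""
    dtheta = omega * dt
    c, s = math.cos(dtheta), math.sin(dtheta)
    out = []
    for x, y in landmarks_body:
        xs, ys = x - v * dt, y          # robot moved forward, landmark slides back
        out.append((c * xs + s * ys, -s * xs + c * ys))
    return out

# Robot drives 1 m straight: a landmark 3 m ahead ends up 2 m ahead.
print(propagate_robot_centric([(3.0, 0.0)], v=1.0, omega=0.0, dt=1.0))  # [(2.0, 0.0)]
```

    Because nearby landmarks stay numerically well-scaled in the body frame, this parameterization is one reason the robot-centric formulation behaves better under the strong nonlinearities the abstract mentions.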

    Interest point detectors for visual SLAM

    In this paper we present several interest point detectors and analyze their suitability as landmark extractors for vision-based simultaneous localization and mapping (vSLAM). For this purpose, we evaluate the detectors according to their repeatability under changes in viewpoint and scale, which are the desired requirements for visual landmarks. Several experiments were carried out using sequences of images captured with high precision. The sequences represent planar objects as well as 3D scenes.

    MonoSLAM: Real-time single camera SLAM


    Simultaneous localization and mapping in robotics using omnidirectional vision

    Localization and SLAM algorithms for omnidirectional cameras that use the lights of the environment as beacons, capable of operating in real time and under frequent, severe occlusions.

    Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments

    In this work, a real-time-capable multi-fisheye-camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects ranging from omnidirectional camera calibration to the development of a complete multi-fisheye-camera SLAM system based on a generic multi-camera bundle adjustment method.
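    Calibration of the kind described above starts from an omnidirectional projection model; the equidistant fisheye model r = f·θ is one standard choice, sketched below. The parameter values are invented for illustration, and the thesis's generic camera model may differ.

```python
import math

def project_equidistant(X, Y, Z, f=300.0, cx=640.0, cy=480.0):
    """Equidistant fisheye projection r = f * theta.
    (X, Y, Z) is a point in the camera frame with Z along the optical axis;
    unlike a pinhole model, points at Z <= 0 (beyond 90 degrees off-axis)
    still project to finite pixel coordinates."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle off the optical axis
    phi = math.atan2(Y, X)                    # azimuth around the axis
    r = f * theta                             # image radius grows linearly
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A point on the optical axis lands on the principal point...
print(project_equidistant(0.0, 0.0, 5.0))     # (640.0, 480.0)
# ...and a point 90 degrees off-axis maps to radius f * pi / 2.
u, v = project_equidistant(1.0, 0.0, 0.0)
print(round(u - 640.0, 3))                    # 300 * pi / 2 ~= 471.239
```

    In a multi-camera bundle adjustment, each such camera is additionally composed with a fixed rig-to-camera extrinsic transform, so all fisheye views constrain a single rig pose.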

    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has resulted in a push towards autonomous solutions to the most dangerous operations, including surveying tasks. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem which are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges associated with underground SLAM. The robustness, self-containment and portability of the system maximize the potential applications. The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. By combining these two SLAM techniques it is possible to achieve some of the advantages of both approaches – the real-time attributes of vision-based SLAM and the dense, high precision maps obtained through 3D lasers. The result is a viable autonomous mapping solution suitable for application in challenging underground mining environments. A further improvement to the robustness of the proposed multisensor SLAM system is a consequence of incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination which can cause inconsistent feature motion during localization. Colour information is utilized to identify and remove features resulting from illumination artefacts and to improve the monochrome based feature matching between frames. Finally, the proposed multisensor mapping system is implemented and evaluated in both above ground and underground scenarios. The resulting large scale maps contained a maximum offset error of ±30 mm for mapping tasks with lengths over 100 m.
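    The colour-based artefact rejection described above can be sketched as a chromaticity-consistency check: illumination artefacts (headlamp glare, moving shadows) tend to change colour as the light source moves, while genuine surface features keep a stable chromaticity even when their brightness changes. The threshold and track layout below are illustrative assumptions, not the thesis's method.

```python
def filter_illumination_artefacts(tracks, max_chroma_shift=0.1):
    """Keep only feature tracks whose normalised chromaticity is stable
    between consecutive frames. Each track is (prev_rgb, curr_rgb, id)."""
    def chroma(rgb):
        r, g, b = rgb
        s = (r + g + b) or 1.0
        return r / s, g / s          # (r, g) chromaticity, intensity-free
    kept = []
    for prev_rgb, curr_rgb, feature_id in tracks:
        (r0, g0), (r1, g1) = chroma(prev_rgb), chroma(curr_rgb)
        if abs(r0 - r1) + abs(g0 - g1) <= max_chroma_shift:
            kept.append(feature_id)
    return kept

tracks = [
    ((100, 100, 100), (50, 50, 50), "rock"),   # darker but same colour: kept
    ((200, 180, 60), (80, 80, 230), "glare"),  # colour swung: rejected
]
print(filter_illumination_artefacts(tracks))   # ['rock']
```

    Dividing out the intensity is the key design choice: it makes the test robust to the overall brightness swings that dominate actively lit underground scenes.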