2,322 research outputs found

    An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor

    This paper presents a novel tightly-coupled, keyframe-based Simultaneous Localization and Mapping (SLAM) system with loop-closing and relocalization capabilities targeted at the underwater domain. Our previous work, SVIn, augmented the state-of-the-art visual-inertial state estimation package OKVIS to accommodate acoustic data from sonar in a non-linear optimization-based framework. This paper addresses drift and loss of localization, one of the main problems affecting other packages in the underwater domain, by providing the following main contributions: a robust initialization method that refines scale using depth measurements, a fast preprocessing step that enhances image quality, and a real-time loop-closing and relocalization method using a bag of words (BoW). An additional contribution is the addition of depth measurements from a pressure sensor to the tightly-coupled optimization formulation. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle in challenging underwater environments with poor visibility demonstrate accuracy and robustness not achieved before.
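
    The role of the pressure sensor in such a formulation can be illustrated with a toy example. The sketch below is not the SVIn/OKVIS code: the weights, the stand-in "visual" terms, and the z-down depth convention are all assumptions. It only shows how a depth reading can enter a non-linear least-squares solve as one extra residual that constrains the vertical component of the pose.

    # Illustrative sketch (not the SVIn implementation): a pressure-derived
    # depth reading added as one more residual in a least-squares pose estimate.
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(pose, visual_targets, depth_meas, w_visual=1.0, w_depth=5.0):
        """pose = [x, y, z]; visual_targets stand in for reprojection terms,
        depth_meas is the pressure-sensor depth (z axis pointing down)."""
        r = []
        for t in visual_targets:                    # stand-in for visual residuals
            r.extend(w_visual * (pose - t))
        r.append(w_depth * (pose[2] - depth_meas))  # depth constrains only z
        return np.asarray(r)

    # toy usage: two noisy "visual" position estimates plus one depth reading
    targets = [np.array([1.0, 2.0, 9.7]), np.array([1.1, 1.9, 10.4])]
    sol = least_squares(residuals, x0=np.zeros(3), args=(targets, 10.0))
    print(sol.x)  # z is pulled toward the 10.0 m depth measurement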

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which, in addition to navigation, provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments. We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and, for some applications, an imaging sonar.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
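
    To make the planar-feature idea concrete, the sketch below shows the kind of point-to-plane residual such a factor can encode; the (unit normal, offset) plane parameterization and the toy data are assumptions for illustration, not the thesis implementation.

    # Illustrative point-to-plane residual of the kind a planar-feature factor
    # in a SLAM factor graph might encode (parameterization assumed).
    import numpy as np

    def point_to_plane_residuals(points_world, normal, offset):
        """Signed distances of sparse range returns (world frame) to the plane
        {x : normal . x + offset = 0}; near-zero residuals mean the points are
        well explained by the planar segment."""
        n = normal / np.linalg.norm(normal)
        return points_world @ n + offset

    # toy usage: points roughly on the plane z = 3 (normal [0, 0, 1], offset -3)
    pts = np.array([[0.0, 1.0, 3.02], [2.0, -1.0, 2.97], [4.0, 0.5, 3.01]])
    print(point_to_plane_residuals(pts, np.array([0.0, 0.0, 1.0]), -3.0))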

    Toward autonomous exploration in confined underwater environments

    Author Posting. © The Author(s), 2015. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 33 (2016): 994-1012, doi:10.1002/rob.21640.
    In this field note we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex, using an autonomous underwater vehicle (AUV). For this experiment the AUV was equipped with two acoustic sonars to simultaneously map the caves' horizontal and vertical surfaces. Although the caves' spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan matching algorithm in a simultaneous localization and mapping (SLAM) framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable for AUV exploration in confined underwater environments where surfacing or pre-deployment of localization equipment is not feasible, and may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.
    This research work was partially sponsored by the EU FP7 projects Tecniospring-Marie Curie (TECSPR13-1-0052), MORPH (FP7-ICT-2011-7-288704), and Eurofleets2 (FP7-INF-2012-312762), and by the National Science Foundation (OCE-0955674).
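
    For readers unfamiliar with scan matching, the sketch below shows a generic 2D point-to-point ICP step of the sort such SLAM frameworks build on; the nearest-neighbour association and SVD-based alignment are textbook choices, not the specific algorithm evaluated in this field note.

    # Generic 2D point-to-point ICP sketch; not the algorithm from the field note.
    import numpy as np

    def icp_2d(src, dst, iters=20):
        """Align scan `src` (Nx2) to reference scan `dst` (Mx2); returns R, t."""
        R, t = np.eye(2), np.zeros(2)
        for _ in range(iters):
            moved = src @ R.T + t
            # nearest neighbour in dst for every transformed source point
            nn = dst[np.argmin(np.linalg.norm(moved[:, None] - dst[None], axis=2), axis=1)]
            mu_s, mu_d = moved.mean(0), nn.mean(0)
            U, _, Vt = np.linalg.svd((moved - mu_s).T @ (nn - mu_d))
            dR = Vt.T @ U.T
            if np.linalg.det(dR) < 0:   # guard against a reflection solution
                Vt[-1] *= -1
                dR = Vt.T @ U.T
            R, t = dR @ R, dR @ (t - mu_s) + mu_d
        return R, t

    # toy usage: dst is src rotated by 10 degrees and shifted
    th = np.deg2rad(10.0)
    R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    src = np.random.rand(50, 2)
    dst = src @ R_true.T + np.array([0.3, -0.1])
    R_est, t_est = icp_2d(src, dst)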

    Towards High-resolution Imaging from Underwater Vehicles

    Large-area mapping at high resolution underwater continues to be constrained by sensor-level environmental constraints and the mismatch between available navigation and sensor accuracy. In this paper, advances are presented that exploit aspects of the sensing modality, and the consistency and redundancy within local sensor measurements, to build high-resolution optical and acoustic maps that are a consistent representation of the environment. This work is presented in the context of real-world data acquired using autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs) working in diverse applications, including shallow-water coral reef surveys with the Seabed AUV, a forensic survey of the RMS Titanic in the North Atlantic at a depth of 4100 m using the Hercules ROV, and a survey of the TAG hydrothermal vent area in the mid-Atlantic at a depth of 3600 m using the Jason II ROV. Specifically, the focus is on the related problems of structure from motion from underwater optical imagery assuming pose-instrumented calibrated cameras. General wide-baseline solutions are presented for these problems based on the extension of techniques from the simultaneous localization and mapping (SLAM), photogrammetry, and computer vision communities. The paper also examines how such techniques can be extended for the very different sensing modality and scale associated with multi-beam bathymetric mapping. For both the optical and acoustic mapping cases, it is also shown how the consistency in mapping can be used not only for better global mapping, but also to refine navigation estimates.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86051/1/hsingh-21.pd
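
    A basic building block of structure from motion with pose-instrumented calibrated cameras is linear triangulation from known projection matrices. The sketch below is an illustrative textbook (DLT) version; the toy camera poses are assumed, not taken from the surveys described above.

    # Linear (DLT) triangulation of one feature from two pose-instrumented,
    # calibrated views; an illustrative building block, not the paper's pipeline.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices K [R | t] from navigation + calibration.
        x1, x2: matched pixel coordinates (u, v). Returns the 3D point."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    # toy usage: identity intrinsics, second camera offset 1 m along x
    K = np.eye(3)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.5, 0.2, 4.0, 1.0])
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, x1, x2))  # ~[0.5, 0.2, 4.0]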

    A Survey of Positioning Systems Using Visible LED Lights

    © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    As the Global Positioning System (GPS) cannot provide satisfactory performance in indoor environments, indoor positioning technology, which utilizes indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light-emitting diodes (LEDs) has been deemed a promising candidate in heterogeneous wireless networks that may collaborate with radio-frequency (RF) wireless networks. In particular, light fidelity (LiFi) has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of this system are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. This paper undertakes a thorough investigation into current LED-based indoor positioning systems and compares their performance across many aspects, such as test environment, accuracy, and cost. It also presents indoor hybrid positioning systems that combine VLC with other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, this paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
    Peer reviewed
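
    Many of the surveyed systems reduce to received-signal-strength (RSS) trilateration under a Lambertian line-of-sight channel model. The sketch below shows that textbook pipeline; all parameters (Lambertian order, detector area, ceiling height, transmit power) and the flat-ceiling geometry are assumed purely for illustration.

    # Textbook RSS trilateration sketch for LED positioning (parameters assumed):
    # 1) invert a Lambertian line-of-sight model to get the distance to each LED,
    # 2) solve the linearized trilateration system for the receiver (x, y).
    import numpy as np

    M, AREA, H, PT = 1.0, 1e-4, 2.5, 1.0  # Lambertian order, detector area (m^2),
                                          # LED height above receiver (m), Tx power (W)

    def rss_to_distance(p_r):
        """Invert P_r = PT*(M+1)*AREA*H**(M+1) / (2*pi*d**(M+3)) for distance d."""
        return (PT * (M + 1) * AREA * H ** (M + 1) / (2 * np.pi * p_r)) ** (1.0 / (M + 3))

    def trilaterate(leds, dists):
        """leds: Nx2 ceiling LED (x, y) positions; dists: N receiver-to-LED distances."""
        planar = np.sqrt(np.maximum(dists ** 2 - H ** 2, 0.0))  # horizontal ranges
        A = 2 * (leds[1:] - leds[0])
        b = (planar[0] ** 2 - planar[1:] ** 2
             + np.sum(leds[1:] ** 2, axis=1) - np.sum(leds[0] ** 2))
        return np.linalg.lstsq(A, b, rcond=None)[0]

    # toy usage: receiver at (1.0, 2.0), three ceiling LEDs
    leds = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
    true_xy = np.array([1.0, 2.0])
    d3 = np.sqrt(np.sum((leds - true_xy) ** 2, axis=1) + H ** 2)
    p_r = PT * (M + 1) * AREA * H ** (M + 1) / (2 * np.pi * d3 ** (M + 3))
    print(trilaterate(leds, rss_to_distance(p_r)))  # ~[1.0, 2.0]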

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community well aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation, which this paper also covers. Future work and research gaps are also discussed.

    3D reconstruction and object recognition from 2D SONAR data

    Accurate and meaningful representations of the environment are required for autonomy in underwater applications. Thanks to favourable propagation properties in water, acoustic sensors are commonly preferred to video cameras and lasers, but they do not provide direct 3D information. This thesis addresses the 3D reconstruction of underwater scenes from 2D imaging SONAR data as well as the recognition of objects of interest in the reconstructed scene. We present two 3D reconstruction methods and two model-based object recognition methods. We evaluate our algorithms on multiple scenarios, including data gathered by an AUV. We show the ability to reconstruct underwater environments at centimetre-level accuracy using 2D SONARs of any aperture. We demonstrate the recognition of structures of interest in a medium-sized oil-field-type environment, providing accurate yet low-memory-footprint semantic world models. We conclude that accurate 3D semantic representations of partially structured marine environments can be obtained from commonly embedded 2D SONARs, enabling online world modelling, relocalisation, and model-based applications.
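
    The core difficulty these methods address is that a single 2D imaging SONAR return fixes range and bearing but not elevation, so each detection corresponds to an arc of candidate 3D points within the vertical aperture. The sketch below (frame convention and aperture value assumed for illustration) makes that ambiguity explicit.

    # Illustrative sketch of the 2D imaging-sonar ambiguity: one range/bearing
    # return maps to an arc of candidate 3D points across the vertical aperture.
    import numpy as np

    def sonar_return_to_arc(rng, bearing, aperture_deg=20.0, n=11):
        """Candidate 3D points (sonar frame: x forward, y right, z down) for one
        range/bearing detection, sampled across the unknown elevation angle."""
        elev = np.deg2rad(np.linspace(-aperture_deg / 2, aperture_deg / 2, n))
        x = rng * np.cos(elev) * np.cos(bearing)
        y = rng * np.cos(elev) * np.sin(bearing)
        z = rng * np.sin(elev)
        return np.stack([x, y, z], axis=1)

    # toy usage: a 5 m return at 15 degrees bearing
    print(sonar_return_to_arc(5.0, np.deg2rad(15.0))[:3])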