395 research outputs found

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
    We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
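    The planar-feature idea above can be illustrated with a minimal sketch (not the thesis's actual implementation, and with fabricated stand-in data): fit a plane to a sparse, DVL-style point cloud by least squares and compute the signed point-to-plane residuals, which are the quantity a plane factor in such a factor graph would penalize.

    ```python
    import numpy as np

    def fit_plane(points):
        """Least-squares plane fit: returns unit normal n and offset d with n.p = d."""
        centroid = points.mean(axis=0)
        # SVD of the centered points; the right singular vector with the
        # smallest singular value is the plane normal.
        _, _, vt = np.linalg.svd(points - centroid)
        n = vt[-1]
        return n, n @ centroid

    rng = np.random.default_rng(0)
    # Sparse, noisy samples from the plane z = 0.1x + 0.2y + 1 (a stand-in
    # for the handful of range returns a DVL provides per ping).
    xy = rng.uniform(-2, 2, size=(30, 2))
    z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 30)
    pts = np.column_stack([xy, z])

    n, d = fit_plane(pts)
    residuals = pts @ n - d          # signed point-to-plane distances
    rms = np.sqrt(np.mean(residuals ** 2))
    print(f"RMS point-to-plane error: {rms:.4f} m")
    ```

    In a full factor graph the plane parameters (n, d) would themselves be variables jointly optimized with the vehicle poses; here they are simply fit once to show the residual model.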

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005.
    This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix.
    In summary, this thesis advances the current state-of-the-art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception.
    This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
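    The exact-sparsity claim can be illustrated with a toy sketch (a minimal 1-D setup of my own choosing, not the thesis code): in the Gaussian information (canonical) form, a relative-pose constraint between poses i and j touches only the four matrix blocks linking i and j, so a view-based map whose measurements are odometry links plus a few camera-derived loop closures stays exactly sparse with no pruning.

    ```python
    import numpy as np

    n = 6                      # number of delayed-state poses (1-D for clarity)
    L = np.zeros((n, n))       # information (inverse covariance) matrix

    def add_relative_constraint(L, i, j, w):
        """A relative measurement x_j - x_i with information weight w updates
        only the (i,i), (i,j), (j,i), (j,j) entries -- no other fill-in."""
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w

    L[0, 0] += 100.0                       # prior on the first pose
    for k in range(n - 1):                 # odometry chain
        add_relative_constraint(L, k, k + 1, 10.0)
    add_relative_constraint(L, 0, 5, 5.0)  # one camera-derived loop closure

    nonzero = np.count_nonzero(L)
    print(f"nonzero entries: {nonzero} of {n * n}")
    ```

    With 6 poses, a 5-link odometry chain, and one loop closure, only 18 of the 36 entries are nonzero; every remaining zero is exact, which is precisely what feature-based information filters must approximate away.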

    Intervention AUVs: The Next Challenge

    While commercially available AUVs are routinely used in survey missions, a new set of applications exists that clearly demands intervention capabilities. A few examples are the maintenance of permanent underwater observatories, submerged oil wells, cabled sensor networks, and pipes, as well as the deployment and recovery of benthic stations. These tasks are addressed nowadays using manned submersibles or work-class ROVs equipped with teleoperated arms under human supervision. Although researchers have recently opened the door to future I-AUVs, a long path is still necessary to achieve autonomous underwater interventions. This paper reviews the evolution timeline of autonomous underwater intervention systems. Milestone projects in the state of the art are reviewed, highlighting their principal contributions to the field. To the best of the authors' knowledge, only three vehicles have demonstrated some autonomous intervention capabilities so far: ALIVE, SAUVIM, and GIRONA 500, the last being the lightest. In this paper the GIRONA 500 I-AUV is presented and its software architecture discussed. Recent results in different scenarios are reported: 1) valve turning and connector plugging/unplugging while docked to a subsea panel, 2) free-floating valve turning using learning by demonstration, and 3) multipurpose free-floating object recovery. The paper ends by discussing the lessons learned so far.

    Place Recognition and Localization for Multi-Modal Underwater Navigation with Vision and Acoustic Sensors

    Place recognition and localization are important topics in both robotic navigation and computer vision. They are a key prerequisite for simultaneous localization and mapping (SLAM) systems, and also important for long-term robot operation when registering maps generated at different times. The place recognition and relocalization problem is more challenging in the underwater environment because of four main factors: 1) changes in illumination; 2) long-term changes in the physical appearance of features in the aqueous environment attributable to biofouling and the natural growth, death, and movement of living organisms; 3) low density of reliable visual features; and 4) low visibility in a turbid environment. There is no one perceptual modality for underwater vehicles that can single-handedly address all the challenges of underwater place recognition and localization. This thesis proposes novel research in place recognition methods for underwater robotic navigation using both acoustic and optical imaging modalities. We develop robust place recognition algorithms using both optical cameras and a Forward-looking Sonar (FLS) for an active visual SLAM system that addresses the challenges mentioned above. We first design an optical image matching algorithm using high-level features to evaluate image similarity against dramatic appearance changes and low image feature density. A localization algorithm is then built upon this method combining both image similarity and measurements from other navigation sensors, which enables a vehicle to localize itself to maps temporally separated over the span of years. Next, we explore the potential of FLS in the place recognition task. The weak feature texture and high noise level in sonar images increase the difficulty in making correspondences among them. We learn descriptive image-level features using a convolutional neural network (CNN) with the data collected for our ship hull inspection mission. 
    These features present outstanding performance in sonar image matching, which can be used for effective loop-closure proposal for SLAM as well as multi-session SLAM registration. Building upon this, we propose a pre-linearization approach to leverage this type of general high-dimensional abstracted feature in a real-time recursive Bayesian filtering framework, which results in the first real-time recursive localization framework using this modality. Finally, we propose a novel pose-graph SLAM algorithm leveraging FLS as the perceptual sensor providing constraints for drift correction. In this algorithm, we address practical problems that arise when using an FLS for SLAM, including feature sparsity and low reliability in data association and geometry estimation. More specifically, we propose a novel approach to pruning out less-informative sonar frames, which improves system efficiency and reliability. We also employ local bundle adjustment to optimize the geometric constraints between sonar frames and use this mechanism to avoid degenerate motion patterns. All the proposed contributions are evaluated with real data collected for ship hull inspection. The experimental results outperform existing benchmarks. The culmination of these contributions is a system capable of performing underwater SLAM with both optical and acoustic imagery gathered across years under challenging imaging conditions.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/140835/1/ljlijie_1.pd
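    The image-level-descriptor idea can be sketched as follows. This is a hedged illustration, not the thesis's CNN: the descriptors here are fabricated random vectors standing in for network outputs, with frame 4 deliberately made a near-copy of frame 1 to simulate a revisit. Loop-closure proposal then reduces to thresholding cosine similarity between L2-normalized descriptors.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in "CNN" descriptors: in the thesis these come from a network
    # trained on ship-hull sonar data; here we fabricate 128-D vectors.
    desc = rng.normal(size=(5, 128))
    desc[4] = desc[1] + rng.normal(scale=0.05, size=128)  # frame 4 revisits frame 1

    d = desc / np.linalg.norm(desc, axis=1, keepdims=True)  # L2-normalize
    sim = d @ d.T                                           # cosine similarity

    i = 4
    candidates = [j for j in range(i) if sim[i, j] > 0.9]   # loop-closure proposals
    print("loop-closure candidates for frame 4:", candidates)
    ```

    Unrelated high-dimensional random descriptors have cosine similarity near zero, so a single threshold cleanly separates the revisited frame; a real system would verify each proposal geometrically before adding a SLAM constraint.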

    Exactly Sparse Delayed-State Filters for View-Based SLAM

    This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to a place it has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as the sparse extended information filter or the thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information space parameterization without incurring any sparse approximation error. Therefore, it can produce results equivalent to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86062/1/reustice-25.pd
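    The equivalence to the full-covariance solution can be checked numerically with a toy example (my own two-state Gaussian, not data from the paper): a measurement update carried out in covariance (Kalman) form and in information (canonical) form yields the same posterior mean and covariance, since the two parameterizations are exact algebraic duals.

    ```python
    import numpy as np

    # Toy 2-state Gaussian prior.
    Sigma = np.array([[2.0, 0.5],
                      [0.5, 1.0]])
    mu = np.array([1.0, -1.0])

    H = np.array([[1.0, 0.0]])   # observe the first state only
    R = np.array([[0.25]])       # measurement noise covariance
    z = np.array([1.4])          # the measurement

    # Covariance (Kalman) form.
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)
    mu_kf = mu + K @ (z - H @ mu)
    Sigma_kf = (np.eye(2) - K @ H) @ Sigma

    # Information (canonical) form: Lambda += H^T R^-1 H, eta += H^T R^-1 z.
    Lam = np.linalg.inv(Sigma)
    eta = Lam @ mu
    Lam_if = Lam + H.T @ np.linalg.inv(R) @ H
    eta_if = eta + H.T @ np.linalg.inv(R) @ z
    mu_if = np.linalg.solve(Lam_if, eta_if)

    print(np.allclose(mu_kf, mu_if), np.allclose(Sigma_kf, np.linalg.inv(Lam_if)))
    ```

    The information-form update is additive and, for relative-pose measurements, purely local in the matrix, which is what makes the delayed-state formulation both exact and sparse.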

    Computer vision applied to underwater robotics


    Motion stereo at sea: Dense 3D reconstruction from image sequences monitoring conveyor systems on board fishing vessels

    A system that reconstructs 3D models from a single camera monitoring fish transported on a conveyor system is investigated. Models are subsequently used for training a species classifier and for improving estimates of discarded biomass. It is demonstrated that a monocular camera, combined with a conveyor's linear motion, produces a constrained form of multiview structure from motion that allows the 3D scene to be reconstructed using a conventional stereo pipeline analogous to that of a binocular camera. Although motion stereo was proposed several decades ago, the present work is the first to compare the accuracy and precision of monocular and binocular stereo cameras monitoring conveyors and to operationally deploy such a system. The system exploits convolutional neural networks (CNNs) for foreground segmentation and stereo matching. Results from a laboratory model show that when the camera is mounted 750 mm above the conveyor, a median accuracy of <5 mm can be achieved with an equivalent baseline of 62 mm. The precision is largely limited by error in determining the equivalent baseline (i.e. the distance travelled by the conveyor belt). When ArUco markers are placed on the belt, the interquartile range (IQR) of error in z (depth) near the optical centre was found to be ±4 mm.
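    The motion-stereo geometry reduces to ordinary depth from disparity once the belt travel is treated as the baseline. A small sketch of the arithmetic, using the paper's 750 mm camera height and 62 mm equivalent baseline but with an assumed focal length and disparity of my own choosing, also shows why baseline error dominates the precision:

    ```python
    # Depth from disparity for a conveyor "motion stereo" pair:
    # z = f * b / d, where b is the belt travel between frames (equivalent baseline).
    f = 1400.0    # focal length in pixels (illustrative assumption)
    b = 62.0      # equivalent baseline in mm (belt travel between the two frames)
    d = 115.7     # measured disparity in pixels (illustrative assumption)

    z = f * b / d
    print(f"depth: {z:.1f} mm")

    # Sensitivity to baseline error: since z is proportional to b, dz/z = db/b,
    # so a 1 mm error in the measured belt travel shifts a ~750 mm depth by ~12 mm.
    db = 1.0
    dz = z * db / b
    print(f"depth error from {db} mm baseline error: {dz:.1f} mm")
    ```

    This is consistent with the paper's observation that precision is limited by how well the belt travel is known, and with the use of ArUco markers on the belt to pin that quantity down.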