99 research outputs found

    Information-Preserved Blending Method for Forward-Looking Sonar Mosaicing in Non-Ideal System Configuration

    Forward-Looking Sonar (FLS) has started to gain attention in near-bottom, close-range underwater inspection because of its high resolution and high frame rate. Although Automatic Target Recognition (ATR) algorithms have been applied tentatively to object-searching tasks, human supervision remains indispensable, especially in critical areas. A clear FLS mosaic containing all suspicious information is therefore needed to help experts cope with the tremendous volume of perception data. However, previous work assumed that the FLS operates in an ideal system configuration, i.e., with an appropriate sonar imaging setup and accurate positioning data available. Without these guarantees, intra-frame and inter-frame artifacts appear and degrade the quality of the final mosaic, rendering the information of interest invisible. In this paper, we propose a novel blending method for FLS mosaicing that preserves the information of interest. A Long-Short Time Sliding Window (LST-SW) is designed to rectify the local statistics of raw sonar images. These statistics are then used to construct a Global Variance Map (GVM). In the blending phase, the GVM emphasizes the useful information contained in the images by classifying pixels as informative or featureless, thereby enhancing the quality of the final mosaic. The method is verified using data collected in a real environment. The results show that our method preserves more detail in FLS mosaics for human inspection in practice.
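    As a loose illustration of the variance-map idea (a sketch under my own assumptions, not the paper's LST-SW/GVM algorithm), per-pixel variance across a stack of co-registered frames can classify pixels as informative or featureless and steer the blend:

    ```python
    import numpy as np

    def global_variance_map(frames):
        """Per-pixel variance across a stack of co-registered sonar frames;
        high variance is taken as a proxy for informative content."""
        return np.stack(frames).astype(np.float64).var(axis=0)

    def blend(frames, gvm, thresh):
        """Blend co-registered frames: keep the strongest response where
        pixels are informative, fall back to the average elsewhere.
        (Illustrative rule only, not the paper's blending method.)"""
        stack = np.stack(frames).astype(np.float64)
        return np.where(gvm > thresh, stack.max(axis=0), stack.mean(axis=0))
    ```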

    Imaging sonar simulator for assessment of image registration techniques


    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which, in addition to navigation, provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and, for some applications, an imaging sonar.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
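To illustrate the planar-feature idea, here is a minimal least-squares plane fit to a sparse 3D point cloud such as a DVL produces (a sketch only; the thesis arranges such planar segments in a factor graph, which this snippet does not attempt):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane n . x = d to an Nx3 point array.
    The singular vector with the smallest singular value of the
    centered points is the plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]               # unit normal
    d = float(n @ centroid)  # signed offset from the origin
    return n, d
```

Residuals `points @ n - d` then give the per-point fit error that a probabilistic formulation would weight appropriately.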

    3D reconstruction and motion estimation using forward looking sonar

    Autonomous Underwater Vehicles (AUVs) are increasingly used in different domains, including archaeology, the oil and gas industry, coral reef monitoring, harbour security, and mine countermeasure missions. As electromagnetic signals do not penetrate the underwater environment, GPS cannot be used for AUV navigation, and optical cameras have a very short range underwater, which limits their use in most underwater environments. Motion estimation for AUVs is a critical requirement for successful vehicle recovery and meaningful data collection. Classical inertial sensors, usually used for AUV motion estimation, suffer from large drift error; accurate inertial sensors, on the other hand, are very expensive, which limits their deployment to costly AUVs. Furthermore, acoustic positioning systems (APS) used for AUV navigation require costly installation and calibration, and offer poor resolution. Underwater 3D imaging is another challenge in the AUV industry, as 3D information is increasingly demanded for different AUV missions. Different systems have been proposed for underwater 3D imaging, such as planar-array sonar and T-configured 3D sonar. While the former generally features good resolution, it is very expensive and requires huge computational power; the latter is cheaper to implement but requires a long time for a full 3D scan, even at short ranges. In this thesis, we aim to tackle AUV motion estimation and underwater 3D imaging by proposing relatively affordable methodologies and by studying the different parameters affecting their performance. We introduce a new motion estimation framework for AUVs that relies on successive acoustic images to infer AUV ego-motion. We also propose an Acoustic Stereo Imaging (ASI) system for underwater 3D reconstruction based on forward-looking sonars; the proposed system is cheaper to implement than planar-array sonars and solves the delay problem of T-configured 3D sonars.
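    As a much-simplified illustration of estimating ego-motion from successive acoustic images (an assumption-laden sketch, not the framework proposed in the thesis), classical phase correlation recovers a pure integer translation between two frames:

    ```python
    import numpy as np

    def phase_correlation(a, b):
        """Estimate the integer (dy, dx) such that b is approximately
        np.roll(a, (dy, dx), axis=(0, 1)), via the cross-power spectrum."""
        f = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
        f /= np.abs(f) + 1e-12          # keep phase only
        corr = np.fft.ifft2(f).real
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        # map shifts past half the image size back to negative values
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return int(dy), int(dx)
    ```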

    Deep-sea image processing

    High-resolution seafloor mapping often requires optical methods of sensing to confirm interpretations made from sonar data. Optical digital imagery of seafloor sites can now provide very high resolution and also provides additional cues, such as color information for sediments, biota, and diverse rock types. During cruise AT11-7 of the Woods Hole Oceanographic Institution (WHOI) vessel R/V Atlantis (February 2004, East Pacific Rise), visual imagery was acquired from three sources: (1) a digital still down-looking camera mounted on the submersible Alvin, (2) observer-operated 1- and 3-chip video cameras with tilt and pan capabilities mounted on the front of Alvin, and (3) a digital still camera on the WHOI TowCam (Fornari, 2003). Imagery from the first source collected on a previous cruise (AT7-13) to the Galapagos Rift at 86°W was successfully processed and mosaicked post-cruise, resulting in a single image covering an area of about 2000 sq. m with a resolution of 3 mm per pixel (Rzhanov et al., 2003). This paper addresses the issues of optimal acquisition of visual imagery in deep-sea conditions and the requirements for on-board processing. Shipboard processing of digital imagery allows for reviewing collected imagery immediately after the dive, evaluating its importance, optimizing acquisition parameters, and augmenting acquisition of data over specific sites on subsequent dives. Images from the DeepSea Power and Light (DSPL) digital camera offer the best resolution (3.3 megapixels) and are taken at an interval of 10 seconds (determined by the strobe's recharge rate). This makes the images suitable for mosaicking only when Alvin moves slowly (≪1/4 kt), which is not always possible for time-critical missions. Video cameras provided a source of imagery more suitable for mosaicking, despite their inferior resolution. We discuss the required pre-processing and image enhancement techniques and their influence on the interpretation of mosaic content.
    An algorithm for determining camera tilt parameters from the acquired imagery is proposed, and its robustness conditions are discussed.

    Underwater Motion Estimation Based on Acoustic Images and Deep Learning

    This work develops techniques to estimate the motion of an underwater vehicle by processing acoustic images using deep learning (DL). For this, an underwater sonar simulator based on ray tracing is designed and implemented. The simulator provides the ground-truth data to train and validate the proposed techniques. Several DL networks are implemented and compared to identify the most suitable for motion estimation using sonar images. The DL methods showed a much lower computation time and more accurate motion estimates compared to a deterministic algorithm. Further improvements of the DL methods are investigated by preprocessing the data before feeding it to the DL network. One technique converts sonar images into vectors by summing the pixels in each row; this reduces the size of the DL networks and cuts computation time by up to a factor of 10 compared to techniques that use full images. Another preprocessing technique divides the field of view (FoV) of a simulated sonar into four quadrants and generates an image from each quadrant. This is combined with the vector technique by converting the images into vectors and grouping them together as the input of the DL network. The FoV division approach showed high accuracy compared to using the whole FoV or different portions of it. A further motion estimation method presented in this work is enabled by full-duplex operation: rather than using images, it is based on DL analysis of the time variation of complex-valued channel impulse responses. This technique can significantly reduce the acoustic hardware and the processing complexity of the DL network, and it achieves higher motion estimation accuracy than techniques based on the processing of sonar images. The navigation accuracy of all the techniques is further illustrated by examples of estimating complex trajectories using simulated and real data.
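    The row-sum vectorisation and FoV-division preprocessing can be sketched as below; the function names are mine, and image quadrants stand in for the simulator's angular FoV sectors:

    ```python
    import numpy as np

    def image_to_vector(img):
        """Collapse a sonar image to a 1-D vector by summing the pixels
        in each row, shrinking the DL network's input size."""
        return img.sum(axis=1)

    def quadrant_vectors(img):
        """Split the image into four quadrants, vectorise each, and
        concatenate the results as one network input."""
        h2, w2 = img.shape[0] // 2, img.shape[1] // 2
        quads = (img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:])
        return np.concatenate([image_to_vector(q) for q in quads])
    ```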

    Place Recognition and Localization for Multi-Modal Underwater Navigation with Vision and Acoustic Sensors

    Place recognition and localization are important topics in both robotic navigation and computer vision. They are a key prerequisite for simultaneous localization and mapping (SLAM) systems, and also important for long-term robot operation when registering maps generated at different times. The place recognition and relocalization problem is more challenging in the underwater environment because of four main factors: 1) changes in illumination; 2) long-term changes in the physical appearance of features in the aqueous environment attributable to biofouling and the natural growth, death, and movement of living organisms; 3) low density of reliable visual features; and 4) low visibility in a turbid environment. There is no one perceptual modality for underwater vehicles that can single-handedly address all the challenges of underwater place recognition and localization. This thesis proposes novel research in place recognition methods for underwater robotic navigation using both acoustic and optical imaging modalities. We develop robust place recognition algorithms using both optical cameras and a Forward-looking Sonar (FLS) for an active visual SLAM system that addresses the challenges mentioned above. We first design an optical image matching algorithm using high-level features to evaluate image similarity against dramatic appearance changes and low image feature density. A localization algorithm is then built upon this method combining both image similarity and measurements from other navigation sensors, which enables a vehicle to localize itself to maps temporally separated over the span of years. Next, we explore the potential of FLS in the place recognition task. The weak feature texture and high noise level in sonar images increase the difficulty in making correspondences among them. We learn descriptive image-level features using a convolutional neural network (CNN) with the data collected for our ship hull inspection mission. 
These features present outstanding performance in sonar image matching and can be used for effective loop-closure proposal in SLAM as well as for multi-session SLAM registration. Building upon this, we propose a pre-linearization approach to leverage this type of general, high-dimensional abstracted feature in a real-time recursive Bayesian filtering framework, which results in the first real-time recursive localization framework using this modality. Finally, we propose a novel pose-graph SLAM algorithm leveraging FLS as the perceptual sensor providing constraints for drift correction. In this algorithm, we address practical problems that arise when using an FLS for SLAM, including feature sparsity and low reliability in data association and geometry estimation. More specifically, we propose a novel approach to pruning out less-informative sonar frames, which improves system efficiency and reliability. We also employ local bundle adjustment to optimize the geometric constraints between sonar frames and use this mechanism to avoid degenerate motion patterns. All the proposed contributions are evaluated with real data collected for ship hull inspection, and the experimental results outperform existing benchmarks. The culmination of these contributions is a system capable of performing underwater SLAM with both optical and acoustic imagery gathered across years under challenging imaging conditions.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/140835/1/ljlijie_1.pd
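The loop-closure proposal step, matching image-level CNN descriptors by similarity, can be sketched generically (cosine similarity, the ranking, and the thresholds here are my assumptions rather than the thesis's exact procedure):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def propose_loop_closures(descriptors, query, top_k=3, min_sim=0.8):
    """Rank stored frame descriptors against a query descriptor and
    return indices of sufficiently similar loop-closure candidates."""
    sims = [cosine_similarity(d, query) for d in descriptors]
    ranked = np.argsort(sims)[::-1][:top_k]
    return [int(i) for i in ranked if sims[i] >= min_sim]
```

Each proposed candidate would then be verified geometrically before being added as a constraint in the pose graph.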

    Use of a single reference image in visual processing of polyhedral objects.

    He Yong. Thesis (M.Phil.), Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 69-72). Abstracts in English and Chinese.
    Contents: Abstract; Acknowledgements; Table of Contents; List of Figures; List of Tables; Chapter 1, Introduction; Chapter 2, Preliminary; Chapter 3, Image Mosaicing for Singly Visible Surfaces (Background; Correspondence Inference Mechanism; Seamless Lining up of Surface Boundary; Experimental Result; Summary of Image Mosaicing Work); Chapter 4, Mobile Robot Self-Localization from Monocular Vision (Background; Problem Definition; Our Strategy of Localizing the Mobile Robot, covering Establishing Correspondences, Determining Position from Factorizing E-matrix, and Improvement on the Factorization Result; Experimental Result; Summary of Mobile Robot Self-localization Work); Chapter 5, Conclusion and Future Work; Appendix; Bibliography.
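    The "Determining Position from Factorizing E-matrix" step in Chapter 4 refers to the standard SVD factorization of an essential matrix into candidate rotation/translation pairs, which can be sketched as follows (the cheirality check that selects the single valid pair is omitted):

    ```python
    import numpy as np

    def decompose_essential(E):
        """Factor an essential matrix into its four candidate (R, t) pairs.
        In practice the physically valid pair is selected by triangulating
        a point and checking that it lies in front of both cameras."""
        U, _, Vt = np.linalg.svd(E)
        # flip signs so both orthogonal factors are proper rotations
        if np.linalg.det(U) < 0:
            U = -U
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        R1, R2 = U @ W @ Vt, U @ W.T @ Vt
        t = U[:, 2]  # translation direction, up to sign and scale
        return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
    ```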