    Attitude-trajectory estimation for forward looking multi-beam sonar based on acoustic image registration

    This work considers the processing of acoustic data from a multi-beam Forward Looking Sonar (FLS) on a moving underwater platform to estimate the platform's attitude and trajectory. We propose an algorithm that estimates the attitude-trajectory of an FLS from the optical flow between consecutive sonar frames. The attitude-trajectory can be used to locate an underwater platform, such as an Autonomous Underwater Vehicle (AUV), to a degree of accuracy suitable for navigation, and to build a mosaic of the underwater scene. The estimation is performed in three steps. First, a selection of techniques based on the optical-flow model is used to estimate a pixel displacement map (DM) between consecutive sonar frames represented in the native polar (range/bearing) format. Second, the estimated DM is matched against DMs for a set of modeled sonar sensor motions; to reduce complexity, the DM is described by a small parameter vector derived from the displacement distribution, yielding an estimate of the incremental sensor motion between frames. Finally, using a weighted regularized spline technique, the incremental inter-frame motions are integrated into an attitude-trajectory for the sonar sensor. To assess the accuracy of the attitude-trajectory estimate, it is used to register FLS frames from a field-experiment dataset and build a high-quality mosaic of the underwater scene.
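    The first step described above, registering consecutive polar-format frames, can be sketched in a minimal form. The sketch below reduces the dense displacement map to a single global integer shift recovered by phase correlation; the paper itself estimates a dense per-pixel DM with optical-flow techniques, and the function name here is illustrative, not taken from the paper.

```python
import numpy as np

def phase_correlation_shift(prev_frame, next_frame):
    """Estimate the dominant integer pixel shift between two consecutive
    sonar frames (polar range/bearing rasters) via phase correlation."""
    F1 = np.fft.fft2(prev_frame)
    F2 = np.fft.fft2(next_frame)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # The correlation peak sits at (-shift) mod frame size; unwrap it.
    shift_y = -dy if dy <= h // 2 else h - dy
    shift_x = -dx if dx <= w // 2 else w - dx
    return int(shift_y), int(shift_x)
```

    A dense flow field (e.g. a pyramidal optical-flow method) would replace this single-shift estimate in a full implementation; the global shift only illustrates the frame-to-frame registration idea.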

    On 3-D Motion Estimation From Feature Tracks in 2-D FS Sonar Video

    Visual odometry involves the computation of 3-D motion and/or trajectory by tracking features in video or image sequences recorded by the camera(s) on an autonomous terrestrial, aerial, or marine robotics platform. For exploration, mapping, inspection, and surveillance operations in turbid waters, high-frequency 2-D forward-scan sonar systems offer a significant advantage over cameras by providing both imagery with target details and an attractive tradeoff in range, resolution, and data rate. Operating these at grazing incidence gives larger scene coverage and improved image quality due to the dominance of diffuse backscattered reflectance, but induces cast shadows that are typically more distinct than the brightness patterns produced by the direct reflectance of the casting objects. For the computation of 3-D motion by automatic video processing, estimation accuracy and robustness can be enhanced by integrating the visual cues from shadow dynamics with the image flow of stationary 3-D objects, both induced by sonar motion. In this paper, we present the mathematical models of image flow for 3-D objects and their cast shadows, utilize them in devising various 3-D sonar motion estimation solutions, and study their robustness. We present results of experiments with both synthetic and real data in order to assess the accuracy and performance of these methods.
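    The image-flow models referred to above build on the standard 2-D forward-scan sonar projection, in which a 3-D point maps to a range/bearing pair while elevation is lost. The sketch below shows that projection and the first-order image motion a stationary point exhibits under sonar motion; the constant-velocity motion model and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sonar_project(p):
    """Project a 3-D point (sonar frame) to 2-D forward-scan sonar
    image coordinates (range, bearing); the elevation angle is lost."""
    x, y, z = p
    return np.sqrt(x * x + y * y + z * z), np.arctan2(y, x)

def induced_image_flow(p, omega, v, dt=1.0):
    """Image motion of a stationary point induced by sonar motion with
    angular velocity omega and linear velocity v: in the sonar frame
    the point moves by dp = -(omega x p + v) * dt (first order)."""
    p = np.asarray(p, float)
    dp = -(np.cross(omega, p) + np.asarray(v, float)) * dt
    r0, b0 = sonar_project(p)
    r1, b1 = sonar_project(p + dp)
    return r1 - r0, b1 - b0
```

    For example, pure forward motion shrinks the range coordinate of a point dead ahead while leaving its bearing unchanged, whereas a yaw rotation sweeps bearings at roughly the negative yaw rate.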

    Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey

    The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. The IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportation, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as with intelligent monitoring and security. The IoUT has an influence at various scales, ranging from a small scientific observatory, to a mid-sized harbor, to global oceanic trade. The network architecture of the IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments. This creates major challenges in terms of underwater communications, whilst relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in the IoUT is enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed for automatically learning the specific BMD behavior and features, facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also explores the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques with an emphasis on the state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed.
    Comment: 54 pages, 11 figures, 19 tables; IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.

    Place Recognition and Localization for Multi-Modal Underwater Navigation with Vision and Acoustic Sensors

    Place recognition and localization are important topics in both robotic navigation and computer vision. They are a key prerequisite for simultaneous localization and mapping (SLAM) systems, and also important for long-term robot operation when registering maps generated at different times. The place recognition and relocalization problem is more challenging in the underwater environment because of four main factors: 1) changes in illumination; 2) long-term changes in the physical appearance of features in the aqueous environment attributable to biofouling and the natural growth, death, and movement of living organisms; 3) low density of reliable visual features; and 4) low visibility in a turbid environment. There is no one perceptual modality for underwater vehicles that can single-handedly address all the challenges of underwater place recognition and localization. This thesis proposes novel research in place recognition methods for underwater robotic navigation using both acoustic and optical imaging modalities. We develop robust place recognition algorithms using both optical cameras and a Forward-looking Sonar (FLS) for an active visual SLAM system that addresses the challenges mentioned above. We first design an optical image matching algorithm using high-level features to evaluate image similarity against dramatic appearance changes and low image feature density. A localization algorithm is then built upon this method combining both image similarity and measurements from other navigation sensors, which enables a vehicle to localize itself to maps temporally separated over the span of years. Next, we explore the potential of FLS in the place recognition task. The weak feature texture and high noise level in sonar images increase the difficulty in making correspondences among them. We learn descriptive image-level features using a convolutional neural network (CNN) with the data collected for our ship hull inspection mission. 
These features present outstanding performance in sonar image matching and can be used for effective loop-closure proposal in SLAM as well as for multi-session SLAM registration. Building upon this, we propose a pre-linearization approach to leverage this type of general high-dimensional abstracted feature in a real-time recursive Bayesian filtering framework, resulting in the first real-time recursive localization framework using this modality. Finally, we propose a novel pose-graph SLAM algorithm leveraging FLS as the perceptual sensor providing constraints for drift correction. In this algorithm, we address practical problems that arise when using an FLS for SLAM, including feature sparsity and low reliability in data association and geometry estimation. More specifically, we propose a novel approach to pruning out less-informative sonar frames, which improves system efficiency and reliability. We also employ local bundle adjustment to optimize the geometric constraints between sonar frames and use this mechanism to avoid degenerate motion patterns. All the proposed contributions are evaluated with real data collected for ship hull inspection. The experimental results outperform existing benchmarks. The culmination of these contributions is a system capable of performing underwater SLAM with both optical and acoustic imagery gathered across years under challenging imaging conditions.
PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
https://deepblue.lib.umich.edu/bitstream/2027.42/140835/1/ljlijie_1.pd
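    The descriptor-based loop-closure proposal described above can be illustrated with a small similarity-ranking routine: each frame is summarized by an image-level feature vector (e.g. a CNN embedding), and past frames are ranked by cosine similarity to the query, skipping temporal neighbors. The threshold, temporal gap, and cosine metric below are generic assumptions; the thesis's actual scoring and pre-linearization steps are not reproduced here.

```python
import numpy as np

def propose_loop_closures(descriptors, query_idx, threshold=0.8, min_gap=30):
    """Rank previously seen frames as loop-closure candidates by cosine
    similarity of image-level descriptors, excluding frames that are
    temporally close to the query (trivial matches)."""
    D = np.asarray(descriptors, float)
    q = D[query_idx] / np.linalg.norm(D[query_idx])
    sims = (D / np.linalg.norm(D, axis=1, keepdims=True)) @ q
    cands = [(i, float(s)) for i, s in enumerate(sims)
             if abs(i - query_idx) >= min_gap and s >= threshold]
    return sorted(cands, key=lambda c: -c[1])  # best match first
```

    Accepted candidates would then be verified geometrically (e.g. by the local bundle adjustment mentioned above) before being added as pose-graph constraints.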

    Environment Map Estimation from Sidescan Sonar Data for an Autonomous Underwater Vehicle (original title: Umgebungskartenschätzung aus Sidescan-Sonardaten für ein autonomes Unterwasserfahrzeug)

    For the estimation of height maps from sidescan sonar data, this work makes several contributions: a new estimation method that composes sonar measurements from precomputed sonar responses of basis elements, so-called kernels, and thereby arrives at a height estimate; a three-dimensional method based on Markov Random Fields; and a sidescan sonar simulation environment for arbitrary three-dimensional scenes, which also offers various sonar recording modes and terrain generators.
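    The kernel-composition idea can be illustrated under a linear-superposition assumption: if each precomputed kernel response is a column of a matrix K, the per-kernel height coefficients h that best explain a measured sidescan return m solve a regularized least-squares problem. The abstract does not state that the composition is linear, so this is purely an illustrative sketch with hypothetical names.

```python
import numpy as np

def estimate_heights(kernel_responses, measurement, reg=1e-3):
    """Fit height coefficients h so that K @ h approximates the measured
    sonar return m, via Tikhonov-regularized normal equations:
    (K^T K + reg * I) h = K^T m."""
    K = np.asarray(kernel_responses, float)   # one column per kernel
    m = np.asarray(measurement, float)        # measured return samples
    A = K.T @ K + reg * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ m)
```

    The regularization term stabilizes the solve when kernel responses overlap strongly; a full method would add the non-negativity and smoothness structure that the thesis's Markov Random Field formulation provides.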