1,716 research outputs found
Underwater Exploration and Mapping
This paper analyzes the open challenges of exploring and mapping the underwater realm, with the goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV) to robustly explore different environments. A taxonomy of environments based on their 3D structure is presented, together with an analysis of how that structure influences camera placement. The difference between exploration and coverage is presented, along with how each dictates a different motion strategy. Loop closure, while critical for the accuracy of the resulting map, proves particularly challenging due to the limited field of view and the sensitivity to viewing direction. Experimental results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy. Dense 3D mapping, both online and offline, as well as other sensor configurations, are discussed following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
Multi-Camera System Calibration of a Low-Cost Remotely Operated Vehicle for Underwater Cave Exploration
Exploration, documentation, and mapping of underwater environments are among the biggest open challenges for science and engineering. Humankind is not naturally suited to operating in water and, despite the enormous technological advancement that nowadays offers unprecedented opportunities, diving and working underwater is still very dangerous, especially in confined spaces such as underwater caves. Great research efforts are currently devoted to autonomous underwater navigation, but available solutions still mainly rely on complex and expensive systems, owing to the difficulty of adapting localization and mapping sensors and algorithms designed for terrestrial or aerial applications. However, small and affordable underwater remotely operated vehicles (ROVs) are available and offer good opportunities for underwater exploration and mapping. This paper focuses on the development of a small, low-cost ROV designed for 3D mapping of underwater environments such as caves. The system is based on a commercially available vehicle, the BlueROV2, and relies on up to 12 action cameras (GoPro) mounted on it. A trifocal camera system for underwater real-time visual odometry can also be included. The work describes the photogrammetric procedure developed for the synchronization and calibration of the GoPro cameras and provides a thorough analysis of the achievable results.
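The abstract above mentions synchronizing up to 12 GoPro cameras but does not detail the procedure. One common practical technique for multi-camera synchronization, sketched below under that assumption (the function names and the Gaussian test signals are illustrative, not the paper's photogrammetric method), is cross-correlating the cameras' audio tracks to estimate their relative time offset:

```python
import numpy as np

def estimate_offset(sig_a, sig_b, sample_rate):
    """Estimate the delay (in seconds) of sig_a relative to sig_b by
    normalized cross-correlation of the two cameras' audio tracks."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full")
    lag = corr.argmax() - (len(b) - 1)  # samples by which sig_a trails sig_b
    return lag / sample_rate

# Synthetic example: the same "clap" recorded 0.25 s apart at 1 kHz.
rate = 1000
t = np.arange(0.0, 2.0, 1.0 / rate)
clap = np.exp(-((t - 0.5) ** 2) / 1e-4)
delayed = np.exp(-((t - 0.75) ** 2) / 1e-4)
offset = estimate_offset(delayed, clap, rate)  # approximately 0.25
```

Once per-camera offsets are known, the video streams can be trimmed to a common timeline before calibration.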
An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor
This paper presents a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system with loop-closing and relocalization capabilities targeted at the underwater domain. Our previous work, SVIn, augmented the state-of-the-art visual-inertial state estimation package OKVIS to accommodate acoustic data from sonar in a non-linear optimization-based framework. This paper addresses drift and loss of localization -- one of the main problems affecting other packages in the underwater domain -- by providing the following main contributions: a robust initialization method that refines scale using depth measurements, a fast preprocessing step to enhance image quality, and a real-time loop-closing and relocalization method using a bag of words (BoW). An additional contribution is the addition of depth measurements from a pressure sensor to the tightly-coupled optimization formulation. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle in challenging underwater environments with poor visibility demonstrate performance never achieved before in terms of accuracy and robustness.
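The BoW loop-closing step mentioned above can be sketched in a heavily simplified form. The toy below assumes feature descriptors have already been quantized into visual-word ids and scores candidates by cosine similarity of word histograms; the function names, the plain linear search, and the 0.5 threshold are illustrative assumptions, not the paper's actual BoW database:

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """L2-normalized bag-of-words histogram for one image's
    quantized feature descriptors (word_ids)."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def best_loop_candidate(query_words, keyframe_words, vocab_size, min_score=0.5):
    """Return (index, score) of the past keyframe most similar to the query,
    or (None, score) if nothing scores above min_score."""
    q = bow_histogram(query_words, vocab_size)
    best_i, best_s = None, 0.0
    for i, words in enumerate(keyframe_words):
        s = float(q @ bow_histogram(words, vocab_size))  # cosine similarity
        if s > best_s:
            best_i, best_s = i, s
    return (best_i, best_s) if best_s >= min_score else (None, best_s)
```

A matched candidate would then be geometrically verified before the loop constraint enters the optimization.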
CaveSeg: Deep Semantic Segmentation and Scene Parsing for Autonomous Underwater Cave Exploration
In this paper, we present CaveSeg - the first visual learning pipeline for semantic segmentation and scene parsing for AUV navigation inside underwater caves. We address the problem of scarce annotated training data by preparing a comprehensive dataset for semantic segmentation of underwater cave scenes. It contains pixel annotations for important navigation markers (e.g. caveline, arrows), obstacles (e.g. ground plane and overhead layers), scuba divers, and open areas for servoing. Through comprehensive benchmark analyses on cave systems in the USA, Mexico, and Spain, we demonstrate that robust deep visual models can be developed based on CaveSeg for fast semantic scene parsing of underwater cave environments. In particular, we formulate a novel transformer-based model that is computationally light and offers near real-time execution in addition to achieving state-of-the-art performance. Finally, we explore the design choices and implications of semantic segmentation for visual servoing by AUVs inside underwater caves. The proposed model and benchmark dataset open up promising opportunities for future research in autonomous underwater cave exploration and mapping.
Comment: submitted for review in ICRA 2024. 10 pages, 9 figures.
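One way a per-pixel class map could feed visual servoing, as the abstract above discusses, is to steer toward the horizontal centroid of "open area" pixels. This is a minimal sketch under that assumption; the class id and the proportional control law are hypothetical, not CaveSeg's actual servoing scheme:

```python
import numpy as np

# Hypothetical label id; the actual CaveSeg class set is richer.
OPEN_AREA = 3

def yaw_command(label_map, gain=1.0):
    """Steer toward the horizontal centroid of 'open area' pixels in a
    per-pixel class map. Returns a normalized yaw command in [-gain, gain]
    (negative = turn left, positive = turn right)."""
    ys, xs = np.nonzero(label_map == OPEN_AREA)
    if xs.size == 0:
        return 0.0  # no open area visible: hold heading
    width = label_map.shape[1]
    half = (width - 1) / 2
    return gain * (xs.mean() - half) / half
```

In practice the class map would come from the segmentation network's per-pixel argmax, and the command would be smoothed over frames.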
3D virtualization of an underground semi-submerged cave system
Underwater caves represent the most challenging scenario for exploration, mapping, and 3D modelling. In such complex environments, unsuitable for humans, highly specialized skills and expensive equipment are normally required. Technological progress and scientific innovation nowadays attempt to develop safer and more automated approaches for the virtualization of these complex and not easily accessible environments, which constitute a unique natural, biological, and cultural heritage. This paper presents a pilot study realised for the virtualization of 'Grotta Giusti' (Fig. 1), an underground semi-submerged cave system in central Italy. After an introduction to the virtualization process in the cultural heritage domain and a review of techniques and experiences for the virtualization of underground and submerged environments, the paper focuses on the employed virtualization techniques. In particular, the approach developed to simultaneously survey the semi-submerged areas of the cave using a stereo camera system, and the creation of the virtual cave, will be discussed.
Toward autonomous exploration in confined underwater environments
Author Posting. © The Author(s), 2015. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 33 (2016): 994-1012, doi:10.1002/rob.21640.
In this field note we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex, using an autonomous underwater vehicle (AUV). For this experiment the AUV was equipped with two acoustic sonars to simultaneously map the caves' horizontal and vertical surfaces. Although the caves' spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan matching algorithm in a simultaneous localization and mapping (SLAM) framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable to AUV exploration in confined underwater environments where surfacing or pre-deployment of localization equipment is not feasible, and may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.
This research work was partially sponsored by the EU FP7 projects Tecniospring-Marie Curie (TECSPR13-1-0052), MORPH (FP7-ICT-2011-7-288704), and Eurofleets2 (FP7-INF-2012-312762), and by the National Science Foundation (OCE-0955674).
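The field note above demonstrates sonar scan matching inside a SLAM framework; the abstract does not give the algorithm, but the closed-form rigid-alignment step at the core of ICP-style scan matchers can be sketched as follows (correspondences are assumed known here, which real matchers must estimate iteratively):

```python
import numpy as np

def align_scans(src, dst):
    """Least-squares rigid alignment (rotation R, translation t) such that
    R @ src + t ~= dst, for two 2xN point sets with known correspondences
    (the Kabsch/SVD solution used inside ICP-style scan matching)."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    H = (src - mu_s) @ (dst - mu_d).T          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Chaining such alignments gives odometry; feeding them to a SLAM back-end as constraints is what bounds the localization error.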
High Definition, Inexpensive, Underwater Mapping
In this paper we present a complete framework for underwater SLAM utilizing a single inexpensive sensor. In recent years, the imaging technology of action cameras has been producing stunning results even under the challenging conditions of the underwater domain. The GoPro 9 camera provides high definition video in synchronization with an Inertial Measurement Unit (IMU) data stream encoded in a single mp4 file. The visual-inertial SLAM framework is augmented to adjust the map after each loop closure. Data collected at an artificial wreck off the coast of South Carolina and in caverns and caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions.
Comment: IEEE International Conference on Robotics and Automation, 202
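The abstract above says the map is adjusted after each loop closure but not how. As a crude stand-in for the pose-graph optimization such systems typically run, the sketch below linearly distributes the drift revealed at the loop closure back along the trajectory (an illustrative simplification, not the paper's method):

```python
import numpy as np

def distribute_drift(poses, drift):
    """After a loop closure reveals accumulated drift at the last pose,
    spread the correction linearly along the trajectory. poses is an Nx2
    array of planar positions; drift is the length-2 error at the end.
    The first pose stays fixed; the last is fully corrected."""
    n = len(poses)
    weights = np.linspace(0.0, 1.0, n)[:, None]
    return poses - weights * np.asarray(drift)[None, :]
```

A real back-end would instead minimize the error over all relative-pose constraints, weighting each by its uncertainty.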
A Multi-Sensor Fusion-Based Underwater Slam System
This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies for autonomous robots to navigate in unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and underwater infrastructure inspection (e.g., bridges, hydroelectric dams, water supply systems, and oil rigs). Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor intensive. Hence, employing an underwater robot is an excellent fit to build the map of the environment while simultaneously localizing itself in the map.
The main contribution of this dissertation is the design and development of a real-time robust SLAM algorithm for small and large scale underwater environments. SVIn – a novel tightly-coupled keyframe-based non-linear optimization framework fusing sonar, visual, inertial, and water depth information with robust initialization, loop-closing, and relocalization capabilities – is presented. Introducing acoustic range information to aid the visual data shows improved reconstruction and localization. The availability of depth information from water pressure enables a robust initialization, refines the scale factor, and helps reduce the drift in the tightly-coupled integration. The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them the ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and in numerous real world scenarios. It has also been used for planning for underwater robots in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and an autonomous underwater vehicle (AUV), Aqua2, in challenging underwater environments with poor visibility demonstrate performance never achieved before in terms of accuracy and robustness. To aid the sparse reconstruction, a contour-based reconstruction approach has been developed that utilizes the well defined edges between the well lit area and darkness. In particular, low lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the shadow contours.
The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a visual odometry system. Experimental results in an underwater cave demonstrate the performance of our system. This enables more robust navigation of autonomous underwater vehicles, using the denser 3D point cloud to detect obstacles and achieve higher resolution reconstructions.
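The dissertation above states that pressure-derived depth refines the scale factor, without giving the formulation. A minimal least-squares version of that idea, sketched under the assumption that the visual depth axis is correct up to a single scale, solves s minimizing ||s*z - d||^2:

```python
import numpy as np

def refine_scale(visual_z, pressure_depth):
    """Least-squares scale s mapping the up-to-scale visual-odometry
    depth axis (visual_z) onto metric water-pressure depth
    (pressure_depth): s = <z, d> / <z, z>."""
    z = np.asarray(visual_z, dtype=float)
    d = np.asarray(pressure_depth, dtype=float)
    return float(z @ d) / float(z @ z)
```

In a tightly-coupled system this would appear as a residual term inside the optimization rather than a one-shot fit.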
Weakly Supervised Caveline Detection For AUV Navigation Inside Underwater Caves
Underwater caves are challenging environments that are crucial for water resource management and for our understanding of hydro-geology and history. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation. For autonomous cave mapping by underwater robots, the major challenge lies in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup. Thus, detecting and following the caveline as navigation guidance is paramount for robots on autonomous cave mapping missions. In this paper, we present a computationally light caveline detection model based on a novel Vision Transformer (ViT)-based learning pipeline. We address the problem of scarce annotated training data by a weakly supervised formulation where the learning is reinforced through a series of noisy predictions from intermediate sub-optimal models. We validate the utility and effectiveness of such weak supervision for caveline detection and tracking in three different cave locations, in the USA, Mexico, and Spain. Experimental results demonstrate that our proposed model, CL-ViT, balances the robustness-efficiency trade-off, ensuring good generalization performance while offering 10+ FPS on single-board (Jetson TX2) devices.
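Once caveline pixels are detected, following the line requires turning them into a steering signal. One simple possibility, sketched below as an illustration rather than CL-ViT's actual tracking stage, fits a line to the detected pixels and reports its angle from the image's vertical axis as the yaw error:

```python
import numpy as np

def caveline_heading_error(xs, ys):
    """Fit a line x = a*y + b to detected caveline pixels (image
    coordinates, y increasing downward) and return the angle in radians
    between that line and the image's vertical axis. Zero means the
    caveline runs straight ahead; the sign gives the turn direction."""
    a, _b = np.polyfit(ys, xs, 1)
    return float(np.arctan(a))
```

Fitting x as a function of y (rather than the usual y-of-x) keeps the fit well-conditioned for the near-vertical lines a forward-looking camera sees when tracking is on course.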