An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor
This paper presents a novel tightly-coupled keyframe-based Simultaneous
Localization and Mapping (SLAM) system with loop-closing and relocalization
capabilities targeted for the underwater domain. Our previous work, SVIn,
augmented the state-of-the-art visual-inertial state estimation package OKVIS
to accommodate acoustic data from sonar in a non-linear optimization-based
framework. This paper addresses drift and loss of localization -- one of the
main problems affecting other packages in the underwater domain -- by providing the
following main contributions: a robust initialization method to refine scale
using depth measurements, a fast preprocessing step to enhance the image
quality, and a real-time loop-closing and relocalization method using bag of
words (BoW). An additional contribution is the addition of depth measurements
from a pressure sensor to the tightly-coupled optimization formulation.
Experimental results on datasets collected with a custom-made underwater sensor
suite and an autonomous underwater vehicle from challenging underwater
environments with poor visibility demonstrate performance never achieved before
in terms of accuracy and robustness.
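The depth contribution can be pictured as one extra residual in the tightly-coupled nonlinear least-squares problem. The sketch below is illustrative, not the paper's implementation: the function name, sign convention (world z up, depth positive downward), and noise value are assumptions.

```python
import numpy as np

def depth_residual(p_world, depth_meas, sigma=0.01):
    """Whitened residual tying the state's vertical coordinate to a
    pressure-derived depth reading (names and sigma are illustrative).
    With z up in the world frame, a depth of d metres maps to z = -d."""
    return (p_world[2] + depth_meas) / sigma

# State estimate slightly shallower than the pressure sensor reports:
r = depth_residual(np.array([1.0, 2.0, -4.98]), 5.00)  # about +2 sigma
```

In a factor-graph back end this scalar residual would be added alongside the visual, inertial, and sonar terms, so the optimizer trades all sensors off jointly.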
Multi-Robot Exploration of Underwater Structures
This paper discusses a novel approach for the exploration of an underwater structure. A team of robots splits into two roles: certain robots approach the structure collecting detailed information (proximal observers), while the rest (distal observers) keep a distance, providing an overview of the mission and assisting in the localization of the proximal observers via a Cooperative Localization framework. Proximal observers utilize a novel robust switching model-based/visual-inertial odometry to overcome vision-based localization failures. Exploration strategies for the proximal and the distal observers are discussed.
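The switching odometry idea can be sketched as a simple health check that falls back from visual-inertial odometry to the motion model when vision degrades. The feature-count criterion and threshold below are assumptions for illustration; the paper's actual switching logic may differ.

```python
def select_odometry(n_tracked, vio_pose, model_pose, min_features=15):
    """Switch between visual-inertial and model-based odometry.
    The tracked-feature health check and its threshold are illustrative."""
    if n_tracked >= min_features:
        return "vio", vio_pose
    # Vision degraded (turbidity, darkness, featureless water column):
    # trust the vehicle motion model instead.
    return "model", model_pose
```

A real system would also smooth the hand-over so the pose estimate stays continuous across switches.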
High Definition, Inexpensive, Underwater Mapping
In this paper we present a complete framework for Underwater SLAM utilizing a
single inexpensive sensor. Over the recent years, imaging technology of action
cameras is producing stunning results even under the challenging conditions of
the underwater domain. The GoPro 9 camera provides high definition video in
synchronization with an Inertial Measurement Unit (IMU) data stream encoded in
a single mp4 file. The visual inertial SLAM framework is augmented to adjust
the map after each loop closure. Data collected at an artificial wreck off the
coast of South Carolina and in caverns and caves in Florida demonstrate the
robustness of the proposed approach in a variety of conditions.
Comment: IEEE International Conference on Robotics and Automation, 202
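Adjusting the map after a loop closure can be illustrated with a toy 1-D pose graph: odometry edges accumulate drift, and a single loop-closure edge pulls the whole trajectory back into consistency. All numbers below are made up for illustration.

```python
import numpy as np

# Toy 1-D pose graph: x0 anchored at 0; three odometry edges each drifted
# to +1.05 m, while a loop-closure edge reports that pose 3 is back at the
# start (x3 - x0 = 0). Each edge (i, j, z) measures x_j - x_i = z.
edges = [(0, 1, 1.05), (1, 2, 1.05), (2, 3, 1.05), (0, 3, 0.0)]

A = np.zeros((len(edges), 3))   # unknowns x1, x2, x3
b = np.zeros(len(edges))
for k, (i, j, z) in enumerate(edges):
    if j > 0:
        A[k, j - 1] += 1.0      # +x_j
    if i > 0:
        A[k, i - 1] -= 1.0      # -x_i
    b[k] = z

x, *_ = np.linalg.lstsq(A, b, rcond=None)
# The loop closure spreads the drift over the whole trajectory instead of
# letting it pile up at the final pose.
```

Production SLAM back ends solve the same least-squares problem over 6-DoF poses with robust kernels, but the correction mechanism is the same.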
Underwater Exploration and Mapping
This paper analyzes the open challenges of exploring and mapping in the underwater realm with the goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV) to robustly explore different environments. A taxonomy of environments based on their 3D structure is presented, together with an analysis of how that structure influences camera placement. The difference between exploration and coverage is presented, along with how each dictates different motion strategies. Loop closure, while critical for the accuracy of the resulting map, proves to be particularly challenging due to the limited field of view and the sensitivity to viewing direction. Experimental results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy. Dense 3D mapping, both online and offline, as well as other sensor configurations are discussed following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues, that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
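For readers unfamiliar with it, the "de-facto standard formulation" referenced above is maximum-a-posteriori estimation over a factor graph, commonly written (in generic notation, not necessarily the survey's exact symbols) as:

```latex
X^{\star}
  = \operatorname*{arg\,max}_{X} \; p(X \mid Z)
  = \operatorname*{arg\,min}_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```

where $X$ stacks the robot trajectory and the map, each measurement $z_k$ has a model $h_k$ acting on the subset of variables $X_k$ it involves, and the squared Mahalanobis norms with information matrices $\Omega_k$ follow from assuming Gaussian measurement noise.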
A Survey of Positioning Systems Using Visible LED Lights
As the Global Positioning System (GPS) cannot provide satisfying performance in indoor environments, indoor positioning technology, which utilizes indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light emitting diodes (LEDs) has been deemed a promising candidate in heterogeneous wireless networks that may collaborate with radio-frequency (RF) wireless networks. In particular, light fidelity has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of this system are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. This paper undertakes a thorough investigation into current LED-based indoor positioning systems and compares their performance across many aspects, such as test environment, accuracy, and cost. It presents indoor hybrid positioning systems combining VLC with other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, this paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
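Received-signal-strength positioning with LEDs typically starts from the Lambertian channel model. The sketch below inverts that model to recover the distance to a single LED; all parameter values are illustrative, and the receiver is assumed to face straight up at a ceiling-mounted LED at height h, so both the irradiance and incidence angles satisfy cos(angle) = h/d. Trilateration over three or more such distances then yields the receiver position.

```python
import math

def led_distance(P_r, P_t=1.0, m=1, A=1e-4, h=2.5):
    """Recover LED-to-receiver distance d from received power P_r.
    Simplified Lambertian model (illustrative parameters): with the
    receiver facing straight up at an LED of order m, emitted power P_t,
    detector area A, and ceiling height h,
        P_r = P_t * (m + 1) * A * h**(m + 1) / (2 * pi * d**(m + 3)).
    Solving for d gives the closed form below."""
    return (P_t * (m + 1) * A * h ** (m + 1)
            / (2 * math.pi * P_r)) ** (1.0 / (m + 3))
```

In practice the Lambertian order m is fitted per luminaire, and hybrid systems fuse these distances with inertial measurements, as the survey discusses.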
Underwater Localization in a Confined Space Using Acoustic Positioning and Machine Learning
Localization is a critical step in any navigation system. Through localization, the vehicle can estimate its position in the surrounding environment and plan how to reach its goal without any collision. This thesis focuses on underwater source localization, using sound signals for position estimation. We propose a novel underwater localization method based on machine learning techniques in which the source position is estimated directly from collected acoustic data. The position of the sound source is estimated by training a Random Forest (RF), a Support Vector Machine (SVM), a Feedforward Neural Network (FNN), and a Convolutional Neural Network (CNN). To train these data-driven methods, data are collected inside a confined test tank with dimensions of 6 m x 4.5 m x 1.7 m. The transmission unit, which includes a Xilinx LX45 FPGA and a transducer, generates the acoustic signal. The receiver unit collects the propagated sound signals and transmits them to a computer; it consists of 4 hydrophones, a Red Pitaya analog front-end board, and an NI 9234 data acquisition board. We used MATLAB 2018 to extract pitch, Mel-Frequency Cepstrum Coefficients (MFCC), and spectrograms from the sound signals. These features are used by MATLAB toolboxes to train the RF, SVM, FNN, and CNN. Experimental results show that the CNN achieves a Mean Absolute Percentage Error (MAPE) of 4% in the test tank. The findings of this research can pave the way for Autonomous Underwater Vehicle (AUV) and Remotely Operated Vehicle (ROV) navigation in underwater open spaces.
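None of the MATLAB toolchain is reproduced here; the sketch below uses synthetic data and a plain least-squares regressor in NumPy as a stand-in for the trained models, just to show the shape of the feature-to-position regression and the MAPE metric. The feature dimensionality, noise level, and linear feature model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the acoustic features (e.g. flattened MFCCs):
# 200 source positions inside the 6 m x 4.5 m tank footprint, with an
# assumed linear feature-position relationship plus measurement noise.
pos = rng.uniform([0.5, 0.5], [6.0, 4.5], size=(200, 2))
W_true = rng.normal(size=(2, 8))
feats = pos @ W_true + 0.01 * rng.normal(size=(200, 8))

# Least-squares regressor mapping features back to (x, y) position,
# fitted on the first 150 samples and evaluated on the remaining 50.
W, *_ = np.linalg.lstsq(feats[:150], pos[:150], rcond=None)
pred = feats[150:] @ W

# Mean Absolute Percentage Error, the metric reported in the thesis.
mape = 100.0 * np.mean(np.abs((pred - pos[150:]) / pos[150:]))
```

The thesis's RF/SVM/FNN/CNN models replace the linear regressor, but the train-on-features, report-MAPE pipeline is the same.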