9 research outputs found

    Optimized environment exploration for autonomous underwater vehicles

    Achieving fully autonomous robotic environment exploration in the underwater domain is very challenging, mainly due to noisy acoustic sensors, high localization error, control disturbances caused by the water, and the lack of accurate underwater maps. In this work we present a robotic exploration algorithm for underwater vehicles that does not rely on prior information about the environment. Our method has been greatly influenced by many robotic exploration, view planning and path planning algorithms. The proposed method constitutes a significant improvement over our previous work [1]: first, we refine our exploration approach to improve robustness; second, we propose an alternative map representation based on the quadtree data structure that allows the relevant queries to be performed efficiently, reducing the computational cost of the viewpoint generation process; third, we present an algorithm that is capable of generating consistent maps even when noisy sonar data is used. These contributions have increased the reliability of the algorithm, enabling new real-world experiments not only in artificial structures but also in more challenging natural environments, from which we provide a 3D reconstruction showing that the algorithm achieves full optical coverage.
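
    The quadtree representation mentioned above is what makes viewpoint generation cheap: region queries can prune whole subtrees instead of scanning every cell. Below is a minimal sketch of such a labeled quadtree, assuming simple leaf states and a point-insertion interface; all names and the split criterion are illustrative, not taken from the paper.

```python
FREE, OCCUPIED, UNSEEN = 0, 1, 2  # assumed leaf labels for this sketch

class QuadNode:
    def __init__(self, x, y, size, state=UNSEEN):
        self.x, self.y, self.size = x, y, size  # lower-left corner, side length
        self.state = state
        self.children = None                    # None while this node is a leaf

    def insert(self, px, py, state, min_size=0.1):
        """Label the leaf containing point (px, py), splitting down to min_size."""
        if self.size <= min_size:
            self.state = state
            return
        half = self.size / 2.0
        if self.children is None:               # lazily split, inheriting our state
            self.children = [QuadNode(self.x + dx * half, self.y + dy * half,
                                      half, self.state)
                             for dy in (0, 1) for dx in (0, 1)]
        i = (0 if px < self.x + half else 1) + (0 if py < self.y + half else 2)
        self.children[i].insert(px, py, state, min_size)

    def leaves_in(self, x0, y0, x1, y1, state):
        """Yield leaves of a given state overlapping a box; subtrees that do not
        intersect the box are pruned, which is the efficiency win over a grid."""
        if x1 < self.x or y1 < self.y or x0 > self.x + self.size or y0 > self.y + self.size:
            return
        if self.children is None:
            if self.state == state:
                yield self
            return
        for child in self.children:
            yield from child.leaves_in(x0, y0, x1, y1, state)
```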

    Online view planning for inspecting unexplored underwater structures

    In this paper, we propose a method to automate the exploration of unknown underwater structures for autonomous underwater vehicles (AUVs). The proposed algorithm iteratively incorporates exteroceptive sensor data and replans the next-best-view (NBV) in order to fully map an underwater structure. This approach does not require prior environment information; however, a safe exploration depth and the exploration area (defined by a bounding box, parametrized by its size, location and resolution) must be provided by the user. The algorithm operates online by iteratively conducting the following three tasks: 1) profiling sonar data is first incorporated into a 2-dimensional (2D) grid map, where voxels are labeled according to their state (empty, unseen, occluded, occplane, occupied or viewed); 2) useful viewpoints to continue the exploration are generated from the map; 3) a safe path is generated to guide the robot towards the next viewpoint location. Two sensors are used in this approach: a scanning profiling sonar, which builds an occupancy map of the surroundings, and an optical camera, which acquires optical data of the scene. Finally, to demonstrate the feasibility of our approach, we provide real-world results using the Sparus II AUV.
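
    As a concrete reading of those three tasks, here is a hedged sketch of the outer loop. The six CellState labels are the ones from the abstract, but the glosses in the comments are my interpretation, and the robot/grid methods (integrate, generate_viewpoints, plan_safe_path, ...) are hypothetical placeholders, not the paper's API.

```python
from enum import Enum

class CellState(Enum):
    EMPTY = 0     # known free space (gloss assumed)
    UNSEEN = 1    # never observed (gloss assumed)
    OCCLUDED = 2  # hidden behind occupied cells (gloss assumed)
    OCCPLANE = 3  # boundary of the occluded region (gloss assumed)
    OCCUPIED = 4  # sonar return, not yet imaged (gloss assumed)
    VIEWED = 5    # occupied and already covered optically (gloss assumed)

def explore(robot, grid, bounding_box, safe_depth):
    """Hypothetical driver for the three-task loop described in the abstract."""
    while True:
        scan = robot.read_profiling_sonar()       # task 1: sense and
        grid.integrate(scan)                      # relabel affected cells
        viewpoints = grid.generate_viewpoints()   # task 2: candidate views
        if not viewpoints:
            break                                 # map complete
        nbv = max(viewpoints, key=grid.utility)   # pick the next-best-view
        path = grid.plan_safe_path(robot.pose, nbv, bounding_box, safe_depth)
        robot.follow(path)                        # task 3: move, then repeat
```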

    Two-dimensional frontier-based viewpoint generation for exploring and mapping underwater environments

    To autonomously explore complex underwater environments, it is convenient to develop motion planning strategies that do not depend on prior information. In this publication, we present a robotic exploration algorithm for autonomous underwater vehicles (AUVs) that guides the robot through an unknown 2-dimensional (2D) environment. The algorithm builds upon view planning (VP) and frontier-based (FB) strategies. Traditional robotic exploration algorithms seek full coverage of the scene with data from only one sensor; if coverage is required for multiple sensors, multiple exploration missions are needed. Our approach has been designed to achieve full coverage with data from two sensors in a single exploration mission: occupancy data from the profiling sonar, from which the shape of the environment is perceived, and optical data from the camera, which captures the details of the environment. This saves time and mission costs. The algorithm has been designed to be computationally efficient, so that it can run online on the AUV’s onboard computer. In our approach, the environment is represented using a labeled quadtree occupancy map which, at the same time, is used to generate the viewpoints that guide the exploration. We have tested the algorithm in different environments through numerous experiments, which include sea operations using the Sparus II AUV and its sensor suite.
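
    For the frontier-based part, the core query is finding boundaries between explored free space and unseen space. A minimal sketch on a dense array (standing in for the quadtree the paper actually uses; labels and names are illustrative):

```python
import numpy as np

def frontier_cells(grid):
    """Return indices of free cells adjacent to unseen cells in a 2D grid.
    `grid` is an int array; FREE/UNSEEN label values are assumptions."""
    FREE, UNSEEN = 0, 1
    free = grid == FREE
    unseen = grid == UNSEEN
    # A free cell is a frontier cell if any of its 4-neighbors is unseen.
    neighbor_unseen = np.zeros_like(unseen)
    neighbor_unseen[1:, :] |= unseen[:-1, :]
    neighbor_unseen[:-1, :] |= unseen[1:, :]
    neighbor_unseen[:, 1:] |= unseen[:, :-1]
    neighbor_unseen[:, :-1] |= unseen[:, 1:]
    return np.argwhere(free & neighbor_unseen)
```

    Clustering these frontier cells and placing one candidate viewpoint per cluster is one common way to turn them into exploration goals.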

    Autonomous underwater navigation and optical mapping in unknown natural environments

    We present an approach for navigating in unknown environments while simultaneously gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our pipeline for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction pipeline to create a photo-realistic textured 3D model of the inspected area. These 3D models are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, allowing change detection of biological communities and their environment over time. Finally, we evaluate our approach using the Sparus II, a torpedo-shaped AUV, conducting inspection missions in a challenging, natural, real-world scenario.
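
    Read as a pipeline, the approach has an online stage (navigate and acquire imagery) feeding an offline stage (build the textured model). A schematic sketch, with every function name hypothetical:

```python
def inspection_mission(auv, target_area):
    """Schematic two-stage pipeline; names are illustrative, not the authors' API."""
    # Stage 1 (online, onboard): map the unknown environment, plan
    # collision-free paths, and gather close-range optical imagery.
    images, trajectory = auv.navigate_and_acquire(target_area)
    # Stage 2 (offline): photo-realistic textured 3D reconstruction, usable
    # later as a base map for monitoring change over time.
    return reconstruct_textured_mesh(images, initial_poses=trajectory)
```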

    Fast incremental bundle adjustment with covariance recovery

    Efficient algorithms exist to obtain a sparse 3D representation of the environment. Bundle adjustment (BA) and structure from motion (SFM) are techniques used to estimate both the camera poses and the set of sparse points in the environment. Many applications require such a reconstruction to be performed online, while acquiring the data, and to produce an updated result at every step. Furthermore, active feedback about the quality of the reconstruction can help select the best views to increase the accuracy, as well as to maintain a reasonable size of the collected data. This paper provides novel and efficient solutions for solving the associated nonlinear least squares (NLS) problem incrementally, and for computing not only the optimal solution but also the associated uncertainty. The proposed technique greatly increases the efficiency of the incremental BA solver for long camera trajectory applications, and provides extremely fast covariance recovery.

    This research was supported by the ARC through the “Australian Centre of Excellence for Robotic Vision” CE140100016 and by the Ministry of Education, Youth and Sports of the Czech Republic from the NPU II project IT4Innovations excellence in science (LQ1602), the TA-CR Competence Centres project V3C Visual Computing Competence Center (no. TE01020415) and research project no. VI20172020068.
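
    To ground the covariance-recovery idea: once the information matrix H = JᵀJ has been factorized for the NLS solve, marginal covariances can be read off by back-substitution instead of a dense inverse. A batch sketch using SciPy follows; the paper's contribution is doing this incrementally, reusing factorizations between updates, which this sketch does not attempt, and all names are mine.

```python
import numpy as np
import scipy.sparse.linalg as spla

def gn_step_with_marginals(J, r, idx):
    """One batch Gauss-Newton step plus the marginal covariance of the state
    entries listed in `idx`. J: sparse Jacobian (CSR/CSC), r: residual vector."""
    H = (J.T @ J).tocsc()                  # information matrix (Hessian approx.)
    lu = spla.splu(H)                      # sparse LU factorization
    dx = lu.solve(-(J.T @ r))              # Gauss-Newton update step
    rhs = np.zeros((H.shape[0], len(idx)))
    rhs[idx, np.arange(len(idx))] = 1.0    # unit vectors for the requested block
    cols = lu.solve(rhs)                   # selected columns of H^{-1}
    return dx, cols[idx, :]                # marginal covariance block
```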

    Hyperspectral 3D Mapping of Underwater Environments

    Hyperspectral imaging has been increasingly used for underwater survey applications over the past years. As many hyperspectral cameras work as push-broom scanners, their use is usually limited to the creation of photo-mosaics based on a flat-surface approximation and on interpolating the camera pose from dead-reckoning navigation. Yet, because of drift in the navigation and the often-invalid flat-surface assumption, the quality of the obtained photo-mosaics is frequently too low to support adequate analysis. In this paper we present an initial method for creating hyperspectral 3D reconstructions of underwater environments. By fusing the data gathered by a classical RGB camera, an inertial navigation system and a hyperspectral push-broom camera, we show that the proposed method creates highly accurate 3D reconstructions with hyperspectral textures. We propose to combine techniques from simultaneous localization and mapping, structure-from-motion and 3D reconstruction, and to advantageously use them to create 3D models with hyperspectral texture, allowing us to overcome the flat-surface assumption and the classical limitations of dead-reckoning navigation.
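
    One concrete piece of such a fusion is assigning a pose to every push-broom scanline by interpolating the sparser RGB/INS trajectory: SLERP for orientation, linear interpolation for position. A sketch using SciPy's rotation tools, with illustrative variable names; each posed scanline could then be projected onto the RGB-derived 3D model to texture it with hyperspectral data.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def scanline_poses(key_times, key_quats, key_positions, line_times):
    """Interpolate a pose per push-broom scanline from trajectory keyframes.
    key_quats: (N, 4) xyzw quaternions; key_positions: (N, 3);
    line_times must lie within [key_times[0], key_times[-1]]."""
    slerp = Slerp(key_times, Rotation.from_quat(key_quats))
    rotations = slerp(line_times)          # one Rotation per scanline
    positions = np.column_stack(
        [np.interp(line_times, key_times, key_positions[:, k]) for k in range(3)])
    return rotations, positions
```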

    Combined use of a frame and a linear pushbroom camera for deep-sea 3D hyperspectral mapping

    Hyperspectral (HS) imaging produces an image of an object across a large range of the visible spectrum, not just the primary colors (R, G, B) of conventional cameras. It can provide valuable information for object detection and for the analysis of materials and processes in deep-sea environmental science, especially for the study of benthic environments and pollution monitoring. In this paper, we address the problem of camera calibration towards 3D hyperspectral mapping where GPS is not available and the platform navigational sensors are not accurate enough to allow direct georeferencing of linear sensors, as is the case with traditional aerial platform methods. Our approach presents a preliminary method for 3D hyperspectral mapping that uses only image processing techniques to reduce reliance on GPS or navigation sensors. The method is based on the use of a standard RGB camera coupled with the hyperspectral pushbroom camera. The main contribution is the implementation and preliminary testing of a method to relate the two cameras using image information alone. The experiments presented in this paper analyze the estimation of the relative orientation and time-synchronization parameters of the two cameras using epipolar geometry and Monte Carlo simulation. All methods are designed to work with real-world data.
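
    As a building block for such epipolar-geometry experiments, the relative orientation between two overlapping views can be estimated from matched points via the essential matrix. A hedged OpenCV sketch follows; it assumes pinhole geometry and shared intrinsics, a simplification of the frame/push-broom setup in the paper, and a time offset could be estimated on top by repeating this over candidate offsets and keeping the one with the lowest epipolar error.

```python
import cv2

def relative_orientation(pts_a, pts_b, K):
    """Estimate rotation R and translation direction t between two views from
    matched points (Nx2 float arrays) and a shared camera matrix K."""
    E, inlier_mask = cv2.findEssentialMat(pts_a, pts_b, K,
                                          method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inlier_mask)
    return R, t, inlier_mask
```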