
    Large Area 3D Reconstructions from Underwater Surveys

    Robotic underwater vehicles can perform vast optical surveys of the ocean floor. Scientists value these surveys since optical images offer high levels of information and are easily interpreted by humans. Unfortunately, the coverage of a single image is limited by absorption and backscatter, while what is needed is an overall view of the survey area. Recent work on underwater mosaics assumes planar scenes and is applicable only to situations without much relief. We present a complete and validated system for processing optical images acquired from an underwater robotic vehicle to form a 3D reconstruction of the ocean floor. Our approach is designed for the most general conditions of wide-baseline imagery (low overlap and presence of significant 3D structure) and scales to hundreds of images. We only assume a calibrated camera system and a vehicle with uncertain and possibly drifting pose information (e.g. a compass, depth sensor and a Doppler velocity log). Our approach is based on a combination of techniques from computer vision, photogrammetry and robotics. We use a local-to-global approach to structure from motion, aided by the navigation sensors on the vehicle, to generate 3D submaps. These submaps are then placed in a common reference frame that is refined by matching overlapping submaps. The final stage of processing is a bundle adjustment that provides the 3D structure, camera poses and uncertainty estimates in a consistent reference frame. We present results with ground truth for structure as well as results from an oceanographic survey over a coral reef covering an area of approximately one hundred square meters.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86037/1/opizarro-33.pd
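    The submap-registration stage described above can be illustrated with a toy version of the idea: once two submaps share matched points, a closed-form rigid alignment places one in the other's reference frame before global refinement. The sketch below is a 2D simplification with an illustrative function name, not the paper's actual 3D pipeline:

```python
import math

def align_submaps_2d(src, dst):
    """Closed-form 2D rigid alignment (rotation theta, translation t)
    of matched point pairs: a toy analogue of registering one submap
    into another's frame from their overlapping features."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = syy = sxy = syx = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy          # demeaned source point
        bx, by = u - cdx, v - cdy          # demeaned destination point
        sxx += ax * bx; syy += ay * by
        sxy += ax * by; syx += ay * bx
    # optimal rotation from the cross-correlation sums (2D Procrustes)
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

In the full system this pairwise alignment would only initialize the common reference frame; the bundle adjustment the abstract mentions then refines structure and poses jointly.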

    Heuristic method based on voting for extrinsic orientation through image epipolarization

    Traditionally, stereo-pair rectification, also known as the epipolarization problem (i.e., the projection of both images onto a common image plane), is solved once both the intrinsic (interior) and extrinsic (exterior) orientation parameters are known. A heuristic method is proposed to solve both the extrinsic orientation problem and the epipolarization problem in a single step. The algorithm uses the main property of a coplanar stereo pair as its fitness criterion: null vertical parallax between corresponding points yields the best stereo pair. Using an iterative approach, each pair of corresponding points votes for a rotation axis that may reduce vertical parallax. The votes are weighted, the rotation is applied, and the process iterates until the vertical parallax residual error is below a threshold. The algorithm's performance and accuracy are checked using both simulated and real case examples. In addition, its results are compared with those obtained using a traditional nonlinear least-squares adjustment based on the coplanarity condition. The heuristic methodology is robust, fast, and yields optimal results. The authors gratefully acknowledge the support from the Spanish Ministerio de Economia y Competitividad to Project No. HAR2014-59873-R.
    Martín, S.; Lerma García, J. L.; Uzkeda, H. (2017). Heuristic method based on voting for extrinsic orientation through image epipolarization. Journal of Electronic Imaging 26(6):063020-1–063020-11. https://doi.org/10.1117/1.JEI.26.6.063020
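    A minimal sketch of the vote-and-iterate idea, reduced to a single degree of freedom: a tilt about the camera x-axis applied to normalized image coordinates. The function name and the first-order vote formula are illustrative assumptions, not the paper's multi-axis algorithm:

```python
import math

def epipolarize_1dof(pts_left, pts_right, tol=1e-9, max_iter=50):
    """Each correspondence votes for the small tilt (rotation about the
    camera x-axis) that would null its own vertical parallax; the mean
    vote is applied and the process repeats until the votes vanish,
    mimicking the vote-weight-rotate-iterate loop in one DoF."""
    pts_right = [tuple(p) for p in pts_right]
    for _ in range(max_iter):
        votes = []
        for (_, yl), (_, yr) in zip(pts_left, pts_right):
            v = yr - yl                         # vertical parallax
            votes.append(v / (1.0 + yr * yr))   # first-order tilt vote
        d = sum(votes) / len(votes)             # uniform-weight tally
        if abs(d) < tol:
            break
        c, s = math.cos(d), math.sin(d)
        # rotate right-image rays (x, y, 1) about the x-axis by d
        pts_right = [(x, (c * y - s) / (s * y + c)) for x, y in pts_right]
    residual = max(abs(yr - yl)
                   for (_, yl), (_, yr) in zip(pts_left, pts_right))
    return pts_right, residual
```

With a consistent tilt shared by all correspondences the loop converges in a handful of iterations; the full method votes over rotation axes rather than a single fixed one.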

    Exactly Sparse Delayed-State Filters

    This paper presents the novel insight that the SLAM information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment which rely upon scan-matching raw sensor data. Scan-matching raw data results in virtual observations of robot motion with respect to a place the robot has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms like Sparse Extended Information Filters or Thin Junction Tree Filters. These methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparseness of the delayed-state framework is that it allows one to take advantage of the information space parameterization without having to make any approximations. Therefore, it can produce equivalent results to the “full-covariance” solution.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86061/1/reustice-29.pd

    2-Point-based Outlier Rejection for Camera-Imu Systems with applications to Micro Aerial Vehicles

    This paper presents a novel method to perform the outlier rejection task between two different views of a camera rigidly attached to an Inertial Measurement Unit (IMU). Only two feature correspondences and gyroscopic data from IMU measurements are used to compute the motion hypothesis. By exploiting this 2-point motion parametrization, we propose two algorithms to remove wrong data associations in the feature-matching process for the case of 6DoF motion. We show that, in the case of a monocular camera mounted on a quadrotor vehicle, motion priors from the IMU can be used to discard wrong estimations in the framework of a 2-point RANSAC-based approach. The proposed methods are evaluated on both synthetic and real data.
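    The 2-point hypothesis can be sketched as follows, assuming unit bearing vectors and a relative rotation R integrated from the gyro rates; function names, iteration count, and threshold are illustrative choices, not the paper's exact procedure:

```python
import random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def norm(a):
    return dot(a, a) ** 0.5

def two_point_ransac(x1s, x2s, R, iters=100, thresh=1e-6, seed=1):
    """With the relative rotation R known from the gyro, only the
    translation direction t (2 DoF) is unknown: each correspondence
    forces t into the plane with normal n_i = (R x1_i) x x2_i, so a
    minimal sample of two correspondences gives t ~ n_i x n_j."""
    rotated = [tuple(dot(row, x) for row in R) for x in x1s]
    normals = [cross(y, x2) for y, x2 in zip(rotated, x2s)]
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    n = len(x1s)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        t = cross(normals[i], normals[j])     # hypothesis from 2 points
        nt = norm(t)
        if nt < 1e-12:                        # degenerate sample
            continue
        t = (t[0] / nt, t[1] / nt, t[2] / nt)
        inliers = [k for k in range(n)
                   if abs(dot(t, normals[k])) < thresh * max(norm(normals[k]), 1e-12)]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

The residual test is the algebraic epipolar error t · n_k, so wrong matches, which generically violate the gyro-constrained epipolar geometry, are discarded.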

    Towards High-resolution Imaging from Underwater Vehicles

    Large area mapping at high resolution underwater continues to be constrained by sensor-level environmental constraints and the mismatch between available navigation and sensor accuracy. In this paper, advances are presented that exploit aspects of the sensing modality, and consistency and redundancy within local sensor measurements, to build high-resolution optical and acoustic maps that are a consistent representation of the environment. This work is presented in the context of real-world data acquired using autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs) working in diverse applications, including shallow water coral reef surveys with the Seabed AUV, a forensic survey of the RMS Titanic in the North Atlantic at a depth of 4100 m using the Hercules ROV, and a survey of the TAG hydrothermal vent area in the mid-Atlantic at a depth of 3600 m using the Jason II ROV. Specifically, the focus is on the related problems of structure from motion from underwater optical imagery assuming pose-instrumented calibrated cameras. General wide-baseline solutions are presented for these problems based on the extension of techniques from the simultaneous localization and mapping (SLAM), photogrammetry, and computer vision communities. It is also examined how such techniques can be extended for the very different sensing modality and scale associated with multi-beam bathymetric mapping. For both the optical and acoustic mapping cases it is also shown how the consistency in mapping can be used not only for better global mapping, but also to refine navigation estimates.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86051/1/hsingh-21.pd

    1-Point-based Monocular Motion Estimation for Computationally-Limited Micro Aerial Vehicles

    We propose a novel method to estimate the relative motion between two consecutive camera views which requires observing only a single feature in the scene and knowing the angular rates from an inertial measurement unit, under the assumption that the local camera motion lies in a plane perpendicular to the gravity vector. Using this 1-point motion parametrization, we provide two very efficient algorithms to remove the outliers of the feature-matching process. Thanks to their inherent efficiency, the proposed algorithms are very suitable for computationally-limited robots. We test the proposed approaches on both synthetic and real data, using video footage from a small flying quadrotor. We show that our methods outperform standard RANSAC-based implementations by up to two orders of magnitude in speed, while being able to identify the majority of the inliers.
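    Under the stated planar-motion assumption the problem reduces to one angle estimate per feature, which suggests a histogram-voting sketch like the following (the function name, bin width, and tolerance are illustrative choices, not the paper's exact procedure):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def one_point_voting(x1s, x2s, R, bin_width=0.02, tol=0.02):
    """With rotation R from gyro rates and translation assumed to lie
    in the plane perpendicular to gravity, t = (cos a, sin a, 0), the
    epipolar constraint t . n_i = 0 with n_i = (R x1_i) x x2_i gives
    one angle estimate per feature:  a_i = atan2(-n_ix, n_iy) mod pi.
    Features vote in a histogram; the winning bin marks the inliers."""
    alphas = []
    for x1, x2 in zip(x1s, x2s):
        y = tuple(sum(r * c for r, c in zip(row, x1)) for row in R)
        nx, ny, _ = cross(y, x2)
        alphas.append(math.atan2(-nx, ny) % math.pi)
    bins = {}
    for k, a in enumerate(alphas):
        bins.setdefault(int(a / bin_width), []).append(k)
    winner = max(bins.values(), key=len)
    votes = sorted(alphas[k] for k in winner)
    alpha = votes[len(votes) // 2]            # median of winning bin
    inliers = [k for k, a in enumerate(alphas)
               if min(abs(a - alpha), math.pi - abs(a - alpha)) < tol]
    return alpha, inliers
```

Because each hypothesis costs a single cross product rather than a model fit over a random sample, the voting pass is what makes the 1-point scheme so much faster than general 5-point RANSAC.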

    Exactly Sparse Delayed-State Filters for View-Based SLAM

    This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to a place it has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as sparse extended information filter or thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information space parameterization without incurring any sparse approximation error. Therefore, it can produce equivalent results to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86062/1/reustice-25.pd
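    The exact-sparsity claim can be made concrete with a toy example: a scan-match constraint between delayed states i and j modifies only four entries of the information matrix, so no approximation is ever needed to keep it sparse. The helper names and the scalar (1D pose) simplification below are illustrative:

```python
def add_relative_constraint(Lam, eta, i, j, z, w):
    """Fuse a scan-match (virtual odometry) measurement z = x_j - x_i
    with information weight w.  In the delayed-state parameterization
    the update touches only the (i,i), (i,j), (j,i), (j,j) entries of
    the information matrix Lam -- there is no fill-in elsewhere."""
    Lam[i][i] += w; Lam[j][j] += w
    Lam[i][j] -= w; Lam[j][i] -= w
    eta[i] -= w * z
    eta[j] += w * z

def solve(Lam, eta):
    """Plain Gaussian elimination with partial pivoting; adequate for
    recovering the state mean x = Lam^-1 eta in a toy example."""
    n = len(eta)
    A = [row[:] + [eta[k]] for k, row in enumerate(Lam)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x
```

Chaining sequential constraints over a five-pose trajectory and adding one loop closure between states 0 and 4 yields an information matrix that is tridiagonal plus two corner entries; every pair of states never linked by a measurement stays exactly zero.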

    Visually Augmented Navigation for Autonomous Underwater Vehicles

    As autonomous underwater vehicles (AUVs) are becoming routinely used in an exploratory context for ocean science, the goal of visually augmented navigation (VAN) is to improve the near-seafloor navigation precision of such vehicles without imposing the burden of having to deploy additional infrastructure. This is in contrast to traditional acoustic long baseline navigation techniques, which require the deployment, calibration, and eventual recovery of a transponder network. To achieve this goal, VAN is formulated within a vision-based simultaneous localization and mapping (SLAM) framework that exploits the systems-level complementary aspects of a camera and strap-down sensor suite. The result is an environmentally based navigation technique robust to the peculiarities of low-overlap underwater imagery. The method employs a view-based representation where camera-derived relative-pose measurements provide spatial constraints, which enforce trajectory consistency and also serve as a mechanism for loop closure, allowing for error growth to be independent of time for revisited imagery. This article outlines the multisensor VAN framework and demonstrates it to have compelling advantages over a purely vision-only approach by: 1) improving the robustness of low-overlap underwater image registration; 2) setting the free gauge scale; and 3) allowing for a disconnected camera-constraint topology.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86054/1/reustice-16.pd
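    The claim that error growth becomes independent of time for revisited imagery can be illustrated with a one-line variance calculation (a scalar toy model, not the article's full VAN filter):

```python
def fused_variance(k, q, r):
    """Variance of a pose estimate after fusing k steps of dead
    reckoning (per-step noise variance q) with one camera-derived
    relative-pose measurement (variance r) back to a well-known
    reference pose: information adds, so the fused variance is capped
    by r no matter how much time has elapsed since the first visit."""
    dead_reckoned = k * q            # grows without bound in k
    return 1.0 / (1.0 / dead_reckoned + 1.0 / r)
```

As k grows the fused variance only approaches r from below, which is the loop-closure mechanism the abstract describes: revisiting imaged terrain bounds the accumulated navigation error.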