100 research outputs found

    Sensor Fusion of Structure-from-Motion, Bathymetric 3D, and Beacon-Based Navigation Modalities

    This paper describes an approach for the fusion of 3D data underwater obtained from multiple sensing modalities. In particular, we examine the combination of image-based Structure-From-Motion (SFM) data with bathymetric data obtained using pencil-beam underwater sonar, in order to recover the shape of the seabed terrain. We also combine image-based egomotion estimation with acoustic-based and inertial navigation data on board the underwater vehicle. We examine multiple types of fusion. When fusion is performed at the data level, each modality is used to extract 3D information independently. The 3D representations are then aligned and compared. In this case, we use the bathymetric data as ground truth to measure the accuracy and drift of the SFM approach. Similarly, we use the navigation data as ground truth against which we measure the accuracy of the image-based egomotion estimation. To our knowledge, this is the first quantitative evaluation of image-based SFM and egomotion accuracy in a large-scale outdoor environment. Fusion at the signal level uses the raw signals from multiple sensors to produce a single coherent 3D representation which takes optimal advantage of the sensors' complementary strengths. In this paper, we examine how low-resolution bathymetric data can be used to seed the higher-resolution SFM algorithm, improving convergence rates and reducing drift error. Similarly, acoustic-based and inertial navigation data improve the convergence and drift properties of egomotion estimation. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86044/1/hsingh-35.pd
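    A minimal sketch of the data-level fusion step described above: rigidly align the SFM point cloud to the bathymetric ground truth with a least-squares (Kabsch/Procrustes) fit, then report the RMS residual as a measure of SFM drift. This is not the paper's implementation; point correspondences between the two sets are assumed to be given, and all names are illustrative.

```python
# Sketch only: data-level fusion as rigid alignment plus drift measurement.
# Assumes sfm_pts[i] corresponds to bathy_pts[i] (correspondences given).
import numpy as np

def align_rigid(sfm_pts, bathy_pts):
    """Least-squares rigid transform (R, t) mapping sfm_pts onto bathy_pts."""
    mu_s, mu_b = sfm_pts.mean(axis=0), bathy_pts.mean(axis=0)
    H = (sfm_pts - mu_s).T @ (bathy_pts - mu_b)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_b - R @ mu_s
    return R, t

def drift_rms(sfm_pts, bathy_pts):
    """RMS residual after alignment, used as a proxy for SFM drift."""
    R, t = align_rigid(sfm_pts, bathy_pts)
    residual = bathy_pts - (sfm_pts @ R.T + t)
    return np.sqrt((residual ** 2).sum(axis=1).mean())

# Toy check: a rotated and translated copy should align with ~zero residual.
rng = np.random.default_rng(0)
bathy = rng.normal(size=(200, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
sfm = (bathy - np.array([1.0, 2.0, 0.5])) @ Rz
print(drift_rms(sfm, bathy))   # ~0 for noise-free data
```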

    Low cost underwater acoustic localization

    Over the course of the last decade, the cost of marine robotic platforms has decreased significantly. In part, this has lowered the barriers to entry for exploring and monitoring larger areas of the Earth's oceans. However, these advances have mostly focused on autonomous surface vehicles (ASVs) or shallow-water autonomous underwater vehicles (AUVs). One of the main drivers of high cost in the deep-water domain is the challenge of localizing such vehicles using acoustics. A low-cost one-way travel time underwater ranging system is proposed to assist in localizing deep-water submersibles. The system consists of location-aware anchor buoys at the surface and underwater nodes. This paper presents a comparison of methods together with details on the physical implementation to allow its integration into a deep-sea micro AUV currently in development. Additional simulation results show error reductions by a factor of three. Comment: 73rd Meeting of the Acoustical Society of America
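    A hedged sketch of the one-way travel time (OWTT) principle behind such a system: with synchronized clocks, range is sound speed times the one-way delay, and the node's position follows from a least-squares fit to ranges from the location-aware surface buoys. The constant sound speed and the Gauss-Newton solver below are simplifying assumptions, not details from the paper.

```python
# Sketch: OWTT ranging and trilateration from surface anchor buoys.
# Assumes synchronized clocks and a constant sound speed (real systems
# must handle clock drift and a depth-varying sound speed profile).
import numpy as np

SOUND_SPEED = 1500.0  # m/s, nominal value for seawater

def owtt_ranges(send_times, recv_times):
    """One-way travel time ranging: range = c * (receive - send)."""
    return SOUND_SPEED * (np.asarray(recv_times) - np.asarray(send_times))

def trilaterate(buoys, ranges, x0, iters=20):
    """Gauss-Newton fit of a 3D position to ranges from anchor buoys."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - buoys                          # (N, 3) offsets to buoys
        pred = np.linalg.norm(diff, axis=1)       # predicted ranges
        J = diff / pred[:, None]                  # Jacobian of range w.r.t. x
        x += np.linalg.lstsq(J, ranges - pred, rcond=None)[0]
    return x

buoys = np.array([[0, 0, 0], [1000, 0, 0], [0, 1000, 0], [1000, 1000, 0]], float)
truth = np.array([400.0, 600.0, -800.0])
print(trilaterate(buoys, np.linalg.norm(truth - buoys, axis=1),
                  x0=[500, 500, -500]))          # converges to ~truth
```

    Note that with all anchors at the surface the geometry has a mirror ambiguity in depth, which is why the initial guess is taken below the surface; in practice a pressure sensor would resolve this directly.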

    Egomotion estimation using binocular spatiotemporal oriented energy

    Camera egomotion estimation is concerned with the recovery of a camera's motion (e.g., instantaneous translation and rotation) as it moves through its environment. It has been demonstrated to be of both theoretical and practical interest. This thesis documents a novel algorithm for egomotion estimation based on binocularly matched spatiotemporal oriented energy distributions. Basing the estimation on oriented energy measurements makes it possible to recover egomotion without the need to establish temporal correspondences or convert disparity into 3D world coordinates. The resulting algorithm has been realized in software and evaluated quantitatively on a novel laboratory dataset with ground truth, as well as qualitatively on both indoor and outdoor real-world datasets. Performance is evaluated relative to comparable alternative algorithms and shown to exhibit the best overall performance.
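    For intuition, here is a minimal sketch of spatiotemporal oriented energy (not the thesis algorithm, which operates on binocularly matched distributions): an (x, t) slice of the video volume is filtered with a quadrature pair of oriented Gabor filters, and the squared responses are summed to give a phase-insensitive motion energy. All parameters are illustrative.

```python
# Sketch: oriented energy in an (x, t) slice. A pattern translating at
# 1 px/frame traces 45-degree stripes in (x, t), so energy peaks at the
# filter orientation matched to that motion.
import numpy as np
from scipy.ndimage import convolve

def gabor_pair(theta, sigma=3.0, freq=0.25, size=15):
    """Quadrature (even/odd) Gabor pair oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def oriented_energy(xt_slice, theta, **kw):
    """Sum of squared quadrature responses: phase-insensitive energy."""
    even, odd = gabor_pair(theta, **kw)
    return convolve(xt_slice, even)**2 + convolve(xt_slice, odd)**2

t, x = np.mgrid[0:64, 0:64]
slice_xt = np.sin(2 * np.pi * 0.25 * (x - t))   # pattern moving 1 px/frame
f = 0.25 * np.sqrt(2)                            # frequency along stripe normal
print(oriented_energy(slice_xt, -np.pi / 4, freq=f).mean())  # matched: large
print(oriented_energy(slice_xt,  np.pi / 4, freq=f).mean())  # orthogonal: ~0
```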

    PROBE-GK: Predictive Robust Estimation using Generalized Kernels

    Many algorithms in computer vision and robotics make strong assumptions about uncertainty, and rely on the validity of these assumptions to produce accurate and consistent state estimates. In practice, dynamic environments may degrade sensor performance in predictable ways that cannot be captured with static uncertainty parameters. In this paper, we employ fast nonparametric Bayesian inference techniques to more accurately model sensor uncertainty. By setting a prior on observation uncertainty, we derive a predictive robust estimator, and show how our model can be learned from sample images, both with and without knowledge of the motion used to generate the data. We validate our approach through Monte Carlo simulations, and report significant improvements in localization accuracy relative to a fixed noise model in several settings, including on synthetic data, the KITTI dataset, and our own experimental platform. Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'16), Stockholm, Sweden, May 16-21, 2016
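    The core intuition can be sketched in a few lines (this is not the paper's PROBE-GK model): if a per-measurement observation variance is predicted instead of assuming a single static value, a weighted least-squares update downweights degraded readings. For brevity, the learned model is stood in for by the true variances (an oracle); in the paper's setting this role is played by the learned nonparametric predictor.

```python
# Sketch: fixed-noise vs. predicted-variance weighting in least squares.
import numpy as np

def weighted_estimate(H, z, predicted_var):
    """Weighted LS: argmin_x sum_i (z_i - H_i x)^2 / var_i."""
    W = 1.0 / np.asarray(predicted_var)
    A = H * W[:, None]                       # row-scaled design matrix
    return np.linalg.solve(H.T @ A, A.T @ z)

rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
H = rng.normal(size=(200, 2))
var = np.where(rng.random(200) < 0.2, 25.0, 0.04)  # 20% degraded readings
z = H @ x_true + rng.normal(size=200) * np.sqrt(var)

x_fixed = np.linalg.lstsq(H, z, rcond=None)[0]       # fixed-noise assumption
x_pred = weighted_estimate(H, z, predicted_var=var)  # oracle variance "prediction"
print(np.linalg.norm(x_fixed - x_true), np.linalg.norm(x_pred - x_true))
```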

    A Robust Approach for Monocular Visual Odometry in Underwater Environments

    This work presents a visual odometric system for camera tracking in underwater seafloor scenarios that are strongly perturbed by sunlight caustics and cloudy water. In particular, we focus on the performance and robustness of the system, which structurally combines a deflickering filter with a visual tracker. Two state-of-the-art trackers are employed for our study, one pixel-oriented and the other feature-based. The inner workings of the trackers were broken down and their suitability for underwater environments analyzed comparatively. To this end, real subaquatic footage from perturbed environments was employed. Sociedad Argentina de Informática e Investigación Operativa
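    The paper's exact filter is not reproduced here, but a common deflickering scheme along these lines normalizes each frame by a temporal median of its neighbours, flattening fast per-pixel caustic flicker while preserving the slowly varying scene; the tracker then runs on the filtered stack. A minimal sketch, with illustrative parameters:

```python
# Sketch: temporal-median deflickering of a grayscale frame stack.
# Assumes caustics flicker much faster than the scene appearance changes.
import numpy as np

def deflicker(frames, window=9, eps=1e-6):
    """frames: (T, H, W) grayscale stack -> caustic-suppressed stack."""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        illum = np.median(frames[lo:hi], axis=0)   # per-pixel illumination estimate
        # Divide out the flicker, then rescale to preserve mean brightness.
        out[i] = frames[i] / (illum + eps) * (illum.mean() + eps)
    return out
```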

    Single and multiple stereo view navigation for planetary rovers

    © Cranfield University. This thesis deals with the challenge of autonomous navigation of the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques alone, as done in the literature, an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features that are robust against illumination changes, together with unique matching and association of features, is a sought-after capability. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with a moment image representation. Whereas the first provides a technique for efficient feature detection, the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. Then, the addition of local feature descriptors guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars robot is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed solutions for motion estimation. The developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.
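    As an illustration of the bucketing strategy mentioned above (the moment-image representation and the descriptors are omitted), the sketch below splits the image into a grid and keeps the strongest Harris responses per cell, so features are spread homogeneously rather than clustering on a few high-texture patches. OpenCV is assumed available; all parameters are illustrative.

```python
# Sketch: bucketed Harris corner selection for homogeneous feature spread.
import numpy as np
import cv2

def bucketed_harris(gray, grid=(8, 8), per_cell=5, block=2, ksize=3, k=0.04):
    """Return (x, y) corners: the strongest Harris responses in each grid cell."""
    response = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    h, w = gray.shape
    keypoints = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            y0, y1 = gy * h // grid[0], (gy + 1) * h // grid[0]
            x0, x1 = gx * w // grid[1], (gx + 1) * w // grid[1]
            cell = response[y0:y1, x0:x1]
            top = np.argsort(cell, axis=None)[::-1][:per_cell]  # strongest in cell
            ys, xs = np.unravel_index(top, cell.shape)
            keypoints.extend(zip(xs + x0, ys + y0))
    return keypoints
```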

    Multiple Integrated Navigation Sensors for Improving Occupancy Grid FastSLAM

    An autonomous vehicle must accurately observe its location within the environment to interact with objects and accomplish its mission. When its environment is unknown, the vehicle must construct a map detailing its surroundings while using that map to maintain an accurate location estimate. Such a vehicle faces the circularly defined Simultaneous Localization and Mapping (SLAM) problem. However difficult, SLAM is a critical component of autonomous vehicle exploration, with applications to search and rescue. To current knowledge, this research presents the first SLAM solution to integrate stereo cameras, inertial measurements, and vehicle odometry into a Multiple Integrated Navigation Sensor (MINS) path. The implementation combines the MINS path with LIDAR to observe and map the environment using the FastSLAM algorithm. In real-world tests, a mobile ground vehicle equipped with these sensors completed a 140-meter loop around indoor hallways. This SLAM solution produces a path that closes the loop and remains within 1 meter of truth, reducing the error by 92% relative to an image-inertial navigation system and by 79% relative to odometry FastSLAM.
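    A structural sketch of how a fused MINS pose increment could drive a FastSLAM particle filter (assumed structure, not the thesis code): particles are propagated around the fused odometry increment, weighted by a LIDAR measurement model against each particle's own occupancy grid, and resampled when the effective sample size drops. `lidar_likelihood` is a placeholder for the real grid-based model.

```python
# Sketch: one FastSLAM step driven by a fused MINS pose increment.
import numpy as np

rng = np.random.default_rng(2)

def propagate(poses, mins_delta, sigma=(0.05, 0.05, 0.01)):
    """Sample each particle's new (x, y, heading) around the fused increment."""
    return poses + mins_delta + rng.normal(scale=sigma, size=poses.shape)

def lidar_likelihood(pose, scan, grid):
    """Placeholder: score scan endpoints against the particle's occupancy grid."""
    return 1.0

def fastslam_step(poses, weights, mins_delta, scan, grids):
    poses = propagate(poses, mins_delta)
    weights = weights * np.array([lidar_likelihood(p, scan, g)
                                  for p, g in zip(poses, grids)])
    weights /= weights.sum()
    n_eff = 1.0 / np.sum(weights ** 2)       # effective sample size
    if n_eff < 0.5 * len(weights):           # resample on particle depletion
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        poses = poses[idx]
        grids = [grids[i] for i in idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return poses, weights, grids
```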