16 research outputs found

    Monocular Parallel Tracking and Mapping with Odometry Fusion for MAV Navigation in Feature-Lacking Environments

    Presented at the IEEE/RSJ International Workshop on Vision-based Closed-Loop Control and Navigation of Micro Helicopters in GPS-denied Environments (IROS 2013), November 7, 2013, Tokyo, Japan.
    Despite recent progress, autonomous navigation on Micro Aerial Vehicles with a single frontal camera is still a challenging problem, especially in feature-lacking environments. On a mobile robot with a frontal camera, monoSLAM can fail when there are not enough visual features in the scene, or when the robot, with rotationally dominant motions, yaws away from a known map toward unknown regions. To overcome such limitations and increase responsiveness, we present a novel parallel tracking and mapping framework that is suitable for robot navigation by fusing visual data with odometry measurements in a principled manner. Our framework can cope with a lack of visual features in the scene, and maintain robustness during pure camera rotations. We demonstrate our results on a dataset captured from the frontal camera of a quadrotor flying in a typical feature-lacking indoor environment.
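
    The abstract does not specify the fusion mechanism, but the general idea of combining an odometry prior with visual tracking can be illustrated with a minimal sketch. All names (visual_tracker, match, refine) and the feature-count threshold below are hypothetical placeholders, not the paper's implementation; poses are 4x4 homogeneous transforms.

        import numpy as np

        MIN_FEATURES = 20  # assumed threshold below which vision alone is unreliable

        def fuse_pose(prev_pose, odom_delta, visual_tracker, image):
            """Predict the pose with odometry, then refine it with vision if possible."""
            predicted = prev_pose @ odom_delta            # dead-reckoned prediction
            matches = visual_tracker.match(image, predicted)
            if len(matches) < MIN_FEATURES:
                return predicted                          # feature-lacking scene: keep the odometry prior
            return visual_tracker.refine(predicted, matches)  # vision corrects odometry drift

    The point of the prior is that when the scene is textureless or the camera undergoes a pure rotation, the estimate degrades gracefully to odometry instead of diverging.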

    Multi-level mapping: Real-time dense monocular SLAM

    We present a method for Simultaneous Localization and Mapping (SLAM) using a monocular camera that is capable of reconstructing dense 3D geometry online without the aid of a graphics processing unit (GPU). Our key contribution is a multi-resolution depth estimation and spatial smoothing process that exploits the correlation between low-texture image regions and simple planar structure to adaptively scale the complexity of the generated keyframe depthmaps to the texture of the input imagery. High-texture image regions are represented at higher resolutions to capture fine detail, while low-texture regions are represented at coarser resolutions for smooth surfaces. The computational savings enabled by this approach allow for significantly increased reconstruction density and quality when compared to the state-of-the-art. The increased depthmap density also improves tracking performance as more constraints can contribute to the pose estimation. A video of experimental results is available at http://groups.csail.mit.edu/rrg/multi_level_mapping.
    Charles Stark Draper Laboratory (Research Fellowship)
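
    A minimal sketch of the texture-adaptive idea (not the authors' algorithm): assign a coarser pyramid level to low-texture patches so smooth, roughly planar regions get fewer depth samples. The gradient thresholds are made-up illustrative values.

        import numpy as np

        def texture_level(patch, thresholds=(5.0, 15.0)):
            """Return a pyramid level (0 = finest) from the mean gradient magnitude of a patch."""
            gy, gx = np.gradient(patch.astype(np.float32))
            grad = np.mean(np.hypot(gx, gy))
            if grad > thresholds[1]:
                return 0   # high texture: estimate depth at full resolution
            if grad > thresholds[0]:
                return 1   # medium texture: half resolution
            return 2       # low texture: quarter resolution, rely on planar smoothing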

    Search and Rescue under the Forest Canopy using Multiple UAVs

    We present a multi-robot system for GPS-denied search and rescue under the forest canopy. Forests are particularly challenging environments for collaborative exploration and mapping, in large part due to the existence of severe perceptual aliasing which hinders reliable loop closure detection for mutual localization and map fusion. Our proposed system features unmanned aerial vehicles (UAVs) that perform onboard sensing, estimation, and planning. When communication is available, each UAV transmits compressed tree-based submaps to a central ground station for collaborative simultaneous localization and mapping (CSLAM). To overcome high measurement noise and perceptual aliasing, we use the local configuration of a group of trees as a distinctive feature for robust loop closure detection. Furthermore, we propose a novel procedure based on cycle consistent multiway matching to recover from incorrect pairwise data associations. The returned global data association is guaranteed to be cycle consistent, and is shown to improve both precision and recall compared to the input pairwise associations. The proposed multi-UAV system is validated both in simulation and during real-world collaborative exploration missions at NASA Langley Research Center.
    Comment: IJRR revision
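
    The cycle-consistency property itself is easy to illustrate, though the paper's recovery procedure is more involved than this sketch. A pairwise association maps landmark indices between two submaps; composing associations around a cycle A->B->C->A should return every index to itself, otherwise at least one pairwise match is wrong. The dict representation below is an assumption for illustration.

        def compose(ab, bc):
            """Compose two association dicts: index in A -> index in C."""
            return {i: bc[j] for i, j in ab.items() if j in bc}

        def cycle_consistent(assoc_ab, assoc_bc, assoc_ca):
            around = compose(compose(assoc_ab, assoc_bc), assoc_ca)
            return all(i == j for i, j in around.items())

        # Example: a swapped match in assoc_bc breaks the cycle.
        ab = {0: 0, 1: 1}
        bc = {0: 1, 1: 0}   # inconsistent pairwise association
        ca = {0: 0, 1: 1}
        assert not cycle_consistent(ab, bc, ca)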

    VoluMon: Weakly-Supervised Volumetric Monocular Estimation with Ellipsoid Representations

    Hierarchical Object Map Estimation for Efficient and Robust Navigation

    Vistas and Wall-Floor Intersection Features: Enabling Autonomous Flight in Man-made Environments

    Presented at the 2nd Workshop on Visual Control of Mobile Robots (ViCoMoR): IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), 7-12 October 2012, Vilamoura, Algarve, Portugal.
    We propose a solution toward the problem of autonomous flight and exploration in man-made indoor environments with a micro aerial vehicle (MAV), using a frontal camera, a downward-facing sonar, and an IMU. We present a general method to detect and steer an MAV toward distant features that we call vistas while building a map of the environment to detect unexplored regions. Our method enables autonomous exploration capabilities while working reliably in textureless indoor environments that are challenging for traditional monocular SLAM approaches. We overcome the difficulties faced by traditional approaches with Wall-Floor Intersection Features, a novel type of low-dimensional landmarks that are specifically designed for man-made environments to capture the geometric structure of the scene. We demonstrate our results on a small, commercially available quadrotor platform.
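
    The abstract does not describe how wall-floor intersections are extracted, so the following is purely a plausible illustration under assumed heuristics (long, roughly horizontal line segments in the lower half of the image), not the paper's feature definition.

        import cv2
        import numpy as np

        def wall_floor_candidates(gray):
            """Return (x1, y1, x2, y2) segments that could lie on a wall-floor boundary."""
            h = gray.shape[0]
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                    minLineLength=60, maxLineGap=10)
            candidates = []
            if lines is None:
                return candidates
            for x1, y1, x2, y2 in lines[:, 0]:
                nearly_horizontal = abs(int(y2) - int(y1)) < 0.2 * abs(int(x2) - int(x1)) + 1
                in_lower_half = min(y1, y2) > h // 2   # floor boundary appears low in the frame
                if nearly_horizontal and in_lower_half:
                    candidates.append((int(x1), int(y1), int(x2), int(y2)))
            return candidates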

    Simultaneous tracking and rendering: Real-time monocular localization for MAVs

    We propose a method of real-time monocular camera-based localization in known environments. With the goal of controlling high-speed micro air vehicles (MAVs), we localize with respect to a mesh map of the environment that can support both pose estimation and trajectory planning. Using only limited hardware that can be carried on a MAV, we achieve accurate pose estimation at rates above 50 Hz, an order of magnitude faster than the current state-of-the-art mesh-based localization algorithms. In our simultaneous tracking and rendering (STAR) approach, we render virtual images of the environment and track camera images with respect to them using a robust semi-direct image alignment technique. Our main contribution is the decoupling of camera tracking from virtual image rendering, which drastically reduces the number of rendered images and enables accurate full camera-rate tracking without needing a high-end GPU. We demonstrate our approach in GPS-denied indoor environments.
    United States. Office of Naval Research. Multidisciplinary University Research Initiative (Grant N00014-10-1-0936)
    Micro Autonomous Consortium Systems and Technology
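
    A sketch of the decoupling idea described above: track every camera frame against a cached virtual render, and only re-render when the pose has moved far enough from the render's viewpoint. The render() and align() callables and both thresholds are hypothetical stand-ins for the mesh renderer and the semi-direct alignment step.

        import numpy as np

        TRANSLATION_THRESH = 0.10   # metres, assumed
        ROTATION_THRESH = 0.17      # radians (~10 degrees), assumed

        def needs_new_render(pose, render_pose):
            """True when the tracked pose has drifted too far from the rendered viewpoint."""
            dt = np.linalg.norm(pose[:3, 3] - render_pose[:3, 3])
            cos_angle = (np.trace(render_pose[:3, :3].T @ pose[:3, :3]) - 1.0) / 2.0
            dr = np.arccos(np.clip(cos_angle, -1.0, 1.0))
            return dt > TRANSLATION_THRESH or dr > ROTATION_THRESH

        def track(images, mesh, pose, render, align):
            render_pose, virtual = pose.copy(), render(mesh, pose)
            for image in images:
                pose = align(image, virtual, render_pose)   # runs at camera rate
                if needs_new_render(pose, render_pose):     # rendering happens only occasionally
                    render_pose, virtual = pose.copy(), render(mesh, pose)
                yield pose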

    Attitude Heading Reference System with Rotation-Aiding Visual Landmarks

    Presented at the 15th International Conference on Information Fusion (FUSION 2012), 9-12 July 2012, Singapore.
    In this paper we present a novel vision-aided attitude heading reference system for micro aerial vehicles (MAVs) and other mobile platforms, which does not rely on known landmark locations or full 3D map estimation as is common in the literature. Inertial sensors, which are commonly found on MAVs, suffer from additive biases and noise, and the yaw error will grow without bounds. The bearing-only measurements, which we call vistas, aid the vehicle’s heading estimate and allow for long-term operation while correcting for sensor drift. Our method is experimentally validated on a commercially available low-cost quadrotor MAV.
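
    A minimal complementary-filter-style sketch of the general principle (a bearing to a distant feature bounds yaw drift); this is not the paper's filter, and the gain and bookkeeping of the vista's world bearing are assumptions.

        import numpy as np

        GAIN = 0.05  # assumed correction gain

        def wrap(a):
            """Wrap an angle to [-pi, pi)."""
            return (a + np.pi) % (2.0 * np.pi) - np.pi

        def update_yaw(yaw, gyro_z, dt, vista_bearing=None, vista_world_bearing=None):
            # Gyro integration alone drifts without bounds due to bias and noise.
            yaw = wrap(yaw + gyro_z * dt)
            if vista_bearing is not None and vista_world_bearing is not None:
                # A distant feature's body-frame bearing is (approximately) its fixed
                # world bearing minus the heading, so the innovation reflects yaw error.
                expected = wrap(vista_world_bearing - yaw)
                innovation = wrap(vista_bearing - expected)
                yaw = wrap(yaw - GAIN * innovation)
            return yaw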

    Deep Inference for Covariance Estimation: Learning Gaussian Noise Models for State Estimation

    We present a novel method of measurement covariance estimation that models measurement uncertainty as a function of the measurement itself. Existing work in predictive sensor modeling outperforms conventional fixed models, but requires domain knowledge of the sensors that heavily influences the accuracy and the computational cost of the models. In this work, we introduce Deep Inference for Covariance Estimation (DICE), which utilizes a deep neural network to predict the covariance of a sensor measurement from raw sensor data. We show that given pairs of raw sensor measurement and ground-truth measurement error, we can learn a representation of the measurement model via supervised regression on the prediction performance of the model, eliminating the need for hand-coded features and parametric forms. Our approach is sensor-agnostic, and we demonstrate improved covariance prediction on both simulated and real data.
    Keywords: robot sensing systems; measurement uncertainty; measurement errors; covariance matrices; predictive models; estimation; neural networks
    United States. National Aeronautics and Space Administration (Award NNX15AQ50A)
    United States. Defense Advanced Research Projects Agency (Contract HR0011-15-C-0110)
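
    A hedged sketch of the general recipe described above: a small network maps a raw measurement to a predicted (diagonal) covariance and is trained on ground-truth errors with a Gaussian negative log-likelihood. The layer sizes, diagonal restriction, and loss form here are assumptions for illustration, not the paper's architecture.

        import torch
        import torch.nn as nn

        class CovarianceNet(nn.Module):
            def __init__(self, meas_dim):
                super().__init__()
                # Predict log-variances so the implied covariance is always positive definite.
                self.net = nn.Sequential(
                    nn.Linear(meas_dim, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, meas_dim),
                )

            def forward(self, measurement):
                return self.net(measurement)  # per-dimension log-variance

        def nll_loss(log_var, error):
            """Gaussian negative log-likelihood of the observed measurement error."""
            return 0.5 * (log_var + error ** 2 / torch.exp(log_var)).mean()

        # One training step on synthetic data: `meas` are raw measurements,
        # `err` the measurement-minus-ground-truth errors.
        model = CovarianceNet(meas_dim=3)
        optim = torch.optim.Adam(model.parameters(), lr=1e-3)
        meas, err = torch.randn(32, 3), 0.1 * torch.randn(32, 3)
        loss = nll_loss(model(meas), err)
        optim.zero_grad()
        loss.backward()
        optim.step()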