41 research outputs found

    Unified Inverse Depth Parametrization for Monocular SLAM

    RT-SLAM: A Generic and Real-Time Visual SLAM Implementation

    This article presents a new open-source C++ implementation to solve the SLAM problem, focused on genericity, versatility and high execution speed. It is based on an original object-oriented architecture that allows the combination of numerous sensor and landmark types and the integration of various approaches proposed in the literature. The system's capabilities are illustrated with an inertial/vision SLAM approach, for which several improvements over existing methods have been introduced and which copes with highly dynamic motions. Results with a hand-held camera are presented.
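    The object-oriented genericity described above can be pictured with a small sketch. The class and method names below are hypothetical and are not RT-SLAM's actual API; they only illustrate how a filter core can stay agnostic to the concrete sensor and landmark types plugged into it.

```cpp
// Illustrative sketch only: hypothetical names, NOT RT-SLAM's real interfaces.
#include <memory>
#include <utility>
#include <vector>

struct Observation { /* raw measurement and its covariance would live here */ };

// Any sensor (camera, IMU, ...) exposes the same minimal interface.
class Sensor {
public:
    virtual ~Sensor() = default;
    virtual std::vector<Observation> acquire(double timestamp) = 0;
};

// Any landmark parametrization (Euclidean point, inverse depth, ...) does too.
class Landmark {
public:
    virtual ~Landmark() = default;
    virtual void predictObservation(/* filter state */) = 0;
    virtual void correctFrom(const Observation& obs) = 0;
};

// The filter loop only ever talks to the abstract interfaces, so new sensor
// or landmark types plug in without touching the estimation core.
class SlamCore {
public:
    void addSensor(std::unique_ptr<Sensor> s)     { sensors_.push_back(std::move(s)); }
    void addLandmark(std::unique_ptr<Landmark> l) { landmarks_.push_back(std::move(l)); }
private:
    std::vector<std::unique_ptr<Sensor>> sensors_;
    std::vector<std::unique_ptr<Landmark>> landmarks_;
};
```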

    Delayed inverse depth monocular SLAM

    The 6-DOF monocular camera case is possibly the hardest variant of the simultaneous localization and mapping problem. In recent years several advances have appeared in this area; however, applying these techniques to real-world problems has so far proven difficult. Recently, the unified inverse depth parametrization has shown itself to be a good option for this challenging problem, within an EKF scheme for estimating the stochastic map and the camera pose. In this paper a new delayed initialization scheme is proposed for adding new features to the stochastic map. The results show that delayed initialization can improve some aspects without losing the performance and unified character of the original method, when initial reference points are used to fix a metric scale in the map.
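    For reference, the unified inverse depth parametrization mentioned above stores each feature as a 6-vector holding the camera optical centre at first observation, the azimuth-elevation direction of the observation ray, and the inverse depth along that ray; the notation below is the standard one from that line of work, not this paper's.

```latex
% Feature state: y_i = (x_i, y_i, z_i, \theta_i, \phi_i, \rho_i)^T,  \rho_i = 1/d_i
\mathbf{p}_i =
\begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix}
+ \frac{1}{\rho_i}\,\mathbf{m}(\theta_i, \phi_i),
\qquad
\mathbf{m}(\theta, \phi) =
\begin{pmatrix}
\cos\phi\,\sin\theta \\
-\sin\phi \\
\cos\phi\,\cos\theta
\end{pmatrix}
```

    The measurement equation is close to linear in the inverse depth even at low parallax, which is what makes initialization of new features (delayed or undelayed) tractable inside an EKF.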

    Visual 3-D SLAM from UAVs

    The aim of the paper is to present, test and discuss the application of Visual SLAM techniques to images taken from Unmanned Aerial Vehicles (UAVs) flying outdoors in partially structured environments. Every stage of the process is discussed in order to obtain more accurate localization and mapping from UAV flights. Firstly, the issues related to the visual features of objects in the scene, their distance to the UAV, and the image acquisition system and its calibration are evaluated with a view to improving the whole process. Other important issues considered relate to the image processing techniques, such as interest point detection, the matching procedure and the scaling factor. The whole system has been tested using the COLIBRI mini UAV in partially structured environments. The localization results, evaluated against the GPS information of the flights, show that Visual SLAM delivers reliable localization and mapping, making it suitable for some outdoor applications with flying UAVs.
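    A minimal sketch of the interest point detection and matching stage referred to above, written with OpenCV's ORB detector as a stand-in; the paper does not prescribe this particular detector, and the file names are placeholders.

```cpp
// Detect interest points in two frames and match their descriptors.
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Placeholder file names for two consecutive frames.
    cv::Mat img1 = cv::imread("frame_t0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame_t1.png", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, des1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, des2);

    // Cross-checked brute-force matching keeps only mutually best matches,
    // a simple way to reject many outliers before pose estimation.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(des1, des2, matches);
    return 0;
}
```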

    Closing loops with a virtual sensor based on monocular SLAM

    Monocular simultaneous localization and mapping (SLAM) techniques implicitly estimate camera ego-motion while incrementally building a map of the environment. In monocular SLAM, when the number of features in the system state increases, maintaining real-time operation becomes very difficult. However, it is easy to remove old features from the state in order to maintain a stable computational cost per frame. If features are removed from the map, then previously mapped areas cannot be recognized to minimize the robot's drift; however, in the context of a real-time virtual sensor that emulates typical sensors, such as a laser for range measurements and encoders for dead reckoning, this limitation should not be a problem. In this paper, a novel framework is proposed to build a consistent map of the environment in real time using the virtual-sensor estimates. At the same time, the proposed approach allows the drift of the camera-robot position to be minimized. Experiments with real data are presented to show the performance of this framework.
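    Removing an old feature from an EKF state, as described above, amounts to deleting its entries from the state mean and the corresponding rows and columns of the covariance, which is what keeps the per-frame cost bounded. The sketch below, using Eigen, is illustrative only and is not the paper's code.

```cpp
// Marginalize out 'dim' consecutive state entries starting at index 'idx'.
#include <Eigen/Dense>

void removeFeature(Eigen::VectorXd& x, Eigen::MatrixXd& P, int idx, int dim) {
    const int n = static_cast<int>(x.size());
    const int tail = n - idx - dim;

    // Shift the tail of the mean up over the removed block, then shrink.
    x.segment(idx, tail) = x.segment(idx + dim, tail).eval();
    x.conservativeResize(n - dim);

    // Shift covariance rows, then columns, then shrink to (n-dim) x (n-dim).
    P.block(idx, 0, tail, n) = P.block(idx + dim, 0, tail, n).eval();
    P.block(0, idx, n - dim, tail) = P.block(0, idx + dim, n - dim, tail).eval();
    P.conservativeResize(n - dim, n - dim);
}
```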

    Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the approach. The numerical simulation results show that the proposed system is able to provide good position and orientation estimates of the aerial vehicles flying in formation.
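    In generic form, the extra measurement referred to above can be written as a range between the world-frame positions of two UAVs; the paper's exact measurement model and Lie-derivative observability analysis are its own, so the expression below is only schematic.

```latex
% Schematic relative-distance measurement between UAV i and UAV j
h_{ij}(\mathbf{x}) = \lVert \mathbf{p}_i - \mathbf{p}_j \rVert
= \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}
```

    Such measurements couple the state estimates of the vehicles, which is the sense in which the abstract reports improved observability over a single-vehicle monocular configuration.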

    Inverse Depth to Depth Conversion for Monocular SLAM

    Vision-based SLAM system for MAVs in GPS-denied environments

    Using a camera, a micro aerial vehicle (MAV) can perform vision-based navigation in periods or circumstances when GPS is unavailable or only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods represent an excellent alternative, because several limitations regarding platform design, mobility and payload capacity impose considerable restrictions on the computational and sensing resources available to the MAV. However, the use of monocular vision introduces some technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder and a barometer. The proposed method is based on the theoretical findings obtained from an observability analysis. Experimental results with real data confirm those theoretical findings and show that the proposed method is capable of providing good results with low-cost hardware.
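    As a rough illustration of how a range sensor can resolve the metric scale that a monocular camera alone cannot, consider the simplifying assumption of a downward-facing camera over locally flat ground; the symbols below are illustrative and are not the paper's notation or method.

```latex
% s: metric scale factor applied to the up-to-scale monocular estimates
% d_sonar: ultrasonic range to the ground
% \hat{d}_mono: the same distance as estimated (up to scale) by the monocular system
s = \frac{d_{\mathrm{sonar}}}{\hat{d}_{\mathrm{mono}}},
\qquad
\mathbf{p}^{\mathrm{metric}} = s\,\hat{\mathbf{p}},
\qquad
\mathbf{t}^{\mathrm{metric}} = s\,\hat{\mathbf{t}}
```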