    Towards bio-inspired unsupervised representation learning for indoor aerial navigation

    Aerial navigation in GPS-denied indoor environments is still an open challenge. Drones can perceive the environment from a richer set of viewpoints, while having more stringent compute and energy constraints than other autonomous platforms. To tackle this problem, this research presents a biologically inspired deep-learning algorithm for simultaneous localization and mapping (SLAM) and its application in a drone navigation system. We propose an unsupervised representation learning method that yields low-dimensional latent state descriptors, mitigates sensitivity to perceptual aliasing, and runs on power-efficient embedded hardware. The algorithm is evaluated on a dataset collected in an indoor warehouse environment, and initial results show its feasibility for robust indoor aerial navigation.
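
    The abstract does not spell out the network, but the core idea of learning compact place descriptors without labels can be sketched as a small convolutional autoencoder whose bottleneck serves as the latent state descriptor. A minimal sketch in PyTorch; the 64x64 grayscale input, the 32-dimensional latent and the layer shapes are illustrative assumptions, not the paper's design:

```python
# Minimal sketch of unsupervised latent-descriptor learning (not the paper's
# exact architecture): a convolutional autoencoder whose bottleneck serves
# as a low-dimensional place descriptor for localization.
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed descriptor size; the paper's value may differ

class PlaceAutoencoder(nn.Module):
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(            # 1x64x64 grayscale input
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16x32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64x8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),   # compact latent state
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = PlaceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 1, 64, 64)             # stand-in for camera frames
recon, z = model(frames)
loss = nn.functional.mse_loss(recon, frames)  # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()
print(z.shape)  # torch.Size([8, 32]) -- descriptors for place matching
```

    Descriptors from nearby viewpoints should land close in latent space, so place recognition reduces to nearest-neighbour search over stored descriptors.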

    Simulation Framework for Mobile Robots in Planetary-Like Environments

    In this paper we present a simulation framework for evaluating the metrological performance of navigation and localization on a robotic platform. The simulator, based on ROS (Robot Operating System) and Gazebo, targets a planetary-like research vehicle and allows various perception and navigation approaches to be tested under specific environmental conditions. The possibility of simulating arbitrary sensor setups comprising cameras, LiDARs (Light Detection and Ranging) and IMUs makes Gazebo an excellent resource for rapid prototyping. In this work we evaluate a variety of open-source visual and LiDAR SLAM (Simultaneous Localization and Mapping) algorithms in a simulated Martian environment. Datasets are captured by driving the rover and recording sensor outputs as well as the ground truth for a precise performance evaluation. Comment: to be presented at the 7th IEEE International Workshop on Metrology for Aerospace (MetroAerospace).
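
    One concrete use of the recorded ground truth is scoring each SLAM algorithm by absolute trajectory error (ATE). A minimal sketch, assuming time-synchronized position pairs and applying rigid Horn/Umeyama alignment before the error is computed; the trajectories below are synthetic stand-ins for the recorded data:

```python
# Sketch of one common metric such a framework can report: absolute
# trajectory error (ATE) between an estimated SLAM trajectory and the
# Gazebo ground truth, after rigid alignment (Horn/Umeyama method).
import numpy as np

def align_and_ate_rmse(est, gt):
    """est, gt: (N, 3) arrays of time-matched xyz positions."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g                 # center both trajectories
    U, _, Vt = np.linalg.svd(E.T @ G)            # cross-covariance SVD
    S = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = Vt.T @ S @ U.T                           # rotation est -> gt frame
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

gt = np.cumsum(np.random.randn(500, 3) * 0.1, axis=0)  # fake rover path
est = gt + np.random.randn(500, 3) * 0.05              # noisy estimate
print(f"ATE RMSE: {align_and_ate_rmse(est, gt):.3f} m")
```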

    LiDAR-Based 3D SLAM for Indoor Mapping

    Aiming to develop methods for real-time 3D scanning of building interiors, this work evaluates the performance of state-of-the-art LiDAR-based approaches for 3D simultaneous localisation and mapping (SLAM) in indoor environments. A simulation framework using ROS and Gazebo has been implemented to compare different methods based on LiDAR odometry and mapping (LOAM). The featureless environments typically found in interiors of commercial and industrial buildings pose significant challenges for LiDAR-based SLAM frameworks, resulting in drift or breakdown of the estimation process. The results of this paper provide performance criteria for indoor SLAM applications, comparing different room topologies and levels of clutter. The modular nature of the simulation environment provides a framework for future SLAM development and benchmarking specific to indoor environments.
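
    The sensitivity to featureless interiors can be made concrete with the per-point smoothness score that LOAM-family methods use to pick edge and planar features: on a bare wall nearly every point scores as planar, so few edge features survive and scan registration degenerates. A rough sketch on a synthetic scan ring, with a simplified neighborhood; see the original LOAM paper for the exact definition:

```python
# Sketch of the LOAM-style "smoothness" score used to select edge and
# planar features from each scan ring. In featureless rooms almost every
# point scores low, starving the registration of edge constraints.
import numpy as np

def smoothness(ring, k=5):
    """ring: (N, 3) ordered points of one LiDAR scan ring.
    Returns a per-point curvature score in the spirit of LOAM."""
    n = len(ring)
    c = np.zeros(n)
    for i in range(k, n - k):
        neighbors = np.vstack([ring[i - k:i], ring[i + 1:i + k + 1]])
        diff = (ring[i] - neighbors).sum(axis=0)   # sum of differences
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(ring[i]))
    return c

# A bare wall is a straight segment in one ring: rays at angles theta
# hitting a plane at x = 5 m land at y = 5 * tan(theta).
theta = np.linspace(-0.5, 0.5, 200)
flat = np.stack([np.full_like(theta, 5.0), 5.0 * np.tan(theta),
                 np.zeros_like(theta)], axis=1)
c = smoothness(flat)
print("edge candidates on a bare wall:", np.sum(c > 0.1))  # ~0 -> starved
```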

    Multi-Session Visual SLAM for Illumination Invariant Localization in Indoor Environments

    For robots navigating using only a camera, illumination changes in indoor environments can cause localization failures during autonomous navigation. In this paper, we present a multi-session visual SLAM approach that creates a map made of multiple variations of the same locations captured under different illumination conditions. The multi-session map can then be used at any hour of the day for improved localization capability. The approach presented is independent of the visual features used, which is demonstrated by comparing localization performance between multi-session maps created using the RTAB-Map library with SURF, SIFT, BRIEF, FREAK, BRISK, KAZE, DAISY and SuperPoint visual features. The approach is tested on six mapping and six localization sessions recorded at 30-minute intervals during sunset using a Google Tango phone in a real apartment.
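
    The feature-independence claim can be probed outside RTAB-Map by counting descriptor matches that survive an illumination change. A minimal OpenCV sketch using detectors bundled with opencv-python (SIFT, BRISK, KAZE; SURF, FREAK and DAISY require opencv-contrib, and SuperPoint a separately trained network); the image paths are placeholders, not the paper's dataset:

```python
# Sketch of a feature-wise robustness check: detect and describe the same
# scene under two lighting conditions, then count matches that pass
# Lowe's ratio test for each detector/descriptor pair.
import cv2

detectors = {
    "SIFT": cv2.SIFT_create(),
    "BRISK": cv2.BRISK_create(),
    "KAZE": cv2.KAZE_create(),
}

# Placeholder file names: two views of one place, bright vs. dim.
img_day = cv2.imread("kitchen_bright.png", cv2.IMREAD_GRAYSCALE)
img_dusk = cv2.imread("kitchen_dark.png", cv2.IMREAD_GRAYSCALE)

for name, det in detectors.items():
    kp1, des1 = det.detectAndCompute(img_day, None)
    kp2, des2 = det.detectAndCompute(img_dusk, None)
    # Binary descriptors (BRISK) need Hamming distance; float ones use L2.
    norm = cv2.NORM_HAMMING if des1.dtype == "uint8" else cv2.NORM_L2
    matches = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    print(f"{name}: {len(good)} matches surviving the illumination change")
```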