
    Sparse 3D Point-cloud Map Upsampling and Noise Removal as a vSLAM Post-processing Step: Experimental Evaluation

    Full text link
    Monocular vision-based simultaneous localization and mapping (vSLAM) is one of the most challenging problems in mobile robotics and computer vision. In this work we study post-processing techniques applied to sparse 3D point-cloud maps obtained by feature-based vSLAM algorithms. Map post-processing is split into two major steps: 1) noise and outlier removal and 2) upsampling. We evaluate different combinations of known algorithms for outlier removal and upsampling on datasets of real indoor and outdoor environments and identify the most promising combination. We further use it to convert a point-cloud map, obtained by a real UAV performing an indoor flight, into a 3D voxel grid (octomap) potentially suitable for path planning. Comment: 10 pages, 4 figures, camera-ready version of the paper for "The 3rd International Conference on Interactive Collaborative Robotics (ICR 2018)"
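
    The post-processing pipeline described above (outlier removal, upsampling, then conversion to a voxel grid) can be sketched with off-the-shelf tools. Below is a minimal illustration using the Open3D library; the file names, neighbour counts, voxel size, and the choice of Poisson-based densification are placeholder assumptions, not the specific algorithm combination evaluated in the paper.

```python
# Minimal sketch of a vSLAM map post-processing pipeline with Open3D.
# File names and parameter values are illustrative assumptions only.
import open3d as o3d

# 1) Load the sparse point-cloud map produced by a feature-based vSLAM system.
pcd = o3d.io.read_point_cloud("sparse_vslam_map.ply")

# 2) Noise/outlier removal: drop points far from their local neighbourhood.
pcd, kept = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# 3) Upsampling (one possible route): reconstruct a surface and resample it densely.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
dense = mesh.sample_points_uniformly(number_of_points=200_000)

# 4) Convert the densified cloud to a voxel grid (octomap-like occupancy structure).
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(dense, voxel_size=0.1)
o3d.io.write_voxel_grid("map_voxels.ply", voxel_grid)
```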

    Vision-based localization methods under GPS-denied conditions

    Full text link
    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve Absolute Visual Localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies. Comment: 32 pages, 15 figures
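
    As a concrete illustration of the optical-flow-based VO front end discussed for relative vision localization, the hedged sketch below tracks sparse features between two consecutive frames with pyramidal Lucas-Kanade flow and recovers the relative camera pose from the essential matrix. The frame paths and intrinsic matrix are placeholder assumptions, not values from the paper.

```python
# Sketch of a feature-extraction-based VO step using sparse optical flow (OpenCV).
# Frame paths and camera intrinsics below are illustrative assumptions.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect corners in the previous frame, then track them with pyramidal LK flow.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=8)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

# Estimate the essential matrix and decompose it into relative rotation/translation.
E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
print("relative rotation:\n", R, "\nunit translation:\n", t.ravel())
```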

    Real-Time Implementation of Vision-Aided Monocular Navigation for Small Fixed-Wing Unmanned Aerial Systems

    Get PDF
    The goal of this project was to develop and implement algorithms to demonstrate real-time positioning of a UAV using a monocular camera combined with previously collected orthorectified imagery. Unlike previous tests, this project did not utilize a full inertial navigation system (INS) for attitude, but instead had to rely on the attitude obtained by inexpensive commercial off-the-shelf (COTS) autopilots. The system consisted primarily of COTS components and open-source software, and was flown over Camp Atterbury, IN for a sequence of flight tests in Fall 2015. The system obtained valid solutions over much of the flight path, identifying features in the flight image, matching those features with a database of features, and then solving both a 6DOF solution and an attitude-aided 3DOF solution. The tests demonstrated that such attitude aiding is beneficial, since the horizontal DRMS of the 6DOF solution was 59 m, whereas the 3DOF solution DRMS was 15 m. Post-processing was done to improve the algorithm and correct for system errors, obtaining a 3DOF solution DRMS of 8.22 m. Overall, this project increased our understanding of the capabilities and limitations of real-time vision-aided navigation, and demonstrated that such navigation is possible on a relatively small platform with limited computational power.
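
    To make the 6DOF versus attitude-aided 3DOF distinction concrete, the hedged sketch below solves the full pose with PnP from 2D-3D feature matches and then re-solves only the position while holding a fixed attitude. The intrinsics, landmark coordinates, and pixel measurements are made-up placeholders; the project's actual solver details are not reproduced here.

```python
# Sketch: 6DOF pose from matched features vs. attitude-aided 3DOF position.
# Intrinsics, landmark coordinates, and pixel observations are placeholders.
import cv2
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
world_pts = np.random.rand(30, 3) * 100.0          # georeferenced feature positions
image_pts = np.random.rand(30, 2) * [1280, 720]    # matched pixel observations

# 6DOF: estimate both attitude and position from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# Attitude-aided 3DOF: hold the rotation fixed (here taken from the PnP result as
# a stand-in for the autopilot attitude) and estimate only the translation.
R_fixed, _ = cv2.Rodrigues(rvec)

def reprojection_residual(t):
    cam = R_fixed @ world_pts.T + t.reshape(3, 1)            # world -> camera
    proj = (K @ cam).T
    proj = proj[:, :2] / proj[:, 2:3]                        # perspective divide
    return (proj - image_pts).ravel()

t3dof = least_squares(reprojection_residual, x0=tvec.ravel()).x
print("6DOF translation:", tvec.ravel(), "\n3DOF translation:", t3dof)
```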

    DEEP LEARNING TO SUPPORT 3D MAPPING CAPABILITIES OF A PORTABLE VSLAM-BASED SYSTEM

    Get PDF
    The use of vision-based localization and mapping techniques, such as visual odometry and SLAM, has become increasingly prevalent in the field of Geomatics, particularly in mobile mapping systems. These methods provide real-time estimation of the 3D scene as well as the sensor's position and orientation using images or LiDAR sensors mounted on a moving platform. While visual odometry primarily focuses on the camera's position, SLAM also creates a 3D reconstruction of the environment. Conventional (geometric) and learning-based approaches are used in visual SLAM, with deep learning networks being integrated to perform semantic segmentation, object detection and depth prediction. The goal of this work is to report ongoing developments to extend the GuPho stereo-vision SLAM-based system with deep learning networks for tasks such as crack detection, obstacle detection and depth estimation. Our findings show how a neural network can be coupled to SLAM sequences in order to support 3D mapping applications with semantic information.
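
    One way to couple a deep network to SLAM output, as described above, is to run semantic segmentation on each keyframe and attach class labels to the tracked keypoints that feed the 3D map. The sketch below uses a pretrained torchvision DeepLabV3 model purely as an illustration; it is not the GuPho system's actual network, and the keyframe path and keypoint coordinates are assumed placeholders.

```python
# Sketch: attaching per-pixel semantic labels to SLAM keyframe keypoints.
# The model choice, image path, and keypoints are illustrative assumptions.
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import (deeplabv3_resnet50,
                                              DeepLabV3_ResNet50_Weights)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("keyframe_0042.png")              # assumed keyframe from the SLAM run
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"]
labels = logits.argmax(dim=1)[0]                   # (H, W) class id per pixel

# Hypothetical keypoints tracked by the SLAM front end, given as (u, v) pixels
# in the resized network input; each map point inherits the label under it.
keypoints = [(120, 200), (310, 150), (400, 330)]
h, w = labels.shape
semantic_tags = {kp: int(labels[min(kp[1], h - 1), min(kp[0], w - 1)])
                 for kp in keypoints}
print(semantic_tags)
```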

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    Get PDF
    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, the increase of computational capabilities, and the evolution of computer vision techniques, has allowed an important advance in the development of UAV technologies and applications. In particular, computer vision technologies integrated in UAVs enable cutting-edge solutions to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These technologies have produced a wide spectrum of applications for UAVs beyond the classic military and defense purposes. Unmanned Aerial Vehicles and Computer Vision are common topics in expert systems, so thanks to recent advances in perception technologies, modern intelligent applications are being developed to enhance autonomous UAV positioning or to automatically avoid aerial collisions, among others. The presented survey is therefore based on artificial perception applications that represent important advances of recent years in the expert systems field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, addressing fundamental technical limitations such as visual odometry, obstacle detection, mapping and localization, et cetera. They are analyzed based on their capabilities and potential utility, and the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Robust airborne 3D visual simultaneous localisation and mapping

    Get PDF
    The aim of this thesis is to present robust solutions to technical problems of airborne three-dimensional (3D) Visual Simultaneous Localisation And Mapping (VSLAM). These solutions are developed based on a stereovision system available onboard Unmanned Aerial Vehicles (UAVs). The proposed airborne VSLAM enables unmanned aerial vehicles to construct a reliable map of an unknown environment and localise themselves within this map without any user intervention. Current research challenges related to airborne VSLAM include visual processing through invariant feature detectors/descriptors, efficient mapping of large environments, and cooperative navigation and mapping of complex environments. Most of these challenges require scalable representations, robust data association algorithms, consistent estimation techniques, and fusion of different sensor modalities. To deal with these challenges, seven chapters are presented in this thesis as follows: Chapter 1 introduces UAVs, definitions, current challenges and different applications. Next, in Chapter 2 we present the main sensors used by UAVs during navigation. Chapter 3 presents an important task for autonomous navigation, namely UAV localisation. In this chapter, robust and optimal approaches for data fusion are proposed with performance analysis. After that, UAV map building is presented in Chapter 4. The latter is divided into three parts. In the first part, a new imaging alternative technique is proposed to extract and match a suitable number of invariant features. The second part presents an image mosaicing algorithm followed by a super-resolution approach. In the third part, we propose a new feature detector and descriptor that is fast, robust and detects a suitable number of features to solve the VSLAM problem. A complete airborne VSLAM solution based on a stereovision system is presented in Chapter 5. Robust data association filters with consistency and observability analysis are presented in this chapter as well. The proposed algorithm is validated with loop closing detection and map management using experimental data. The airborne VSLAM is then extended to the multiple-UAV case in Chapter 6. This chapter presents two architectures of cooperation: a centralised one and a decentralised one. The former provides optimal precision in terms of UAV positions and the constructed map, while the latter is more suitable for real-time and embedded system applications. Finally, conclusions and future work are presented in Chapter 7.
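
    Since the localisation chapter above centres on filter-based data fusion, a minimal linear Kalman filter predict/update cycle is sketched below as a generic point of reference. The state layout, noise covariances, and measurement values are assumptions for illustration only, not the thesis's actual filters.

```python
# Minimal Kalman filter predict/update sketch for fusing a position measurement
# into a constant-velocity state. All matrices and values are assumed placeholders.
import numpy as np

dt = 0.1
F = np.block([[np.eye(3), dt * np.eye(3)],       # constant-velocity motion model
              [np.zeros((3, 3)), np.eye(3)]])
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we observe position only
Q = 0.01 * np.eye(6)                             # process noise (assumed)
R = 0.25 * np.eye(3)                             # measurement noise (assumed)

x = np.zeros(6)                                  # [px, py, pz, vx, vy, vz]
P = np.eye(6)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(6) - K @ H) @ P

x, P = predict(x, P)
x, P = update(x, P, z=np.array([1.0, 0.5, -0.2]))
print("fused state:", x)
```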

    From Augmentation to Inpainting: Improving Visual SLAM with Signal Enhancement Techniques and GAN-based Image Inpainting

    Get PDF
    This paper undertakes a comprehensive investigation that goes beyond the conventional examination of signal enhancement techniques and their effects on visual Simultaneous Localization and Mapping (vSLAM) performance across diverse scenarios. The study extends its focus to the seamless integration of signal enhancement techniques, aiming to achieve a substantial improvement in overall vSLAM performance. The research not only assesses existing methods but also contributes to the field by proposing denoising techniques that can play a pivotal role in refining the accuracy and reliability of vSLAM systems. This multifaceted approach encompasses a thorough exploration of the relationships between signal enhancement, denoising strategies, their cumulative impact on the performance of vSLAM in real-world applications, and the use of Generative Adversarial Networks (GANs) for image inpainting. The GANs fill in the regions left empty after object detection and removal, presenting a state-of-the-art approach that significantly enhances the overall accuracy and execution speed of vSLAM. This paper aims to contribute to the advancement of vSLAM algorithms in real-world scenarios, demonstrating improved accuracy, robustness, and computational efficiency through the combination of signal enhancement and advanced denoising techniques.
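
    The object-removal-plus-inpainting step described above can be prototyped as follows: detected dynamic-object regions are masked out of the frame and the hole is filled before the frame is handed to the vSLAM front end. The sketch below uses OpenCV's classical Telea inpainting as a stand-in for the GAN generator discussed in the paper; the frame path and the detection rectangle are assumed placeholders.

```python
# Sketch: mask out a detected dynamic object and fill the hole before vSLAM.
# cv2.inpaint (Telea) stands in for the paper's GAN-based generator; the frame
# path and the detection rectangle are illustrative assumptions.
import cv2
import numpy as np

frame = cv2.imread("frame_0100.png")

# Hypothetical bounding box of a detected moving object (e.g. from a detector).
x, y, w, h = 200, 150, 120, 240
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[y:y + h, x:x + w] = 255

# Fill the removed region; a trained GAN inpainting model would replace this call.
inpainted = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("frame_0100_inpainted.png", inpainted)
```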

    Vision-Based Monocular SLAM in Micro Aerial Vehicle

    Get PDF
    Micro Aerial Vehicles (MAVs) are popular for their efficiency, agility, and light weight. They can navigate in dynamic environments that cannot be accessed by humans or traditional aircraft. These MAVs typically rely on GPS, which makes operation difficult in GPS-denied areas where signals are obstructed by buildings and other obstacles. Simultaneous Localization and Mapping (SLAM) in an unknown environment can solve the aforementioned problems faced by flying robots. ORB-SLAM, a visual solution built on rotation- and scale-invariant Oriented FAST and Rotated BRIEF (ORB) features, is one of the best solutions for localization and mapping using monocular vision. In this paper, ORB-SLAM3 has been used to carry out research on localizing the Tello micro aerial vehicle and mapping an unknown environment. The effectiveness of ORB-SLAM3 was tested in a variety of indoor environments. An integrated adaptive controller was used for autonomous flight, drawing on the 3D map produced by ORB-SLAM3 and our proposed novel technique for robust initialization of the SLAM system during flight. The results show that ORB-SLAM3 can provide accurate localization and mapping for flying robots, even in challenging scenarios with fast motion, large camera movements, and dynamic environments. Furthermore, our results show that the proposed system is capable of navigating and mapping challenging indoor situations.
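
    The ORB features underpinning ORB-SLAM3 (Oriented FAST keypoints with Rotated BRIEF descriptors) can be illustrated with OpenCV's own implementation, as in the hedged sketch below. The two image paths are placeholders, and this shows only the feature-matching front end, not ORB-SLAM3 itself.

```python
# Sketch: ORB keypoint detection and matching between two frames (OpenCV).
# Image paths are placeholders; this illustrates the feature front end only.
import cv2

img1 = cv2.imread("tello_frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("tello_frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)             # Oriented FAST + Rotated BRIEF
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching with cross-checking for robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None,
                      flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("orb_matches.png", vis)
```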