
    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve Absolute Vision Localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies.
    Comment: 32 pages, 15 figures
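    As a concrete illustration of the optical-flow-driven, feature-based VO pipelines the review covers, here is a minimal Python/OpenCV sketch that tracks corners between two frames with pyramidal Lucas-Kanade flow and recovers the relative camera pose from the essential matrix. The intrinsic matrix K is a hypothetical calibration, and monocular translation is recovered only up to scale.

        # Minimal feature-based VO step: LK optical flow + essential matrix.
        import cv2
        import numpy as np

        K = np.array([[700.0, 0.0, 320.0],    # hypothetical camera intrinsics
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def vo_step(prev_gray, curr_gray):
            # Detect corners in the previous frame, then track them with LK flow.
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1000,
                                         qualityLevel=0.01, minDistance=8)
            p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
            good0 = p0[status.ravel() == 1]
            good1 = p1[status.ravel() == 1]
            # Relative pose from the essential matrix, with RANSAC rejecting
            # outlier tracks; translation t has unit norm (scale is unknown).
            E, inliers = cv2.findEssentialMat(good1, good0, K,
                                              method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=inliers)
            return R, t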

    Integrated architecture for vision-based indoor localization and mapping of a quadrotor micro-air vehicle

    Current autopilot systems for quadrotors are being developed to perform navigation in outdoor areas, where the GPS signal can be used to define navigation waypoints and flight modes such as position hold, altitude hold, and return-to-home. However, autonomous navigation in enclosed spaces, without a global positioning system available inside a room, remains a challenging problem with no closed solution. Most existing solutions are based on expensive sensors, such as LIDAR, or on external positioning systems (e.g. Vicon, Optitrack). Some of these solutions offload the processing of sensor data and of the most demanding algorithms to computing systems external to the vehicle, which removes the full autonomy intended for a vehicle of this kind. This thesis therefore aims to prepare a small unmanned aircraft system, namely a quadrotor, that integrates the modules required for simultaneous indoor localization and mapping where the GPS signal is denied, using an RGB-D camera together with the quadrotor's other internal and external sensors, all integrated into a system that computes vision-based positioning and is intended, in the near future, to perform motion planning for navigation. The result of this work is an integrated architecture for evaluating localization, mapping, and navigation modules, based on open, inexpensive hardware and state-of-the-art open-source frameworks. It was also possible to partially test some localization modules, under certain test conditions and algorithm parameters. The mapping capability of the framework was likewise tested and approved. The resulting framework is ready for navigation, needing only some adjustments and further testing.
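    As a hedged illustration of the vision-based positioning module such an architecture integrates, the sketch below estimates frame-to-frame camera motion from two RGB-D frames with the open-source Open3D library. The file names and the PrimeSense default intrinsics are placeholder assumptions, not the configuration used in the thesis.

        # Frame-to-frame RGB-D odometry with Open3D (one possible framework).
        import numpy as np
        import open3d as o3d

        intrinsic = o3d.camera.PinholeCameraIntrinsic(
            o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

        def load_rgbd(color_path, depth_path):
            color = o3d.io.read_image(color_path)
            depth = o3d.io.read_image(depth_path)
            return o3d.geometry.RGBDImage.create_from_color_and_depth(
                color, depth, convert_rgb_to_intensity=True)

        prev_frame = load_rgbd("color_000.png", "depth_000.png")  # hypothetical files
        curr_frame = load_rgbd("color_001.png", "depth_001.png")

        # Joint photometric + geometric objective; returns a 4x4 transform
        # taking the current camera frame into the previous one.
        ok, T_curr_to_prev, info = o3d.pipelines.odometry.compute_rgbd_odometry(
            curr_frame, prev_frame, intrinsic, np.identity(4),
            o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm(),
            o3d.pipelines.odometry.OdometryOption())
        if ok:
            print("relative pose:\n", T_curr_to_prev)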

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection, and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular choice for localization on many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among the alternative sensing mechanisms, cameras stand out as an attractive choice of onboard sensor due to the richness of the information they capture, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to a task rather than a single vehicle. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution, and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path planning frameworks. Although these were created as two separate steps, the ideal application would contain both of them as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, this dissertation first investigates methods to perform feature-based localization, with the possibility of fusing two types of localization data: one computed onboard each MAV, and the other derived from relative measurements between the vehicles. Feature-based methods were preferred over direct methods because of the relative ease with which tangible data packets can be transferred between vehicles, and because feature data allows for minimal data transfer compared to full images. Inspired by techniques from multiple view geometry and structure from motion, this localization algorithm presents a decentralized, full 6-degree-of-freedom pose estimation method, complete with a consistent fusion methodology that obtains robust estimates only at discrete instants, thus not requiring constant communication between vehicles. This method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path, but also for ensuring that this path allows successful localization throughout. As joint multi-robot planning can be a computationally intractable problem, planning was divided into two steps from a vision-aware perspective. Since the first step toward improving localization performance is access to a better map of features, a next-best-multi-view algorithm was developed to compute the viewpoints for multiple vehicles that best improve an existing sparse reconstruction.
This algorithm uses a cost function built from vision-based heuristics that scores the quality of the images expected from any set of viewpoints; the cost is minimized with Covariance Matrix Adaptation (CMA-ES), an efficient evolutionary strategy that can handle very high-dimensional sample spaces, as sketched below. In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed, which includes similar vision heuristics in an information-gain-based framework in order to drive individual vehicles towards areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated using results from simulation.
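    The sketch below illustrates only the CMA-ES step, under stated assumptions: the cost function is a stand-in heuristic (moderate viewing range plus a wide baseline between two vehicles), not the dissertation's actual vision heuristics, and map_points is a dummy sparse reconstruction. It uses the open-source cma package in Python.

        # Next-best-multi-view search over two MAV positions with CMA-ES.
        import numpy as np
        import cma

        map_points = np.random.rand(200, 3) * 10.0  # dummy sparse map

        def view_cost(x):
            # x packs two 3-D camera positions: [x1, y1, z1, x2, y2, z2].
            cams = x.reshape(2, 3)
            d = np.linalg.norm(map_points[None, :, :] - cams[:, None, :], axis=2)
            range_term = np.mean((d - 4.0) ** 2)        # prefer ~4 m viewing range
            baseline_term = -np.linalg.norm(cams[0] - cams[1])  # reward separation
            return range_term + 0.5 * baseline_term

        es = cma.CMAEvolutionStrategy(np.zeros(6), 2.0)  # 6-D search space
        while not es.stop():
            xs = es.ask()                                # sample candidate viewpoints
            es.tell(xs, [view_cost(x) for x in xs])      # rank them by the heuristic
        print("best viewpoints:", es.result.xbest.reshape(2, 3))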

    Efficient Visual SLAM for Autonomous Aerial Vehicles

    The general interest in autonomous or semi-autonomous micro aerial vehicles (MAVs) is increasing strongly. There are already several commercial applications for autonomous micro aerial vehicles, and many more are being investigated by both research institutes and multiple financially strong companies. Most commercially available applications, however, are rather limited in their autonomy: they rely either on a human operator or on reliable reception of global positioning system (GPS) signals for navigation. Truly autonomous micro aerial vehicles that can also fly in GPS-denied environments, such as indoors, in forests, or in urban scenarios where the GPS signal may be blocked by tall buildings, clearly require more on-board sensing and computational capability. In this dissertation, we explore autonomous micro aerial vehicles that rely on an RGB-D camera as their main sensor for simultaneous localization and mapping (SLAM). Several aspects of efficient visual SLAM with RGB-D cameras aimed at micro aerial vehicles are studied in detail within this dissertation: We first propose a novel principle of integrating depth measurements within visual SLAM systems by combining both 2D image position and depth measurements. We modify a widely used visual odometry system accordingly, such that it can serve as a robust and accurate odometry system for RGB-D cameras. Based on this principle, we then implement a full RGB-D SLAM system that can close loops, perform global pose-graph optimization, and run in real time on the computationally constrained onboard computer of our MAV. We investigate the feasibility of explicitly detecting loops using depth images, as opposed to intensity images, with a state-of-the-art hierarchical bag-of-words (BoW) approach using depth-image features. Since an MAV flying indoors can often see a clearly distinguishable ground plane, we develop a novel, efficient, and accurate ground-plane detection method and show how to use it to suppress drift in height and attitude; a generic sketch of the underlying idea follows below. Finally, we create a full SLAM system combining the earlier ideas that enables our MAV to fly autonomously in previously unknown environments while creating a map of its surroundings.
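    A minimal sketch of the ground-plane idea is given below: RANSAC plane fitting over depth-derived 3-D points in plain NumPy. This is a generic illustration rather than the dissertation's specific method, and the iteration count and inlier tolerance are assumptions; once the unit normal n and offset d are found, |d| is the camera's height above ground and can be fed back to suppress drift.

        # Generic RANSAC fit of a dominant plane n.x + d = 0 to a point cloud.
        import numpy as np

        def fit_ground_plane(points, iters=200, tol=0.02):
            """points: (N, 3) array in the camera frame; tol in metres."""
            best_inliers, best_model = 0, None
            rng = np.random.default_rng(0)
            for _ in range(iters):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(n)
                if norm < 1e-9:
                    continue                      # degenerate (collinear) sample
                n /= norm
                d = -n @ sample[0]
                dist = np.abs(points @ n + d)     # point-to-plane distances
                inliers = int((dist < tol).sum())
                if inliers > best_inliers:
                    best_inliers, best_model = inliers, (n, d)
            return best_model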

    An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation

    Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, which is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. It usually involves a RANSAC procedure using the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use the sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute the pose estimate in each evaluation by reusing previously computed statistics, so estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC, and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving speedups of up to 6.72 times.
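    To illustrate the principle, the sketch below accumulates the additive sufficient statistics of least-squares rigid alignment (the Kabsch/Umeyama solution): the optimal rotation and translation depend only on the sums n, Σp, Σq, and Σpqᵀ, so each correspondence updates the statistics in O(1) and a hypothesis pose can be re-solved from the sums alone. This is a generic illustration of the idea, not the paper's exact formulation.

        # Additive sufficient statistics for least-squares rigid alignment.
        import numpy as np

        class AlignmentStats:
            def __init__(self):
                self.n = 0
                self.sum_p = np.zeros(3)          # sum of source points
                self.sum_q = np.zeros(3)          # sum of target points
                self.sum_pq = np.zeros((3, 3))    # sum of outer products p q^T

            def add(self, p, q):                  # O(1) update per correspondence
                self.n += 1
                self.sum_p += p
                self.sum_q += q
                self.sum_pq += np.outer(p, q)

            def solve(self):
                # Closed-form R, t recovered from the statistics alone.
                mu_p, mu_q = self.sum_p / self.n, self.sum_q / self.n
                H = self.sum_pq / self.n - np.outer(mu_p, mu_q)  # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                R = Vt.T @ S @ U.T
                t = mu_q - R @ mu_p
                return R, t

    Because the sums are also subtractive, removing a correspondence is an O(1) update as well, which is what makes reusing statistics across hypothesis evaluations cheap.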

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a complete review of the computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, increased computational capability, and advances in computer vision techniques, has enabled important progress in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs enable cutting-edge solutions to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of UAV applications beyond the classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning, avoid aerial collisions automatically, and more. The presented survey is therefore based on artificial-perception applications that represent important advances of recent years in the expert-system field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, capable of overcoming fundamental technical limitations in areas such as visual odometry, obstacle detection, mapping, and localization. They are analyzed in terms of their capabilities and potential utility, and the applications and UAVs are categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Comparison of Visual Simultaneous Localization and Mapping Methods for Fixed-Wing Aircraft Using SLAMBench2

    Visual Simultaneous Localization and Mapping (VSLAM) algorithms have evolved rapidly in the last few years; however, there has been little research evaluating current algorithms' effectiveness and limitations when applied to tracking the position of a fixed-wing aerial vehicle. This research evaluates the performance of current monocular VSLAM algorithms on aerial vehicle datasets using the SLAMBench2 benchmarking suite. The algorithms tested are MonoSLAM, PTAM, OKVIS, LSD-SLAM, ORB-SLAM2, and SVO, all of which are built into the SLAMBench2 software. The algorithms' performance is evaluated using simulated datasets generated in the AftrBurner Engine. The datasets were designed to test the quality of each algorithm's tracking solution, as well as to find any dependence on camera field of view (FOV), aircraft altitude, bank angle, and bank rate. Through these tests, it was found that LSD-SLAM, ORB-SLAM2, and SVO are good candidates for further research, with MonoSLAM, PTAM, and OKVIS failing to track any datasets. All algorithms were found to fail when the capturing camera had a horizontal FOV of less than 60 degrees, with peak performance occurring at a FOV of 75 degrees or above. LSD-SLAM was found to fail when the aircraft bank angle exceeded half of the camera's FOV, and SVO was found to fail below 450 meters altitude. The simulations were also tested against a comparable real-world dataset, with agreeing results, although the FOV of the real-world dataset was too small to be a particularly useful test. Further research is required to determine the applicability of these results to the real world, as well as to fuse VSLAM algorithms with other sensors and solutions to form a more robust navigation solution.

    Noise Filtering System for the Inertial Sensors of a Quadrotor Unmanned Vehicle

    In this study, a filtering algorithm was designed to reduce the noise present in the measurements of a quadrotor's inertial sensors. A Kalman filter is used to filter the noise mixed into the measurement data of the accelerometer and gyroscope sensors. In addition, a zero velocity compensator algorithm was designed to eliminate drift when the quadrotor is in a static state. Based on the tests performed, the designed zero velocity compensator algorithm was able to reduce drift while the quadrotor was stationary; furthermore, the Kalman filter applied to the accelerometer and gyroscope sensors was able to reduce the noise mixed into the raw data, so that the integrated displacement results are better than those obtained by integration without filtering. Keywords: Kalman filter, zero velocity compensator, IMU
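    A minimal one-axis sketch of the two components is given below: a scalar Kalman filter that smooths raw accelerometer samples, and a zero velocity compensator that clamps the integrated velocity whenever a short window of samples looks stationary. The noise parameters and thresholds are illustrative assumptions, not the paper's tuned values.

        # Scalar Kalman smoothing plus zero-velocity compensation (1 axis).
        import numpy as np

        def kalman_smooth(z, q=1e-3, r=0.05):
            """Random-walk Kalman filter over raw measurements z."""
            x, p, out = 0.0, 1.0, []
            for zi in z:
                p += q                     # predict: process noise grows p
                k = p / (p + r)            # Kalman gain
                x += k * (zi - x)          # update with the new measurement
                p *= 1.0 - k
                out.append(x)
            return np.array(out)

        def integrate_with_zupt(accel, dt=0.01, win=50, thresh=0.02):
            v, window, vel = 0.0, [], []
            for a in accel:
                window = (window + [a])[-win:]   # sliding window of samples
                v += a * dt                      # integrate acceleration
                # Zero-velocity compensation: if recent accelerations are
                # quiet, assume the quadrotor is static and reset the drift.
                if len(window) == win and np.std(window) < thresh:
                    v = 0.0
                vel.append(v)
            return np.array(vel)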

    Robot Localization Obtained by Using Inertial Measurements, Computer Vision, and Wireless Ranging

    Robots have long been used for completing tasks that are too difficult, dangerous, or distant to be accomplished by humans. In many cases, these robots are highly specialized platforms, often expensive and capable of completing every task related to a mission's objective. An alternative approach is to use multiple platforms, each less capable in terms of the number of tasks and thus significantly less complex and less costly. With advancements in embedded computing and wireless communications, multiple such platforms have been shown to work together to accomplish mission objectives. In the extreme, collections of very simple robots have demonstrated emergent behavior akin to that seen in nature (e.g., bee colonies), motivating the moniker of "swarm robotics": a group of robots working collaboratively to accomplish a task. The use of robotic swarms offers the potential to solve complex tasks more efficiently than a single robot by introducing robustness and flexibility to the system. This work investigates localization in heterogeneous and autonomous robotic swarms to improve their ability to carry out exploratory missions in unknown terrain. Collaboratively, these robots can, for example, conduct sensing and mapping of an environment while simultaneously evolving a communication network. For this application, among many others, an accurate knowledge of the robot's pose (i.e., position and orientation) is required. The act of determining the pose of the robot is known as localization. Some low-cost robots can provide location estimates using inertial measurements (i.e., odometry); however, this method alone is insufficient due to cumulative errors in sensing. Image tracking and wireless localization methods are implemented in this work to increase the accuracy of localization estimates. These localization methods complement each other: image tracking yields higher accuracy than wireless but requires a line of sight (LOS) to the target; wireless localization can operate under LOS or non-LOS conditions but has issues under multipath conditions. Together, these methods can be used to improve localization results under all sight conditions. The specific contributions of this work are: (1) a concept of "shared sensing" in which extremely simple and inexpensive robots with unreliable localization estimates are used in a heterogeneous swarm of robots in a way that increases the accuracy of localization for the simple agents and simultaneously extends the sensing capabilities of the more complex robots, (2) a description, evaluation, and discussion of various means of estimating a robot's pose, (3) a method for increasing the reliability of RSSI measurements for wireless ranging/localization systems by averaging RSSI measurements over both time and space, and (4) a process for developing an in-field model for estimating the location of a robot by leveraging the existing wireless communication system.
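    The sketch below illustrates contribution (3) under stated assumptions: RSSI samples are averaged over both time and space by bucketing them into grid cells around the positions where they were taken, and the smoothed value is then inverted through a log-distance path-loss model for ranging. The parameters rssi_d0, n_exp, and the grid size stand in for the in-field calibration of contribution (4); they are hypothetical, not the work's measured values.

        # RSSI smoothing over time and space, then range from a path-loss model.
        import numpy as np

        def average_rssi(samples, positions, grid=0.25):
            """Average per grid cell first, then across cells, so a burst of
            readings from one spot cannot dominate the estimate."""
            cells = {}
            for rssi, pos in zip(samples, positions):
                key = tuple(np.floor(np.asarray(pos) / grid).astype(int))
                cells.setdefault(key, []).append(rssi)
            return float(np.mean([np.mean(v) for v in cells.values()]))

        def rssi_to_range(rssi, rssi_d0=-40.0, n_exp=2.2, d0=1.0):
            """Invert rssi = rssi_d0 - 10 * n_exp * log10(d / d0) for d."""
            return d0 * 10.0 ** ((rssi_d0 - rssi) / (10.0 * n_exp))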