83 research outputs found

    Homography-Based State Estimation for Autonomous Exploration in Unknown Environments

    This thesis presents the development of vision-based state estimation algorithms to enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. These state estimation algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates the camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to fuse measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires processing images from a monocular camera to detect and track planar feature points, followed by the computation of homography parameters. The state estimation algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. The algorithms are first evaluated using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but accumulates drift errors over time because the homography provides only a relative measurement of position.
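    As a rough illustration of the homography pipeline described above, the following Python sketch fits a homography to tracked planar feature points and decomposes it into candidate camera motions with OpenCV; the intrinsics matrix K and all variable names are assumptions for illustration, not the thesis's implementation.

        import cv2
        import numpy as np

        # Hypothetical camera intrinsics (focal lengths and principal point, pixels).
        K = np.array([[600.0,   0.0, 320.0],
                      [  0.0, 600.0, 240.0],
                      [  0.0,   0.0,   1.0]])

        def homography_motion(pts_prev, pts_curr):
            """Estimate camera-motion hypotheses from tracked planar points.

            pts_prev, pts_curr: Nx2 float arrays of matched feature locations
            in two consecutive frames (e.g. tracked SURF points).
            """
            # Robustly fit the homography relating the two views of the plane.
            H, inlier_mask = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
            # Decompose H into up to four (R, t, n) motion hypotheses. The
            # translation is recovered only up to the unknown plane distance,
            # which is one reason position measurements here are relative.
            _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
            return rotations, translations, normals

    In a filter like the one described, one decomposition hypothesis would be selected (for instance, using the expected plane normal) and fused with the IMU propagation in the EKF measurement update.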

    Vision-Aided Navigation using Tracked Landmarks

    This thesis presents vision-based state estimation algorithms for autonomous vehicles navigating GPS-denied environments. To accomplish this objective, an approach is developed that utilizes a priori information about the environment. In particular, the algorithm leverages recognizable ‘landmarks’ in the environment, the positions of which are known in advance, to stabilize the state estimate. Measurements of the position of one or more landmarks in the image plane of a monocular camera are filtered using an extended Kalman filter (EKF), together with data from a traditional inertial measurement unit (IMU) consisting of accelerometers and rate gyros, to produce the state estimate. Additionally, the EKF algorithm is adapted to accommodate a stereo camera configuration that measures the distance to a landmark using parallax. The performance of the state estimation algorithms for both the monocular and stereo camera configurations is tested and compared in simulation studies with a quadcopter UAV model. State estimation results are then presented using flight data from a quadcopter UAV instrumented with an IMU and a GoPro camera. It is shown that the proposed landmark navigation method is capable of preventing IMU drift errors by providing a GPS-like measurement whenever landmarks can be identified, and that it pairs well with measurements requiring no a priori information during intervals when landmarks are not available.
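    The landmark update can be sketched as a standard EKF measurement step built on a pinhole projection of a known landmark; the Python sketch below is a minimal illustration under assumed state and noise definitions, not the thesis's filter.

        import numpy as np

        def project_landmark(p_cam, fx, fy, cx, cy):
            """Pinhole projection of a landmark expressed in camera coordinates."""
            u = fx * p_cam[0] / p_cam[2] + cx
            v = fy * p_cam[1] / p_cam[2] + cy
            return np.array([u, v])

        def ekf_update(x, P, z, h, H, R):
            """Generic EKF measurement update.

            x, P : prior state estimate and covariance
            z    : measured landmark pixel location
            h    : predicted measurement, i.e. the projected landmark
            H    : Jacobian of the projection with respect to the state
            R    : measurement noise covariance
            """
            y = z - h                           # innovation
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new

        def stereo_depth(disparity_px, focal_px, baseline_m):
            """Distance to a landmark from parallax: Z = f * b / d."""
            return focal_px * baseline_m / disparity_px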

    Aerial Semantic Mapping for Precision Agriculture using Multispectral Imagery

    Nowadays, constant technological evolution addresses many needs and daily tasks in our society. In particular, drones, given their wide view for capturing images of the terrain surface, make it possible to collect large amounts of information with high efficiency, performance, and accuracy. The main purpose of this master's dissertation is the analysis, classification, and mapping of different terrain types and characteristics using multispectral imagery. The solar radiation reflected from the surface is captured by the different lenses of the multispectral camera used (a RedEdge-M, made by Micasense). Each of its five lenses captures a different colour band (Blue, Green, Red, Near-Infrared, and RedEdge). Various spectral indices can then be computed from the collected imagery by fusing different combinations of colour bands (e.g. NDVI, ENDVI, RDVI). This project involves the development of a ROS (Robot Operating System) framework capable of correcting the captured imagery and, from it, calculating the implemented spectral indices. Several parametrizations of the terrain analysis were carried out throughout the project, and this information was represented in layered semantic maps (e.g. vegetation, water, soil, rocks). The experimental results were validated within several projects incorporated in PDR2020, with success rates between 70% and 90%. This framework has multiple technical applications, not only in precision agriculture but also in autonomous vehicle navigation and multi-robot cooperation.
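    As an illustration of how such indices are computed, the sketch below derives NDVI and RDVI from registered Red and Near-Infrared band images; the array names are hypothetical, and the bands are assumed to be already co-registered and radiometrically corrected.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index from registered bands.

            Values near +1 indicate dense vegetation; soil and water fall lower.
            """
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            return (nir - red) / (nir + red + eps)   # eps avoids division by zero

        def rdvi(nir, red, eps=1e-6):
            """Renormalized Difference Vegetation Index: (NIR-R)/sqrt(NIR+R)."""
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            return (nir - red) / np.sqrt(nir + red + eps)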

    Accuracy of Coordinate Determination for an Unmanned Aerial Vehicle with a Navigation Complex Including an Electro-Optical Positioning System

    The article proposes approaches for correcting a strapdown inertial navigation system (SINS) using data from the airborne electro-optical system (EOS) of an unmanned aerial vehicle (UAV), with the EOS treated as a navigation data sensor. The feasibility of this approach is justified, particularly for conditions in which satellite radio-navigation signals are absent or suppressed. It is proposed to ensure the accuracy of self-contained navigation by assigning a UAV route that includes waypoints with terrestrial references (TRs), with image information associated with the TRs loaded into the flight management computer (FMC) in advance. An automated system that identifies TRs with known coordinates at successive waypoints using onboard data in effect enables alternative global positioning. Reliable operation of such an integrated navigation system over sufficiently extended legs of the flight path depends first of all on the accuracy of its constituent elements. Since conventional navigation sensors such as the SINS and the altimeter have been studied thoroughly in numerous publications, the article focuses on the UAV's airborne electro-optical system and, specifically, on the features of its application as a navigation sensor. The factors influencing the accuracy of UAV position determination at waypoints from airborne EOS data are considered, and a mathematical model of the errors of the UAV's inertial-optical navigation complex (IONC) is presented. The impact on positioning accuracy of airborne altimeter errors, the characteristics of the underlying terrain, and shifts of the onboard digital camera's optical axis caused by random motions of the carrier body in a turbulent atmosphere is analyzed. Results of calculating the position-determination errors of a UAV equipped with the IONC are given.
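    To give a feel for the kind of error model discussed, the Python sketch below propagates altimeter and optical-axis errors into a horizontal positioning error for a camera line of sight tilted from nadir; the geometry and the numbers are illustrative assumptions, not the article's IONC error model.

        import numpy as np

        def horizontal_error_std(h, theta, sigma_h, sigma_theta):
            """First-order horizontal error of the boresight ground point.

            The ground offset of a line of sight tilted by theta from nadir at
            altitude h is d = h * tan(theta), so to first order:
              dd/dh     = tan(theta)
              dd/dtheta = h / cos(theta)**2
            and the two error contributions combine in quadrature.
            """
            dd_dh = np.tan(theta)
            dd_dtheta = h / np.cos(theta) ** 2
            return np.hypot(dd_dh * sigma_h, dd_dtheta * sigma_theta)

        # Example: 500 m altitude, 5 deg tilt, 10 m altimeter error,
        # 0.3 deg optical-axis deflection due to turbulence.
        print(horizontal_error_std(500.0, np.radians(5.0), 10.0, np.radians(0.3)))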

    Vision-Based Navigation System for Unmanned Aerial Vehicles

    The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms address the navigation problem in both outdoor and indoor environments, based mainly on visual information captured by monocular cameras. The dissertation also presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. It covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. This algorithm combines the SIFT detector with the FREAK descriptor, which maintains feature-matching performance while decreasing computation time; the pose is then recovered from the decomposition of the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics human behavior for detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points in consecutive frames. By comparing the obstacle's area ratio with the position of the UAV, the method decides whether the detected obstacle may cause a collision; the algorithm then extracts the collision-free zones around the obstacle and, combined with the tracked waypoints, the UAV performs the avoidance maneuver. (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the situated obstacles, and provides a strategy to follow the path segments efficiently and perform the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuver, avoid possible collisions, and track the waypoints. All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking visual conditions such as illumination and texture into consideration. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results prove the improvement in accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors, being lightweight, low-consumption, and a source of reliable information, are a powerful tool for navigation systems to increase the autonomy of UAVs in real-world applications.
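    A minimal sketch of the convex-hull expansion cue from topic (II), assuming matched feature points on the obstacle in two consecutive frames; the 20% threshold is a hypothetical value for illustration.

        import cv2
        import numpy as np

        def expansion_ratio(pts_prev, pts_curr):
            """Ratio of convex-hull areas of matched points in two frames.

            A ratio well above 1 means the obstacle subtends a growing image
            area, i.e. it is being approached.
            """
            hull_prev = cv2.convexHull(pts_prev.astype(np.float32))
            hull_curr = cv2.convexHull(pts_curr.astype(np.float32))
            return cv2.contourArea(hull_curr) / max(cv2.contourArea(hull_prev), 1e-9)

        def approaching(pts_prev, pts_curr, threshold=1.2):
            # Hypothetical rule: flag a potential collision when the hull
            # expands by more than 20% between consecutive frames.
            return expansion_ratio(pts_prev, pts_curr) > threshold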

    Micro Aerial Vehicles (MAV) Assured Navigation in Search and Rescue Missions: Robust Localization, Mapping and Detection

    This Master's thesis describes developments in robust localization, mapping, and detection algorithms for Micro Aerial Vehicles (MAVs). The localization method proposes a seamless indoor-outdoor multi-sensor architecture. The algorithm is capable of using all or a subset of its sensor inputs to determine a platform's position, velocity and attitude (PVA). It relies on the inertial measurement unit as the core sensor and monitors the status and observability of the secondary sensors to select the optimal estimator strategy for each situation, while ensuring a smooth transition between filter structures. This document also describes the integration mechanism for a set of common sensors such as GNSS receivers, laser scanners, and stereo and mono cameras. The mapping algorithm provides a fully automated, fast aerial mapping pipeline. It speeds up the process by pre-selecting images using the flight plan and the onboard localization, and relies on Structure from Motion (SfM) techniques to produce an optimized 3D reconstruction of camera locations and sparse scene geometry. These outputs are used to compute the perspective transformations that project the raw images onto the ground and produce a geo-referenced map. Finally, these maps are fused with maps from other domains in a collaborative UGV-UAV mapping algorithm. The real-time aerial detection of victims is based on a thermal camera and is composed of three steps: first, the image is normalized to remove the background and extract the regions of interest; then the victim detection and tracking steps produce the real-time geo-referenced locations of the detections. The thesis also proposes the concept of a MAV Copilot, a payload composed of a set of sensors and algorithms that enhances the capabilities of any commercial MAV. To develop and validate these contributions, a prototype search and rescue MAV and the Copilot have been developed. These developments were validated in three large-scale demonstrations of search and rescue operations in the context of the European project ICARUS: a shipwreck in Lisbon (Portugal), an earthquake in Marche (Belgium), and a Fukushima-like nuclear disaster scenario in the euRathlon 2015 competition in Piombino (Italy).
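    The three-step detection pipeline could look roughly like the following Python sketch, which normalizes a thermal frame and extracts warm regions of interest; the threshold and minimum blob size are hypothetical values, and the tracking and geo-referencing steps are omitted.

        import cv2
        import numpy as np

        def detect_warm_rois(thermal, warm_thresh=200, min_area=25):
            """Normalize a thermal frame and extract candidate victim regions.

            thermal: single-channel image (e.g. 16-bit radiometric counts).
            Returns (x, y, w, h) bounding boxes of sufficiently large warm blobs.
            """
            # Step 1: normalization, suppressing the background temperature range.
            norm = cv2.normalize(thermal, None, 0, 255, cv2.NORM_MINMAX)
            norm = norm.astype(np.uint8)
            # Step 2: keep only the hottest pixels as candidate regions.
            _, mask = cv2.threshold(norm, warm_thresh, 255, cv2.THRESH_BINARY)
            # Step 3: group pixels into blobs and reject tiny ones.
            n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
            return [tuple(stats[i, :4]) for i in range(1, n)
                    if stats[i, cv2.CC_STAT_AREA] >= min_area]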

    Template matching based TRN using Flash LiDAR

    Master's thesis, Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, August 2018 (advisor: Chan Gook Park). This thesis compares and analyzes the performance of template-matching-based terrain referenced navigation (TMTRN) under different error types and correlation functions. Conventional batch-processing TRN generally utilizes a radar altimeter and adopts the mean square difference (MSD), mean absolute difference (MAD), and normalized cross correlation (NCC) for matching a batch profile against a terrain database. If a flash LiDAR is used instead of the radar, it is possible to build a profile in a single shot. Unlike the vector profile obtained from batch processing, the flash LiDAR's point cloud can be transformed into a 2D profile. Therefore, with the flash LiDAR we can apply correlation functions such as the image Euclidean distance (IMED) and image normalized cross correlation (IMNCC), which have been used in the computer vision field. The simulation results show that IMED is the most robust to the different types of errors.
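    The correlation functions named above can be sketched as follows for a small 2D elevation template and a candidate patch from the terrain database; the IMED form follows the common Gaussian-weighted formulation and is an assumption about the exact variant used.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def msd(a, b):
            """Mean square difference: lower is a better match."""
            return np.mean((a - b) ** 2)

        def mad(a, b):
            """Mean absolute difference: lower is a better match."""
            return np.mean(np.abs(a - b))

        def ncc(a, b, eps=1e-9):
            """Normalized cross correlation: higher is a better match."""
            a0, b0 = a - a.mean(), b - b.mean()
            return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + eps)

        def imed(a, b, sigma=1.0):
            """Image Euclidean distance, d^T G d with a Gaussian weighting G
            over pixel distances, computed here by Gaussian-filtering the
            difference image. Lower is a better match."""
            d = a - b
            return np.sum(d * gaussian_filter(d, sigma))

    In the TMTRN setting, each function would be evaluated between the LiDAR-derived elevation patch and every candidate location in the searched region of the terrain database, with the best-scoring location taken as the position fix.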

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005. This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix. In summary, this thesis advances the current state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception. This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
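    The exact-sparsity property can be illustrated with a toy sketch of how a single relative-pose constraint enters the Gaussian canonical (information) form: only the four blocks coupling the two involved poses are modified, so the information matrix stays sparse as the view-based map grows. The linearized inputs below are assumed given; this is not the thesis's estimator, and sign conventions for the residual vary.

        import numpy as np

        def add_relative_pose_constraint(Lam, eta, i, j, Ji, Jj, Omega, r, d=6):
            """Fold a linearized relative-pose measurement into information form.

            Lam, eta : information matrix and vector over stacked d-DOF poses
            i, j     : indices of the two poses linked by the camera constraint
            Ji, Jj   : measurement Jacobians with respect to poses i and j
            Omega    : measurement information (inverse covariance)
            r        : measurement residual at the linearization point
            """
            si = slice(d * i, d * i + d)
            sj = slice(d * j, d * j + d)
            # Only blocks (i,i), (i,j), (j,i), (j,j) of Lam are touched.
            Lam[si, si] += Ji.T @ Omega @ Ji
            Lam[sj, sj] += Jj.T @ Omega @ Jj
            Lam[si, sj] += Ji.T @ Omega @ Jj
            Lam[sj, si] += Jj.T @ Omega @ Ji
            eta[si] += Ji.T @ Omega @ r
            eta[sj] += Jj.T @ Omega @ r
            return Lam, eta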

    Terrain Referenced Navigation Using SIFT Features in LiDAR Range-Based Data

    The use of GNSS to aid navigation has become widespread in aircraft. The long-term accuracy of an INS is enhanced by frequent updates from the highly precise position estimates GNSS provides. Unfortunately, operational environments exist where a constant signal or the requisite number of satellites is unavailable, significantly degraded, or intentionally denied. This thesis describes a novel algorithm that uses scanning LiDAR range data, computer vision features, and a reference database to generate aircraft position estimates that update drifting INS estimates. The algorithm uses a single calibrated scanning LiDAR to sample the range and angle to the ground as an aircraft flies, forming a point cloud. The point cloud is orthorectified into a coordinate system common to a previously recorded reference of the flyover region, and is then interpolated into a Digital Elevation Model (DEM) of the ground. Range-based SIFT features are extracted from both the airborne and reference DEMs. Features common to both the collected and reference range images are selected using a SIFT descriptor search. Geometrically inconsistent features are filtered out using RANSAC outlier removal, and surviving features are projected back to their source coordinates in the original point cloud. The point-cloud features are used to calculate a least-squares correspondence transform that aligns the collected features to the reference features. The correspondence that best aligns the ground features is then applied to the nominal aircraft position, creating a new position estimate. The algorithm was tested on legacy flight data and typically produces position estimates within 10 meters of truth under threshold conditions.
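    A rough sketch of the matching stage, under the assumption that the airborne and reference DEMs have been rendered as 8-bit range images: SIFT features are matched with a ratio test, and a RANSAC-filtered 2D transform aligns them. Function names and thresholds are illustrative, not the thesis code.

        import cv2
        import numpy as np

        def align_dems(dem_air, dem_ref, ratio=0.75):
            """Estimate a 2D transform aligning an airborne DEM to a reference.

            dem_air, dem_ref: 8-bit single-channel range images of the terrain.
            Returns a 2x3 similarity transform, or None if matching fails.
            """
            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(dem_air, None)
            kp_r, des_r = sift.detectAndCompute(dem_ref, None)
            # Descriptor search with Lowe's ratio test for distinctive matches.
            good = [m for m, n in cv2.BFMatcher().knnMatch(des_a, des_r, k=2)
                    if m.distance < ratio * n.distance]
            if len(good) < 3:
                return None
            src = np.float32([kp_a[m.queryIdx].pt for m in good])
            dst = np.float32([kp_r[m.trainIdx].pt for m in good])
            # RANSAC rejects geometrically inconsistent correspondences.
            M, inliers = cv2.estimateAffinePartial2D(
                src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
            return M

    The recovered translation, mapped back through the DEM's georeferencing, would then shift the nominal INS position to produce the position update.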