110 research outputs found

    Review of UAV positioning in indoor environments and new proposal based on US measurements

    Get PDF
    This document is considered a conference paper rather than a book chapter. 10th International Conference on Indoor Positioning and Indoor Navigation (IPIN 2019), Pisa, Italy, September 30th - October 3rd, 2019. The use of unmanned aerial vehicles (UAVs) has increased dramatically in recent years because of their huge potential in both civil and military applications and the decrease in price of UAV products. Location detection can be implemented through GNSS technology in outdoor environments; nevertheless, its accuracy can be insufficient for some applications. The usability of GNSS in indoor environments is limited due to the signal attenuation as it crosses walls and the absence of line of sight. Considering the large market opportunity of indoor UAVs, many researchers are devoting their efforts to exploring solutions for their positioning. Indoor UAV applications include location-based services (LBS), advertisement, ambient assisted living environments and emergency response. This work is an updated survey of UAV indoor localization, providing a guide and a technical comparison of the different technologies with their main advantages and drawbacks. Finally, we propose an approach based on an ultrasonic local positioning system. Universidad de Alcalá; Junta de Comunidades de Castilla-La Mancha; Ministerio de Economía, Industria y Competitividad
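    The abstract proposes an ultrasonic local positioning system without giving implementation details. Below is a minimal, hypothetical sketch of the underlying idea: converting time-of-flight measurements from fixed ultrasonic beacons into a 3D position by nonlinear least squares. The beacon layout, noise level and speed of sound are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: 3D multilateration from ultrasonic time-of-flight ranges.
# Beacon positions, noise levels and the test point are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C (assumption)

# Assumed beacon positions on a room ceiling, in metres.
beacons = np.array([
    [0.0, 0.0, 3.0],
    [5.0, 0.0, 3.0],
    [0.0, 4.0, 3.0],
    [5.0, 4.0, 3.0],
])

def ranges_from_tof(tof_seconds):
    """Convert measured times of flight to distances."""
    return SPEED_OF_SOUND * np.asarray(tof_seconds)

def residuals(position, measured_ranges):
    """Difference between predicted and measured beacon distances."""
    predicted = np.linalg.norm(beacons - position, axis=1)
    return predicted - measured_ranges

def locate(measured_ranges, initial_guess=(2.5, 2.0, 1.0)):
    """Estimate the receiver position by nonlinear least squares."""
    sol = least_squares(residuals, x0=np.asarray(initial_guess),
                        args=(measured_ranges,))
    return sol.x

if __name__ == "__main__":
    true_pos = np.array([1.2, 2.8, 1.5])
    tof = np.linalg.norm(beacons - true_pos, axis=1) / SPEED_OF_SOUND
    noisy_ranges = ranges_from_tof(tof) + np.random.normal(0, 0.01, len(beacons))
    print("estimated position:", locate(noisy_ranges))
```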

    A multi-hypothesis approach for range-only simultaneous localization and mapping with aerial robots

    Get PDF
    Range-only SLAM (RO-SLAM) systems aim to build a map consisting of the positions of a set of range sensors while simultaneously localizing the robot with respect to that map, using only distance measurements. Range sensors are devices capable of measuring the relative distance between each pair of devices. These sensors are especially interesting for application to aerial vehicles due to their small size and weight. In addition, they can operate indoors or in GPS-denied areas and, unlike other sensors such as cameras or laser scanners, do not require a direct line of sight between each pair of devices, thus allowing continuous data readings without occlusions. However, these sensors have a non-linear observation model with a rank deficiency due to the lack of relative orientation information between each pair of sensors. Moreover, when the dimensionality of the problem is increased from 2D to 3D for application to aerial vehicles, the number of hidden variables in the model grows, making the problem more computationally expensive, especially for multi-hypothesis implementations. This thesis studies and proposes different methods that allow the efficient application of these RO-SLAM systems to ground or aerial vehicles in real environments. To this end, the scalability of the system is studied with respect to the number of hidden variables and the number of devices to be positioned in the map. Unlike other methods described in the RO-SLAM literature, the algorithms proposed in this thesis take into account the correlations between each pair of devices, especially for the integration of static measurements between pairs of map sensors. In addition, this thesis studies the noise and spurious measurements that range sensors may produce, in order to improve the robustness of the proposed algorithms with detection and filtering techniques. Methods are also proposed for integrating measurements from other sensors such as cameras, altimeters or GPS to refine the estimates produced by the RO-SLAM system. Other chapters study and propose techniques for extending the presented RO-SLAM algorithms to multi-robot systems, as well as the use of active perception techniques that reduce the uncertainty of the system along trajectories lacking trilateration between the robot and the static range sensors of the map. All the proposed methods have been validated through simulations and experiments with real systems detailed in this thesis. Furthermore, all the software systems implemented, as well as the datasets recorded during the experiments, have been published and documented for use by the scientific community
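    The non-linear, rank-deficient observation model mentioned above is the robot-to-beacon distance. As a minimal, hypothetical illustration (not the thesis algorithm, which is multi-hypothesis), the sketch below shows a single EKF correction for one range measurement, with the state assumed to hold one robot position and one beacon position.

```python
# Hypothetical sketch of a single range-only EKF update, assuming the state
# holds the robot position followed by one beacon position (both 3D).
# State layout and noise value are illustrative, not the thesis implementation.
import numpy as np

def range_update(x, P, measured_range, range_noise_std=0.1):
    """EKF correction step for one robot-to-beacon distance measurement.

    x : (6,) state [robot_xyz, beacon_xyz]
    P : (6, 6) state covariance
    """
    robot, beacon = x[:3], x[3:]
    diff = robot - beacon
    predicted = np.linalg.norm(diff)

    # Jacobian of h(x) = ||robot - beacon|| with respect to the state.
    H = np.zeros((1, 6))
    H[0, :3] = diff / predicted
    H[0, 3:] = -diff / predicted

    R = np.array([[range_noise_std ** 2]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)

    innovation = measured_range - predicted
    x_new = x + (K @ np.array([innovation])).ravel()
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new
```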

    A Vision-Based Navigation System for Perching Aircraft

    Get PDF
    This is the final version of the article, available from Springer via the DOI in this record. This paper investigates the use of a position-sensing diode (PSD), a light source direction sensor, for designing a vision-based navigation system for a perching aircraft. Aircraft perching manoeuvres mimic a bird's landing by climbing so as to touch down with low velocity or negligible impact. They are optimized to reduce their spatial requirements, such as altitude gain or trajectory length. Due to disturbances and uncertainties, real-time perching is realized by tracking the optimal trajectories. As the performance of the controllers depends on the accuracy of the estimated aircraft state, the use of PSD measurements as observations in the state estimation model is proposed to achieve precise landing. The performance and suitability of this navigation system are investigated through numerical simulations. An optimal perching trajectory is computed by minimizing the trajectory length. Accelerations, angular rates and PSD readings are determined from this trajectory and then corrupted with experimentally obtained noise to create simulated sensor measurements. The initial state of the optimal perching trajectory is perturbed and, assuming zero biases, an extended Kalman filter is implemented for aircraft state estimation. It is shown that the errors between the estimated and actual aircraft states decrease along the trajectory, validating the proposed navigation system
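    The abstract uses the PSD as a light-source direction sensor but does not spell out the measurement model. The sketch below is a generic, hypothetical lateral-effect PSD model (electrode current names, sensor size and focal length are assumptions) showing how a spot position and a direction vector could be recovered before being fed to the state estimator.

```python
# Hypothetical sketch of a PSD observation model: from the light-spot position
# on a 2D lateral-effect PSD, recover the unit direction to the light source.
# Electrode current names and geometry are assumptions for illustration.
import numpy as np

def spot_position(i_x1, i_x2, i_y1, i_y2, half_length=5e-3):
    """Spot coordinates (m) on the PSD surface from the four electrode currents."""
    x = half_length * (i_x2 - i_x1) / (i_x2 + i_x1)
    y = half_length * (i_y2 - i_y1) / (i_y2 + i_y1)
    return x, y

def source_direction(x, y, focal_length=10e-3):
    """Unit vector toward the light source for a pinhole-style optic."""
    v = np.array([x, y, focal_length])
    return v / np.linalg.norm(v)
```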

    Low computational SLAM for an autonomous indoor aerial inspection vehicle

    Get PDF
    The past decade has seen an increase in the capability of small-scale Unmanned Aerial Vehicle (UAV) systems, made possible through technological advancements in battery, computing and sensor miniaturisation technology. This has opened a new and rapidly growing branch of robotic research and has sparked the imagination of industry, leading to new UAV-based services, from the inspection of power lines to remote police surveillance. Miniaturisation of UAVs has also made them small enough to be practically flown indoors, for example for the inspection of elevated areas in hazardous or damaged structures where the use of conventional ground-based robots is unsuitable. Sellafield Ltd, a nuclear reprocessing facility in the U.K., has many buildings that require frequent safety inspections. UAV inspections eliminate the current risk to personnel of radiation exposure and other hazards in tall structures where scaffolding or hoists are required. This project focused on the development of a UAV for the novel application of semi-autonomously navigating and inspecting these structures without the need for personnel to enter the building. Development exposed a significant gap in knowledge concerning indoor localisation, specifically Simultaneous Localisation and Mapping (SLAM) for use on-board UAVs. To lower the on-board processing requirements of SLAM, other UAV research groups have employed techniques such as off-board processing, reduced dimensionality or prior knowledge of the structure, techniques not suitable for this application given the unknown nature of the structures and the risk of radio shadows. In this thesis a novel localisation algorithm is proposed which enables real-time, three-dimensional SLAM running solely on-board a computationally constrained UAV in heavily cluttered and unknown environments. The algorithm, based on the Iterative Closest Point (ICP) method and utilising approximate nearest neighbour searches and point-cloud decimation to reduce the processing requirements, has successfully been tested in environments similar to those specified by Sellafield Ltd
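    The two cost-reduction ideas named above, point-cloud decimation and approximate nearest-neighbour matching inside ICP, can be illustrated with a small, hypothetical sketch. Voxel size, iteration count and the approximation factor are arbitrary, and this is a generic ICP, not the thesis implementation.

```python
# Hypothetical sketch of an ICP step with point-cloud decimation and
# approximate nearest-neighbour matching. Parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def voxel_decimate(points, voxel=0.1):
    """Keep one point per occupied voxel to cut the cloud size."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(scan, reference, iterations=20, voxel=0.1, eps=0.1):
    """Align a new scan to the reference map; returns (R, t)."""
    scan = voxel_decimate(scan, voxel)
    tree = cKDTree(voxel_decimate(reference, voxel))
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = scan @ R.T + t
        _, idx = tree.query(moved, eps=eps)   # eps > 0 => approximate search
        R_step, t_step = best_rigid_transform(moved, tree.data[idx])
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```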

    Long-term localization of unmanned aerial vehicles based on 3D environment perception

    Get PDF
    Unmanned Aerial Vehicles (UAVs) are currently used in countless civil and commercial applications, and the trend is rising. Outdoor obstacle-free operation based on the Global Positioning System (GPS) can generally be assumed thanks to the availability of mature commercial products. However, some applications require their use in confined spaces or indoors, where GPS signals are not available. In order to allow for the safe introduction of autonomous aerial robots in GPS-denied areas, there is still a need for reliability in several key technologies to procure a robust operation, such as localization, obstacle avoidance and planning. Existing approaches for autonomous navigation in GPS-denied areas are not robust enough when it comes to aerial robots, or fail in long-term operation. This dissertation handles the localization problem, proposing a methodology suitable for aerial robots moving in a three-dimensional (3D) environment using a combination of measurements from a variety of on-board sensors. We have focused on fusing three types of sensor data: images and 3D point clouds acquired from stereo or structured light cameras, inertial information from an on-board Inertial Measurement Unit (IMU), and distance measurements to several Ultra Wide-Band (UWB) radio beacons installed in the environment. The overall approach makes use of a 3D map of the environment, for which a mapping method that exploits the synergies between point clouds and radio-based sensing is also presented, in order to be able to use the whole methodology in any given scenario. The main contributions of this dissertation focus on a thoughtful combination of technologies in order to achieve robust, reliable and computationally efficient long-term localization of UAVs in indoor environments. This work has been validated and demonstrated for the past four years in the context of different research projects related to the localization and state estimation of aerial robots in GPS-denied areas, in particular the European Robotics Challenges (EuRoC) project, in which the author is participating in the competition among top research institutions in Europe. Experimental results demonstrate the feasibility of our full approach, both in accuracy and computational efficiency, which is tested through real indoor flights and validated with data from a motion capture system
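    The abstract describes exploiting synergies between point-cloud sensing and UWB ranges against a prior 3D map, without giving the estimator. As a loose, hypothetical illustration of that idea only (position-only, orientation ignored for brevity; weights, beacon layout and data are invented), a pose refinement could jointly minimise both kinds of residuals:

```python
# Hypothetical sketch: refine a UAV position by jointly minimising residuals to
# a 3D point-cloud map (nearest map point per scan point) and to UWB beacons.
# Weights and data are invented; this is not the dissertation's estimator.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def fused_residuals(position, scan, map_tree, beacons, uwb_ranges, w_uwb=5.0):
    # Point-cloud term: distance from each (translated) scan point to the map.
    moved = scan + position
    cloud_res, _ = map_tree.query(moved)
    # UWB term: difference between predicted and measured beacon ranges.
    uwb_res = np.linalg.norm(beacons - position, axis=1) - uwb_ranges
    return np.concatenate([cloud_res, w_uwb * uwb_res])

def refine_position(initial, scan, map_points, beacons, uwb_ranges):
    tree = cKDTree(map_points)
    sol = least_squares(fused_residuals, x0=np.asarray(initial, dtype=float),
                        args=(scan, tree, beacons, uwb_ranges))
    return sol.x
```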

    Sensors Utilisation and Data Collection of Underground Mining

    Get PDF
    This study reviews the significance and performance of IMUs for underground mine drone localisation. A Kalman filter has been designed which extracts reliable information from the raw data. The Kalman filter for the INS combines different measurements, taking the estimated errors into account, to produce a trajectory including time, position and attitude. To evaluate the feasibility of the proposed method, a prototype has been designed and evaluated. Experimental results indicate that the designed Kalman filter estimates the internal states of the system
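    As a minimal, hypothetical illustration of the predict/update cycle of such a filter (a one-axis constant-velocity model with a position measurement; all matrices and noise levels are invented, not taken from the study):

```python
# Hypothetical minimal linear Kalman filter (constant-velocity model along one
# axis) illustrating the predict/update cycle. Values are illustrative only.
import numpy as np

class KalmanFilter1D:
    def __init__(self, dt=0.01, accel_noise=0.5, meas_noise=0.1):
        self.x = np.zeros(2)                        # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.Q = accel_noise ** 2 * np.array([[dt**4 / 4, dt**3 / 2],
                                              [dt**3 / 2, dt**2]])
        self.H = np.array([[1.0, 0.0]])             # only position is measured
        self.R = np.array([[meas_noise ** 2]])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```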

    Reference Model for Interoperability of Autonomous Systems

    Get PDF
    This thesis proposes a reference model to describe the components of an Unmanned Air, Ground, Surface, or Underwater System (UxS), and the use of a single Interoperability Building Block to command, control, and get feedback from such vehicles. The importance and advantages of such a reference model, with a standard nomenclature and taxonomy, are shown. We overview the concepts of interoperability and some efforts to achieve common reference models in other areas. We then present an overview of existing unmanned systems: their history, characteristics, classification, and missions. The concept of Interoperability Building Blocks (IBB) is introduced to describe standards, protocols, data models, and frameworks, and a large set of these are analyzed. A new and powerful reference model for UxS, named RAMP, is proposed, describing the various components that a UxS may have. It is a hierarchical model with four levels that describes the vehicle components, the datalink, and the ground segment. The reference model is validated by showing how it can be applied in various projects the author worked on. An example is given of how a single standard was capable of controlling a set of heterogeneous UAVs, USVs, and UGVs

    A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques

    Get PDF
    Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and developing the perception of the environment through image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is a growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions for aerial surveillance or points of interest. Computer vision systems generally take three steps in their operation: data acquisition in numerical form, data processing and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After data acquisition, the embedded computer performs data processing by executing algorithms with measurement techniques (variables, indices and coefficients), detection (patterns, objects or areas) or monitoring (people, vehicles or animals). The resulting processed data is analyzed and then converted into decision commands that serve as control inputs for the autonomous robotic system. In order to integrate computer vision systems with the different UAV platforms, this work proposes the development of a framework for mission control and guidance of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting commands exchanged between flight controllers and computer vision algorithms. As a case study, two algorithms were developed to provide autonomy to UAVs intended for application in precision agriculture. The first algorithm calculates a reflectance coefficient used to perform the punctual, self-regulated and efficient application of agrochemicals. The second algorithm identifies crop lines in order to guide the UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results in the embedded hardware implementation
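    The abstract names two image-processing steps, a per-pixel reflectance coefficient and crop-line identification, without defining them. The sketch below is a hypothetical stand-in: it uses the well-known excess-green (ExG) index in place of the unspecified coefficient and a probabilistic Hough transform for crop-line candidates; all thresholds and the file name are illustrative.

```python
# Hypothetical sketch of the two precision-agriculture steps described above:
# (1) a per-pixel vegetation index (excess green, ExG, a stand-in for the
#     unspecified reflectance coefficient), and
# (2) crop-line candidates via a probabilistic Hough transform on that mask.
import cv2
import numpy as np

def excess_green(bgr_image):
    """ExG = 2g - r - b on channel values normalised to [0, 1]."""
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    return 2.0 * g - r - b

def crop_lines(bgr_image, exg_threshold=0.1):
    """Return line segments (x1, y1, x2, y2) roughly following crop rows."""
    mask = (excess_green(bgr_image) > exg_threshold).astype(np.uint8) * 255
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=20)
    return [] if lines is None else lines[:, 0, :]

if __name__ == "__main__":
    frame = cv2.imread("field.jpg")          # hypothetical aerial frame
    if frame is not None:
        print("mean ExG:", float(excess_green(frame).mean()))
        print("detected segments:", len(crop_lines(frame)))
```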

    Autonomous Approach and Landing Algorithms for Unmanned Aerial Vehicles

    Get PDF
    In recent years, several research activities have been developed in order to increase the autonomy features of Unmanned Aerial Vehicles (UAVs), to substitute human pilots in dangerous missions or simply to execute specific tasks more efficiently and cheaply. In particular, a significant research effort has been devoted to achieving high automation in the landing phase, so as to allow the landing of an aircraft without human intervention, also in the presence of severe environmental disturbances. The worldwide research community agrees on the opportunity of the dual use of UAVs (for both military and civil purposes); for this reason it is very important to make UAVs and their autolanding systems compliant with current and future rules and with the procedures regarding autonomous flight in ATM (Air Traffic Management) airspace, in addition to the typical military aims of minimizing fuel, space or other important parameters during each autonomous task. Developing autolanding systems with the desired level of reliability, accuracy and safety involves an evolution of all the subsystems related to the guidance, navigation and control disciplines. The main drawbacks of the autolanding systems available at the state of the art concern either the lack of adaptivity of the trajectory generation and tracking to unpredicted external events, such as changed environmental conditions and unexpected threats to avoid, or the lack of compliance of the technologies used to obtain the desired adaptivity with the guidelines imposed by certification authorities. During his PhD the author contributed to the development of an autonomous approach and landing system considering all the indispensable functionalities, such as: mission automation logic, runway data management, sensor fusion for optimal estimation of the vehicle state, trajectory generation and tracking under optimality criteria, and health management algorithms. In particular, the system addressed in this thesis is capable of performing a fully adaptive autonomous landing starting from any point of the three-dimensional space. The main novel feature of this algorithm is that it generates online, at a desired update rate or upon a specified event, the nominal trajectory for the aircraft, based on the actual state of the vehicle and on the desired state at the touchdown point. The main features of the autolanding system based on the implementation of the proposed algorithm are: online trajectory re-planning in the landing phase, full autonomy from remote pilot inputs, a weakly instrumented landing runway (without ILS availability), the ability to land starting from any point in space, autonomous management of failures and/or adverse atmospheric conditions, and decision-making logic for key decisions regarding the possible execution of an altitude recovery manoeuvre, based on the Differential GPS integrity signal and compatible with the functionalities made available by future GNSS systems. All the algorithms developed reduce the computational cost of the trajectory generation and tracking problems so as to be suitable for real-time implementation while still obtaining a feasible (for the vehicle), robust and adaptive trajectory for the UAV. All the activities related to the current study have been conducted at CIRA (Italian Aerospace Research Center) in the framework of the aeronautical TECVOL project, whose aim is to develop innovative technologies for autonomous flight. The autolanding system was developed by the TECVOL team, and the author's contribution to it is outlined in the thesis. The effectiveness of the proposed algorithms has then been evaluated in real flight experiments, using the aeronautical flying demonstrator available at CIRA
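    The key idea above, regenerating online a nominal trajectory that links the current vehicle state to the desired touchdown state, can be illustrated with a small, hypothetical sketch: a quintic altitude profile along the track matched to boundary conditions at both ends. This is not the TECVOL/CIRA algorithm, only the general boundary-value idea, and all numbers are invented.

```python
# Hypothetical sketch of online re-planning of a landing profile: a quintic
# polynomial altitude profile h(s) along the track, matched to the current
# state and to the desired touchdown state. Boundary values are illustrative.
import numpy as np

def quintic_profile(s_end, h0, dh0, ddh0, h1=0.0, dh1=0.0, ddh1=0.0):
    """Coefficients of h(s) = sum c_k s^k meeting both boundary conditions.

    h0, dh0, ddh0 : altitude, slope and curvature at the current position (s=0)
    h1, dh1, ddh1 : the same at touchdown (s = s_end)
    """
    A = np.array([
        [1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 2, 0, 0, 0],
        [1, s_end, s_end**2, s_end**3, s_end**4, s_end**5],
        [0, 1, 2*s_end, 3*s_end**2, 4*s_end**3, 5*s_end**4],
        [0, 0, 2, 6*s_end, 12*s_end**2, 20*s_end**3],
    ], dtype=float)
    b = np.array([h0, dh0, ddh0, h1, dh1, ddh1], dtype=float)
    return np.linalg.solve(A, b)

def replan(current_altitude, current_slope, distance_to_touchdown):
    """Re-generate the nominal profile whenever the state or an event changes."""
    c = quintic_profile(distance_to_touchdown, current_altitude,
                        current_slope, 0.0)
    s = np.linspace(0.0, distance_to_touchdown, 50)
    return s, np.polyval(c[::-1], s)     # sampled nominal altitude profile

if __name__ == "__main__":
    s, h = replan(current_altitude=120.0, current_slope=-0.05,
                  distance_to_touchdown=2000.0)
    print("altitude at mid-track:", round(float(h[len(h) // 2]), 2), "m")
```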