
    A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques

    Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and perceiving the environment through image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions or points of interest for aerial surveillance. Computer vision systems generally operate in three steps: data acquisition in numerical form, data processing, and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After data acquisition, the embedded computer processes the data by executing algorithms for measurement (variables, indices, and coefficients), detection (patterns, objects, or areas), or monitoring (people, vehicles, or animals). The resulting processed data is analyzed and converted into decision commands that serve as control inputs for the autonomous robotic system. To integrate computer vision systems with different UAV platforms, this work proposes a framework for mission control and guidance of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting the commands exchanged between flight controllers and computer vision algorithms. As a case study, two algorithms were developed to provide autonomy to UAVs used in precision agriculture. The first calculates a reflectance coefficient for the punctual, self-regulated, and efficient application of agrochemicals; the second identifies crop lines to guide UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results when implemented on embedded hardware.
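
    To make the crop-line guidance step concrete, a minimal sketch follows that segments vegetation with the excess-green index and extracts dominant row directions with a Hough transform. The nadir RGB input, OpenCV pipeline, and thresholds are illustrative assumptions, not the thesis' actual algorithm.

        # Minimal crop-row detection sketch (assumed approach, not the thesis' method).
        import cv2
        import numpy as np

        def detect_crop_rows(image_bgr):
            """Return (rho, theta) lines for dominant crop rows via a Hough transform."""
            # Segment vegetation with the excess-green index (2G - R - B),
            # a common heuristic in precision-agriculture pipelines.
            b, g, r = cv2.split(image_bgr.astype(np.float32))
            exg = 2.0 * g - r - b
            mask = (exg > exg.mean()).astype(np.uint8) * 255
            edges = cv2.Canny(mask, 50, 150)
            # Standard Hough transform; the vote threshold is scene-dependent.
            lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=200)
            return [] if lines is None else [tuple(l[0]) for l in lines]

        def heading_correction(lines):
            """Mean row angle relative to the image vertical, folded to [-pi/2, pi/2)."""
            if not lines:
                return 0.0
            angles = [t if t < np.pi / 2 else t - np.pi for _, t in lines]
            return float(np.mean(angles))

    A guidance loop could feed heading_correction into a yaw setpoint; in practice the vegetation threshold and Hough parameters would be tuned per crop and lighting condition.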

    A Review on IoT Deep Learning UAV Systems for Autonomous Obstacle Detection and Collision Avoidance

    Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, offer unprecedented opportunities to boost a wide array of large-scale Internet of Things (IoT) applications. Nevertheless, UAV platforms still face important limitations, mainly related to autonomy and weight, that impact their remote sensing capabilities when capturing and processing the data required for developing autonomous and robust real-time obstacle detection and avoidance systems. In this regard, Deep Learning (DL) techniques have arisen as a promising alternative for improving real-time obstacle detection and collision avoidance for highly autonomous UAVs. This article reviews the most recent developments in DL Unmanned Aerial Systems (UASs) and provides a detailed explanation of the main DL techniques. Moreover, the latest DL-UAV communication architectures are studied and their most common hardware is analyzed. Furthermore, this article enumerates the most relevant open challenges for current DL-UAV solutions, allowing future researchers to define a roadmap for devising a new generation of affordable, autonomous DL-UAV IoT solutions. Funding: Xunta de Galicia (ED431C 2016-045; ED431C 2016-047; ED431G/01); Centro Singular de Investigación de Galicia (PC18/01); Agencia Estatal de Investigación de España (TEC2016-75067-C4-1-).
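
    For a concrete flavor of the detection stage such DL-UAV systems embed, the sketch below runs an off-the-shelf detector on a single camera frame. The model choice, score threshold, and the weights="DEFAULT" argument (recent torchvision versions) are illustrative assumptions, not a method from the review.

        # Off-the-shelf DL obstacle detection sketch (illustrative, not from the review).
        import torch
        import torchvision
        from torchvision.transforms.functional import to_tensor

        # Pretrained generic object detector standing in for a UAV-specific model.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        model.eval()

        @torch.no_grad()
        def detect_obstacles(frame_rgb, score_threshold=0.6):
            """Return [(box, score), ...] for detections above the threshold."""
            out = model([to_tensor(frame_rgb)])[0]
            keep = out["scores"] > score_threshold
            return list(zip(out["boxes"][keep].tolist(), out["scores"][keep].tolist()))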

    A unified vision and inertial navigation system for planetary hoppers

    Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (pages 139-146). In recent years, considerable attention has been paid to hopping as a novel mode of planetary exploration. Hopping vehicles provide advantages over traditional surface exploration vehicles, such as wheeled rovers, by enabling in-situ measurements in otherwise inaccessible terrain. However, significant development over previously demonstrated vehicle navigation technologies is required to overcome the inherent challenges involved in navigating a hopping vehicle, especially in adverse terrain. While hoppers are in many ways similar to traditional landers and surface explorers, they incorporate additional, unique motions that must be accounted for beyond those of conventional planetary landing and surface navigation systems. This thesis describes a unified vision and inertial navigation system for propulsive planetary hoppers and provides a demonstration of this technology. An architecture for a navigation system specific to the motions and mission profiles of hoppers is presented, incorporating unified inertial and terrain-relative navigation solutions. A modular sensor testbed, including a stereo vision package and inertial measurement unit, was developed as a proof-of-concept for this navigation system architecture. The system is shown to be capable of real-time output of an accurate navigation state estimate for motions and trajectories similar to those of planetary hoppers. By Theodore J. Steiner, III. S.M.
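
    At its simplest, the inertial half of such a unified system is strapdown propagation of position, velocity, and attitude from IMU samples. The toy step below (first-order attitude update, world frame with z up) is a simplification for illustration, not the thesis' estimator.

        # Toy strapdown IMU propagation step (simplified; not the thesis' estimator).
        import numpy as np

        GRAVITY = np.array([0.0, 0.0, -9.81])  # world frame, z up

        def propagate(p, v, R, accel_body, gyro_body, dt):
            """One dead-reckoning step: position p, velocity v, body-to-world rotation R."""
            # First-order attitude update from body angular rates.
            wx, wy, wz = gyro_body * dt
            R = R @ np.array([[1.0, -wz,  wy],
                              [ wz, 1.0, -wx],
                              [-wy,  wx, 1.0]])
            # Rotate specific force to the world frame, add gravity, integrate twice.
            a_world = R @ accel_body + GRAVITY
            v_new = v + a_world * dt
            p_new = p + v * dt + 0.5 * a_world * dt**2
            return p_new, v_new, R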

    Image-Aided Navigation Using Cooperative Binocular Stereopsis

    This thesis proposes a novel method for cooperatively estimating the positions of two vehicles in a global reference frame based on synchronized image and inertial information. The proposed technique, cooperative binocular stereopsis, leverages the ability of one vehicle to reliably localize itself relative to the other vehicle using image data, which enables motion estimation by tracking the three-dimensional positions of common features. Unlike popular simultaneous localization and mapping (SLAM) techniques, the method proposed in this work does not require that the positions of features be carried forward in memory. Instead, the optimal vehicle motion over a single time interval is estimated from the positions of common features using a modified bundle adjustment algorithm and is used as a measurement in a delayed-state extended Kalman filter (EKF). The developed system achieves improved motion estimation compared to previous work and is a potential alternative to map-based SLAM algorithms.
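
    The single-interval motion estimate at the heart of this approach, aligning the 3D positions of common features seen at two times, can be illustrated with Horn's closed-form absolute-orientation solution via SVD. This minimal sketch stands in for the modified bundle adjustment described in the thesis.

        # Rigid motion from common 3D feature positions (Horn/Kabsch method via SVD).
        import numpy as np

        def rigid_motion(points_prev, points_curr):
            """Least-squares R, t with points_curr ~= R @ points_prev + t (Nx3 arrays)."""
            mu_p, mu_c = points_prev.mean(axis=0), points_curr.mean(axis=0)
            # Cross-covariance of the centered point sets.
            H = (points_prev - mu_p).T @ (points_curr - mu_c)
            U, _, Vt = np.linalg.svd(H)
            # Correct for reflections so R is a proper rotation.
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = mu_c - R @ mu_p
            return R, t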

    PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision

    We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
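
    Marker-based visual pose estimation of the kind evaluated here reduces to a PnP problem over known marker geometry. Since the paper's 2pt+gravity formulation is not available off the shelf, the sketch below uses OpenCV's generic iterative solver as an illustration.

        # Marker-based pose via a generic PnP solver (not the 2pt+gravity method).
        import cv2
        import numpy as np

        def marker_pose(corners_3d, corners_px, K, dist_coeffs=None):
            """Camera pose from known marker corners; returns (rvec, tvec)."""
            ok, rvec, tvec = cv2.solvePnP(
                np.asarray(corners_3d, dtype=np.float64),   # marker corners, marker frame
                np.asarray(corners_px, dtype=np.float64),   # detected pixel coordinates
                K,                                          # camera intrinsics (3x3)
                dist_coeffs if dist_coeffs is not None else np.zeros(5),
                flags=cv2.SOLVEPNP_ITERATIVE,
            )
            if not ok:
                raise RuntimeError("PnP failed")
            return rvec, tvec  # Rodrigues rotation and translation, marker to camera frame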

    A Deep Neural Network Sensor for Visual Servoing in 3D Spaces


    Expanding Navigation Systems by Integrating It with Advanced Technologies

    Navigation systems provide the optimized route from one location to another. They are mainly assisted by external technologies such as the Global Positioning System (GPS) and other satellite-based radio navigation systems. GPS has many advantages, such as high accuracy, wide availability, reliability, and self-calibration. However, GPS is limited to outdoor operation. The practice of combining different sources of data to improve the overall outcome is commonly used in various domains. GIS is already integrated with GPS to provide the visualization and realization aspects of a given location. The Internet of Things (IoT) is a growing domain in which embedded sensors are connected to the Internet, and so IoT can improve existing navigation systems and expand their capabilities. This chapter proposes a framework based on the integration of GPS, GIS, IoT, and mobile communications to provide a comprehensive and accurate navigation solution. In the next section, we outline the limitations of GPS, and then we describe the integration of GIS, smartphones, and GPS that enables its use in mobile applications. In the rest of this chapter, we introduce various navigation implementations using alternate technologies integrated with GPS or operated as standalone devices.
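
    One elementary building block of the GPS layer in such a framework is the great-circle distance between two fixes. A minimal haversine sketch follows, under a spherical-Earth assumption.

        # Haversine distance between two GPS fixes (spherical-Earth approximation).
        import math

        def haversine_m(lat1, lon1, lat2, lon2, radius_m=6371000.0):
            """Approximate ground distance in metres between two lat/lon fixes (degrees)."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
            return 2 * radius_m * math.asin(math.sqrt(a))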

    Visual Appearance Analysis of Forest Scenes for Monocular SLAM

    Monocular simultaneous localisation and mapping (SLAM) is a cheap and energy-efficient way to enable Unmanned Aerial Vehicles (UAVs) to safely navigate managed forests and gather data crucial for monitoring tree health. SLAM research, however, has mostly been conducted in structured human environments, and as such is poorly adapted to unstructured forests. In this paper, we compare the performance of state-of-the-art monocular SLAM systems on forest data and use visual appearance statistics to characterise the differences between forests and other environments, including a photorealistic simulated forest. We find that SLAM systems struggle with all but the most straightforward forest terrain and identify key attributes (lighting changes and in-scene motion) that distinguish forest scenes from "classic" urban datasets. These differences offer an insight into what makes forests harder to map and open the way for targeted improvements. We also demonstrate that even simulations that look impressive to the human eye can fail to properly reflect the difficult attributes of the environment they simulate, and provide suggestions for more closely mimicking natural scenes. Comment: Accepted to ICRA 201
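
    Two of the attributes singled out here, lighting changes and in-scene motion, can be proxied with simple per-frame statistics. The sketch below (mean-brightness change and mean Farneback optical-flow magnitude) is an assumed illustration, not the paper's exact metrics.

        # Simple appearance statistics over consecutive grayscale frames (assumed proxies).
        import cv2
        import numpy as np

        def appearance_stats(prev_gray, curr_gray):
            """Return (brightness change, mean optical-flow magnitude) between frames."""
            d_brightness = float(curr_gray.mean()) - float(prev_gray.mean())
            # Dense Farneback optical flow as a proxy for in-scene motion.
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, curr_gray, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
            )
            mean_flow = float(np.linalg.norm(flow, axis=2).mean())
            return d_brightness, mean_flow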