
    Cooperative heterogeneous robots for autonomous insects trap monitoring system in a precision agriculture scenario

    Recent advances in precision agriculture are due in large part to the emergence of modern robotic systems. Unmanned aerial systems (UASs), for instance, open new possibilities for solving existing problems in this area thanks to their ability to perform activities of varying complexity. This research presents a multiple-cooperative robot solution in which a UAS and an unmanned ground vehicle (UGV) jointly inspect insect traps in an olive grove. The work evaluates vision-based navigation for the UAS and UGV, in which yellow fly traps fixed in the trees provide visual position data through the You Only Look Once (YOLO) algorithm. The experimental setup evaluates a fuzzy control algorithm applied to the UAS so that it reaches the trap efficiently. Experimental tests were conducted in a realistic simulation environment using the Robot Operating System (ROS) and CoppeliaSim platforms to verify the methodology's performance, and all tests considered specific real-world environmental conditions. A search-and-landing algorithm based on augmented reality tag (AR-Tag) visual processing was evaluated to allow the UAS to return to and land on the UGV base. The outcomes obtained in this work demonstrate the robustness and feasibility of the multiple-cooperative robot architecture for UGVs and UASs applied to the olive inspection scenario.

    The authors would like to thank the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020) and SusTEC (LA/P/0007/2021). In addition, the authors would like to thank the Brazilian agencies CEFET-RJ, CAPES, CNPq, and FAPERJ, as well as the Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Braganca (IPB) - Campus de Santa Apolonia, Portugal, Laboratório Associado para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Portugal, INESC Technology and Science - Porto, Portugal, and Universidade de Trás-os-Montes e Alto Douro - Vila Real, Portugal. This work was carried out under the project “OleaChain: Competências para a sustentabilidade e inovação da cadeia de valor do olival tradicional no Norte Interior de Portugal” (NORTE-06-3559-FSE-000188), an operation used to hire highly qualified human resources, funded by NORTE 2020 through the European Social Fund (ESF).
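    As a rough illustration of the fuzzy visual-servoing idea summarized above (this is not the authors' implementation; the membership functions, detection format, image width, and velocity limit are all assumptions), a minimal Python sketch mapping the detected trap's horizontal pixel offset to a lateral velocity command could look like this:

        def tri(x, a, b, c):
            """Triangular membership function peaking at b, zero outside (a, c)."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_lateral_velocity(cx_pixel, image_width=640, v_max=1.0):
            """Map the trap bounding-box centre (cx_pixel) to a lateral velocity [m/s]."""
            # Normalised horizontal error in [-1, 1]; 0 means the trap is centred.
            e = (cx_pixel - image_width / 2) / (image_width / 2)
            # Fuzzify the error into three overlapping linguistic terms.
            left, center, right = tri(e, -2, -1, 0), tri(e, -1, 0, 1), tri(e, 0, 1, 2)
            # Sugeno-style rule base: left -> -v_max, centred -> hover, right -> +v_max,
            # defuzzified as a weighted average of the rule outputs.
            num = left * (-v_max) + center * 0.0 + right * v_max
            den = left + center + right
            return num / den if den > 0 else 0.0

        # Example: trap centre detected at pixel 500 in a 640-px-wide image.
        print(fuzzy_lateral_velocity(500))  # ~0.56 m/s to the right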

    Autonomous environmental protection drone

    During the summer, forest fires are a major cause of deforestation and of damage to homes and property in communities around the world. The use of Unmanned Aerial Vehicles (UAVs, also known as drones) has increased in recent years, making them an excellent solution for difficult tasks such as wildlife conservation and forest fire prevention. A forest fire detection system can be an answer to these tasks. Using a visual camera and a Convolutional Neural Network (CNN) for image processing on board a UAV can result in an efficient fire detection system. However, a fully autonomous system, capable of 24-hour fire observation and detection over a given geographical area without human intervention, requires a platform and automatic recharging procedures. This dissertation combines technologies such as CNNs, Real Time Kinematics (RTK) and Wireless Power Transfer (WPT) with an on-board computer and software, resulting in a fully automated system that makes forest surveillance more efficient and, in doing so, frees human resources for other locations where they are most needed.
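    As a hedged illustration of the kind of on-board classifier described above (the framework choice, architecture, input size, and class layout below are assumptions, not the network used in the dissertation), a small fire / no-fire CNN can be sketched in Python with PyTorch:

        import torch
        import torch.nn as nn

        class FireClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, 2)  # logits for [no_fire, fire]

            def forward(self, x):
                x = self.features(x).flatten(1)
                return self.classifier(x)

        model = FireClassifier().eval()
        frame = torch.rand(1, 3, 224, 224)           # stand-in for a camera frame
        with torch.no_grad():
            fire_prob = torch.softmax(model(frame), dim=1)[0, 1].item()
        print(f"fire probability: {fire_prob:.2f}")  # untrained weights, so ~0.5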

    Small unmanned airborne systems to support oil and gas pipeline monitoring and mapping

    Acknowledgments: We thank Johan Havelaar, Aeryon Labs Inc., AeroVironment Inc. and Aeronautics Inc. for kindly permitting the use of materials in Fig. 1.

    Technical Challenges for Multi-Temporal and Multi-Sensor Image Processing Surveyed by UAV for Mapping and Monitoring in Precision Agriculture

    Precision Agriculture (PA) is an approach to maximizing crop productivity in a sustainable manner. PA requires up-to-date, accurate and georeferenced information on crops, which can be collected by different sensors from ground, aerial or satellite platforms. The use of optical and thermal sensors from an Unmanned Aerial Vehicle (UAV) platform is an emerging solution for mapping and monitoring in PA, yet many technological challenges are still open. This technical note discusses the choice of UAV type and its scientific payload for surveying a sample area of 5 hectares, as well as the procedures for replicating the study on a larger scale. The case study is an ideal opportunity to test best practices for combining the requirements of PA surveys with the limitations imposed by local UAV regulations. To follow crop development at various stages, nine flights over a period of four months were planned and executed in the field area. The use of ground control points for optimal georeferencing and accurate alignment of maps created by multi-temporal processing is analyzed. Output maps are produced in both visible and thermal bands, after appropriate strip alignment, mosaicking, sensor calibration, and processing with Structure from Motion techniques. The discussion of strategies, checklists, workflow, and processing is backed by data from more than 5000 optical and radiometric thermal images taken during five hours of flight time in nine flights throughout the crop season. The geomatics challenges of a georeferenced survey for PA using UAVs are the key focus of this technical note. Accurate maps derived from these multi-temporal and multi-sensor surveys feed Geographic Information Systems (GIS) and Decision Support Systems (DSS) to benefit PA in a multidisciplinary approach.
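    A worked example of the survey arithmetic behind such flight plans helps make the trade-offs concrete (the sensor and altitude values below are illustrative, not those of the paper's payload): ground sample distance (GSD) and image footprint follow from basic camera geometry.

        def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
            """GSD [cm/px] = (sensor width * altitude) / (focal length * image width)."""
            return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

        # Hypothetical optical payload: 13.2 mm sensor width, 8.8 mm focal length,
        # 5472-px-wide images, flown at 60 m above ground level.
        gsd = gsd_cm_per_px(13.2, 8.8, 60.0, 5472)
        footprint_m = gsd * 5472 / 100.0  # ground width covered by one image
        print(f"GSD ~ {gsd:.2f} cm/px, image footprint width ~ {footprint_m:.0f} m")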

    A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques

    Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and perceiving the environment through image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in robotic systems through two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions for aerial surveillance or points of interest. Computer vision systems generally operate in three steps: data acquisition in numerical form, data processing and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After data acquisition, the embedded computer processes the data by executing algorithms for measurement (variables, indices and coefficients), detection (patterns, objects or areas) or monitoring (people, vehicles or animals). The resulting processed data are analyzed and then converted into decision commands that serve as control inputs for the autonomous robotic system. In order to integrate computer vision systems with different UAV platforms, this work proposes a framework for mission control and guidance of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting commands exchanged between flight controllers and computer vision algorithms. As a case study, two algorithms were developed to provide autonomy to UAVs intended for application in precision agriculture. The first algorithm computes a reflectance coefficient used for the targeted, self-regulated and efficient application of agrochemicals. The second algorithm identifies crop rows to guide the UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results when implemented on embedded hardware.
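    As a sketch of how the crop-row identification step could be approached (this is an assumed pipeline for illustration, not the framework's actual algorithm; the excess-green threshold and Hough parameters are guesses), an OpenCV version in Python might be:

        import cv2
        import numpy as np

        def detect_crop_rows(bgr_image):
            b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
            exg = 2 * g - r - b                        # excess-green index highlights vegetation
            mask = (exg > 0.1).astype(np.uint8) * 255  # threshold is an assumption
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            edges = cv2.Canny(mask, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                    minLineLength=100, maxLineGap=20)
            return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2) per line

        # rows = detect_crop_rows(cv2.imread("field.jpg"))  # hypothetical aerial frame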

    Safe Autonomous Aerial Surveys of Historical Building Interiors

    This thesis is aimed at the development of a system for the safe autonomous survey of historical building interiors by a cooperative formation of multi-rotor unmanned aerial vehicles (UAVs). The proposed solution involves a method for safe trajectory tracking based on a leader-follower scheme and model predictive control, detection of potential faults and failures, and a mission controller that coordinates the individual UAVs of the formation and ensures a proper reaction to faults and failures in the subsystems. The design of the whole system is driven by its intended deployment in real-world scenarios motivated by the documentation of historical monuments. The developed system is first evaluated in numerous simulations and then tested in a real-world experiment with real UAVs.
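    To make the leader-follower idea concrete, here is a minimal planar sketch under assumed kinematics and a fixed formation offset (the thesis tracks such references with model predictive control; the offset and poses below are invented): the follower's reference is the leader's position plus a constant offset expressed in the leader's body frame.

        import numpy as np

        def follower_reference(leader_xy, leader_heading, offset_body=(-1.5, 0.0)):
            """Return the follower's reference position in the world frame."""
            c, s = np.cos(leader_heading), np.sin(leader_heading)
            R = np.array([[c, -s], [s, c]])  # body-to-world rotation
            return np.asarray(leader_xy) + R @ np.asarray(offset_body)

        # Example: leader at (2, 1) heading 45 degrees; follower keeps 1.5 m behind it.
        print(follower_reference((2.0, 1.0), np.pi / 4))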

    Planning for perception and perceiving for decision: POMDP-like online target detection and recognition for autonomous UAVs

    This paper studies the use of POMDP-like techniques to tackle an online multi-target detection and recognition mission by an autonomous rotorcraft UAV. Such robotic missions are complex and too large to be solved offline, and acquiring information about the environment is as important as achieving some symbolic goals. The POMDP model deals in a single framework with both perception actions (controlling the camera's view angle) and mission actions (moving between zones and flight levels, landing) needed to achieve the goal of the mission, i.e. landing in a zone containing a car whose model is recognized as a desired target model with sufficient belief. We explain how we automatically learned the probabilistic observation model of the POMDP from a statistical analysis of the image processing algorithm used on board the UAV to analyze objects in the scene. We also present our "optimize-while-execute" framework, which drives a POMDP sub-planner to optimize and execute the POMDP policy in parallel under action duration constraints, reasoning about the future possible execution states of the robotic system. Finally, we present experimental results which demonstrate that Artificial Intelligence techniques such as POMDP planning can be successfully applied to automatically control perception and mission actions hand-in-hand for complex time-constrained UAV missions.
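    The belief update at the heart of such a POMDP model can be illustrated with a small sketch (the observation probabilities below are invented for illustration, not the ones learned from the image processing statistics in the paper): Bayes' rule over the hypothesis "the car in this zone is the desired target".

        import numpy as np

        def update_belief(belief, observation, obs_model):
            """belief: P(state) over states; obs_model[s][o] = P(o | state s)."""
            likelihood = np.array([obs_model[s][observation] for s in range(len(belief))])
            posterior = likelihood * belief
            return posterior / posterior.sum()

        # States: 0 = "car is the target model", 1 = "car is not the target".
        # Assumed image-processing confusion model: P(classified_as_target | state).
        obs_model = [{"target": 0.8, "other": 0.2},   # true target
                     {"target": 0.3, "other": 0.7}]   # non-target
        belief = np.array([0.5, 0.5])
        for o in ["target", "target", "other"]:       # a sequence of noisy classifications
            belief = update_belief(belief, o, obs_model)
        print(belief)  # belief that the car is the desired target after three observations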

    Unmanned Systems Sentinel / 3 June 2016

    Approved for public release; distribution is unlimited

    Military Application of Aerial Photogrammetry Mapping Assisted by Small Unmanned Air Vehicles

    This research investigated practical military applications of photogrammetric methods using remote sensing assisted by small unmanned aerial vehicles (SUAVs). It explored the feasibility of UAV aerial mapping for specific military purposes, focusing on the geolocational and measurement accuracy of the digital models and on image processing time. The research method involved experimental flight tests using low-cost commercial off-the-shelf (COTS) components, sensors and image processing tools to study key features required for military use, such as location accuracy, processing-time estimation, and measurement capability. Based on the results of the data analysis, two military applications are defined to demonstrate the feasibility and utility of the methods. The first application is to assess the damage to an attacked military airfield using photogrammetric digital models. Using a hex-rotor test platform with a Sony A6000 camera, georeferenced maps were produced with 1 meter accuracy and sufficient resolution (about 1 cm/pixel) to identify foreign objects on the runway. The other case examines the utility and quality of a targeting system using geo-spatial data from reconstructed 3-Dimensional (3-D) photogrammetry models. By analyzing the 3-D model, targeting with under 1 meter accuracy and only 5 percent error in distance, area, and volume measurements was demonstrated.
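    As a hedged sketch of how volume (and, analogously, area or distance) could be measured from a gridded photogrammetric surface model (the grid values and cell size here are invented, not data from the study): volume is the sum of elevation differences times the cell footprint area.

        import numpy as np

        def crater_volume(dem_before, dem_after, cell_size_m):
            """Volume [m^3] of material missing in dem_after relative to dem_before."""
            depth = np.clip(dem_before - dem_after, 0.0, None)  # count only removed material
            return float(depth.sum() * cell_size_m ** 2)

        # Toy 4x4 DEMs with 0.5 m cells: a 0.8 m deep depression over four cells.
        before = np.zeros((4, 4))
        after = before.copy()
        after[1:3, 1:3] -= 0.8
        print(crater_volume(before, after, 0.5))  # 4 cells * 0.8 m * 0.25 m^2 = 0.8 m^3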