
    Monocular 3D Scene Reconstruction for an Autonomous Unmanned Aerial Vehicle

    The real-time 3D reconstruction of the surrounding scene is a key part of the autonomous flight pipeline of an unmanned aerial vehicle (UAV). The combination of an inertial measurement unit (IMU) and a monocular camera is a common and inexpensive sensor setup that can be used to recover the scale of the environment. This thesis aims to develop an algorithm solving the 3D reconstruction problem for this particular setup by leveraging existing visual-inertial navigation system (VINS) odometry algorithms for localisation. Two algorithms are developed, distinguished by how they extract correspondences between frames: a wide-baseline matching-based algorithm and a small-baseline tracking-based one. In addition, an offline post-processing structure-refinement step is implemented to further improve the resulting structure. The algorithms and the refinement step are evaluated on publicly available datasets. Furthermore, they are tested in a simulator, and a real-world experiment is conducted. The results show that the tracking-based algorithm performs significantly better. Importantly, the dataset tests and the real-world experiment suggest that this algorithm can be employed in practical application scenarios.
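
    As a minimal sketch of the small-baseline, tracking-based idea described above (KLT feature tracking followed by triangulation using poses from a VINS odometry backend), the Python/OpenCV snippet below illustrates a single reconstruction step; it is not the thesis's actual pipeline, and the function name and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def track_and_triangulate(img0, img1, K, pose0, pose1):
    """One small-baseline reconstruction step.

    img0, img1: consecutive grayscale (uint8) frames.
    K: 3x3 camera intrinsic matrix.
    pose0, pose1: world-to-camera poses as (R, t) with R 3x3 and t 3x1,
    assumed to come from a VINS odometry backend.
    """
    # Detect corners in the first frame and track them into the second
    # frame with pyramidal Lucas-Kanade optical flow.
    pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
    ok = status.ravel() == 1
    pts0 = pts0[ok].reshape(-1, 2)
    pts1 = pts1[ok].reshape(-1, 2)

    # 3x4 projection matrices from the world-to-camera poses.
    P0 = K @ np.hstack(pose0)
    P1 = K @ np.hstack(pose1)
    pts4d = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)
    return (pts4d[:3] / pts4d[3]).T   # N x 3 landmarks in the world frame
```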

    Expeditionary Logistics: A Low-Cost, Deployable, Unmanned Aerial System for Airfield Damage Assessment

    Airfield Damage Repair (ADR) is among the most important expeditionary activities for our military. The goal of ADR is to restore a damaged airfield to operational status as quickly as possible. Before ADR can begin, however, the damage to the airfield must be assessed, so Airfield Damage Assessment (ADA) has received considerable attention. A damaged airfield is often expected to contain unexploded ordnance, which makes ADA a slow, difficult, and dangerous process. For this reason, it is best to make ADA completely unmanned and automated. Additionally, ADA needs to be executed as quickly as possible so that ADR can begin and the airfield can be restored to a usable condition. Among other modalities, tower-based monitoring and remote sensor systems are often used for ADA. There is now an opportunity to investigate the use of commercial-off-the-shelf, low-cost, automated sensor systems for automatic damage detection. By developing a combination of ground-based and unmanned aerial vehicle sensor systems, we demonstrate that ADA can be completed in a safe, efficient, and cost-effective manner.

    Evaluating the Use of sUAS-Derived Imagery for Monitoring Flood Protection Infrastructure

    In the US there are approximately 33,000 miles of levees, including 14,500 miles of levee systems associated with US Army Corps of Engineers programs and approximately 15,000 miles managed by other state and federal agencies. More than 14 million people live behind levees and associated flood prevention infrastructure. Monitoring and risk assessment are ongoing processes, especially during flood conditions. The city of New Orleans was heavily impacted by storm surges and intense rainfall from Hurricane Katrina in 2005. The impact of the hurricane was substantial enough to cause levee failure and I-wall toppling; many of the levees were breached and floodwaters inundated the city. Subsidence and an increasing population are likely to make flooding events more frequent and costly. As new technologies emerge, monitoring and risk assessment can benefit from them, increasing community resiliency. In this research, I investigate the use of the structure-from-motion photogrammetric method to monitor positional changes in invariant objects such as levees, specifically I-walls. This method uses conventional digital images taken from multiple view locations and angles, by either a moving aerial platform or terrestrial photography. Using parallel-coded software and accompanying hardware, 3D point clouds, digital surface models, and orthophotos can be created. Using commercially available unmanned aerial systems (UAS), we created multiple image sets of a simulated I-wall at various flight elevations, look angles, and effective overlaps, and compared similar processing workflows across these acquisition criteria. The comparisons can be used for sensor selection and mission planning to improve the quality of the final product.
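
    As a rough illustration of how positional change in a rigid structure such as an I-wall could be quantified from two co-registered SfM surveys (this is not the workflow used in the study, and the example data are synthetic), the sketch below recovers a rigid rotation and translation between matched points with a Kabsch/Procrustes fit.

```python
import numpy as np

def rigid_displacement(points_before, points_after):
    """Estimate the rigid motion (R, t) mapping the 'before' point cloud of
    an I-wall onto the 'after' cloud, assuming known one-to-one
    correspondences (e.g. matched survey targets). Kabsch alignment."""
    a = points_before - points_before.mean(axis=0)
    b = points_after - points_after.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = points_after.mean(axis=0) - r @ points_before.mean(axis=0)
    return r, t

# Synthetic example: a wall section shifted 2 cm along x and tilted slightly.
before = np.random.rand(200, 3)
tilt = 0.01
rot = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
                [0.0, 1.0, 0.0],
                [-np.sin(tilt), 0.0, np.cos(tilt)]])
after = before @ rot.T + np.array([0.02, 0.0, 0.0])
r_est, t_est = rigid_displacement(before, after)
print(np.round(t_est, 4))   # recovered translation, approximately [0.02 0 0]
```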

    On-Board Georeferencing Using FPGA-Based Optimized Second-Order Polynomial Equation

    For real-time monitoring of natural disasters, such as fires, volcanoes, floods, landslides, and coastal inundation, highly accurate georeferenced remotely sensed imagery is needed. Georeferenced imagery can be fused with geographic spatial data sets to provide geographic coordinates and positioning for regions of interest. This paper proposes an on-board georeferencing method for remotely sensed imagery comprising the following modules: input data, coordinate transformation, bilinear interpolation, and output data. The experimental results demonstrate multiple benefits of the proposed method: (1) the computation speed using the proposed algorithm is 8 times faster than that of a PC implementation; (2) the resources of the field programmable gate array (FPGA) meet the design requirements: the coordinate transformation scheme uses 250,656 LUTs, 499,268 registers, and 388 DSP48s, and the bilinear interpolation module uses 27,218 LUTs, 45,823 registers, 456 RAM/FIFOs, and 267 DSP48s; (3) the root mean square errors (RMSEs) are less than one pixel, and the other statistics, such as maximum error, minimum error, and mean error, are also less than one pixel; (4) the gray values of the georeferenced image produced by the FPGA implementation have the same accuracy as those produced with MATLAB and Visual Studio (C++), and an accuracy very close to that of ENVI software; and (5) the on-chip power consumption is 0.659 W. Therefore, it can be concluded that the proposed georeferencing method, implemented on an FPGA with a second-order polynomial model and a bilinear interpolation algorithm, can achieve real-time georeferencing of remotely sensed imagery.
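
    As a rough, software-only illustration of the processing chain described above (not the FPGA implementation itself), the sketch below applies an indirect second-order polynomial coordinate transformation followed by bilinear resampling; the polynomial coefficients, grid sizes, and input image are hypothetical placeholders.

```python
import numpy as np

def second_order_transform(x, y, coeffs):
    """Map output (map-grid) coordinates to input (raw-image) coordinates
    with a second-order polynomial:
    a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2."""
    a, b = coeffs                       # column and row coefficient vectors
    terms = np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return np.tensordot(a, terms, axes=1), np.tensordot(b, terms, axes=1)

def bilinear_sample(img, cols, rows):
    """Resample the source image at fractional (column, row) positions."""
    cols = np.clip(cols, 0, img.shape[1] - 1.001)
    rows = np.clip(rows, 0, img.shape[0] - 1.001)
    c0, r0 = np.floor(cols).astype(int), np.floor(rows).astype(int)
    dc, dr = cols - c0, rows - r0
    top = img[r0, c0] * (1 - dc) + img[r0, c0 + 1] * dc
    bot = img[r0 + 1, c0] * (1 - dc) + img[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bot * dr

# Hypothetical coefficients; in practice they would be estimated from
# ground control points or a rigorous sensor model.
coeffs = (np.array([10.0, 0.98, 0.02, 1e-5, 1e-6, 1e-6]),
          np.array([5.0, -0.01, 1.01, 1e-5, 1e-6, 1e-6]))
raw = np.random.rand(512, 512).astype(np.float32)          # raw image
gx, gy = np.meshgrid(np.arange(480.0), np.arange(480.0))   # output grid
cols, rows = second_order_transform(gx, gy, coeffs)
georeferenced = bilinear_sample(raw, cols, rows)
```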

    Military Application of Aerial Photogrammetry Mapping Assisted by Small Unmanned Air Vehicles

    This research investigated practical military applications of photogrammetric methods using remote sensing assisted by small unmanned aerial vehicles (SUAVs). It explored the feasibility of UAV aerial mapping for specific military purposes, focusing on the geolocational and measurement accuracy of the digital models and on image processing time. The research method involved experimental flight tests using low-cost commercial off-the-shelf (COTS) components, sensors, and image processing tools to study the key features required of the method in military use, such as location accuracy, time estimation, and measurement capability. Based on the results of the data analysis, two military applications are defined to demonstrate the feasibility and utility of the method. The first application is assessing the damage to an attacked military airfield using photogrammetric digital models. Using a hex-rotor test platform with a Sony A6000 camera, georeferenced maps with 1-meter accuracy and sufficient resolution (about 1 cm/pixel) to identify foreign objects on the runway were produced. The second case examines the utility and quality of a targeting system using geo-spatial data from reconstructed 3-dimensional (3-D) photogrammetry models. By analyzing the 3-D model, operable targeting with under 1-meter accuracy and only 5 percent error on distance, area, and volume measurements was demonstrated.

    Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry

    Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among other things, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent, fast-developing research in computational fields; however, some issues related to the computationally expensive integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of unmanned aerial vehicles (UAVs) and sensors able to capture massive datasets with high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey summarizes previous work, highlighting the most relevant contributions to the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal, and hyperspectral imagery. The surveyed applications are focused on agriculture and forestry, since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by general-purpose graphics processing unit (GPGPU) approaches, which are also summarized in this work. Finally, some open issues and future research directions are presented.

    Low-cost UAV surveys of hurricane damage in Dominica: automated processing with co-registration of pre-hurricane imagery for change analysis

    In 2017, hurricane Maria caused unprecedented damage and fatalities on the Caribbean island of Dominica. In order to ‘build back better’ and to learn from the processes causing the damage, it is important to quickly document, evaluate and map changes, both in Dominica and in other high-risk countries. This paper presents an innovative and relatively low-cost and rapid workflow for accurately quantifying geomorphological changes in the aftermath of a natural disaster. We used unmanned aerial vehicle (UAV) surveys to collect aerial imagery from 44 hurricane-affected key sites on Dominica. We processed the imagery using structure from motion (SfM) as well as a purpose-built Python script for automated processing, enabling rapid data turnaround. We also compared the data to an earlier UAV survey undertaken shortly before hurricane Maria and established ways to co-register the imagery, in order to provide accurate change detection data sets. Consequently, our approach has had to differ considerably from the previous studies that have assessed the accuracy of UAV-derived data in relatively undisturbed settings. This study therefore provides an original contribution to UAV-based research, outlining a robust aerial methodology that is potentially of great value to post-disaster damage surveys and geomorphological change analysis. Our findings can be used (1) to utilise UAV in post-disaster change assessments; (2) to establish ground control points that enable before-and-after change analysis; and (3) to provide baseline data reference points in areas that might undergo future change. We recommend that countries which are at high risk from natural disasters develop capacity for low-cost UAV surveys, building teams that can create pre-disaster baseline surveys, respond within a few hours of a local disaster event and provide aerial photography of use for the damage assessments carried out by local and incoming disaster response teams
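
    As a rough sketch of one way the co-registered before/after data could feed a change analysis (this is not the authors' Python workflow; the rasters, resolution, and threshold below are synthetic placeholders), differencing two co-registered digital surface models highlights erosion and deposition.

```python
import numpy as np

# Co-registered pre- and post-event digital surface models on the same grid;
# random placeholders with a synthetic "scour" patch stand in for real data.
dsm_pre = np.random.rand(1000, 1000) * 5.0
dsm_post = dsm_pre + np.random.normal(0.0, 0.05, dsm_pre.shape)
dsm_post[400:450, 100:160] -= 2.0          # simulated landslide scar

diff = dsm_post - dsm_pre                  # elevation change per cell (m)
threshold = 0.5                            # hypothetical noise floor (m)
eroded = diff < -threshold                 # material lost
deposited = diff > threshold               # material gained (debris)

cell_area = 0.05 ** 2                      # m^2 per cell at 5 cm resolution
print(f"eroded volume   ~ {(-diff[eroded]).sum() * cell_area:.1f} m^3")
print(f"deposited volume ~ {diff[deposited].sum() * cell_area:.1f} m^3")
```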

    Autonomous High-Precision Landing on an Unmanned Surface Vehicle

    The main goal of this thesis is the development of an autonomous high-precision landing system for a UAV on an autonomous boat. In this dissertation, a collaborative method for the autonomous landing of Multi-Rotor Vertical Takeoff and Landing (MR-VTOL) Unmanned Aerial Vehicles (UAVs) is presented. The majority of common UAV autonomous landing systems adopt an approach in which the UAV scans the landing zone for a predetermined pattern, establishes relative positions, and uses those positions to execute the landing. These techniques have shortcomings, such as the extensive processing carried out by the UAV itself, which requires considerable computational power. An additional issue is that most of these techniques only work while the UAV is already flying at a low altitude, since the pattern's elements must be plainly visible to the UAV's camera. An RGB camera positioned in the landing zone and pointed up at the sky is the foundation of the methodology described throughout this dissertation. Because the sky is a very static and homogeneous background, convolutional neural networks and inverse kinematics approaches can be used to isolate and analyse the distinctive motion patterns the UAV presents. Following real-time visual analysis, a terrestrial or maritime robotic system can transmit commands to the UAV. The end result is a model-free technique, i.e. one that does not rely on predetermined patterns, that can assist the UAV in performing its landing manoeuvre. The method is reliable enough to be used on its own or in conjunction with more established techniques to create a more robust system. According to experimental simulation results derived from a dataset comprising three different videos, the object detection neural network approach was able to detect the UAV in 91.57% of the assessed frames with a tracking error under 8%. A high-level relative position control system was also created, based on the idea of an approach zone to the helipad: every potential three-dimensional point within the zone corresponds to a UAV velocity command with a certain orientation and magnitude. During testing in a simulated setting, the control system worked flawlessly, conducting the UAV's landing to within 6 cm of the target.
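
    As a minimal sketch of the approach-zone idea (mapping a 3D position inside a zone above the helipad to a velocity command pointing at the pad), the snippet below uses a simple cone geometry and a proportional slow-down; the cone angle, gain, and speed limit are illustrative assumptions, not the thesis's tuned values.

```python
import numpy as np

def approach_velocity(uav_pos, pad_pos, cone_half_angle_deg=30.0,
                      max_speed=1.0):
    """Map a UAV position inside the approach zone (a cone above the pad)
    to a velocity command aimed at the pad, slowing down as it closes in."""
    offset = np.asarray(pad_pos, dtype=float) - np.asarray(uav_pos, dtype=float)
    dist = np.linalg.norm(offset)
    if dist < 1e-6:
        return np.zeros(3)                 # already on the pad

    # Angle between the straight-down axis and the line from UAV to pad.
    vertical = np.array([0.0, 0.0, -1.0])
    angle = np.degrees(np.arccos(np.clip(offset @ vertical / dist, -1.0, 1.0)))
    if angle > cone_half_angle_deg:
        return np.zeros(3)                 # outside the approach zone: hold

    speed = min(max_speed, 0.5 * dist)     # proportional slow-down near pad
    return speed * offset / dist           # velocity toward the pad

# Example: UAV 4 m above and 1 m to the side of the pad (pad at the origin).
print(approach_velocity([1.0, 0.0, 4.0], [0.0, 0.0, 0.0]))
```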

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.