
    Optical Geolocation for Small Unmanned Aerial Systems

    This paper presents an airborne optical geolocation system that uses four optical targets to provide position and attitude estimation for a sUAS supporting the NASA Acoustic Research Mission (ARM), whose goal is to reduce nuisance airframe noise during approach and landing. A large, precision-positioned microphone array captures the airframe noise for multiple passes of a Gulfstream III aircraft. For health monitoring of the microphone array, the Acoustic Calibration Vehicle (ACV) sUAS completes daily flights with an onboard speaker emitting tones at frequencies optimized for determining microphone functionality. An accurate position estimate of the ACV relative to the array is needed for microphone health monitoring. To this end, an optical geolocation system using a downward-facing camera mounted on the ACV was developed. The 3D position of the ACV is computed using the pinhole camera model. A novel optical geolocation algorithm first detects the targets, then a recursive algorithm refines their localization. Finally, the position of the sUAS is computed from the image coordinates of the targets, the 3D world coordinates of the targets, and the camera matrix. A Real-Time Kinematic GPS system serves as the reference against which the optical geolocation system is compared.
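
    As a minimal sketch of the final computation described above, the snippet below recovers camera pose from four known targets with OpenCV's solvePnP and converts it to a world-frame position via the pinhole model. The target layout, pixel coordinates, and camera intrinsics are invented placeholders, and the stock solver stands in for the paper's detection and recursive target-localization stages.

```python
# Pose-from-targets sketch (hypothetical values throughout).
import numpy as np
import cv2

# 3D world coordinates of the four ground targets (meters, local frame).
object_points = np.array([[0.0, 0.0, 0.0],
                          [5.0, 0.0, 0.0],
                          [5.0, 5.0, 0.0],
                          [0.0, 5.0, 0.0]], dtype=np.float64)

# Detected target centers in the downward-facing image (pixels).
image_points = np.array([[612.4, 488.1],
                         [1310.7, 492.9],
                         [1305.2, 1190.6],
                         [608.8, 1185.3]], dtype=np.float64)

# Assumed pinhole camera matrix from calibration; distortion neglected.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)

# Camera (sUAS) position in world coordinates: C = -R^T t.
R, _ = cv2.Rodrigues(rvec)
print("Estimated sUAS position (m):", (-R.T @ tvec).ravel())
```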

    Military Application of Aerial Photogrammetry Mapping Assisted by Small Unmanned Air Vehicles

    This research investigated practical military applications of photogrammetric methods using remote sensing assisted by small unmanned aerial vehicles (SUAVs). It explored the feasibility of UAV aerial mapping for specific military purposes, focusing on the geolocational and measurement accuracy of the digital models and on image processing time. The research method involved experimental flight tests using low-cost commercial off-the-shelf (COTS) components, sensors, and image processing tools to study the key features the method requires for military use: location accuracy, time estimation, and measurement capability. Based on the results of the data analysis, two military applications are defined to demonstrate the feasibility and utility of the methods. The first application is assessing the damage to an attacked military airfield using photogrammetric digital models. Using a hex-rotor test platform with a Sony A6000 camera, georeferenced maps were produced with 1-meter accuracy and sufficient resolution (about 1 cm/pixel) to identify foreign objects on the runway. The second case examines the utility and quality of a targeting system using geospatial data from reconstructed 3-dimensional (3-D) photogrammetry models. By analyzing the 3-D model, operable targeting with under 1-meter accuracy and only 5 percent error on distance, area, and volume measurements was demonstrated.
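
    As a rough check on the reported resolution of about 1 cm/pixel, the sketch below computes ground sample distance (GSD) from standard camera geometry. The sensor width and pixel count approximate a Sony A6000; the focal length and flight altitude are illustrative assumptions, not the mission's actual parameters.

```python
# Ground sample distance sketch; sensor values approximate a Sony A6000,
# focal length and altitude are hypothetical mission parameters.
def gsd_cm_per_px(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """GSD = (altitude * sensor width) / (focal length * image width)."""
    return 100.0 * (altitude_m * sensor_width_mm) / (focal_mm * image_width_px)

# A 20 mm lens at 50 m altitude yields roughly 1 cm/pixel:
print(f"GSD: {gsd_cm_per_px(50.0, 20.0, 23.5, 6000):.2f} cm/px")
```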

    Unmanned Aerial Systems for Wildland and Forest Fires

    Wildfires represent an important natural risk, causing economic losses, human deaths, and severe environmental damage. In recent years, fire intensity and frequency have increased. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires, and they have shown improvements in efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) were proposed. UAS have proven useful thanks to their maneuverability, which allows the implementation of remote sensing, allocation strategies, and task planning. They can provide a low-cost alternative for the prevention, detection, and real-time support of firefighting. In this paper we review previous work on the use of UAS in wildfires, covering onboard sensor instruments, fire perception algorithms, and coordination strategies. In addition, we present some recent frameworks that propose using both aerial vehicles and Unmanned Ground Vehicles (UGVs) for a more efficient wildland firefighting strategy at a larger scale.
    Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001

    AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming

    The combination of the aerial survey capabilities of Unmanned Aerial Vehicles with the targeted intervention abilities of agricultural Unmanned Ground Vehicles can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. The maps built using robots of different types show differences in size, resolution and scale, the associated geolocation data may be inaccurate and biased, while the repetitiveness of both visual appearance and geometric structures found within agricultural contexts render classical map merging techniques ineffective. In this paper we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation which includes a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed, by exploiting only meaningful parts of the registered maps. We evaluate our system using real world data for 3 fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin, and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach along with the acquired datasets with this paper.
    Comment: Published in IEEE Robotics and Automation Letters, 201
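
    A minimal sketch of the data-association step, assuming stock OpenCV components: dense optical flow between the two vegetation-index grids supplies candidate point-to-point correspondences, and RANSAC plays the role of the voting scheme that keeps the dominant, coherent flows. An affine transform stands in here for the paper's preliminary non-rigid alignment, and the multimodal large-displacement flow itself is not reproduced.

```python
# Flow-based map registration sketch with stock OpenCV components.
import numpy as np
import cv2

def register_maps(uav_vi_map, ugv_vi_map, step=16):
    """Inputs: single-channel uint8 grids of a vegetation index, co-gridded."""
    flow = cv2.calcOpticalFlowFarneback(uav_vi_map, ugv_vi_map, None,
                                        0.5, 5, 31, 5, 7, 1.5, 0)
    h, w = uav_vi_map.shape
    ys, xs = np.mgrid[0:h:step, 0:w:step]           # subsample the flow field
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow[ys.ravel(), xs.ravel()]        # flow-displaced points
    # RANSAC keeps the dominant, coherent correspondences ("votes") and
    # fits an affine alignment from them.
    M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return M, inliers
```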

    ๋ฌด์ธ๋น„ํ–‰์ฒด ํƒ‘์žฌ ์—ดํ™”์ƒ ๋ฐ ์‹คํ™”์ƒ ์ด๋ฏธ์ง€๋ฅผ ํ™œ์šฉํ•œ ์•ผ์ƒ๋™๋ฌผ ํƒ์ง€ ๊ฐ€๋Šฅ์„ฑ ์—ฐ๊ตฌ

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ํ™˜๊ฒฝ๋Œ€ํ•™์› ํ™˜๊ฒฝ์กฐ๊ฒฝํ•™๊ณผ, 2022.2. ์†ก์˜๊ทผ.์•ผ์ƒ๋™๋ฌผ์˜ ํƒ์ง€์™€ ๋ชจ๋‹ˆํ„ฐ๋ง์„ ์œ„ํ•ด, ํ˜„์žฅ ์ง์ ‘ ๊ด€์ฐฐ, ํฌํš-์žฌํฌํš๊ณผ ๊ฐ™์€ ์ „ํ†ต์  ์กฐ์‚ฌ ๋ฐฉ๋ฒ•์ด ๋‹ค์–‘ํ•œ ๋ชฉ์ ์œผ๋กœ ์ˆ˜ํ–‰๋˜์–ด์™”๋‹ค. ํ•˜์ง€๋งŒ, ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•๋“ค์€ ๋งŽ์€ ์‹œ๊ฐ„๊ณผ ์ƒ๋Œ€์ ์œผ๋กœ ๋น„์‹ผ ๋น„์šฉ์ด ํ•„์š”ํ•˜๋ฉฐ, ์‹ ๋ขฐ ๊ฐ€๋Šฅํ•œ ํƒ์ง€ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด์„  ์ˆ™๋ จ๋œ ํ˜„์žฅ ์ „๋ฌธ๊ฐ€๊ฐ€ ํ•„์š”ํ•˜๋‹ค. ๊ฒŒ๋‹ค๊ฐ€, ์ „ํ†ต์ ์ธ ํ˜„์žฅ ์กฐ์‚ฌ ๋ฐฉ๋ฒ•์€ ํ˜„์žฅ์—์„œ ์•ผ์ƒ๋™๋ฌผ์„ ๋งˆ์ฃผ์น˜๋Š” ๋“ฑ ์œ„ํ—˜ํ•œ ์ƒํ™ฉ์— ์ฒ˜ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด์— ๋”ฐ๋ผ, ์นด๋ฉ”๋ผ ํŠธ๋ž˜ํ•‘, GPS ์ถ”์ , eDNA ์ƒ˜ํ”Œ๋ง๊ณผ ๊ฐ™์€ ์›๊ฒฉ ์กฐ์‚ฌ ๋ฐฉ๋ฒ•์ด ๊ธฐ์กด์˜ ์ „ํ†ต์  ์กฐ์‚ฌ๋ฐฉ๋ฒ•์„ ๋Œ€์ฒดํ•˜๋ฉฐ ๋”์šฑ ๋นˆ๋ฒˆํžˆ ์‚ฌ์šฉ๋˜๊ธฐ ์‹œ์ž‘ํ–ˆ๋‹ค. ํ•˜์ง€๋งŒ, ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•๋“ค์€ ์—ฌ์ „ํžˆ ๋ชฉํ‘œ๋กœ ํ•˜๋Š” ๋Œ€์ƒ์˜ ์ „์ฒด ๋ฉด์ ๊ณผ, ๊ฐœ๋ณ„ ๊ฐœ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์—†๋‹ค๋Š” ํ•œ๊ณ„๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ•œ๊ณ„๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด, ๋ฌด์ธ๋น„ํ–‰์ฒด (UAV, Unmanned Aerial Vehicle)๊ฐ€ ์•ผ์ƒ๋™๋ฌผ ํƒ์ง€์˜ ๋Œ€์ค‘์ ์ธ ๋„๊ตฌ๋กœ ์ž๋ฆฌ๋งค๊น€ํ•˜๊ณ  ์žˆ๋‹ค. UAV์˜ ๊ฐ€์žฅ ํฐ ์žฅ์ ์€, ์„ ๋ช…ํ•˜๊ณ  ์ด˜์ด˜ํ•œ ๊ณต๊ฐ„ ๋ฐ ์‹œ๊ฐ„ํ•ด์ƒ๋„์™€ ํ•จ๊ป˜ ์ „์ฒด ์—ฐ๊ตฌ ์ง€์—ญ์— ๋Œ€ํ•œ ๋™๋ฌผ ํƒ์ง€๊ฐ€ ๊ฐ€๋Šฅํ•˜๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. ์ด์— ๋”ํ•ด, UAV๋ฅผ ์‚ฌ์šฉํ•จ์œผ๋กœ์จ, ์ ‘๊ทผํ•˜๊ธฐ ์–ด๋ ค์šด ์ง€์—ญ์ด๋‚˜ ์œ„ํ—˜ํ•œ ๊ณณ์— ๋Œ€ํ•œ ์กฐ์‚ฌ๊ฐ€ ๊ฐ€๋Šฅํ•ด์ง„๋‹ค. ํ•˜์ง€๋งŒ, ์ด๋Ÿฌํ•œ ์ด์  ์™ธ์—, UAV์˜ ๋‹จ์ ๋„ ๋ช…ํ™•ํžˆ ์กด์žฌํ•œ๋‹ค. ๋Œ€์ƒ์ง€, ๋น„ํ–‰ ์†๋„ ๋ฐ ๋†’์ด ๋“ฑ๊ณผ ๊ฐ™์ด UAV๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ™˜๊ฒฝ์— ๋”ฐ๋ผ, ์ž‘์€ ๋™๋ฌผ, ์šธ์ฐฝํ•œ ์ˆฒ์†์— ์žˆ๋Š” ๊ฐœ์ฒด, ๋น ๋ฅด๊ฒŒ ์›€์ง์ด๋Š” ๋™๋ฌผ์„ ํƒ์ง€ํ•˜๋Š” ๊ฒƒ์ด ์ œํ•œ๋œ๋‹ค. ๋˜ํ•œ, ๊ธฐ์ƒํ™˜๊ฒฝ์— ๋”ฐ๋ผ์„œ๋„ ๋น„ํ–‰์ด ๋ถˆ๊ฐ€ํ•  ์ˆ˜ ์žˆ๊ณ , ๋ฐฐํ„ฐ๋ฆฌ ์šฉ๋Ÿ‰์œผ๋กœ ์ธํ•œ ๋น„ํ–‰์‹œ๊ฐ„์˜ ์ œํ•œ๋„ ์กด์žฌํ•œ๋‹ค. ํ•˜์ง€๋งŒ, ์ •๋ฐ€ํ•œ ํƒ์ง€๊ฐ€ ๋ถˆ๊ฐ€๋Šฅํ•˜๋”๋ผ๋„, ์ด์™€ ๊ด€๋ จ ์—ฐ๊ตฌ๊ฐ€ ๊พธ์ค€ํžˆ ์ˆ˜ํ–‰๋˜๊ณ  ์žˆ์œผ๋ฉฐ, ์„ ํ–‰์—ฐ๊ตฌ๋“ค์€ ์œก์ƒ ๋ฐ ํ•ด์ƒ ํฌ์œ ๋ฅ˜, ์กฐ๋ฅ˜, ๊ทธ๋ฆฌ๊ณ  ํŒŒ์ถฉ๋ฅ˜ ๋“ฑ์„ ํƒ์ง€ํ•˜๋Š” ๋ฐ์— ์„ฑ๊ณตํ•˜์˜€๋‹ค. UAV๋ฅผ ํ†ตํ•ด ์–ป์–ด์ง€๋Š” ๊ฐ€์žฅ ๋Œ€ํ‘œ์ ์ธ ๋ฐ์ดํ„ฐ๋Š” ์‹คํ™”์ƒ ์ด๋ฏธ์ง€์ด๋‹ค. ์ด๋ฅผ ์‚ฌ์šฉํ•ด ๋จธ์‹ ๋Ÿฌ๋‹ ๋ฐ ๋”ฅ๋Ÿฌ๋‹ (ML-DL, Machine Learning and Deep Learning) ๋ฐฉ๋ฒ•์ด ์ฃผ๋กœ ์‚ฌ์šฉ๋˜๊ณ  ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•์€ ์ƒ๋Œ€์ ์œผ๋กœ ์ •ํ™•ํ•œ ํƒ์ง€ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์—ฌ์ฃผ์ง€๋งŒ, ํŠน์ • ์ข…์„ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์˜ ๊ฐœ๋ฐœ์„ ์œ„ํ•ด์„  ์ตœ์†Œํ•œ ์ฒœ ์žฅ์˜ ์ด๋ฏธ์ง€๊ฐ€ ํ•„์š”ํ•˜๋‹ค. ์‹คํ™”์ƒ ์ด๋ฏธ์ง€ ์™ธ์—๋„, ์—ดํ™”์ƒ ์ด๋ฏธ์ง€ ๋˜ํ•œ UAV๋ฅผ ํ†ตํ•ด ํš๋“ ๋  ์ˆ˜ ์žˆ๋‹ค. ์—ดํ™”์ƒ ์„ผ์„œ ๊ธฐ์ˆ ์˜ ๊ฐœ๋ฐœ๊ณผ ์„ผ์„œ ๊ฐ€๊ฒฉ์˜ ํ•˜๋ฝ์€ ๋งŽ์€ ์•ผ์ƒ๋™๋ฌผ ์—ฐ๊ตฌ์ž๋“ค์˜ ๊ด€์‹ฌ์„ ์‚ฌ๋กœ์žก์•˜๋‹ค. ์—ดํ™”์ƒ ์นด๋ฉ”๋ผ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋™๋ฌผ์˜ ์ฒด์˜จ๊ณผ ์ฃผ๋ณ€ํ™˜๊ฒฝ๊ณผ์˜ ์˜จ๋„ ์ฐจ์ด๋ฅผ ํ†ตํ•ด ์ •์˜จ๋™๋ฌผ์„ ํƒ์ง€ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€๋Šฅํ•˜๋‹ค. ํ•˜์ง€๋งŒ, ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๊ฐ€ ์‚ฌ์šฉ๋˜๋”๋ผ๋„, ์—ฌ์ „ํžˆ ML-DL ๋ฐฉ๋ฒ•์ด ๋™๋ฌผ ํƒ์ง€์— ์ฃผ๋กœ ์‚ฌ์šฉ๋˜๊ณ  ์žˆ์œผ๋ฉฐ, ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•์€ UAV๋ฅผ ํ™œ์šฉํ•œ ์•ผ์ƒ๋™๋ฌผ์˜ ์‹ค์‹œ๊ฐ„ ํƒ์ง€๋ฅผ ์ œํ•œํ•œ๋‹ค. ๋”ฐ๋ผ์„œ, ๋ณธ ์—ฐ๊ตฌ๋Š” ์—ดํ™”์ƒ๊ณผ ์‹คํ™”์ƒ ์ด๋ฏธ์ง€๋ฅผ ํ™œ์šฉํ•œ ๋™๋ฌผ ์ž๋™ ํƒ์ง€ ๋ฐฉ๋ฒ•์˜ ๊ฐœ๋ฐœ๊ณผ, ๊ฐœ๋ฐœ๋œ ๋ฐฉ๋ฒ•์ด ์ด์ „ ๋ฐฉ๋ฒ•๋“ค์˜ ํ‰๊ท  ์ด์ƒ์˜ ์ •ํ™•๋„์™€ ํ•จ๊ป˜ ํ˜„์žฅ์—์„œ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•œ๋‹ค.For wildlife detection and monitoring, traditional methods such as direct observation and capture-recapture have been carried out for diverse purposes. 
    However, these methods require a large amount of time, considerable expense, and field-skilled experts to obtain reliable results. Furthermore, performing a traditional field survey can result in dangerous situations, such as an encounter with wild animals. Remote monitoring methods, such as those based on camera trapping, GPS collars, and environmental DNA sampling, have been used more frequently as the technologies have developed, largely replacing traditional survey methods. But these methods still have limitations, such as the inability to cover an entire region or to detect individual targets. To overcome those limitations, the unmanned aerial vehicle (UAV) is becoming a popular tool for conducting wildlife censuses. The main benefit of UAVs is the ability to detect animals remotely, covering a wide region with clear and fine spatial and temporal resolution. In addition, UAVs make it possible to investigate areas that are hard to access or dangerous. Besides these advantages, however, UAVs have clear limitations. Depending on the operating environment, such as the study site, flying height, or speed, the ability to detect small animals, targets in dense forest, or fast-moving animals can be limited. Flights may also be impossible in poor weather, and flight time is limited by battery capacity. Although detailed detection remains difficult, related research is advancing, and previous studies have used UAVs to detect terrestrial and marine mammals, birds, and reptiles. The most common type of data acquired by UAVs is RGB imagery. Using these images, machine-learning and deep-learning (ML-DL) methods have mainly been used for wildlife detection. ML-DL methods provide relatively accurate results, but at least 1,000 images are required to develop a proper detection model for a specific species. Instead of RGB images, thermal images can also be acquired by a UAV. The development of thermal sensor technology and falling sensor prices have attracted the interest of wildlife researchers. Using a thermal camera, homeothermic animals can be detected based on the temperature difference between their bodies and the surrounding environment. Although the technology and data are new, the same ML-DL methods have typically been used for animal detection, and these methods limit the use of UAVs for real-time wildlife detection in the field. Therefore, this thesis aims to develop an automated animal detection method using thermal and RGB image datasets and to utilize it under in situ conditions in real time while matching or exceeding the average detection accuracy of previous methods.
    Contents: Chapter 1, Introduction (research background; research goals and objectives; theoretical background: concept of the UAV, concept of the thermal camera). Chapter 2, Methods (study site; data acquisition and preprocessing: data acquisition, RGB lens distortion correction and clipping, thermal image correction by fur color, unnatural object removal; animal detection: Sobel edge creation and contour generation, object detection and sorting). Chapter 3, Results (number of counted objects; time costs of image types). Chapter 4, Discussion (reference comparison; instant detection; supplemental usage; utility of thermal sensors; applications in other fields). Chapter 5, Conclusions. References. Appendix: Glossary. Korean abstract.
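
    A minimal sketch of the detection steps named in the contents above (Sobel edge creation, contour generation, and object detection and sorting), assuming an 8-bit thermal frame in which warmer pixels appear brighter. The thresholding choice and area limits are illustrative assumptions, not the thesis's calibrated values.

```python
# Thermal detection sketch: Sobel edges -> contours -> size-sorted boxes.
import numpy as np
import cv2

def detect_warm_objects(thermal, min_area=40, max_area=4000):
    """thermal: single-channel uint8 frame, warmer pixels brighter."""
    gx = cv2.Sobel(thermal, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(thermal, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # Otsu threshold separates strong body/background boundaries.
    _, mask = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep plausibly animal-sized contours, largest first (assumed limits).
    boxes = [cv2.boundingRect(c) for c in contours
             if min_area <= cv2.contourArea(c) <= max_area]
    return sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
```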

    Fast Obstacle Detection System for UAS Based on Complementary Use of Radar and Stereoscopic Camera

    Autonomous unmanned aerial systems (UAS) are having an increasing impact on the scientific community. One of the most challenging problems in this research area is the design of robust real-time obstacle detection and avoidance systems. In the automotive field, applications of obstacle detection systems combining radar and vision sensors are common and widely documented. However, these technologies are not currently employed in the UAS field due to the greater complexity of the flight scenario, especially in urban environments. In this paper, a real-time obstacle-detection system based on the complementary use of a 77 GHz radar and a stereoscopic camera is proposed for small UASs. The resulting system is capable of detecting obstacles in a broad spectrum of environmental conditions. In particular, the vision system guarantees high resolution at short distances, while the radar has a lower resolution but can cover greater distances and is insensitive to poor lighting conditions. The developed hardware and software architecture and the related obstacle-detection algorithm are illustrated within the European project AURORA. Experimental results obtained with a small UAS show the effectiveness of the obstacle-detection system and of a simple avoidance strategy during several autonomous missions at a test site.
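
    A minimal sketch of the complementary-use idea, assuming a single fused range estimate per detected obstacle: the stereoscopic camera is trusted at short range and in good lighting, the radar elsewhere. The crossover distance and blending rule are assumptions for illustration, not the actual fusion logic developed within AURORA.

```python
# Complementary radar/stereo range fusion sketch (assumed policy).
def fuse_range(stereo_range_m, radar_range_m, good_lighting, crossover_m=20.0):
    """Return a single range estimate for one obstacle; None = no detection."""
    if stereo_range_m is None or not good_lighting:
        return radar_range_m          # radar-only: darkness or no stereo match
    if radar_range_m is None:
        return stereo_range_m         # short-range obstacle seen only by stereo
    # Blend: weight shifts from stereo to radar as distance approaches
    # and exceeds the assumed crossover distance.
    w = min(max(stereo_range_m / crossover_m, 0.0), 1.0)
    return (1.0 - w) * stereo_range_m + w * radar_range_m

print(fuse_range(8.0, 9.5, good_lighting=True))   # stereo-dominated estimate
```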

    Risk driven models & security framework for drone operation in GNSS-denied environments

    Flying machines that operate without humans aboard have moved from abstraction to reality, and the concept of unmanned aerial vehicles continues to evolve. Drones popularly use GPS and other forms of GNSS for navigation, but this has unfortunately opened them up to spoofing and other cybersecurity threats. The use of computer vision to determine location from pre-stored satellite images has been suggested as a solution, but it gives rise to security challenges in the form of spoofing, tampering, denial of service, and other attacks. These security challenges are reviewed and appropriate requirements recommended. This research uses the STRIDE threat analysis model to analyse threats to drone operation in GNSS-denied environments. Other threat models were considered, including DREAD and PASTA, but STRIDE was chosen because of its suitability and its complementarity with the other analytical methods used in this work. The work goes further to divide the drone system into units based on similarities in function and architecture. These units are then subjected to Failure Mode and Effects Analysis (FMEA) and Fault Tree Analysis (FTA). The STRIDE threats are used as base events for the FTA, and the FMEA is conducted based on adaptations from IEC 62443-1-1 (Network and system security: terminology, concepts, and models) and IEC 62443-3-2 (Security risk assessment for system design). FTA and FMEA are widely known as functional safety tools, but here they are applied divergently, considering cybersecurity vulnerabilities specifically instead of faults. The IEC 62443 series has become synonymous with Industrial Automation and Control Systems; inspiration is drawn from that series because drones, like many rapidly evolving technological devices, fall under the growing umbrella of the Internet of Things (IoT), and such IoT devices can in principle be considered part of Industrial Automation and Control Systems. Results from the analysis are used to recommend security standards and requirements that can be applied to drone operation in GNSS-denied environments. The framework recommended in this research is consistent with IEC 62443-3-3 (System security requirements and security levels) and adopts the following categorization from IEC 62443-1-1: identification and authentication control, use control, system integrity, data confidentiality, restricted data flow, timely response to events, and resource availability. The recommended framework is applicable and relevant to military, private, and commercial drone deployments because it can be adapted and further tweaked to suit the context for which it is intended. Application of this framework to drone operation in GNSS-denied environments will greatly improve the cyber resilience of the drone network system.
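
    A minimal sketch of how the combined STRIDE and FMEA bookkeeping might be represented in code, with threats grouped by drone unit and ranked by an FMEA-style risk priority number (RPN = severity x occurrence x detection). The units, failure modes, and scores are illustrative assumptions, not the thesis's analysis.

```python
# STRIDE + FMEA bookkeeping sketch; entries are invented examples.
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    unit: str           # drone subsystem, grouped by function/architecture
    stride: str         # STRIDE category of the threat
    failure_mode: str   # FMEA failure mode (doubles as an FTA base event)
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detection: int      # 1 (certain to detect) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

threats = [
    ThreatEntry("vision navigation", "Spoofing",
                "adversarial landmarks mislead image matching", 8, 4, 7),
    ThreatEntry("command datalink", "Denial of service",
                "control channel jamming", 7, 6, 3),
]
for t in sorted(threats, key=lambda t: t.rpn, reverse=True):
    print(f"RPN {t.rpn:4d}  {t.unit:18s} {t.stride:18s} {t.failure_mode}")
```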

    Convolutional Neural Networks accurately predict cover fractions of plant species and communities in Unmanned Aerial Vehicle imagery

    Unmanned Aerial Vehicles (UAVs) have greatly extended our possibilities for acquiring high-resolution remote sensing data to assess the spatial distribution of species composition and vegetation characteristics. Yet current pixel- or texture-based mapping approaches do not fully exploit the information content provided by the high spatial resolution. Here, to fully harness this spatial detail, we apply deep learning techniques, that is, Convolutional Neural Networks (CNNs), to regular tiles of UAV orthoimagery (here 2-5 m) to identify the cover of target plant species and plant communities. The approach was tested with UAV-based orthomosaics and photogrammetric 3D information in three case studies: (1) mapping tree species cover in primary forests, (2) mapping plant invasions by woody species into forests and open land, and (3) mapping vegetation succession in a glacier foreland. All three case studies resulted in high predictive accuracies. The accuracy increased with increasing tile size (2-5 m), reflecting the increased spatial context captured by a tile. The inclusion of 3D information derived from the photogrammetric workflow did not significantly improve the models. We conclude that CNNs are powerful in harnessing high-resolution data acquired from UAVs to map vegetation patterns. The study was based on low-cost red, green, blue (RGB) sensors, making the method accessible to a wide range of users. Combining UAVs and CNNs will provide tremendous opportunities for ecological applications.
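
    A minimal sketch of a CNN that maps an RGB orthoimagery tile to per-class cover fractions, in the spirit of the approach described; the softmax head makes the predicted fractions sum to one. The architecture, tile size, and class count are assumptions, and PyTorch is used here although the study does not specify a framework.

```python
# Cover-fraction CNN sketch (assumed architecture; PyTorch).
import torch
import torch.nn as nn

class CoverFractionCNN(nn.Module):
    def __init__(self, n_classes=5):                 # assumed class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # global spatial pooling
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                            # x: (B, 3, H, W) RGB tile
        z = self.features(x).flatten(1)
        return torch.softmax(self.head(z), dim=1)    # fractions sum to 1

model = CoverFractionCNN()
tile = torch.rand(1, 3, 128, 128)                    # one resampled tile
print(model(tile))                                   # predicted cover fractions
```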
    • โ€ฆ
    corecore