1,221 research outputs found
Unmanned Aerial Vehicles (UAVs) in environmental biology: A Review
Acquiring information about the environment is a key step in any study in environmental biology, at every level from the individual species to the community and biome. However, obtaining such information is frequently difficult because of, for example, phenological timing, the spatial distribution of a species, or the limited accessibility of a particular area for field survey. Moreover, conventional remote sensing technology, which enables observation of the Earth's surface and is currently very common in environmental research, has many limitations, such as insufficient spatial, spectral, and temporal resolution and a high cost of data acquisition. Since the 1990s, researchers have been exploring the potential of different types of unmanned aerial vehicles (UAVs) for monitoring the Earth's surface. The present study reviews recent scientific literature on the use of UAVs in environmental biology. From numerous papers, short communications, and conference abstracts, we selected 110 original studies of how UAVs can be used in environmental biology and which organisms can be studied in this manner. Most of these studies concerned the use of UAVs to measure vegetation parameters such as crown height, volume, and number of individuals (14 studies) and to quantify the spatio-temporal dynamics of vegetation changes (12 studies). UAVs were also frequently applied to count birds and mammals, especially those living in the water. The analytical part of the present study is divided into the following sections: (1) detecting, assessing, and predicting threats to vegetation, (2) measuring the biophysical parameters of vegetation, (3) quantifying the dynamics of changes in plants and habitats, and (4) population and behaviour studies of animals. Finally, we synthesise this information to show, among other things, the advances in environmental biology enabled by UAV applications.
Considering that 33% of the studies found and included in this review were published in 2017 and 2018, the number and variety of applications of UAVs in environmental biology are expected to increase in the future.
A study on the feasibility of wildlife detection using UAV aerial thermal and RGB images
Thesis (Master's) -- Seoul National University Graduate School: Graduate School of Environmental Studies, Department of Environmental Landscape Architecture, February 2022. Advisor: Youngkeun Song.
For wildlife detection and monitoring, traditional methods such as direct observation and capture-recapture have been carried out for diverse purposes. However, these methods require a large amount of time, considerable expense, and field-skilled experts to obtain reliable results. Furthermore, a traditional field survey can put surveyors in dangerous situations, such as encounters with wild animals. As the underlying technologies have developed, remote monitoring methods such as camera trapping, GPS collars, and environmental DNA (eDNA) sampling have been used more frequently, largely replacing traditional survey methods. However, these methods still have limitations, such as the inability to cover an entire region or to detect individual targets.
To overcome these limitations, the unmanned aerial vehicle (UAV) is becoming a popular tool for conducting wildlife censuses. The main benefit of UAVs is the ability to detect animals remotely over a wide region with clear, fine spatial and temporal resolution. In addition, operating UAVs makes it possible to survey areas that are hard to access or dangerous. Besides these advantages, however, UAVs have clear limitations. Depending on the operating environment, such as the study site, flight height, or flight speed, the ability to detect small animals or targets in dense forest, or to track fast-moving animals, can be limited. UAVs also cannot operate in bad weather, and flight time is constrained by battery capacity. Although fine-grained detection is not yet possible, related research is developing, and previous studies have used UAVs to detect terrestrial and marine mammals, birds, and reptiles.
The most common type of data acquired by UAVs is RGB imagery. Using these images, machine-learning and deep-learning (ML-DL) methods have mainly been used for wildlife detection. ML-DL methods provide relatively accurate results, but at least 1,000 images are required to develop a proper detection model for a specific species. In addition to RGB images, thermal images can be acquired by a UAV. The development of thermal sensor technology and the reduction in sensor prices have attracted the interest of wildlife researchers. With a thermal camera, homeothermic animals can be detected from the temperature difference between their bodies and the surrounding environment. Although the technology and data are new, the same ML-DL methods are typically used for animal detection, and their training requirements limit the use of UAVs for real-time wildlife detection in the field.
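The temperature-difference principle described above can be sketched in a few lines. The following is a generic illustration, not the thesis's actual pipeline (which adds lens correction, fur-color correction, and Sobel edge/contour filtering); the function name, threshold value, and synthetic frame are invented for the example:

```python
import numpy as np

def detect_warm_blobs(thermal, ambient_temp, delta=5.0):
    """Group contiguous pixels warmer than ambient_temp + delta into blobs.

    thermal: 2-D array of per-pixel temperatures (degrees C).
    Returns a list of pixel-coordinate sets, one per warm blob.
    Illustrative only: real UAV pipelines add radiometric calibration
    and shape-based filtering before counting animals.
    """
    mask = thermal > (ambient_temp + delta)
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # flood-fill one 4-connected warm component
                stack, blob = [(i, j)], set()
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    blob.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs

# Synthetic 6x6 "thermal frame": ambient ~10 C, two warm bodies ~35 C.
frame = np.full((6, 6), 10.0)
frame[1:3, 1:3] = 35.0  # warm blob 1 (4 pixels)
frame[4, 4] = 35.0      # warm blob 2 (1 pixel)
print(len(detect_warm_blobs(frame, ambient_temp=10.0)))  # prints 2
```

Unlike an ML-DL detector, a threshold rule like this needs no training images, which is why temperature-difference approaches are attractive for real-time use; the trade-off is sensitivity to sun-heated rocks and other warm non-animal objects.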
Therefore, this paper aims to develop an automated animal detection method based on thermal and RGB image datasets that can be used in situ and in real time while matching or exceeding the average detection accuracy of previous methods.
Abstract
Contents
List of Tables
List of Figures
Chapter 1. Introduction
1.1 Research background
1.2 Research goals and objectives
1.2.1 Research goals
1.2.2 Research objectives
1.3 Theoretical background
1.3.1 Concept of the UAV
1.3.2 Concept of the thermal camera
Chapter 2. Methods
2.1 Study site
2.2 Data acquisition and preprocessing
2.2.1 Data acquisition
2.2.2 RGB lens distortion correction and clipping
2.2.3 Thermal image correction by fur color
2.2.4 Unnatural object removal
2.3 Animal detection
2.3.1 Sobel edge creation and contour generation
2.3.2 Object detection and sorting
Chapter 3. Results
3.1 Number of counted objects
3.2 Time costs of image types
Chapter 4. Discussion
4.1 Reference comparison
4.2 Instant detection
4.3 Supplemental usage
4.4 Utility of thermal sensors
4.5 Applications in other fields
Chapter 5. Conclusions
References
Appendix: Glossary
Abstract in Korean
A Comprehensive Review on Computer Vision Analysis of Aerial Data
With the emergence of new technologies in the field of airborne platforms and
imaging sensors, aerial data analysis is becoming very popular, capitalizing on
its advantages over land data. This paper presents a comprehensive review of
the computer vision tasks within the domain of aerial data analysis. While
addressing fundamental aspects such as object detection and tracking, the
primary focus is on pivotal tasks like change detection, object segmentation,
and scene-level analysis. The paper provides the comparison of various hyper
parameters employed across diverse architectures and tasks. A substantial
section is dedicated to an in-depth discussion on libraries, their
categorization, and their relevance to different domain expertise. The paper
encompasses aerial datasets, the architectural nuances adopted, and the
evaluation metrics associated with all the tasks in aerial data analysis.
Applications of computer vision tasks in aerial data across different domains
are explored, with case studies providing further insights. The paper
thoroughly examines the challenges inherent in aerial data analysis, offering
practical solutions. Additionally, unresolved issues of significance are
identified, paving the way for future research directions in the field of
aerial data analysis.
Comment: 112 pages
Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network
The detection performance of small objects in remote sensing images is not
satisfactory compared to large objects, especially in low-resolution and noisy
images. A generative adversarial network (GAN)-based model called enhanced
super-resolution GAN (ESRGAN) shows remarkable image enhancement performance,
but reconstructed images miss high-frequency edge information. Therefore,
object detection performance degrades for small objects on recovered noisy and
low-resolution remote sensing images. Inspired by the success of edge enhanced
GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN
(EESRGAN) to improve the image quality of remote sensing images and use
different detector networks in an end-to-end manner where detector loss is
backpropagated into the EESRGAN to improve the detection performance. We
propose an architecture with three components: ESRGAN, Edge Enhancement Network
(EEN), and Detection network. We use residual-in-residual dense blocks (RRDB)
for both the ESRGAN and EEN, and for the detector network, we use the faster
region-based convolutional network (FRCNN) (two-stage detector) and single-shot
multi-box detector (SSD) (one-stage detector). Extensive experiments on a
public (car overhead with context) and a self-assembled (oil and gas storage
tank) satellite dataset show superior performance of our method compared to the
standalone state-of-the-art object detectors.
Comment: This paper contains 27 pages and has been accepted for publication in
the MDPI Remote Sensing journal. GitHub repository:
https://github.com/Jakaria08/EESRGAN (Implementation
Real-time Aerial Detection and Reasoning on Embedded-UAVs
We present a unified pipeline architecture for a real-time detection system
on an embedded system for UAVs. Neural architectures have been the industry
standard for computer vision. However, most existing works focus solely on
concatenating deeper layers to achieve higher accuracy with run-time
performance as the trade-off. This pipeline of networks can exploit the
domain-specific knowledge on aerial pedestrian detection and activity
recognition for the emerging UAV applications of autonomous surveying and
activity reporting. In particular, our pipeline architectures operate in a
time-sensitive manner, have high accuracy in detecting pedestrians from various
aerial orientations, use a novel attention map for multi-activities
recognition, and jointly refine its detection with temporal information.
Numerically, we demonstrate our model's accuracy and fast inference speed on
embedded systems. We empirically deployed our prototype hardware with full live
feeds in a real-world open-field environment.
Comment: In TGR
Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception?
Despite the commercial abundance of UAVs, aerial data acquisition remains
challenging, and the existing Asia and North America-centric open-source UAV
datasets are small-scale or low-resolution and lack diversity in scene
contextuality. Additionally, the color content of the scenes, solar-zenith
angle, and population density of different geographies influence the data
diversity. These two factors conjointly render suboptimal aerial-visual
perception of the deep neural network (DNN) models trained primarily on the
ground-view data, including the open-world foundational models.
To pave the way for a transformative era of aerial detection, we present
Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record
synchronized scenes from different perspectives -- ground camera and
drone-mounted camera. MAVREC consists of around 2.5 hours of industry-standard
2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million
annotated bounding boxes. This makes MAVREC the largest ground and aerial-view
dataset, and the fourth largest among all drone-based datasets across all
modalities and tasks. Through our extensive benchmarking on MAVREC, we
recognize that augmenting object detectors with ground-view images from the
corresponding geographical location is a superior pre-training strategy for
aerial detection. Building on this strategy, we benchmark MAVREC with a
curriculum-based semi-supervised object detection approach that leverages
labeled (ground and aerial) and unlabeled (aerial-only) images to enhance
aerial detection. We publicly release the MAVREC dataset:
https://mavrec.github.io