63 research outputs found
A Study on the Feasibility of Wildlife Detection Using Thermal and RGB Imagery from an Unmanned Aerial Vehicle
Thesis (Master's) -- Department of Environmental Landscape Architecture, Graduate School of Environmental Studies, Seoul National University, 2022.
For wildlife detection and monitoring, traditional methods such as direct observation and capture-recapture have been carried out for diverse purposes. However, these methods require a large amount of time, considerable expense, and field-skilled experts to obtain reliable results. Furthermore, performing a traditional field survey can expose surveyors to dangerous situations, such as encounters with wild animals. As the technologies have developed, remote monitoring methods, such as camera trapping, GPS collars, and environmental DNA (eDNA) sampling, have been used more frequently, largely replacing traditional survey methods. However, these methods still have limitations, such as the inability to cover an entire region or to detect individual targets.
To overcome those limitations, unmanned aerial vehicles (UAVs) are becoming a popular tool for conducting wildlife censuses. The main benefit of UAVs is the ability to detect animals remotely across an entire study region with clear and fine spatial and temporal resolution. In addition, operating UAVs makes it possible to investigate areas that are hard to access or dangerous. However, besides these advantages, UAVs have clear limitations. Depending on the operating environment, such as the study site, flying height, or speed, the ability to detect small animals, targets in dense forest, or fast-moving animals can be limited. Flights may also be impossible in bad weather, and flight time is constrained by battery capacity. Although detailed detection is not always possible, related research is developing steadily, and previous studies have succeeded in using UAVs to detect terrestrial and marine mammals, birds, and reptiles.
The most common type of data acquired by UAVs is RGB images. Using these images, machine-learning and deep-learning (ML-DL) methods have mainly been applied to wildlife detection. ML-DL methods provide relatively accurate results, but at least 1,000 images are required to develop a proper detection model for a specific species. Instead of RGB images, thermal images can also be acquired by a UAV. The development of thermal sensor technology and falling sensor prices have attracted the interest of wildlife researchers. Using a thermal camera, homeothermic animals can be detected based on the temperature difference between their bodies and the surrounding environment. Although the technology and data are new, the same ML-DL methods are typically used for animal detection, and these methods limit the use of UAVs for real-time wildlife detection in the field.
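The temperature-difference principle described above can be sketched without any ML model: threshold a calibrated thermal frame against the ambient temperature and group warm pixels into candidate detections. The function below is a minimal illustration in Python/NumPy, not the thesis's actual pipeline; the threshold `delta`, the minimum area, and the synthetic frame are assumptions for demonstration.

```python
from collections import deque

import numpy as np

def detect_warm_objects(thermal, background_temp, delta=5.0, min_area=20):
    """Return bounding boxes (row, col, height, width) of 4-connected
    regions at least `delta` degrees warmer than `background_temp`."""
    mask = (thermal - background_temp) > delta
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # BFS flood fill over one connected warm region
                queue = deque([(r, c)])
                visited[r, c] = True
                rmin = rmax = r
                cmin = cmax = c
                area = 0
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    boxes.append((rmin, cmin, rmax - rmin + 1, cmax - cmin + 1))
    return boxes

# Synthetic 100x100 frame at 10 C ambient with one 30x30 warm body at 38 C
frame = np.full((100, 100), 10.0, dtype=np.float32)
frame[40:70, 40:70] = 38.0
print(detect_warm_objects(frame, background_temp=10.0))  # [(40, 40, 30, 30)]
```

Unlike a trained detector, such a rule-based pass needs no labelled training images, which is what makes it attractive for real-time use in the field.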
Therefore, this paper aims to develop an automated animal detection method using thermal and RGB image datasets and to make it usable under in situ conditions in real time, while meeting or exceeding the average detection accuracy of previous methods.
Abstract
Contents
List of Tables
List of Figures
Chapter 1. Introduction
1.1 Research background
1.2 Research goals and objectives
1.2.1 Research goals
1.2.2 Research objectives
1.3 Theoretical background
1.3.1 Concept of the UAV
1.3.2 Concept of the thermal camera
Chapter 2. Methods
2.1 Study site
2.2 Data acquisition and preprocessing
2.2.1 Data acquisition
2.2.2 RGB lens distortion correction and clipping
2.2.3 Thermal image correction by fur color
2.2.4 Unnatural object removal
2.3 Animal detection
2.3.1 Sobel edge creation and contour generation
2.3.2 Object detection and sorting
Chapter 3. Results
3.1 Number of counted objects
3.2 Time costs of image types
Chapter 4. Discussion
4.1 Reference comparison
4.2 Instant detection
4.3 Supplemental usage
4.4 Utility of thermal sensors
4.5 Applications in other fields
Chapter 5. Conclusions
References
Appendix: Glossary
Abstract (in Korean)
A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles
The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. Most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module is exploited to build semantic knowledge by leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information concerning what is sensed. All in all, the detection of objects is undoubtedly the most important low-level task, and the sensors most employed to accomplish it are by far RGB cameras, due to their cost, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions for UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by works in the state of the art, rather than by hardware, physical, and/or technological constraints.
Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms
Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB, and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature. The results found that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB or LWIR approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was also the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.
Comment: 18 pages, 12 figures, 2 tables
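One simple way to picture RGB-LWIR blending of the kind evaluated above is a per-pixel weighted combination of the two sensor channels. Real fusion pipelines are typically learned or considerably more elaborate; the weight `alpha` and the toy frames below are illustrative assumptions only.

```python
import numpy as np

def blend_rgb_lwir(rgb, lwir, alpha=0.6):
    """Alpha-blend an RGB frame with a single-channel LWIR frame.

    rgb:   (H, W, 3) uint8 image.
    lwir:  (H, W) uint8 thermal intensity image, broadcast across channels.
    alpha: weight of the RGB contribution (1 - alpha goes to LWIR).
    """
    lwir3 = np.repeat(lwir[:, :, None], 3, axis=2).astype(np.float32)
    fused = alpha * rgb.astype(np.float32) + (1.0 - alpha) * lwir3
    return np.clip(fused, 0, 255).astype(np.uint8)

# Uniform toy frames: RGB at 200, LWIR at 100
rgb = np.full((4, 4, 3), 200, dtype=np.uint8)
lwir = np.full((4, 4), 100, dtype=np.uint8)
print(blend_rgb_lwir(rgb, lwir)[0, 0])  # [160 160 160]
```

The appeal of a blended input, as the abstract notes, is that the thermal channel keeps objects visible at night while the RGB channel contributes texture and color during the day.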
An evaluation of the factors affecting βpoacherβ detection with drones and the efficacy of machine-learning for detection
Drones are being increasingly used in conservation to tackle the illegal poaching of animals. An important aspect of using drones for this purpose is establishing the technological and environmental factors that increase the chances of success when detecting poachers. Recent studies focused on investigating these factors, and this research builds upon them while also exploring the efficacy of machine-learning for automated detection. In an experimental setting with voluntary test subjects, various factors were tested for their effect on detection probability: camera type (visible spectrum, RGB, and thermal infrared, TIR), time of day, camera angle, canopy density, and walking/stationary test subjects. The drone footage was analysed both manually by volunteers and through automated detection software. A generalised linear model with a logit link function was used to statistically analyse the data for both types of analysis. The findings concluded that using a TIR camera improved detection probability, particularly at dawn and with a 90° camera angle. An oblique angle was more effective during RGB flights, and walking/stationary test subjects did not influence detection with either camera. Probability of detection decreased with increasing vegetation cover. Machine-learning software had a successful detection probability of 0.558; however, it produced nearly five times more false positives than manual analysis. Manual analysis, in turn, produced 2.5 times more false negatives than automated detection. Despite manual analysis producing more true positive detections than automated detection in this study, the automated software gives promising results, and the advantages of automated methods over manual analysis make it a promising tool with the potential to be successfully incorporated into anti-poaching strategies.
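The generalised linear model with a logit link used in the study maps a linear combination of factors (camera type, canopy cover, time of day, and so on) onto a detection probability between 0 and 1. A minimal sketch of inverting the link function follows; the coefficient values and the two factors chosen are hypothetical illustrations, not the study's fitted estimates.

```python
import math

def detection_probability(intercept, coefs, features):
    """Invert the logit link: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    eta = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients: TIR camera used (0/1), canopy cover (0-1)
b0, betas = -1.0, [1.5, -2.0]
p_tir_open = detection_probability(b0, betas, [1, 0.0])  # ~0.62
p_rgb_dense = detection_probability(b0, betas, [0, 0.8])  # ~0.07
print(round(p_tir_open, 2), round(p_rgb_dense, 2))
```

Because the link is monotone, a positive coefficient (such as using a TIR camera) always raises the detection probability, and a negative one (such as canopy cover) always lowers it, which is exactly how the study's factor effects should be read.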
Tele-ultrasound imaging using smartphones and single-board PCs
BACKGROUND: Mobile devices are widely available, and their computational performance keeps increasing. Medicine is no exception: single-board computers and mobile phones are crucial aides in telehealth.
AIM: To explore the scope of tele-ultrasound using smartphones and single-board computers.
MATERIALS AND METHODS: This study focused on capturing ultrasound videos using external video recording devices connected via USB. Raspberry Pi single-board computers and Android smartphones were used as platforms to host a tele-ultrasound server, running the following software: VLC, Motion, and USB camera. A remote expert assessment was performed with mobile devices using the following software: VLC acted as a VLC client, Google Chrome was used for OS Windows 7 and OS Android in the remaining scenarios, and the Chromium browser was installed on the Raspberry Pi computer.
OUTCOMES: The UTV007 chip-based video capture device produces better images than the AMT630A-based device. The optimum video resolution was 720×576 at 25 frames per second. VLC and OBS Studio are considered the most suitable for a Raspberry Pi-based ultrasound system owing to low equipment and bandwidth requirements (0.64±0.17 Mbps for VLC; 0.5 Mbps for OBS Studio). For Android OS phones, the ultrasound system was set up with the USB camera software, although it required a faster network connection (5.2±0.3 Mbps).
CONCLUSION: Devices based on single-board computers and smartphones can implement a low-cost tele-ultrasound system, which potentially improves the quality of studies performed through distance learning and consulting doctors. These solutions can be used in remote regions for field medicine tasks and other possible areas of m-health.
A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges
In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also takes a look at ethical considerations, safety concerns, regulatory frameworks to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.
DEVELOPMENT OF GRASS-ROOTS DATA COLLECTION METHODS IN RURAL, ISOLATED, AND TRIBAL COMMUNITIES
While extensive procedures have been developed for the collection and dissemination of motor vehicle volumes and speeds, these same procedures cannot always be used to collect pedestrian data, given the comparably unpredictable behavior of pedestrians and their smaller physical size. There is significant value in developing lower-cost, lower-intrusion methods of collecting pedestrian travel data, and these collection efforts are needed at the local or "grass-roots" level. While previous studies have documented many different data collection methods, one newer option considers the use of drones. This study examined the feasibility of using drones to collect pedestrian data and applied the technology as part of a school travel mode case study. Specific information regarding the study methodology, permissions required, and final results is described in detail as part of this report.
This study concluded that while purchasing and owning a drone requires relatively minimal investment, the initial steps required to operate a drone, along with the processing time required to analyze the data collected, represent up-front barriers that may prevent widespread usage at this time. However, the use of drones and the opportunities they present in the long term offer promising outcomes.