63 research outputs found

    A Study on the Feasibility of Wildlife Detection Using Thermal and RGB Images from an Unmanned Aerial Vehicle

    Thesis (Master's) -- Department of Landscape Architecture, Graduate School of Environmental Studies, Seoul National University, February 2022. Advisor: μ†‘μ˜κ·Ό. For wildlife detection and monitoring, traditional methods such as direct observation and capture-recapture have been carried out for diverse purposes. However, these methods require a large amount of time, considerable expense, and skilled field experts to obtain reliable results. Furthermore, performing a traditional field survey can put surveyors in dangerous situations, such as encounters with wild animals. As the technologies have developed, remote monitoring methods, such as camera trapping, GPS collars, and environmental DNA (eDNA) sampling, have been used more frequently, largely replacing traditional survey methods. These methods, however, still have limitations, such as the inability to cover an entire study area or to detect individual targets. To overcome those limitations, the unmanned aerial vehicle (UAV) is becoming a popular tool for conducting wildlife censuses.
    The main benefit of UAVs is that they can detect animals remotely across an entire study area at clear, fine spatial and temporal resolution. In addition, operating UAVs makes it possible to survey hard-to-access or dangerous areas. However, besides these advantages, UAVs have clear limitations. Depending on the operating environment, such as the study site, flight altitude, and flight speed, the ability to detect small animals, individuals under dense forest canopy, or fast-moving animals can be limited. Flights may also be impossible in poor weather, and flight time is constrained by battery capacity. Although such detailed detection is not always possible, related research continues to develop, and previous studies have used UAVs to detect terrestrial and marine mammals, birds, and reptiles. The most common type of data acquired by UAVs is RGB imagery, and machine-learning and deep-learning (ML-DL) methods have mainly been applied to these images for wildlife detection. ML-DL methods provide relatively accurate results, but at least 1,000 images are required to develop a proper detection model for a specific species. In addition to RGB images, thermal images can also be acquired by a UAV. The development of thermal sensor technology and the reduction in sensor prices have attracted the interest of wildlife researchers. Using a thermal camera, homeothermic animals can be detected based on the temperature difference between their bodies and the surrounding environment. Although the data type is new, the same ML-DL methods are still typically used for animal detection, and these methods limit the use of UAVs for real-time wildlife detection in the field. Therefore, this study aims to develop an automated animal detection method using thermal and RGB image datasets and to make it usable in situ in real time while providing detection accuracy at or above the average of previous methods.
    Contents: Abstract; List of Tables; List of Figures; Chapter 1. Introduction (1.1 Research background; 1.2 Research goals and objectives; 1.3 Theoretical background: concept of the UAV, concept of the thermal camera); Chapter 2. Methods (2.1 Study site; 2.2 Data acquisition and preprocessing: data acquisition, RGB lens distortion correction and clipping, thermal image correction by fur color, unnatural object removal; 2.3 Animal detection: Sobel edge creation and contour generation, object detection and sorting); Chapter 3. Results (3.1 Number of counted objects; 3.2 Time costs of image types); Chapter 4. Discussion (4.1 Reference comparison; 4.2 Instant detection; 4.3 Supplemental usage; 4.4 Utility of thermal sensors; 4.5 Applications in other fields); Chapter 5. Conclusions; References; Appendix: Glossary; 초둝 (Korean abstract)
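
    The contents above point to a rule-based pipeline (thermal thresholding, Sobel edge creation, contour generation, object sorting) rather than an ML-DL detector. As a rough, hedged sketch of that kind of pipeline, not the thesis's actual implementation, the following OpenCV snippet thresholds warm pixels in a thermal frame, extracts Sobel edges, and sorts the resulting contours by size; the file name, percentile cut-off, and minimum-area filter are placeholder assumptions.

```python
# Hedged sketch of a thermal detection pass in the spirit of the thesis outline
# (warm-pixel threshold -> Sobel edges -> contours -> sorting). The file path,
# percentile cut-off, and size filter are illustrative assumptions.
import cv2
import numpy as np

def detect_warm_objects(thermal_path="thermal.tif", min_area=25, warm_percentile=99):
    # Load a single-band thermal frame as 8-bit grayscale (path is a placeholder).
    gray = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(thermal_path)

    # Keep only pixels clearly warmer than the background (assumed percentile cut).
    warm_mask = (gray >= np.percentile(gray, warm_percentile)).astype(np.uint8) * 255

    # Sobel gradients highlight object boundaries; keep edges inside warm regions only.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    edges = cv2.bitwise_and(edges, edges, mask=warm_mask)

    # Contours around warm, edge-rich blobs become candidate animals; tiny blobs
    # are discarded as noise and the rest are sorted by bounding-box area.
    _, edge_bin = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(edge_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)

if __name__ == "__main__":
    print(detect_warm_objects())
```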

    A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles

    The spread of Unmanned Aerial Vehicles (UAVs) in the last decade has revolutionized many application fields. The most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module is exploited to build semantic knowledge by leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information about what is sensed. All in all, object detection is undoubtedly the most important low-level task, and the sensors most commonly employed to accomplish it are by far RGB cameras, owing to their cost, size, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions to UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by works in the state of the art rather than by hardware, physical, and/or technological constraints.
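
    As a toy illustration of the layered design the survey describes, in which a high-level module builds semantic knowledge from the outputs of a low-level object detector, the sketch below defines a minimal detection record and an aggregation step; the class names, confidence threshold, and counting rule are illustrative assumptions, not taken from the survey.

```python
# Toy illustration (not from the survey) of a low-level/high-level split:
# a detector yields per-frame boxes, and a high-level module turns them into
# a coarse semantic description of the scene.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Detection:
    label: str                      # e.g. "person", "car" -- placeholder classes
    confidence: float               # detector score in [0, 1]
    box: Tuple[int, int, int, int]  # (x, y, w, h) in image pixels

def high_level_summary(frames: List[List[Detection]], min_conf: float = 0.5) -> Dict:
    """Build a coarse scene description from per-frame low-level detections."""
    counts: Dict[str, int] = {}
    for frame in frames:
        for det in frame:
            if det.confidence >= min_conf:
                counts[det.label] = counts.get(det.label, 0) + 1
    # A real high-level module would add tracking, geo-referencing, mission logic, etc.
    return {"classes_seen": sorted(counts), "detections_per_class": counts}

if __name__ == "__main__":
    frames = [
        [Detection("person", 0.91, (10, 20, 30, 60))],
        [Detection("car", 0.42, (5, 5, 40, 20)),
         Detection("person", 0.87, (12, 22, 28, 58))],
    ]
    print(high_level_summary(frames))
```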

    Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms

    Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long-wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB, and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature. The results show that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB-only or LWIR-only approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research. Comment: 18 pages, 12 figures, 2 tables
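
    One simple way to produce a blended RGB-LWIR frame for such a fused dataset is a weighted overlay of co-registered images; the sketch below is a hedged illustration under that assumption, with arbitrary blend weights and placeholder file names, and is not the fusion scheme used in the paper.

```python
# Hedged sketch of RGB-LWIR image blending for a fused detection dataset.
# Assumes the two frames are already co-registered; the 0.6/0.4 weights and
# file names are arbitrary and not taken from the paper.
import cv2

def blend_rgb_lwir(rgb_path="frame_rgb.png", lwir_path="frame_lwir.png",
                   rgb_weight=0.6):
    rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)
    lwir = cv2.imread(lwir_path, cv2.IMREAD_GRAYSCALE)
    if rgb is None or lwir is None:
        raise FileNotFoundError("expected co-registered RGB and LWIR frames")

    # Resize LWIR to the RGB geometry and map it to a 3-channel false-colour image.
    lwir = cv2.resize(lwir, (rgb.shape[1], rgb.shape[0]))
    lwir_color = cv2.applyColorMap(lwir, cv2.COLORMAP_INFERNO)

    # Weighted overlay: the fused frame keeps visible texture plus thermal contrast.
    return cv2.addWeighted(rgb, rgb_weight, lwir_color, 1.0 - rgb_weight, 0)

if __name__ == "__main__":
    cv2.imwrite("frame_fused.png", blend_rgb_lwir())
```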

    An evaluation of the factors affecting 'poacher' detection with drones and the efficacy of machine-learning for detection

    Drones are being increasingly used in conservation to tackle the illegal poaching of animals. An important aspect of using drones for this purpose is establishing the technological and environmental factors that increase the chances of successfully detecting poachers. Recent studies have focused on investigating these factors, and this research builds upon them as well as exploring the efficacy of machine-learning for automated detection. In an experimental setting with voluntary test subjects, various factors were tested for their effect on detection probability: camera type (visible spectrum, RGB, versus thermal infrared, TIR), time of day, camera angle, canopy density, and walking versus stationary test subjects. The drone footage was analysed both manually by volunteers and through automated detection software. A generalised linear model with a logit link function was used to statistically analyse the data for both types of analysis. The findings were that using a TIR camera improved detection probability, particularly at dawn and with a 90° camera angle. An oblique angle was more effective during RGB flights, and walking versus stationary test subjects did not influence detection with either camera. Probability of detection decreased with increasing vegetation cover. The machine-learning software achieved a detection probability of 0.558; however, it produced nearly five times more false positives than manual analysis. Manual analysis, in turn, produced 2.5 times more false negatives than automated detection. Although manual analysis produced more true positive detections than automated detection in this study, the automated software gives promising results, and the advantages of automated methods over manual analysis make it a tool with the potential to be successfully incorporated into anti-poaching strategies.
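
    The statistical analysis described above is a generalised linear model with a logit link. A minimal statsmodels sketch of that kind of fit is shown below; the CSV layout, column names, and factor coding are assumptions for illustration and do not reproduce the study's data.

```python
# Minimal sketch of a detection-probability GLM with a logit link, in the
# spirit of the analysis described above. Column names and the CSV layout
# are assumptions; the study's actual data and factor coding are not shown.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_detection_glm(csv_path="detections.csv"):
    # Expected columns (assumed): detected (0/1), camera ("RGB"/"TIR"),
    # time_of_day, camera_angle, canopy_cover (fraction), moving (0/1).
    data = pd.read_csv(csv_path)
    model = smf.glm(
        "detected ~ C(camera) + C(time_of_day) + C(camera_angle) + canopy_cover + moving",
        data=data,
        family=sm.families.Binomial(),  # Binomial family; default link is logit
    )
    return model.fit()

if __name__ == "__main__":
    print(fit_detection_glm().summary())
```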

    Tele-ultrasound imaging using smartphones and single-board PCs

    BACKGROUND: Mobile devices are widely available and their computational performance keeps increasing, and medicine is no exception: single-board computers and mobile phones are crucial aides in telehealth. AIM: To explore the scope of tele-ultrasound using smartphones and single-board computers. MATERIALS AND METHODS: This study focused on capturing ultrasound video using external video-capture devices connected via USB. Raspberry Pi single-board computers and Android smartphones were used as platforms to host a tele-ultrasound server, running VLC, Motion, or the USB Camera app. Remote expert assessment was performed on mobile devices using the following software: VLC was used when VLC acted as the server, Google Chrome for Windows 7 and Android was used in the remaining scenarios, and the Chromium browser was installed on the Raspberry Pi computer. OUTCOMES: The UTV007 chip-based video capture device produced better images than the AMT630A-based device. The optimum video settings were a resolution of 720×576 at 25 frames per second. VLC and OBS Studio are considered the most suitable for a Raspberry Pi-based ultrasound system owing to their low equipment and bandwidth requirements (0.64±0.17 Mbps for VLC; 0.5 Mbps for OBS Studio). For Android phones, the ultrasound system was set up with the USB Camera software, although it required a faster network connection (5.2±0.3 Mbps). CONCLUSION: Devices based on single-board computers and smartphones can implement a low-cost tele-ultrasound system, which potentially improves the quality of studies performed through distance learning and consultations with doctors. These solutions can be used in remote regions, for field medicine tasks, and in other possible areas of m-health.
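
    As a hedged illustration of the capture side only, the sketch below grabs frames from a USB video-capture dongle (such as a UTV007-based grabber) at the 720×576, 25 fps setting reported above and writes them to a file; the device index and output path are assumptions, and the study itself streamed the feed with VLC, Motion, or OBS Studio rather than a custom script.

```python
# Hedged sketch of grabbing ultrasound video from a USB capture dongle on a
# Raspberry Pi at the 720x576 / 25 fps setting reported above. The device
# index and output filename are assumptions; the study streamed the feed with
# VLC, Motion, or OBS Studio rather than a script like this.
import cv2

def record_ultrasound(device_index=0, out_path="ultrasound.avi", seconds=10):
    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 720)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 576)
    cap.set(cv2.CAP_PROP_FPS, 25)

    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    writer = cv2.VideoWriter(out_path, fourcc, 25.0, (720, 576))

    for _ in range(25 * seconds):
        ok, frame = cap.read()
        if not ok:
            break  # the capture device stopped delivering frames
        writer.write(cv2.resize(frame, (720, 576)))

    cap.release()
    writer.release()

if __name__ == "__main__":
    record_ultrasound()
```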

    A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges

    In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also examines ethical considerations, safety concerns, regulatory frameworks that need to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.

    DEVELOPMENT OF GRASS-ROOTS DATA COLLECTION METHODS IN RURAL, ISOLATED, AND TRIBAL COMMUNITIES

    While extensive procedures have been developed for the collection and dissemination of motor vehicle volumes and speeds, these same procedures cannot always be used to collect pedestrian data, given the comparatively unpredictable behavior of pedestrians and their smaller physical size. There is significant value in developing lower-cost, lower-intrusion methods of collecting pedestrian travel data, and these collection efforts are needed at the local or "grass-roots" level. While previous studies have documented many different data collection methods, one newer option is the use of drones. This study examined the feasibility of using drones to collect pedestrian data and applied the technology as part of a school travel mode case study. Specific information regarding the study methodology, required permissions, and final results is described in detail in this report. The study concluded that while purchasing and owning a drone requires a relatively minimal investment, the initial steps required to operate a drone, along with the processing time required to analyze the collected data, represent up-front barriers that may prevent widespread usage at this time. However, the use of drones and the opportunities they present in the long term offer promising outcomes.
    • …
    corecore