9 research outputs found

    A Study on the Feasibility of Wildlife Detection Using UAV-mounted Thermal and RGB Imagery

    Master's thesis -- Seoul National University Graduate School: Graduate School of Environmental Studies, Department of Environmental Landscape Architecture, 2022.2. Advisor: Song, Youngkeun. For wildlife detection and monitoring, traditional methods such as direct observation and capture-recapture have been carried out for diverse purposes. However, these methods require a large amount of time, considerable expense, and skilled field experts to obtain reliable results. Furthermore, performing a traditional field survey can put surveyors in dangerous situations, such as encounters with wild animals.
Remote monitoring methods, such as camera trapping, GPS collars, and environmental DNA sampling, have been used more frequently as the technologies have developed, largely replacing traditional survey methods. However, these methods still have limitations, such as the inability to cover an entire region or to detect individual targets. To overcome those limitations, the unmanned aerial vehicle (UAV) is becoming a popular tool for conducting wildlife censuses. The main benefit of UAVs is that they can detect animals remotely across a whole study region with fine spatial and temporal resolution. In addition, operating UAVs makes it possible to investigate hard-to-access or dangerous areas. However, besides these advantages, UAVs have clear limitations. Depending on the operating environment, such as the study site and the flying height or speed, the ability to detect small animals, find targets in dense forest, or track fast-moving animals can be limited. Flights may also be impossible in poor weather, and flight time is limited by battery capacity. Although such detailed detection is not always possible, related research continues to develop, and previous studies have used UAVs to detect terrestrial and marine mammals, birds, and reptiles. The most common type of data acquired by UAVs is RGB imagery. Using these images, machine-learning and deep-learning (ML-DL) methods have mainly been applied for wildlife detection. ML-DL methods provide relatively accurate results, but at least 1,000 images are required to train a proper detection model for a specific species. Besides RGB images, thermal images can also be acquired by a UAV. Advances in thermal sensor technology and falling sensor prices have attracted the interest of wildlife researchers. Using a thermal camera, homeothermic animals can be detected based on the temperature difference between their bodies and the surrounding environment.
Although the technology and data are new, the same ML-DL methods have typically been used for animal detection, which limits the use of UAVs for real-time wildlife detection in the field. Therefore, this paper aims to develop an automated animal detection method using thermal and RGB image datasets and to make it usable in situ, in real time, with above-average detection ability compared with previous methods. Contents: Chapter 1. Introduction (research background; research goals and objectives; theoretical background: concept of the UAV, concept of the thermal camera). Chapter 2. Methods (study site; data acquisition and preprocessing: data acquisition, RGB lens distortion correction and clipping, thermal image correction by fur color, unnatural object removal; animal detection: Sobel edge creation and contour generation, object detection and sorting). Chapter 3. Results (number of counted objects; time costs of image types). Chapter 4. Discussion (reference comparison; instant detection; supplemental usage; utility of thermal sensors; applications in other fields). Chapter 5. Conclusions. References. Appendix: Glossary.
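The thesis's full pipeline (lens correction, fur-color correction, Sobel edges, contour generation, object sorting) is not reproduced here, but the core idea of thermal detection — a warm-bodied animal stands out against a cooler background — can be sketched as a threshold-and-group pass over a grid of temperatures. This is a minimal illustration, not the author's method; the `delta_c` and `min_pixels` thresholds are assumed values.

```python
from collections import deque

def count_warm_objects(thermal, ambient_c, delta_c=5.0, min_pixels=3):
    """Count connected warm regions in a 2-D grid of temperatures (deg C).

    A pixel is 'warm' when it exceeds the ambient temperature by at least
    delta_c; 4-connected warm pixels are grouped into blobs, and blobs
    smaller than min_pixels are discarded as sensor noise.
    """
    rows, cols = len(thermal), len(thermal[0])
    warm = [[thermal[r][c] - ambient_c >= delta_c for c in range(cols)]
            for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if warm[r][c] and not seen[r][c]:
                # flood-fill one blob, measuring its size
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and warm[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_pixels:
                    count += 1
    return count
```

A real pipeline would add the contour and sorting steps the thesis describes, but the temperature-difference principle is the same.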

    R^3-Net: A Deep Network for Multi-oriented Vehicle Detection in Aerial Images and Videos

    Vehicle detection is a significant and challenging task in aerial remote sensing applications. Most existing methods detect vehicles with regular rectangular boxes and fail to offer the orientation of vehicles. However, orientation information is crucial for several practical applications, such as the trajectory and motion estimation of vehicles. In this paper, we propose a novel deep network, called the rotatable region-based residual network (R^3-Net), to detect multi-oriented vehicles in aerial images and videos. More specifically, R^3-Net is utilized to generate rotatable rectangular target boxes in a half coordinate system. First, we use a rotatable region proposal network (R-RPN) to generate rotatable regions of interest (R-RoIs) from feature maps produced by a deep convolutional neural network. Here, a proposed batch averaging rotatable anchor (BAR anchor) strategy is applied to initialize the shape of vehicle candidates. Next, we propose a rotatable detection network (R-DN) for the final classification and regression of the R-RoIs. In R-DN, a novel rotatable position-sensitive pooling (R-PS pooling) is designed to keep the position and orientation information simultaneously while downsampling the feature maps of R-RoIs. In our model, R-RPN and R-DN can be trained jointly. We test our network on two open vehicle detection image datasets, namely the DLR 3K Munich Dataset and the VEDAI Dataset, demonstrating the high precision and robustness of our method. In addition, further experiments on aerial videos show the good generalization capability of the proposed method and its potential for vehicle tracking in aerial videos. The demo video is available at https://youtu.be/xCYD-tYudN0
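A rotatable box of the kind R^3-Net regresses can be parameterised as a centre, two side lengths, and a rotation angle; converting that parameterisation to corner points is a small exercise in 2-D rotation. The sketch below uses the common five-parameter convention as an illustration — it is an assumption, not necessarily the paper's exact encoding.

```python
import math

def rotated_box_corners(cx, cy, w, h, theta):
    """Return the four corner (x, y) points of a rotated rectangle.

    (cx, cy) is the centre, (w, h) the side lengths, and theta the
    rotation angle in radians, measured counter-clockwise.
    """
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    # Corner offsets in the box's own frame, then rotated into image space.
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners
```

With theta = 0 this reduces to an ordinary axis-aligned box, which is why rotatable detectors subsume the regular-rectangle case.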

    Smartphone-based object recognition with embedded machine learning intelligence for unmanned aerial vehicles

    Existing artificial intelligence solutions typically operate on powerful platforms with high computational resource availability. However, a growing number of emerging use cases, such as those based on unmanned aerial systems (UAS), require new solutions with embedded artificial intelligence on a highly mobile platform. This paper proposes an innovative UAS that explores machine learning (ML) capabilities in a smartphone-based mobile platform for object detection and recognition applications. A new system framework tailored to this challenging use case is designed, with a customized workflow specified. Furthermore, the design of the embedded ML leverages TensorFlow, a cutting-edge open-source ML framework. The prototype of the system integrates all the architectural components in a fully functional system, and it is suitable for real-world operational environments such as search and rescue use cases. Experimental results validate the design and prototyping of the system and demonstrate an overall improved performance compared with the state of the art in terms of a wide range of metrics.
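The paper does not publish its code, but one recurring post-processing step in any embedded detector — keeping only detections whose confidence clears a threshold before display — can be sketched in a few lines. The function name and tuple layout here are assumptions for illustration, not the paper's API.

```python
def filter_detections(boxes, scores, labels, score_threshold=0.5):
    """Keep detections whose confidence meets the threshold.

    boxes: list of (ymin, xmin, ymax, xmax) tuples; scores: floats in
    [0, 1]; labels: class names. Mirrors the filtering a mobile
    detector's raw output typically needs before rendering results.
    """
    return [(box, score, label)
            for box, score, label in zip(boxes, scores, labels)
            if score >= score_threshold]
```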

    Semi-automated detection of ungulates using UAV imagery and reflective spectrometry

    Supplementary electronic material 1: The 'Adult Arabian Oryx' rule set. In the field of species conservation, the use of unmanned aerial vehicles (UAVs) as wildlife observation and monitoring tools is increasing in popularity. With the large datasets created by UAV-based species surveying, the need arose to automate the detection of the species. Although the use of machine-learning algorithms for wildlife detection from UAV-derived imagery is an increasing trend, it depends on a large amount of imagery of the species to train the object detector effectively. However, alternatives such as object-based image analysis (OBIA) software are available when enough imagery of the species is not at hand to develop a machine-learned object detector. This study tested the semi-automated detection of reintroduced Arabian oryx (O. leucoryx), using the species' coat sRGB colour profile as input for OBIA to identify adult O. leucoryx in UAV-acquired imagery. Our method uses lab-measured spectral reflectance values of hair samples collected from captive O. leucoryx as input for an OBIA rule set that identifies adult O. leucoryx in UAV survey imagery through semi-automated supervised classification. Converting the mean CIE Lab reflective spectrometry colour values of n = 50 adult O. leucoryx hair samples to an 8-bit sRGB colour profile gave a red-band value of 157.450, a green-band value of 151.390, and a blue-band value of 140.832. With the sRGB values and a minimum-size parameter as inputs, the OBIA rule set identified adult O. leucoryx with a high degree of efficiency when applied to three UAV census datasets. Using species sRGB colour profiles to identify reintroduced O. leucoryx and extract location data with a non-invasive UAV-based tool is a novel method with broad application possibilities. Coat-reflection sRGB colour profiles can be developed for a range of species and customised to auto-detect and classify the species from remote sensing data. Funding: The Czech University of Life Sciences Prague and the Ministry of Education, Youth and Sports, Czechia.
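The reported 8-bit sRGB coat profile lends itself to a simple colour-distance test: a pixel is a candidate oryx pixel when it lies close enough to the profile in RGB space, after which a minimum-size filter would discard small candidate regions. A hedged sketch — the `tolerance` value is an assumption, and the study's actual rule set runs in OBIA software rather than code like this:

```python
import math

# Mean adult O. leucoryx coat profile reported in the abstract (8-bit sRGB).
ORYX_SRGB = (157.450, 151.390, 140.832)

def matches_coat_profile(pixel, profile=ORYX_SRGB, tolerance=30.0):
    """True when an (R, G, B) pixel lies within Euclidean distance
    'tolerance' of the species' mean coat colour profile."""
    return math.dist(pixel, profile) <= tolerance
```

The same pattern generalises directly to the closing suggestion above: measure a coat profile per species, then swap in that species' sRGB triple.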

    A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges

    In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also looks at ethical considerations, safety concerns, the regulatory frameworks to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.

    Monitoring system for detecting the motility and position of laboratory animals after anesthesia

    This diploma thesis, entitled "Monitoring System for Determination of Motility and Position of Laboratory Animals After Anesthesia", focuses on the design and implementation of contactless detection of the position of a laboratory rat or mouse in an enclosure with a transparent cover. The aim of the work is to find suitable methods for contactless detection of the animal's position and to automatically determine and display its average speed and other movement characteristics. The assignment arose from the need to monitor animals after a curative intervention and as a utility for future "shadowing" of animal movement (automatic targeting of a scar on the animal's back). The rat inside the enclosure is either moving normally or is dazed after anesthesia. The thesis first reviews automatic monitoring systems for detecting the position of animals in an enclosure. In the practical part, three types of cameras are tested for visual detection of the rat's position, and a script for automatic detection and analysis of the rat's movement is designed. The system works like a camera eye: in real time it finds the area of a black box in its field of view, limits the detection area to the size of that box, automatically detects the animal's centre of gravity within it, and computes the path traced by that point. It stores the path over ten seconds, computes the rat's average speed over that period, and, by comparing the obtained speed with an average calculated from tests on 10 mice, reports the animal's state over the previous ten seconds on screen. The software detects a white mouse or rat in a black box without any additional marker on the animal, so the animal is not stressed. Evaluation software for motion analysis will largely be addressed in the follow-up diploma thesis. The project was built as monitoring and detection software based on OpenCV.
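The described pipeline — find the animal's centre of gravity in each frame, accumulate the path over ten seconds, and average the speed — can be sketched in pure Python on a binary mask (the real project operates on camera frames via OpenCV; names and units here are illustrative):

```python
import math

def centroid(mask):
    """Centre of gravity (row, col) of the True pixels in a binary mask,
    e.g. the animal's silhouette after background segmentation."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def average_speed(path, interval_s):
    """Mean speed (pixels/s) along a list of centroid positions sampled
    every interval_s seconds, e.g. over a ten-second window."""
    total = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return total / (interval_s * (len(path) - 1))
```

Comparing the returned speed against a reference average (as the thesis does with its 10-mouse baseline) then yields the on-screen state report.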

    From crowd to herd counting: How to precisely detect and count African mammals using aerial imagery and deep learning?

    Peer reviewed. Rapid growth of human populations in sub-Saharan Africa has led to a simultaneous increase in the number of livestock, often leading to conflicts of use with wildlife in protected areas. To minimize these conflicts, and to meet both communities' and conservation goals, it is therefore essential to monitor livestock density and their land use. This is usually done by conducting aerial surveys during which aerial images are taken for later counting. Although this approach appears to reduce counting bias, the manual processing of images is time-consuming. The use of dense convolutional neural networks (CNNs) has emerged as a very promising avenue for processing such datasets. However, typical CNN architectures have detection limits for dense herds and nearby animals. To tackle this problem, this study introduces a new point-based CNN architecture, HerdNet, inspired by crowd counting. It was optimized on challenging oblique aerial images containing herds of camels (Camelus dromedarius), donkeys (Equus asinus), sheep (Ovis aries) and goats (Capra hircus), acquired over heterogeneous arid landscapes of the Ennedi reserve (Chad). This approach was compared to an anchor-based architecture, Faster-RCNN, and a density-based, adapted version of DLA-34 that is typically used in crowd counting. HerdNet achieved a global F1 score of 73.6 % on 24-megapixel images, with a root mean square error of 9.8 animals and a processing speed of 3.6 s, outperforming the two baselines in terms of localization, counting and speed. It showed better proximity-invariant precision while maintaining recall equivalent to that of Faster-RCNN, thus demonstrating that it is the most suitable approach for detecting and counting large mammals at close range. The only limitation of HerdNet was its slightly weaker identification of species, with an average confusion rate approximately 4 % higher than that of Faster-RCNN.
This study provides a new CNN architecture that could be used to develop an automatic livestock counting tool for aerial imagery. The reduced image analysis time could motivate more frequent flights, thus allowing much finer monitoring of livestock and their land use.
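The evaluation metrics quoted above — a global F1 score over matched detections and a root-mean-square error on per-image counts — are standard, and reduce to a few lines:

```python
import math

def f1_score(tp, fp, fn):
    """F1 from counts of true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def count_rmse(predicted, actual):
    """Root-mean-square error between per-image predicted and true counts."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))
```

For point-based detectors like HerdNet, a predicted point usually counts as a true positive when it falls within a fixed distance of an unmatched ground-truth point; the matching rule is chosen by the evaluator, not fixed by the metric itself.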

    Fast animal detection in UAV images using convolutional neural networks

    Illegal wildlife poaching poses a severe threat to the environment. Measures to stem poaching have met with only limited success, mainly due to the effort required to keep track of wildlife stocks and to track animals. Recent developments in remote sensing have led to low-cost Unmanned Aerial Vehicles (UAVs), facilitating quick and repeated image acquisitions over vast areas. In parallel, progress in object detection in computer vision has yielded unprecedented performance improvements, partially attributable to algorithms like Convolutional Neural Networks (CNNs). We present an object detection method tailored to detecting large animals in UAV images. We achieve a substantial increase in precision over a strong state-of-the-art model on a dataset acquired over the Kuzikus wildlife reserve park in Namibia. Furthermore, our model processes data at over 72 images per second, as opposed to 3 for the baseline, allowing for real-time applications.