    Improving animal monitoring using small unmanned aircraft systems (sUAS) and deep learning networks

    In recent years, small unmanned aircraft systems (sUAS) have been widely used to monitor animals because of their customizability, ease of operation, ability to access difficult-to-navigate places, and potential to minimize disturbance to animals. Automatic identification and classification of animals in images acquired with a sUAS could address critical problems such as monitoring large areas with high vehicle traffic to prevent collisions with animals, for example animal-aircraft collisions at airports. In this research we demonstrate automated identification of four animal species using deep learning classification models trained on sUAS-collected images. We used a sUAS equipped with visible-spectrum cameras to capture 1288 images of four animal species: cattle (Bos taurus), horses (Equus caballus), Canada Geese (Branta canadensis), and white-tailed deer (Odocoileus virginianus). We chose these animals because they were readily accessible, easily identifiable in aerial imagery, and, in the case of white-tailed deer and Canada Geese, considered aviation hazards. A four-class classification problem involving these species was developed from the acquired data using deep neural networks. We compared the performance of two deep neural network models: convolutional neural networks (CNN) and deep residual networks (ResNet). Results indicate that the 18-layer ResNet model, ResNet18, may be an effective classifier of these animals while using a relatively small number of training samples. The best ResNet architecture produced a 99.18% overall accuracy (OA) in animal identification and a Kappa statistic of 0.98. The highest OA and Kappa produced by the CNN were 84.55% and 0.79, respectively. These findings suggest that ResNet is effective at distinguishing among the four species tested and shows promise for classifying larger, more diverse animal datasets.
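    The abstract reports results for ResNet18 but no code is included; the sketch below illustrates, under stated assumptions, how a four-class animal classifier of this kind could be fine-tuned with PyTorch and torchvision. The directory layout ("train/<class>/..."), batch size, learning rate, and epoch count are illustrative placeholders, not the authors' configuration.

        # Minimal sketch, not the authors' implementation: fine-tune an
        # ImageNet-pretrained ResNet18 on four animal classes (cattle, horses,
        # Canada Geese, white-tailed deer) from sUAS imagery.
        import torch
        import torch.nn as nn
        from torchvision import datasets, models, transforms

        transform = transforms.Compose([
            transforms.Resize((224, 224)),               # ResNet expects 224x224 inputs
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet channel statistics
                                 [0.229, 0.224, 0.225]),
        ])

        # Hypothetical layout: train/<class_name>/<image>.jpg, one folder per species
        train_set = datasets.ImageFolder("train", transform=transform)
        loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, 4)    # new head for 4 animal classes

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        model.train()
        for epoch in range(10):                          # epoch count is illustrative
            for images, labels in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()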

    Airborne Hyperspectral Imagery for Band Selection Using Moth–Flame Metaheuristic Optimization

    In this research, we study a new metaheuristic algorithm called Moth–Flame Optimization (MFO) for hyperspectral band selection. With hundreds of highly correlated narrow spectral bands, the number of training samples required to train a statistical classifier is high. The problem is therefore to select a subset of bands without compromising classification accuracy. One way to solve this problem is to model an objective function that measures class separability and use it to arrive at a subset of bands. In this research, we studied MFO to select optimal spectral bands for classification. MFO is inspired by the navigation behavior of moths relative to flames: in nature, a moth travels long distances in a nearly straight line by keeping a constant angle to the Moon, a strategy called transverse orientation that works because the Moon is very far away; in MFO, this mechanism guides how candidate solutions navigate the search space. Our research tested MFO on three benchmark hyperspectral datasets: Indian Pines, University of Pavia, and Salinas. MFO produced an Overall Accuracy (OA) of 88.98%, 94.85%, and 97.17%, respectively, on the three datasets. Our experimental results indicate that MFO produces better OA and Kappa than state-of-the-art band selection algorithms such as particle swarm optimization, grey wolf optimization, cuckoo search, and genetic algorithms. These results show that the proposed approach effectively addresses the spectral band selection problem and provides high classification accuracy.
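    As a concrete illustration of the transverse-orientation idea, the sketch below implements the core MFO update (a logarithmic spiral around sorted flames, with the flame count shrinking over iterations) for a generic continuous objective. The sphere fitness function, population size, and bounds are placeholders; the band-selection use described above would instead score candidate band subsets with a class-separability measure.

        # Minimal Moth-Flame Optimization sketch (spiral constant b = 1), not the
        # authors' band-selection code; minimizes a placeholder objective.
        import numpy as np

        def mfo(fitness, dim, n_moths=30, max_iter=200, lb=-10.0, ub=10.0):
            moths = np.random.uniform(lb, ub, (n_moths, dim))
            flames, flame_scores = None, None
            for it in range(max_iter):
                scores = np.array([fitness(m) for m in moths])
                if flames is None:
                    order = np.argsort(scores)
                    flames, flame_scores = moths[order].copy(), scores[order].copy()
                else:
                    # Flames are the best solutions found so far, kept sorted
                    merged = np.vstack([flames, moths])
                    merged_scores = np.concatenate([flame_scores, scores])
                    keep = np.argsort(merged_scores)[:n_moths]
                    flames, flame_scores = merged[keep], merged_scores[keep]

                # Flame count shrinks over iterations (exploration -> exploitation)
                flame_no = int(round(n_moths - it * (n_moths - 1) / max_iter))
                a = -1.0 + it * (-1.0 / max_iter)             # decreases from -1 toward -2

                for i in range(n_moths):
                    j = i if i < flame_no else flame_no - 1   # extra moths follow last flame
                    d = np.abs(flames[j] - moths[i])          # distance to assigned flame
                    t = (a - 1.0) * np.random.rand(dim) + 1.0 # t drawn from [a, 1]
                    # Logarithmic spiral: S(M_i, F_j) = D_i * exp(t) * cos(2*pi*t) + F_j
                    moths[i] = d * np.exp(t) * np.cos(2.0 * np.pi * t) + flames[j]
                    moths[i] = np.clip(moths[i], lb, ub)
            return flames[0], flame_scores[0]

        # Placeholder objective (sphere function); a band-selection run would score
        # a candidate band subset by class separability or classifier accuracy.
        best, best_score = mfo(lambda x: float(np.sum(x ** 2)), dim=5)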

    Evidence on the effectiveness of small unmanned aircraft systems (sUAS) as a survey tool for North American terrestrial, vertebrate animals: a systematic map protocol

    Background: Small unmanned aircraft systems (sUAS) are replacing or supplementing manned aircraft and ground-based surveys in many animal monitoring situations because they offer better coverage at finer spatial and temporal resolutions along with advantages in access, cost, bias, impacts, safety, efficiency, and logistics. Various sUAS models and sensors are available, with varying features and usefulness depending on survey goals. However, justification for the selection of sUAS and sensors is not typically offered in the published literature, and existing reviews do not adequately cover past and current sUAS applications for animal monitoring nor their associated sUAS models and sensor technologies, taxonomic and geographic scope, flight conditions and considerations, spatial distributions of sUAS applications, and reported technical difficulties. We outline a systematic map protocol to collect and consolidate evidence pertaining to sUAS monitoring of animals. Our systematic map will provide a useful synthesis of current sUAS-animal studies and identify major knowledge clusters (well-represented subtopics amenable to full synthesis by a systematic review) and gaps (unreported or under-represented topics that warrant additional primary research) that may influence future research directions and sUAS applications. Methods: Our systematic map will investigate the current state of knowledge using an accurate, comprehensive, and repeatable search. We will find relevant peer-reviewed and grey literature, as well as dissertations and theses, using online publication databases, Google Scholar, requests through a professional network of collaborators, and publicly available websites. We will use a tiered approach to article exclusion, with eligible studies being those that monitor (i.e., identify, count, estimate, etc.) terrestrial vertebrate animals. Extracted data concerning sUAS, sensors, animals, methodology, and results will be recorded in Microsoft Access. We will query and catalogue evidence in the final database to produce tables, figures, and geographic maps to accompany a full narrative review that answers our primary and secondary questions.

    Real-Time Automated Classification of Sky Conditions Using Deep Learning and Edge Computing

    The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimates of plant health rely on the underlying image quality. Sky conditions, and specifically shadowing from clouds, are critical determinants of the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classifying sky conditions with regard to cloud shadows in agricultural fields using a visible-spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique-angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to learn two classes: (1) good image quality expected, and (2) degraded image quality expected. The expected quality stemmed from the sky conditions (i.e., density, coverage, and thickness of clouds) present at the time of image capture. These networks were tested using a set of 13,000 images. Our results show that the ResNet18 and ResNet34 classifiers produced better classification accuracy than the convolutional neural network classifier. The best overall accuracy, 92%, was obtained by ResNet34, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution for quality control in future autonomous farming systems that will operate without human intervention and supervision.
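    The deployed edge system is not distributed with the abstract; the following is a minimal sketch of the kind of real-time, two-class inference loop such a device could run, assuming PyTorch, torchvision, and OpenCV are available. The weights file name ("sky_resnet18.pt"), camera index, and class ordering are assumptions for illustration, not the authors' configuration.

        # Minimal sketch, not the deployed edge system: grab a sky-facing frame and
        # classify it as "good" or "degraded" imaging conditions with ResNet18.
        import cv2
        import torch
        from torchvision import models, transforms

        preprocess = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])

        model = models.resnet18()
        model.fc = torch.nn.Linear(model.fc.in_features, 2)   # good vs. degraded
        # Hypothetical fine-tuned checkpoint; the real file name will differ
        model.load_state_dict(torch.load("sky_resnet18.pt", map_location="cpu"))
        model.eval()

        labels = {0: "good image quality expected", 1: "degraded image quality expected"}
        cap = cv2.VideoCapture(0)                              # sky-facing camera

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)       # OpenCV frames are BGR
            with torch.no_grad():
                logits = model(preprocess(rgb).unsqueeze(0))
            print(labels[int(logits.argmax(dim=1))])

        cap.release()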