20 research outputs found

    Spectral Band Selection for Ensemble Classification of Hyperspectral Images with Applications to Agriculture and Food Safety

    In this dissertation, an ensemble non-uniform spectral feature selection method and a kernel density decision fusion framework are proposed for the classification of hyperspectral data using a support vector machine classifier. Hyperspectral data contain a large number of bands that are typically highly correlated, so a feature selection step is necessary to exploit their full potential. In an ensemble setting, there are two main challenges: (1) creating a diverse set of classifiers to achieve higher classification accuracy than a single classifier, either by using different classifiers or by assigning a different subset of features to each classifier in the ensemble; and (2) designing a robust decision fusion stage that fully utilizes the decisions produced by the individual classifiers. This dissertation tests the efficacy of the proposed approach on hyperspectral data from several applications. Because these datasets have few training samples and a large number of highly correlated features, conventional approaches such as random feature selection cannot exploit the variability in correlation between bands to build diverse subsets for classification. In contrast, the proposed approach exploits that variability by dividing the spectrum into groups and selecting bands from each group in proportion to its size. The intelligent decision fusion stage uses the probability density of the training classes to produce a final class label. Experimental results validate the proposed framework, showing improvements in overall, user, and producer accuracies over other state-of-the-art techniques, and demonstrate that the approach produces more diverse feature subsets than conventional methods.
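    The group-then-sample band selection described above can be sketched in a few lines. The synthetic data, the 0.9 correlation threshold, and the subset size below are illustrative assumptions, not the dissertation's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hyperspectral cube: 500 pixels x 60 bands, with three blocks
# of mutually correlated bands of different widths (10 / 20 / 30 bands).
base = rng.normal(size=(500, 3))
widths = [10, 20, 30]
bands = np.hstack([
    np.repeat(base[:, [i]], w, axis=1) + 0.05 * rng.normal(size=(500, w))
    for i, w in enumerate(widths)
])

# Group contiguous bands wherever neighbour-to-neighbour correlation stays high.
corr = np.corrcoef(bands.T)
adjacent = np.diag(corr, k=1)
groups, start = [], 0
for i, r in enumerate(adjacent, start=1):
    if r < 0.9:                       # correlation break -> start a new group
        groups.append(range(start, i))
        start = i
groups.append(range(start, bands.shape[1]))

# Draw one band subset per ensemble member, proportional to group size.
def draw_subset(n_total=12):
    picks = []
    for g in groups:
        k = max(1, round(n_total * len(g) / bands.shape[1]))
        picks.extend(rng.choice(list(g), size=k, replace=False))
    return sorted(picks)

subset = draw_subset()
```

    Each ensemble member that calls `draw_subset` receives a different but correlation-balanced subset, which is the source of the diversity the dissertation relies on.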

    Estimating Waterbird Abundance on Catfish Aquaculture Ponds Using an Unmanned Aerial System

    In this study, we examined the use of an unmanned aerial system (UAS) to monitor fish-eating birds at catfish (Ictalurus spp.) aquaculture facilities in Mississippi, USA. We tested two automated computer algorithms for identifying bird species in mosaicked imagery taken from a UAS platform: one identified birds by color alone (color segmentation) and the other used shape recognition (template matching), and each algorithm's results were compared directly with manual counts of the same imagery. We captured digital imagery of great egrets (Ardea alba), great blue herons (A. herodias), and double-crested cormorants (Phalacrocorax auritus) at aquaculture facilities in Mississippi. With all species combined, the template matching algorithm produced an average accuracy of 0.80 (SD = 0.58) and the color segmentation algorithm an average accuracy of 0.67 (SD = 0.67), but each was highly dependent on weather, image quality, habitat characteristics, and characteristics of the birds themselves. Egrets were successfully counted by both methods; template matching outperformed color segmentation for great blue herons, and neither algorithm performed well for cormorants. Although the computer-guided identification in this study was highly variable, UAS show promise as an alternative monitoring tool for birds at aquaculture facilities.
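    Template matching of the kind used here is typically scored with normalized cross-correlation. The toy scene and the 0.99 detection threshold below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def match_template(image, template):
    """Score every placement of `template` in `image` with normalized
    cross-correlation; 1.0 means a perfect match."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            out[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

# Toy grey-scale scene: two bright bird-like blobs on dark water.
blob = np.array([[0.2, 0.9, 0.2],
                 [0.9, 1.0, 0.9],
                 [0.2, 0.9, 0.2]])
scene = np.zeros((40, 40))
scene[5:8, 5:8] = blob
scene[25:28, 30:33] = blob

scores = match_template(scene, blob)
detections = np.argwhere(scores > 0.99)   # top-left corner of each hit
```

    Counting the detections gives the automated bird count that the study compared against manual counts of the same imagery.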

    Improving animal monitoring using small unmanned aircraft systems (sUAS) and deep learning networks

    In recent years, small unmanned aircraft systems (sUAS) have been widely used to monitor animals because of their customizability, ease of operation, ability to access hard-to-navigate places, and potential to minimize disturbance to animals. Automatic identification and classification of animals in images acquired from a sUAS may solve critical problems such as monitoring large, high-traffic areas for animals to prevent collisions, for example animal-aircraft collisions at airports. In this research we demonstrate automated identification of four animal species using deep learning classification models trained on sUAS-collected images. We used a sUAS mounted with visible-spectrum cameras to capture 1288 images of four species: cattle (Bos taurus), horses (Equus caballus), Canada Geese (Branta canadensis), and white-tailed deer (Odocoileus virginianus). We chose these animals because they were readily accessible, easily identifiable in aerial imagery, and, in the case of white-tailed deer and Canada Geese, considered aviation hazards. A four-class classification problem involving these species was developed from the acquired data using deep neural networks. We studied the performance of two model families: convolutional neural networks (CNN) and deep residual networks (ResNet). Results indicate that the 18-layer ResNet model, ResNet-18, can classify these animals effectively while using a relatively small number of training samples. The best ResNet architecture produced a 99.18% overall accuracy (OA) in animal identification and a Kappa statistic of 0.98; the highest OA and Kappa produced by the CNN were 84.55% and 0.79, respectively. These findings suggest that ResNet is effective at distinguishing among the four species tested and shows promise for classifying larger, more diverse animal datasets.
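    The key difference between the two compared model families is the identity shortcut in a residual block. A minimal numpy sketch of that idea (not the ResNet-18 used in the study) follows:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + W2 @ relu(W1 @ x)). The identity shortcut (the bare
    `x +`) lets signals and gradients bypass the weight layers, which is
    what allows 18+ layer ResNets to train where plain CNNs degrade."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(1)
x = relu(rng.normal(size=4))              # a non-negative feature vector

# With zero weights the block is exactly the identity, so stacking more
# blocks can never make the network worse than its shallower version.
y = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

    This "no worse than shallower" property is one plausible reason the deeper ResNet-18 outperformed the plain CNN on the small training set reported above.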

    Fusion of visible and thermal images improves automated detection and classification of animals for drone surveys

    Visible and thermal images acquired from drones (unoccupied aircraft systems) have substantially improved animal monitoring, and combining the complementary information from both image types offers a powerful approach for automating detection and classification of multiple animal species in drone surveys. We compared eight image fusion methods applied to paired thermal and visible drone images, combined with two supervised deep learning models, to evaluate detection and classification of white-tailed deer (Odocoileus virginianus), domestic cow (Bos taurus), and domestic horse (Equus caballus). We classified visible and thermal images separately and compared the results with those from image fusion. Fused images provided minimal improvement for cows and horses compared with visible images alone, likely because the size, shape, and color of these species made them conspicuous against the background. For white-tailed deer, which were typically cryptic against their backgrounds and often in shadow in visible images, the added thermal information improved detection and classification in fusion methods by 15 to 85%. Our results suggest that image fusion is ideal for surveying animals that are inconspicuous against their backgrounds, and our approach requires few training image pairs compared with typical machine-learning methods. We discuss computational and field considerations for improving drone surveys with our fusion approach.
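    Pixel-level fusion of co-registered visible and thermal frames can be as simple as a weighted sum after per-sensor normalization. The sensor ranges and weights below are assumptions for illustration; the study's eight fusion methods are more sophisticated:

```python
import numpy as np

def fuse(visible, thermal, alpha=0.5):
    """Weighted pixel-wise fusion of co-registered frames, each scaled to
    [0, 1] over an assumed sensor range; alpha weights the visible band."""
    v = visible / 255.0                    # 8-bit visible range (assumed)
    t = (thermal - 280.0) / 30.0           # 280-310 K thermal range (assumed)
    return alpha * v + (1 - alpha) * t

# Toy frames: a deer in shadow is nearly invisible in the visible band
# but stands out in the thermal band.
visible = np.full((8, 8), 30.0); visible[3:5, 3:5] = 32.0   # faint target
thermal = np.full((8, 8), 290.0); thermal[3:5, 3:5] = 305.0 # warm body

fused = fuse(visible, thermal)
vis_contrast = (visible[3, 3] - visible[0, 0]) / 255.0   # tiny in visible
fused_contrast = fused[3, 3] - fused[0, 0]               # much larger fused
```

    The fused target-to-background contrast is an order of magnitude larger than in the visible frame alone, mirroring the paper's finding that fusion helps most for cryptic animals such as deer in shadow.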

    Dataset for Controllable factors affecting accuracy and precision of human identification of animals from drone imagery

    Dataset from an experiment to determine how three controllable factors (flight altitude, camera angle, and time of day) affect human identification and counting of animals in drone images, intended to inform best practices for surveying animal communities with drones. We used a drone (unoccupied aircraft system, or UAS) to survey known numbers of eight animal decoy species, representing a range of body sizes and colors, at four ground sampling distance (GSD) values (0.35, 0.70, 1.06, and 1.41 cm/pixel) corresponding to flight altitudes of 15.2, 30.5, 45.7, and 61.0 m, at two camera angles (45° and 90°), and across a range of times of day (morning to late afternoon). Expert human observers identified and counted animals in the drone images to determine how the three factors affected accuracy and precision. Observer precision was high and unaffected by the tested factors; observer accuracy, however, revealed an interaction among all three factors. Increasing flight altitude decreased counting accuracy overall, but accuracy was best at midday compared with morning and afternoon hours, when decoy and structure shadows were present or more pronounced. Surprisingly, the 45° camera angle improved accuracy over 90°, but only when animals were most difficult to identify and count, such as at higher flight altitudes or during early morning and late afternoon. We provide recommendations based on these results for designing future surveys that improve human accuracy in identifying and counting animals from drone images for monitoring animal populations and communities.
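    For a fixed camera, GSD scales linearly with altitude, which is why the four GSD values correspond one-to-one with the four flight altitudes. Using the first pair as the reference reproduces the others to within about a hundredth of a cm/pixel:

```python
# Ground sampling distance is proportional to altitude for a fixed camera:
#   GSD = altitude * pixel_pitch / focal_length
# The dataset's first pair (15.2 m -> 0.35 cm/px) fixes the camera
# constant; the remaining altitudes then follow.

GSD_REF_CM, ALT_REF_M = 0.35, 15.2

def gsd_at(altitude_m):
    return GSD_REF_CM * altitude_m / ALT_REF_M

predicted = [round(gsd_at(h), 2) for h in (15.2, 30.5, 45.7, 61.0)]
# predicted ~ [0.35, 0.70, 1.05, 1.40], matching the listed GSD values
# to within 0.01 cm/pixel (rounding accounts for the small differences).
```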

    Post-Logging Estimation of Loblolly Pine (Pinus taeda) Stump Size, Area and Population Using Imagery from a Small Unmanned Aerial System

    This study describes an unmanned aerial system (UAS) method for accurately estimating the number and diameters of harvested Loblolly Pine (Pinus taeda) stumps in a final-harvest (often referred to as clear-cut) situation. The methods are potentially useful for initial detection, area quantification, and volume estimation of legal or illegal logging events, helping to estimate the volume and value of removed pine timber. The study sites comprised three adjacent pine stands in East-Central Mississippi. Using image pattern recognition algorithms, we obtained a counting accuracy of 77.3% and an RMSE of 4.3 cm for stump diameter estimation. The study also shows that harvested area can be accurately estimated from the UAS-collected data. These experimental results indicate that the proposed UAS survey method has potential for wide use as a monitoring or investigative tool in the forestry and land management industries.

    Mississippi Sky Conditions

    This dataset consists of approximately 13,000 JPEG images collected using consumer-grade trail cameras manufactured by Browning Trail Cameras. Cameras were installed across Mississippi (USA) from March through September in 2019 and 2020 and placed in time-lapse mode, collecting one image every hour. The images are exclusively oblique, unobstructed views of the sky. Sky conditions, and specifically shadowing from clouds, are critical determinants of the quality of imagery obtainable from low-altitude sensing platforms; radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimates of plant health depend on it. Our intent was first to compare deep learning approaches for classifying sky conditions with regard to cloud shadows in agricultural fields using a visible-spectrum camera, and then, using this dataset, to develop an artificial-intelligence-based edge computing system that fully automates the classification process.
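    As a baseline for what the deep learning models must beat, a crude sky classifier can be written from simple image statistics. The thresholds and class names below are assumptions for illustration, not the dataset's labeling scheme:

```python
import numpy as np

def sky_condition(gray):
    """Classify a normalized [0, 1] grey-scale sky frame by two global
    statistics; thresholds are illustrative, not tuned values."""
    mean, std = gray.mean(), gray.std()
    if std < 0.05:                     # uniform frame
        return "overcast" if mean > 0.6 else "clear"
    return "mixed"                     # patchy brightness -> broken cloud

clear = np.full((32, 32), 0.35)                       # uniform deep-blue sky
overcast = np.full((32, 32), 0.80)                    # uniform bright cloud
mixed = np.full((32, 32), 0.35); mixed[:, 16:] = 0.9  # half cloud, half sky
```

    Real trail-camera frames vary in exposure, sun position, and obstructions, which is why the dataset's deep learning approach is needed in practice.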

    Conservation Database for the Gulf Coast Region of the United States

    Strategic, data-driven conservation approaches are gaining popularity, and a high-resolution geospatial database of ecosystem functions and socioeconomic activity can be very useful to conservation practitioners and funding agencies. This database presents measures derived from openly available geospatial and non-geospatial data sources and is intended to provide ecological and socioeconomic evidence to support conservation planning efforts along the Gulf Coast Region of the United States. It was developed by the Strategic Conservation Assessment of Gulf Coast Landscapes (SCA) Project, which is building a series of online tools to aid conservation planning in the region. See the following links for more information about the SCA Project (http://www.landscope.org/gulfcoast) and the project's online tools (https://github.com/scatools).

    Automated Hyperspectral Feature Selection and Classification of Wildlife Using Uncrewed Aerial Vehicles

    Timely and accurate detection and estimation of animal abundance is an important part of wildlife management. This is particularly true for invasive species, where cost-effective tools are needed to enable landscape-scale surveillance and management responses, especially when targeting low-density populations residing in dense vegetation and under canopies. This research investigated the feasibility and practicality of using uncrewed aerial systems (UAS) and hyperspectral imagery (HSI) to classify animals in the wild on a spectral, rather than spatial, basis, with the goal of accurately classifying animal targets even when their form is significantly obscured. We collected HSI of four large mammal species reported as invasive on islands: cow (Bos taurus), horse (Equus caballus), deer (Odocoileus virginianus), and goat (Capra hircus), from a small UAS. The objectives of this study were to (a) create a hyperspectral library of the four mammal species, (b) study the efficacy of HSI for animal classification using only spectral information via statistical separation, (c) study the efficacy of sequential and deep learning neural networks for classifying HSI pixels, (d) simulate five-band multispectral data from the HSI and study its effectiveness for automated supervised classification, and (e) assess the ability of HSI to support invasive wildlife detection. Image classification models using sequential neural networks and one-dimensional convolutional neural networks were developed and tested. The results showed that the information in HSI, reduced using dimensionality reduction techniques, was sufficient to classify the four species with class F1 scores all above 0.85. Some classifiers reached an overall accuracy above 98% with class F1 scores above 0.75, so classifying animals to species from spectra alone, using existing sensors, is feasible. This study also identified various challenges in using HSI for animal detection, particularly intra-class and seasonal variation in spectral reflectance and the practicalities of collecting and analyzing HSI data over large, meaningful areas in an operational context. To make spectral data a practical tool for wildlife and invasive animal management, further research is needed into spectral profiles under a variety of real-world conditions, optimization of sensor spectra selection, and the development of on-board real-time analytics.
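    The spectral-only pipeline (dimensionality reduction followed by a per-pixel classifier) can be sketched with synthetic data. The spectra, the PCA depth, and the nearest-centroid stand-in below are illustrative assumptions, not the study's models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spectral library: 4 species x 50 pixels x 120 bands, each
# species a distinct smooth spectral curve plus per-band noise.
n_bands, n_per = 120, 50
wav = np.linspace(0.0, 1.0, n_bands)
class_means = [np.sin(2 * np.pi * (wav + 0.2 * k)) + 0.5 * k for k in range(4)]
X = np.vstack([m + 0.3 * rng.normal(size=(n_per, n_bands)) for m in class_means])
y = np.repeat(np.arange(4), n_per)

# Dimensionality reduction: project onto the first 5 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Nearest-centroid classifier in PCA space, a stand-in for the study's
# sequential and 1-D CNN models.
centroids = np.array([Z[y == k].mean(axis=0) for k in range(4)])
pred = ((Z[:, None, :] - centroids) ** 2).sum(-1).argmin(axis=1)
accuracy = (pred == y).mean()
```

    Even this simple stand-in separates well-spaced spectral classes at high accuracy, which illustrates why the study found per-pixel spectra alone sufficient for species classification; the real challenge lies in the intra-class and seasonal variation noted above.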
