
    Spectral Band Selection for Ensemble Classification of Hyperspectral Images with Applications to Agriculture and Food Safety

    In this dissertation, an ensemble non-uniform spectral feature selection and a kernel density decision fusion framework are proposed for the classification of hyperspectral data using a support vector machine classifier. Hyperspectral data contain a large number of bands that are highly correlated, so a feature selection step is necessary to utilize their full potential. In an ensemble setting there are two main challenges: (1) creating a diverse set of classifiers in order to achieve higher classification accuracy than a single classifier, which can be done either by using different classifiers or by giving each classifier in the ensemble a different subset of features; and (2) designing a robust decision fusion stage that fully utilizes the decisions produced by the individual classifiers. This dissertation tests the efficacy of the proposed approach on hyperspectral data from different applications. Because these datasets have a small number of training samples and a large number of highly correlated features, conventional feature selection approaches such as random feature selection cannot exploit the variability in the correlation between bands to produce diverse subsets for classification. In contrast, the approach proposed in this dissertation exploits that variability by dividing the spectrum into groups and selecting bands from each group in proportion to its size. The intelligent decision fusion proposed in this approach uses the probability density of the training classes to produce the final class label. The experimental results demonstrate the validity of the proposed framework, which improves overall, user, and producer accuracies compared to other state-of-the-art techniques, and show that the approach produces more diverse feature subsets than conventional approaches.
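
    As a rough illustration of the band-grouping idea, the sketch below groups adjacent, highly correlated bands and draws a proportional subset from each group for every SVM in an ensemble. The correlation threshold, subset fraction, and function names are assumptions for illustration only, and the kernel density decision fusion stage is not shown.

```python
# Minimal sketch (not the dissertation's exact algorithm): group adjacent,
# highly correlated bands and draw a proportional, non-uniform random subset
# from each group for every classifier in an SVM ensemble.
import numpy as np
from sklearn.svm import SVC

def group_bands_by_correlation(X, threshold=0.9):
    """Split the spectrum into contiguous groups; a new group starts
    whenever the correlation between neighboring bands drops below threshold."""
    corr = np.corrcoef(X, rowvar=False)            # bands x bands correlation
    groups, current = [], [0]
    for b in range(1, X.shape[1]):
        if corr[b - 1, b] >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

def sample_subset(groups, fraction=0.3, rng=None):
    """Select bands from each group in proportion to its size."""
    rng = rng or np.random.default_rng()
    subset = []
    for g in groups:
        k = max(1, int(round(fraction * len(g))))
        subset.extend(rng.choice(g, size=k, replace=False))
    return sorted(subset)

def train_ensemble(X, y, n_members=10, fraction=0.3, seed=0):
    """Train one SVM per band subset; decision fusion is omitted here."""
    rng = np.random.default_rng(seed)
    groups = group_bands_by_correlation(X)
    members = []
    for _ in range(n_members):
        bands = sample_subset(groups, fraction, rng)
        clf = SVC(kernel="rbf", probability=True).fit(X[:, bands], y)
        members.append((bands, clf))
    return members
```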

    Towards Cost Effective Autonomous Mapping of Invasive Aquatic Plants Using Deep Learning and a GPU Enabled Microcomputer

    We present a dataset for the autonomous identification of invasive aquatic plant species using deep learning techniques. The dataset includes high-resolution images captured with a Canon EOS REBEL T3 DSLR camera for model training and low-resolution images captured with a Raspberry Pi camera for evaluating model performance under real-world conditions. Eight invasive aquatic plant species (Alternanthera philoxeroides, Cyperus blepharoleptos, Salvinia molesta, Ludwigia peploides, Panicum repens, Pontederia crassipes, Pistia stratiotes, and Nymphaea odorata) were cultivated in mesocosms at the R.R. Foil Plant Research Center, Mississippi State University. The controlled environment of the mesocosms serves as an intermediate between lab and field studies, providing insights into natural conditions before field testing. For the training dataset, we captured 1,963 high-resolution images under natural lighting and various orientations, with 150–200 images per species. To evaluate the models' robustness in real-world scenarios, a testing dataset of 128 images with lower resolution and color accuracy was created using a Raspberry Pi camera.

    Honey Bee Image Dataset for Machine Learning and Computer Vision Model Building

    The dataset consists of 4,590 frames distributed across 15 video recordings representing diverse lighting and seasonal conditions of honey bee activity. The frames were labeled in three classes: non-pollen-carrying worker honey bees, pollen-carrying worker honey bees, and drone bees. They were extracted from videos captured using GoPro Hero 9 and Hero 11 cameras in a research apiary at Mississippi State University. The video recording process was accompanied by collection of environmental data such as temperature, humidity, wind direction and speed, solar radiation, and other weather conditions, providing contextual information about bee behavior for every recording. Annotation of the extracted frames was done by a human expert following strict guidelines to ensure precision and consistency. The dataset can be used by researchers to train honey bee detection computer vision models aimed at automating bee detection and classification for monitoring tasks, as well as for honey bee behavior analysis, providing insight into hive activity and foraging patterns.

    Estimating Waterbird Abundance on Catfish Aquaculture Ponds Using an Unmanned Aerial System

    In this study, we examined the use of an unmanned aerial system (UAS) to monitor fish-eating birds at catfish (Ictalurus spp.) aquaculture facilities in Mississippi, USA. We tested two automated computer algorithms for identifying bird species in mosaicked imagery taken from a UAS platform. One algorithm identified birds based on color alone (color segmentation) and the other used shape recognition (template matching); the results of each were compared directly to manual counts of the same imagery. We captured digital imagery of great egrets (Ardea alba), great blue herons (A. herodias), and double-crested cormorants (Phalacrocorax auritus) at aquaculture facilities in Mississippi. When all species were combined, the template matching algorithm produced an average accuracy of 0.80 (SD = 0.58) and the color segmentation algorithm an average accuracy of 0.67 (SD = 0.67), but each was highly dependent on weather, image quality, habitat characteristics, and characteristics of the birds themselves. Egrets were successfully counted using both color segmentation and template matching. Template matching performed better than color segmentation for great blue herons, and neither algorithm performed well for cormorants. Although the computer-guided identification in this study was highly variable, UAS show promise as an alternative monitoring tool for birds at aquaculture facilities.
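
    The following sketch shows generic OpenCV versions of the two techniques compared: color segmentation by thresholding and counting blobs, and normalized template matching. The color bounds, minimum blob area, and match score threshold are illustrative assumptions, not values from the study.

```python
# Generic OpenCV sketches of the two counting techniques; parameters and
# template chips here are illustrative, not the study's values.
import cv2
import numpy as np

def count_by_color(mosaic_bgr, lower=(200, 200, 200), upper=(255, 255, 255)):
    """Color segmentation: threshold bright white plumage (e.g., egrets),
    then count connected blobs above a minimum pixel area."""
    mask = cv2.inRange(mosaic_bgr, np.array(lower), np.array(upper))
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 50)

def count_by_template(mosaic_gray, template_gray, score_thresh=0.7):
    """Template matching: slide a bird-shaped chip over the mosaic and
    count locations where the normalized correlation exceeds a threshold."""
    result = cv2.matchTemplate(mosaic_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    peaks = np.argwhere(result >= score_thresh)
    return len(peaks)   # naive count; real use would apply non-maximum suppression
```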

    Improving animal monitoring using small unmanned aircraft systems (sUAS) and deep learning networks

    In recent years, small unmanned aircraft systems (sUAS) have been used widely to monitor animals because of their customizability, ease of operation, ability to access difficult-to-navigate places, and potential to minimize disturbance to animals. Automatic identification and classification of animals in images acquired by a sUAS may solve critical problems such as monitoring large areas with high vehicle traffic for animals to prevent collisions, for example animal-aircraft collisions at airports. In this research we demonstrate automated identification of four animal species using deep learning classification models trained on sUAS-collected images. We used a sUAS mounted with visible-spectrum cameras to capture 1,288 images of four animal species: cattle (Bos taurus), horses (Equus caballus), Canada Geese (Branta canadensis), and white-tailed deer (Odocoileus virginianus). We chose these animals because they were readily accessible, easily identifiable within aerial imagery, and, in the case of white-tailed deer and Canada Geese, considered aviation hazards. A four-class classification problem involving these species was developed from the acquired data using deep neural networks. We studied the performance of two deep neural network models: a convolutional neural network (CNN) and a deep residual network (ResNet). Results indicate that the 18-layer ResNet model, ResNet-18, may be effective at classifying these animals while using a relatively small number of training samples. The best ResNet architecture produced a 99.18% overall accuracy (OA) in animal identification and a Kappa statistic of 0.98; the highest OA and Kappa produced by the CNN were 84.55% and 0.79, respectively. These findings suggest that ResNet is effective at distinguishing among the four species tested and shows promise for classifying larger datasets of more diverse animals.
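
    A hedged sketch of how a four-class ResNet-18 classifier might be fine-tuned on such imagery is given below, using PyTorch and a recent torchvision. The directory layout, image size, and training settings are assumptions for illustration; the paper's exact architecture, preprocessing, and hyperparameters are not reproduced here.

```python
# Minimal fine-tuning sketch for a 4-class animal classifier with ResNet-18;
# paths, image size, and hyperparameters are illustrative only.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: data/train/cattle, data/train/horse,
# data/train/canada_goose, data/train/white_tailed_deer
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained-weights API requires torchvision >= 0.13
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)      # four animal classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```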

    Fusion of visible and thermal images improves automated detection and classification of animals for drone surveys

    Visible and thermal images acquired from drones (unoccupied aircraft systems) have substantially improved animal monitoring. Combining the complementary information from both image types provides a powerful approach to automating the detection and classification of multiple animal species to augment drone surveys. We compared eight image fusion methods using thermal and visible drone images combined with two supervised deep learning models to evaluate the detection and classification of white-tailed deer (Odocoileus virginianus), domestic cow (Bos taurus), and domestic horse (Equus caballus). We classified visible and thermal images separately and compared them with the results of image fusion. Fused images provided minimal improvement for cows and horses compared to visible images alone, likely because the size, shape, and color of these species made them conspicuous against the background. For white-tailed deer, which were typically cryptic against their backgrounds and often in shadow in visible images, the added information from thermal images improved detection and classification in fusion methods from 15 to 85%. Our results suggest that image fusion is ideal for surveying animals that are inconspicuous against their backgrounds, and our approach requires few image pairs for training compared to typical machine learning methods. We discuss computational and field considerations to improve drone surveys using our fusion approach.
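
    Below is a minimal sketch of one generic pixel-level fusion scheme, a weighted blend of aligned visible and thermal frames. The study compared eight fusion methods; this blend is an illustrative stand-in rather than any of them, and the blending weight is an assumption.

```python
# One generic pixel-level fusion scheme (weighted average after resizing);
# not claimed to be any of the eight methods evaluated in the study.
import cv2
import numpy as np

def fuse_visible_thermal(visible_bgr, thermal_gray, alpha=0.6):
    """Resize the thermal frame to the visible frame, stretch it to [0, 255],
    and blend: alpha * visible + (1 - alpha) * thermal."""
    h, w = visible_bgr.shape[:2]
    thermal = cv2.resize(thermal_gray, (w, h))
    thermal = cv2.normalize(thermal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    thermal_bgr = cv2.cvtColor(thermal, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(visible_bgr, alpha, thermal_bgr, 1.0 - alpha, 0.0)
```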

    Predicting select soil health genes using hyperspectral reflectance in nematode-infected and drought stressed greenhouse cotton

    Introduction: Predicting, or correlating, soil microbiome metrics from above-ground phenotypic plant measurements would enable rapid diagnosis of soil microbiome imbalances. Rapid plant measurements through remote sensing are a leading innovation in agriculture and have reduced the need for labor-intensive plant and soil measurements. In the current study we utilized cotton (Gossypium hirsutum) as a plant model in which stress was induced by drought and root-knot nematode (RKN; Meloidogyne incognita) infection to induce a change in the soil microbiome that would be reflected as a plant phenotypic response. Methods: The experiment was a randomized complete block design with two cotton genotypes (RKN-susceptible or RKN-resistant) and four stress combinations. Rootzone samples were collected upon plant termination and quantified for five soil health genes: 16S rRNA, 18S rRNA, ureC, phoA, and cbbLR. Plant physiology, biomass, and hyperspectral remote sensing readings were previously reported. Results and discussion: Overall, RKN infection and plant genotype treatments had little effect on gene abundances. Interestingly, drought stress increased most gene abundances while plant physiological and biomass measurements decreased, indicating a microbiome response to plant stress. Hyperspectral reflectance, through machine learning, accurately predicted the presence of drought stress with an area under the receiver operating characteristic curve of 0.864. Furthermore, the readings were able to predict the abundance values for all genes except 18S rRNA within one standard deviation of ground-truth levels. This study demonstrated that there are key plant characteristics registered in hyperspectral wavelengths that can be used to accurately predict soil health gene abundance. While using hyperspectral readings and soil microbiome status to inform plant health, and vice versa, is still in its infancy, the current study provides future directions towards this end.
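
    A hedged sketch of the kind of classification workflow implied by the ROC AUC result is shown below, using a random forest on reflectance spectra. The placeholder arrays, model choice, and train/test split are assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch: predict drought stress from hyperspectral reflectance
# and report ROC AUC; the data here are random placeholders, not study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X: (n_plants, n_wavelengths) reflectance spectra; y: 1 = drought-stressed
X, y = np.random.rand(120, 300), np.random.randint(0, 2, 120)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.3f}")   # the study reports 0.864 with its own data and model
```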

    Dataset for Controllable factors affecting accuracy and precision of human identification of animals from drone imagery

    Dataset from the results of an experiment to determine how three controllable factors (flight altitude, camera angle, and time of day) affect human identification and counts of animals in drone images, to inform best practices for surveying animal communities with drones. We used a drone (unoccupied aircraft system, or UAS) to survey known numbers of eight animal decoy species, representing a range of body sizes and colors, at four ground sampling distance (GSD) values (0.35, 0.70, 1.06, and 1.41 cm/pixel) corresponding to equivalent flight altitudes (15.2, 30.5, 45.7, and 61.0 m), at two camera angles (45° and 90°), and across a range of times of day (morning to late afternoon). Expert human observers identified and counted animals in the drone images to determine how the three controllable factors affected accuracy and precision. Observer precision was high and unaffected by the tested factors. However, results for observer accuracy revealed an interaction among all three controllable factors. Increasing flight altitude decreased accuracy in animal counts overall; however, accuracy was best at midday compared to morning and afternoon hours, when decoy and structure shadows were present or more pronounced. Surprisingly, the 45° camera angle enhanced accuracy compared to 90°, but only when animals were most difficult to identify and count, such as at higher flight altitudes or during the early morning and late afternoon. We provide recommendations based on our results for designing future surveys to improve human accuracy in identifying and counting animals from drone images for monitoring animal populations and communities.
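
    The listed GSD values are consistent with the standard linear relation between ground sampling distance and flight altitude for a fixed camera (GSD scales proportionally with altitude). The short check below scales the 0.35 cm/pixel value at 15.2 m by altitude and compares it with the reported values; it uses only the numbers given above.

```python
# Sanity check of the linear GSD-altitude relation implied by the listed pairs:
# for a fixed camera, GSD ≈ (altitude / 15.2 m) * 0.35 cm/pixel.
altitudes_m = [15.2, 30.5, 45.7, 61.0]
reported_gsd_cm = [0.35, 0.70, 1.06, 1.41]

for alt, gsd in zip(altitudes_m, reported_gsd_cm):
    predicted = 0.35 * alt / 15.2
    print(f"{alt:5.1f} m -> predicted {predicted:.2f} cm/px (reported {gsd:.2f})")
```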

    Post-Logging Estimation of Loblolly Pine (Pinus taeda) Stump Size, Area and Population Using Imagery from a Small Unmanned Aerial System

    This study describes an unmanned aerial system (UAS) method for accurately estimating the number and diameters of harvested Loblolly Pine (Pinus taeda) stumps in a final-harvest (often referred to as clear-cut) situation. The methods are potentially useful for the initial detection of legal or illegal logging events, quantification of the harvested area, and estimation of the volume and value of removed pine timber. The study sites included three adjacent pine stands in East-Central Mississippi. Using image pattern recognition algorithms, the results show a counting accuracy of 77.3% and an RMSE of 4.3 cm for stump diameter estimation. The study also shows that the harvested area can be accurately estimated from the UAS-collected data. Our experiments show that the proposed UAS survey method has the potential for wide use as a monitoring or investigation tool in the forestry and land management industries.
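
    As an illustrative sketch only (the paper's pattern recognition algorithm is not described in this abstract), stump tops could be detected as circles in the orthomosaic and their pixel radii converted to diameters via the ground sampling distance. The Hough transform parameters and function name below are assumptions.

```python
# Illustrative sketch, not the paper's algorithm: detect circular stump tops
# in an orthomosaic and convert pixel radii to diameters using the GSD.
import cv2
import numpy as np

def detect_stumps(ortho_bgr, gsd_cm_per_px):
    gray = cv2.medianBlur(cv2.cvtColor(ortho_bgr, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
        param1=100, param2=30, minRadius=5, maxRadius=60)
    if circles is None:
        return []
    # Each detection is (x, y, radius) in pixels; diameter = 2 * radius * GSD
    return [(x, y, 2 * r * gsd_cm_per_px) for x, y, r in circles[0]]
```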

    Feature Extraction in 5G Wireless Systems: A Quantum Cat Swarm and Wavelet-Based Approach

    This paper presents a new method for extracting features from 5G signals using spectrograms and quantum cat swarm optimization (QCSO). The proposed approach uses a discrete wavelet transform (DWT)-based convolutional neural network (W-CNN) to enhance the extracted features and improve signal classification. The combination of QCSO and W-CNN is designed to improve signal recognition and reduce dimensionality. Our results demonstrate an improvement in 5G signal feature extraction performance with this novel approach; QCSO improves seven of the eight parameters studied when compared to five other state-of-the-art optimization methods.
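
    A minimal sketch of the wavelet stage alone is shown below, assuming a spectrogram followed by a 2-D DWT for dimension reduction. The QCSO optimizer and the W-CNN architecture from the paper are not shown, and the wavelet, decomposition level, and window length are illustrative assumptions.

```python
# Minimal sketch of the wavelet stage only: spectrogram of a signal, then a
# 2-D discrete wavelet transform to compress it into a feature vector.
import numpy as np
import pywt
from scipy import signal

def dwt_spectrogram_features(iq_samples, fs, wavelet="db4", level=2):
    """Spectrogram magnitude -> multilevel 2-D DWT -> flattened approximation
    coefficients as a reduced-dimension feature vector."""
    f, t, sxx = signal.spectrogram(iq_samples, fs=fs, nperseg=256)
    coeffs = pywt.wavedec2(np.log1p(sxx), wavelet=wavelet, level=level)
    approx = coeffs[0]                      # low-frequency approximation sub-band
    return approx.ravel()
```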