An Unsupervised Algorithm for Change Detection in Hyperspectral Remote Sensing Data Using Synthetically Fused Images and Derivative Spectral Profiles
Multitemporal hyperspectral remote sensing data have the potential to detect altered areas on the earth's surface. However, dissimilar radiometric and geometric properties between the multitemporal data, caused by differences in acquisition time or sensor position, must be resolved before hyperspectral imagery can be used to detect changes in natural and human-impacted areas. In addition, noise in the hyperspectral spectra decreases change-detection accuracy when general change-detection algorithms are applied to hyperspectral images. To address these problems, we present an unsupervised change-detection algorithm based on statistical analyses of spectral profiles, where the profiles are generated by a synthetic image fusion method for multitemporal hyperspectral images. The method minimizes the noise between spectra at identical locations, thereby increasing the change-detection rate and decreasing the false-alarm rate without reducing the dimensionality of the original hyperspectral data. In a quantitative comparison on an actual dataset acquired by airborne hyperspectral sensors, we demonstrate that the proposed method provides superior change-detection results relative to state-of-the-art unsupervised change-detection algorithms.
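As a rough illustration of the general idea (not the authors' exact fusion scheme or statistical analysis), the sketch below compares co-registered hyperspectral cubes pixel by pixel with a spectral-angle distance and flags changes with a simple mean-plus-k-sigma threshold; the metric, the threshold rule, and the synthetic data are all assumptions.

```python
import numpy as np

def spectral_angle(cube_t1, cube_t2):
    """Per-pixel spectral angle (radians) between two co-registered
    hyperspectral cubes of shape (rows, cols, bands)."""
    dot = np.sum(cube_t1 * cube_t2, axis=-1)
    norm = np.linalg.norm(cube_t1, axis=-1) * np.linalg.norm(cube_t2, axis=-1)
    return np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))

def change_map(cube_t1, cube_t2, k=2.0):
    """Flag pixels whose spectral angle exceeds mean + k * std
    (an illustrative unsupervised threshold, not the paper's exact rule)."""
    angle = spectral_angle(cube_t1, cube_t2)
    return angle > angle.mean() + k * angle.std()

# Example with random data standing in for two acquisition dates.
rng = np.random.default_rng(0)
t1 = rng.random((100, 100, 120))
t2 = t1 + rng.normal(scale=0.01, size=t1.shape)   # mostly unchanged scene
t2[40:60, 40:60] = rng.random((20, 20, 120))       # simulated changed patch
print(change_map(t1, t2).sum(), "pixels flagged as changed")
```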
Identification of Brush Species and Herbicide Effect Assessment in Southern Texas Using an Unoccupied Aerial System (UAS)
Cultivation and grazing since the mid-nineteenth century in Texas have caused dramatic changes in grassland vegetation. Among these changes is the encroachment of native and introduced brush species. The distribution and quantity of brush can affect livestock production and the water holding capacity of soil, while at the same time brush can improve carbon sequestration and enhance agritourism and real estate value. The accurate identification of brush species and their distribution over large land tracts are important in developing brush management plans, which may include herbicide application decisions. Near-real-time imaging and analysis of brush using an Unoccupied Aerial System (UAS) is a powerful tool for achieving such tasks. The use of multispectral imagery collected by a UAS to estimate the efficacy of herbicide treatment on noxious brush has not been evaluated previously, and there has been no previous comparison of band combinations and pixel- and object-based methods to determine the best methodology for discriminating and classifying noxious brush species with Random Forest (RF) classification. In this study, two rangelands in southern Texas with encroachment of huisache (Vachellia farnesiana [L.] Wight & Arn.) and honey mesquite (Prosopis glandulosa Torr. var. glandulosa) were studied. The two study sites were flown with an eBee X fixed-wing UAS to collect images with four bands (Green, Red, Red-Edge, and Near-infrared) and ground truth data points pre- and post-herbicide application to study the herbicide effect on brush. Post-herbicide data were collected one year after herbicide application. Pixel-based and object-based RF classifications were used to identify brush in orthomosaic images generated from the UAS images. The classifications had overall accuracies of 83–96%, and object-based classification outperformed pixel-based classification, reaching the highest overall accuracy of 96% at both sites. The UAS imagery was also useful for assessing herbicide efficacy by calculating canopy change after herbicide treatment; the effects of different herbicides and application rates on brush defoliation were measured by comparing canopy change across herbicide treatment zones. UAS-derived multispectral imagery can thus be used to identify brush species in rangelands and to aid in objectively assessing the herbicide effect on brush encroachment.
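A minimal sketch of pixel-based Random Forest classification on four-band (Green, Red, Red-Edge, NIR) UAS imagery is given below, using scikit-learn; the synthetic pixels, class labels, and hyperparameters are placeholders and do not reproduce the paper's training data or its object-based workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled pixels of a 4-band (Green, Red,
# Red-Edge, NIR) orthomosaic; labels 0 = grass, 1 = huisache, 2 = mesquite.
rng = np.random.default_rng(1)
pixels = rng.random((5000, 4))          # one row of band values per pixel
labels = rng.integers(0, 3, size=5000)  # placeholder ground-truth classes

X_train, X_test, y_train, y_test = train_test_split(
    pixels, labels, test_size=0.3, random_state=1)

rf = RandomForestClassifier(n_estimators=500, random_state=1)
rf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# To classify a full orthomosaic of shape (rows, cols, 4):
# prediction = rf.predict(ortho.reshape(-1, 4)).reshape(rows, cols)
```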
Performance Evaluation of Parallel Structure from Motion (SfM) Processing with Public Cloud Computing and an On-Premise Cluster System for UAS Images in Agriculture
Thanks to sensor developments, unmanned aircraft systems (UASs) are among the most promising modern technologies for collecting imagery datasets that can be used to develop agricultural applications. UAS imagery datasets can grow exponentially due to the ultrafine spatial and high temporal resolution capabilities of UAS sensors. One of the main obstacles to processing UAS data is the intensive computational resource requirement. Structure from motion (SfM) is the most popular algorithm for generating 3D point clouds, orthomosaic images, and digital elevation models (DEMs) in agricultural applications. Recently, the SfM algorithm has been implemented in parallel to process big UAS data faster for certain applications. This study evaluated the performance of parallel SfM processing on public cloud computing and on-premise cluster systems. UAS datasets collected over cropping fields were used for the performance evaluation. We used multiple computing nodes and centralized network storage with different network environments for the SfM workflow. In single-node processing, the instance with the most computing power in the cloud computing system performed approximately 20 and 35 percent faster than the most powerful machine in the on-premise cluster. The parallel processing results showed that the cloud-based system performed better in the speed-up and efficiency metrics for scalability, although the absolute processing time was faster on the on-premise cluster. The experimental results also showed that the public cloud computing system could be a good alternative computing environment for UAS data processing in agricultural applications.
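The speed-up and efficiency metrics referenced here are the standard parallel-scaling definitions; the short sketch below computes them from hypothetical single-node and multi-node wall times (the timings are invented for illustration, not results from the paper).

```python
def speedup(t_single, t_parallel):
    """Parallel speed-up: wall time on one node divided by wall time on p nodes."""
    return t_single / t_parallel

def efficiency(t_single, t_parallel, nodes):
    """Parallel efficiency: speed-up per node (1.0 means ideal linear scaling)."""
    return speedup(t_single, t_parallel) / nodes

# Hypothetical SfM wall times (hours) on 1, 2, 4 and 8 nodes.
timings = {1: 10.0, 2: 5.4, 4: 3.0, 8: 1.8}
for p, t in timings.items():
    print(f"{p} nodes: speed-up {speedup(timings[1], t):.2f}, "
          f"efficiency {efficiency(timings[1], t, p):.2f}")
```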
3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery
Sorghum is one of the most important crops worldwide. An accurate and efficient high-throughput phenotyping method for individual sorghum panicles is needed for assessing genetic diversity, variety selection, and yield estimation. High-resolution imagery acquired using an unmanned aerial vehicle (UAV) provides a high-density 3D point cloud with color information. In this study, we developed a method for detecting and characterizing individual sorghum panicles using a 3D point cloud derived from UAV images. The RGB color ratio was used to filter out non-panicle points and select potential panicle points. Individual sorghum panicles were detected using the concept of tree identification. Panicle length and width were determined from the potential panicle points. We proposed cylinder fitting and disk stacking to estimate individual panicle volumes, which are directly related to yield. The results showed that the correlation coefficients of average panicle length and width between the UAV-based and ground measurements were 0.61 and 0.83, respectively. The UAV-derived panicle length and diameter were more highly correlated with panicle weight than the ground measurements. Cylinder fitting and disk stacking yielded R2 values of 0.77 and 0.67 against the actual panicle weight, respectively. The experimental results showed that a 3D point cloud derived from UAV imagery can provide reliable and consistent individual sorghum panicle parameters that are highly correlated with ground measurements of panicle weight.
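A minimal sketch of the disk-stacking idea is shown below: slice a segmented panicle point cloud along its height axis, approximate each slice with a thin disk, and sum the disk volumes. The per-slice radius estimate and the synthetic cone-shaped cloud are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def disk_stacking_volume(points, n_slices=20):
    """Approximate the volume of one panicle point cloud (N x 3 array with
    x/y/z columns) by slicing it along the height axis and summing thin-disk
    volumes.  Each disk radius is the mean horizontal distance of the
    slice's points from their centroid (an illustrative choice)."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    thickness = edges[1] - edges[0]
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)]
        if len(sl) < 3:
            continue
        center = sl[:, :2].mean(axis=0)
        radius = np.linalg.norm(sl[:, :2] - center, axis=1).mean()
        volume += np.pi * radius ** 2 * thickness
    return volume

# Synthetic cone-shaped "panicle" as a quick sanity check.
rng = np.random.default_rng(2)
z = rng.random(5000)
r = (1.0 - z) * 0.05 * np.sqrt(rng.random(5000))
theta = rng.random(5000) * 2 * np.pi
cloud = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])
print(f"estimated volume: {disk_stacking_volume(cloud):.5f} m^3")
```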
Comparison of canopy shape and vegetation indices of citrus trees derived from UAV multispectral images for characterization of citrus greening disease
Citrus greening is a severe disease significantly affecting citrus production in the United States because the disease is not curable with currently available technologies. For this reason, monitoring citrus disease in orchards is critical so that infected trees can be eradicated and replaced before the disease spreads. In this study, the canopy shape and vegetation indices of infected and healthy orange trees were compared to better understand their significant characteristics using unmanned aerial vehicle (UAV)-based multispectral images. Individual citrus trees were identified using thresholding and morphological filtering. The UAV-based phenotypes of each tree, such as tree height, crown diameter, and canopy volume, were calculated and evaluated against the corresponding ground measurements. The vegetation indices of infected and healthy trees were also compared to investigate their spectral differences. The results showed that the correlation coefficients of tree height and crown diameter between the UAV-based and ground measurements were 0.7 and 0.8, respectively. The UAV-based canopy volume was also highly correlated with the ground measurements (R2 > 0.9). Four vegetation indices, namely the normalized difference vegetation index (NDVI), normalized difference RedEdge index (NDRE), modified soil adjusted vegetation index (MSAVI), and chlorophyll index (CI), were significantly higher in healthy trees than in diseased trees. The RedEdge-related vegetation indices showed greater capability for citrus disease monitoring. Additionally, the experimental results showed that the UAV-based flush ratio and canopy volume can be valuable indicators for differentiating trees with citrus greening disease.
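The four indices compared in the study have standard closed-form definitions; a small sketch computing them from per-band reflectance values follows. The red-edge form of the chlorophyll index is an assumption, since the abstract does not state which band combination was used.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndre(nir, rededge):
    """Normalized difference red-edge index."""
    return (nir - rededge) / (nir + rededge)

def msavi(nir, red):
    """Modified soil-adjusted vegetation index (standard closed form)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def ci_rededge(nir, rededge):
    """Chlorophyll index; the red-edge form is assumed here, the paper may
    use a different band combination."""
    return nir / rededge - 1

# Toy reflectance values for a single canopy pixel.
nir, red, rededge = 0.45, 0.08, 0.20
print(ndvi(nir, red), ndre(nir, rededge), msavi(nir, red), ci_rededge(nir, rededge))
```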
A comparative study of RGB and multispectral sensor-based cotton canopy cover modelling using multi-temporal UAS data
This paper presents a comparative study of multispectral and RGB (red, green, and blue) sensor-based cotton canopy cover modelling using multi-temporal unmanned aircraft system (UAS) imagery. Additionally, a canopy cover model using an RGB sensor is proposed that combines an RGB-based vegetation index with morphological closing. The field experiment was established in 2017 and 2018, and the whole study area was divided into approximately 1 x 1 m grids. Grid-wise percentage canopy cover was computed using both RGB and multispectral sensors over multiple flights during the growing season of the cotton crop. Initially, the normalized difference vegetation index (NDVI)-based canopy cover was estimated and used as a reference for comparison with the RGB-based canopy cover estimations. To test the maximum achievable performance of RGB-based canopy cover estimation, a pixel-wise classification method was implemented. Four RGB-based canopy cover estimation methods were then implemented using RGB images, namely Canopeo, the excessive greenness index, the modified red green vegetation index, and the red green blue vegetation index. The performance of RGB-based canopy cover estimation was evaluated against the NDVI-based canopy cover estimation. The multispectral sensor-based canopy cover model was more stable and estimated canopy cover more accurately, whereas the RGB-based model was unstable and failed to identify the canopy when cotton leaves changed color after canopy maturation. Applying a morphological closing operation after thresholding significantly improved the RGB-based canopy cover modelling. The red green blue vegetation index proved to be the most effective vegetation index for extracting canopy cover, with very low average root mean square errors (2.94% for the 2017 dataset and 2.82% for the 2018 dataset) relative to the multispectral sensor-based canopy cover estimation. The proposed canopy cover model provides an affordable alternative to multispectral sensors, which are more sensitive and expensive.
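A minimal sketch of the RGB canopy cover idea (vegetation-index thresholding followed by morphological closing) is given below using scipy. The excess-green formulation, the fixed threshold, and the structuring-element size are assumptions for illustration; the paper evaluates several RGB indices with its own thresholds.

```python
import numpy as np
from scipy import ndimage

def excess_green(rgb):
    """Excess greenness (2G - R - B) from a float RGB array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2 * g - r - b

def canopy_cover_percent(rgb, threshold=0.1, closing_size=5):
    """Percent canopy cover of an RGB grid cell: threshold a vegetation
    index, then apply a morphological closing to fill small gaps.  The
    threshold and structuring-element size are illustrative values."""
    mask = excess_green(rgb) > threshold
    mask = ndimage.binary_closing(mask, structure=np.ones((closing_size, closing_size)))
    return 100.0 * mask.mean()

# Synthetic 1 m x 1 m grid cell (200 x 200 pixels) with a green patch.
rng = np.random.default_rng(3)
cell = rng.random((200, 200, 3)) * 0.1          # bare soil background
cell[50:150, 50:150, 1] += 0.5                   # simulated canopy (greener)
print(f"canopy cover: {canopy_cover_percent(cell):.1f}%")
```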
Automated open cotton boll detection for yield estimation using unmanned aircraft vehicle (UAV) data
Unmanned aerial vehicle (UAV) images have great potential for various agricultural applications. In particular, UAV systems facilitate timely and precise data collection in agricultural fields at high spatial and temporal resolutions. In this study, we propose an automatic open cotton boll detection algorithm using ultra-fine spatial resolution UAV images. Seed points for a region growing algorithm were generated hierarchically with a random base for computational efficiency. Cotton boll candidates were determined based on the spatial features of each region growing segment. Spectral threshold values that automatically separate cotton bolls from other non-target objects were derived from the input images for adaptive application. Finally, a binary cotton boll classification was performed using the derived threshold values and morphological filters to reduce noise in the results. The open cotton boll classification results were validated using reference data and showed an accuracy higher than 88% across various evaluation measures. Moreover, the UAV-extracted cotton boll area and actual crop yield had a strong positive correlation (0.8). The proposed method leverages UAV characteristics such as high spatial resolution and accessibility by applying automatic and unsupervised procedures to images from a single date. Additionally, this study verified the extraction of target regions of interest from UAV images for direct yield estimation. Cotton yield estimation models had R2 values between 0.63 and 0.65 and RMSE values between 0.47 kg and 0.66 kg per plot grid.
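The sketch below illustrates a simplified core of such a pipeline: detecting bright open-boll pixels with a brightness threshold, cleaning the mask with morphological filtering, and reporting the boll area fraction per plot. The global threshold stands in for the paper's adaptive, region-growing-based thresholds, and the synthetic image is only a placeholder.

```python
import numpy as np
from scipy import ndimage

def boll_area_fraction(rgb, brightness_threshold=0.8, min_pixels=10):
    """Fraction of a plot image covered by open cotton bolls, detected as
    bright near-white pixels.  The fixed brightness threshold stands in for
    the paper's adaptive, region-growing-based thresholds."""
    brightness = rgb.mean(axis=-1)
    mask = brightness > brightness_threshold
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # drop speckle
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.arange(1, n + 1)[sizes >= min_pixels])
    return keep.mean()

# Synthetic plot image: dark canopy with a few bright boll clusters.
rng = np.random.default_rng(4)
plot = rng.random((300, 300, 3)) * 0.4
plot[100:110, 100:110] = 0.95
plot[200:206, 50:56] = 0.95
print(f"boll area fraction: {boll_area_fraction(plot):.4f}")
```

The resulting per-plot boll area could then be regressed against measured plot yield to build a yield estimation model, in the spirit of the R2 values reported above.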
Automatic Extraction of Optimal Endmembers from Airborne Hyperspectral Imagery Using Iterative Error Analysis (IEA) and Spectral Discrimination Measurements
Pure surface materials, denoted by endmembers, play an important role in hyperspectral processing in various fields. Many endmember extraction algorithms (EEAs) have been proposed to find appropriate endmember sets. Most studies involving the automatic extraction of appropriate endmembers without a priori information have focused on N-FINDR. Although there are many different versions of the N-FINDR algorithm, computational complexity issues remain, and these algorithms cannot handle the case where spectrally mixed materials are extracted as final endmembers. A sequential endmember extraction-based algorithm may be more effective when the number of endmembers to be extracted is unknown. In this study, we propose a simple but accurate method to automatically determine the optimal endmembers using such an approach. The proposed method consists of three steps for determining the proper number of endmembers and for removing endmembers that are repeated or contain mixed signatures, using the Root Mean Square Error (RMSE) images obtained from Iterative Error Analysis (IEA) and spectral discrimination measurements. A synthetic hyperspectral image and two different airborne datasets, Airborne Imaging Spectrometer for Application (AISA) and Compact Airborne Spectrographic Imager (CASI) data, were tested using the proposed method, and our experimental results indicate that the final endmember set contained all of the distinct signatures without redundant endmembers or errors from mixed materials.
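A greedy, IEA-style sketch is shown below: starting from the image mean, the pixel with the largest reconstruction RMSE under nonnegative linear unmixing is added as the next endmember, and a spectral-angle check discards near-duplicate candidates. The stopping rules and thresholds are illustrative assumptions, not the three-step procedure proposed in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def rmse_map(cube2d, endmembers):
    """Per-pixel reconstruction RMSE from nonnegative linear unmixing.
    cube2d: (n_pixels, n_bands); endmembers: (n_end, n_bands)."""
    errors = np.empty(cube2d.shape[0])
    for i, pixel in enumerate(cube2d):
        abund, _ = nnls(endmembers.T, pixel)
        errors[i] = np.sqrt(np.mean((pixel - endmembers.T @ abund) ** 2))
    return errors

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra."""
    return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1, 1))

def iea(cube2d, max_endmembers=5, min_angle=0.05):
    """Greedy IEA-style extraction: repeatedly add the pixel with the largest
    RMSE, skipping candidates too similar (by spectral angle) to endmembers
    already selected.  Stopping rules here are illustrative."""
    endmembers = [cube2d.mean(axis=0)]
    while len(endmembers) < max_endmembers + 1:
        candidate = cube2d[np.argmax(rmse_map(cube2d, np.array(endmembers)))]
        if all(spectral_angle(candidate, e) > min_angle for e in endmembers[1:]):
            endmembers.append(candidate)
        else:
            break
    return np.array(endmembers[1:])   # drop the initial mean spectrum

# Tiny synthetic cube: mixtures of three random signatures.
rng = np.random.default_rng(5)
signatures = rng.random((3, 50))
weights = rng.dirichlet(np.ones(3), size=500)
cube = weights @ signatures
print(iea(cube, max_endmembers=3).shape)
```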
Performance Evaluation of Parallel Structure from Motion (SfM) Processing with Public Cloud Computing and an On-Premise Cluster System for UAS Images in Agriculture
Thanks to sensor developments, unmanned aircraft systems (UASs) are now among the most promising modern technologies used to collect imagery datasets that can be utilized to develop agricultural applications. These datasets can grow exponentially due to the ultrafine spatial and high temporal resolution capabilities of UAS data. One of the main obstacles to processing UAS data is the intensive computational resource requirement. Structure from motion (SfM) is the most popular algorithm used to generate 3D point clouds, orthomosaic images and digital elevation models (DEMs) in agricultural applications. Recently, the SfM algorithm has been implemented in parallel to process big UAS data more quickly for certain applications. This study evaluated the performance of parallel SfM processing on public cloud computing and on-premise cluster systems. UAS datasets collected over cropping fields were used for the evaluation. We used multiple computing nodes and centralized network storage with different network environments for the SfM workflow. In single-node processing, the instance with the most computing power in the cloud computing system performed approximately 20 and 35 percent faster than the most powerful machine in the on-premise cluster. The parallel processing results showed that the cloud-based system scaled better in terms of speed-up and efficiency metrics, although the absolute processing time was faster on the on-premise cluster. The experimental results also showed that the public cloud computing system might be a good alternative computing environment for UAS data processing in agricultural applications.