
    Data-Driven Artificial Intelligence for Calibration of Hyperspectral Big Data

    Near-earth hyperspectral big data present both huge opportunities and challenges for spurring developments in agriculture and high-throughput plant phenotyping and breeding. In this article, we present data-driven approaches to address the calibration challenges of utilizing near-earth hyperspectral data for agriculture. A data-driven, fully automated calibration workflow was developed that includes a suite of robust algorithms for radiometric calibration, bidirectional reflectance distribution function (BRDF) correction and reflectance normalization, soil and shadow masking, and image quality assessment. An empirical method that utilizes predetermined models between camera photon counts (digital numbers) and downwelling irradiance measurements for each spectral band was established to perform radiometric calibration. A kernel-driven semiempirical BRDF correction method based on the Ross Thick-Li Sparse (RTLS) model was used to normalize the data for both changes in solar elevation and sensor view-angle differences attributed to pixel location within the field of view. Following rigorous radiometric and BRDF corrections, novel rule-based methods were developed for automatic soil removal, a newly proposed approach was applied for image quality assessment, and shadow masking and plot-level feature extraction were carried out. Our results show that the automated calibration, processing, storage, and analysis pipeline developed in this work can effectively handle massive amounts of hyperspectral data and address the urgent challenges related to the production of sustainable bioenergy and food crops, targeting methods that accelerate plant breeding for improved yield and biomass traits.
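    To make the BRDF step concrete, the sketch below (our illustration, not the authors' code; function names and the kernel shape constants h/b = 2 and b/r = 1 are assumptions, with the RossThick and LiSparse-Reciprocal kernels in their standard semiempirical forms) fits per-band RTLS kernel weights by least squares and rescales each pixel to a common reference sun/view geometry, e.g. a nadir view at a fixed solar elevation:

        # Hedged sketch of kernel-driven RTLS BRDF normalization: fit per-band
        # kernel weights by least squares, then rescale each observation to a
        # common reference sun/view geometry. All angles are in radians.
        import numpy as np

        def ross_thick(sza, vza, raa):
            # RossThick volumetric scattering kernel
            cos_xi = np.cos(sza) * np.cos(vza) + np.sin(sza) * np.sin(vza) * np.cos(raa)
            xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
            return ((np.pi / 2 - xi) * cos_xi + np.sin(xi)) / (np.cos(sza) + np.cos(vza)) - np.pi / 4

        def li_sparse_r(sza, vza, raa, hb=2.0, br=1.0):
            # LiSparse-Reciprocal geometric-optical kernel
            szap, vzap = np.arctan(br * np.tan(sza)), np.arctan(br * np.tan(vza))
            cos_xi = np.cos(szap) * np.cos(vzap) + np.sin(szap) * np.sin(vzap) * np.cos(raa)
            sec_s, sec_v = 1.0 / np.cos(szap), 1.0 / np.cos(vzap)
            d2 = np.tan(szap)**2 + np.tan(vzap)**2 - 2 * np.tan(szap) * np.tan(vzap) * np.cos(raa)
            cos_t = hb * np.sqrt(d2 + (np.tan(szap) * np.tan(vzap) * np.sin(raa))**2) / (sec_s + sec_v)
            t = np.arccos(np.clip(cos_t, -1.0, 1.0))
            overlap = (t - np.sin(t) * np.cos(t)) * (sec_s + sec_v) / np.pi
            return overlap - sec_s - sec_v + 0.5 * (1 + cos_xi) * sec_s * sec_v

        def fit_rtls(refl, sza, vza, raa):
            # least-squares fit of (f_iso, f_vol, f_geo) for one spectral band
            A = np.column_stack([np.ones_like(sza),
                                 ross_thick(sza, vza, raa),
                                 li_sparse_r(sza, vza, raa)])
            coefs, *_ = np.linalg.lstsq(A, refl, rcond=None)
            return coefs

        def brdf_normalize(refl, sza, vza, raa, coefs, sza_ref, vza_ref=0.0, raa_ref=0.0):
            # scale observed reflectance to the reference geometry (e.g., nadir view)
            model = lambda s, v, r: (coefs[0] + coefs[1] * ross_thick(s, v, r)
                                     + coefs[2] * li_sparse_r(s, v, r))
            return refl * model(sza_ref, vza_ref, raa_ref) / model(sza, vza, raa)

    Because the kernels are linear in the weights, the fit reduces to ordinary least squares per band, which is consistent with a fully automated workflow.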

    Urban tree species classification using UAV-based multi-sensor data fusion and machine learning

    Urban tree species classification is a challenging task due to the spectral and spatial diversity within an urban environment. Unmanned aerial vehicle (UAV) platforms and small-sensor technology are rapidly evolving, presenting the opportunity for a comprehensive multi-sensor remote sensing approach to urban tree classification. The objective of this paper was to develop a multi-sensor data fusion technique for urban tree species classification with limited training samples. To that end, UAV-based multispectral, hyperspectral, LiDAR, and thermal infrared imagery was collected over an urban study area to test the classification of 96 individual trees from seven species using a data fusion approach. Two supervised machine learning classifiers, Random Forest (RF) and Support Vector Machine (SVM), were investigated for their capacity to incorporate high-dimensional and diverse datasets from multiple sensors. When using hyperspectral-derived spectral features with RF, the fusion of all features extracted from all sensor types (spectral, LiDAR, thermal) achieved the highest overall classification accuracy (OA) of 83.3% and kappa of 0.80. Although multispectral reflectance bands alone produced a significantly lower OA of 55.2%, compared to 70.2% with minimum noise fraction (MNF)-transformed hyperspectral reflectance bands, the full dataset combination (spectral, LiDAR, thermal) with multispectral-derived spectral features still achieved an OA of 81.3% and kappa of 0.77 using RF. Comparison of the features extracted from individual sensors for each species highlights each sensor's ability to identify distinguishing characteristics between species that aid classification. The results demonstrate the potential of a high-resolution multi-sensor data fusion approach for classifying individual trees by species in a complex urban environment under limited sampling requirements.
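    As a hedged sketch of the fusion-and-classify step (feature names, dimensionalities, and the synthetic labels below are hypothetical placeholders, not values from the paper), per-tree features from all sensors can be stacked into one matrix and evaluated with RF under cross-validation, reporting overall accuracy and Cohen's kappa:

        # Illustrative feature-level fusion + Random Forest classification.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_predict
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        rng = np.random.default_rng(0)
        n_trees, n_species = 96, 7
        spectral = rng.normal(size=(n_trees, 20))  # e.g., MNF-transformed hyperspectral features
        lidar = rng.normal(size=(n_trees, 6))      # e.g., crown height/shape metrics
        thermal = rng.normal(size=(n_trees, 2))    # e.g., mean/std canopy temperature
        X = np.hstack([spectral, lidar, thermal])  # simple feature-level fusion
        y = np.repeat(np.arange(n_species), 14)[:n_trees]  # placeholder species labels

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        pred = cross_val_predict(clf, X, y, cv=cv)
        print(f"OA: {accuracy_score(y, pred):.3f}  kappa: {cohen_kappa_score(y, pred):.3f}")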

    Urban Tree Species Classification Using a WorldView-2/3 and LiDAR Data Fusion Approach and Deep Learning

    Urban areas feature complex and heterogeneous land covers, which create challenges for tree species classification. The increased availability of high spatial resolution multispectral satellite imagery and LiDAR datasets, combined with the recent evolution of deep learning within remote sensing for object detection and scene classification, provides promising opportunities to map individual tree species with greater accuracy and resolution. However, knowledge gaps remain regarding the contribution of WorldView-3 SWIR bands, the very high-resolution panchromatic (PAN) band, and LiDAR data to detailed tree species mapping. Additionally, contemporary deep learning methods are hampered by a lack of training samples and the difficulty of preparing training data. The objective of this study was to examine the potential of a novel deep learning method, the Dense Convolutional Network (DenseNet), to identify dominant individual tree species in a complex urban environment within a fused image of WorldView-2 VNIR, WorldView-3 SWIR, and LiDAR datasets. DenseNet results were compared against two machine learning classifiers popular in remote sensing image analysis, Random Forest (RF) and Support Vector Machine (SVM). Our results demonstrated that: (1) utilizing a data fusion approach beginning with VNIR and adding SWIR, LiDAR, and PAN bands increased the overall accuracy of the DenseNet classifier from 75.9% to 76.8%, 81.1%, and 82.6%, respectively; (2) DenseNet significantly outperformed RF and SVM for the classification of eight dominant tree species, with an overall accuracy of 82.6% compared to 51.8% and 52.0% for the SVM and RF classifiers, respectively; (3) DenseNet maintained superior performance over the RF and SVM classifiers under restricted training sample quantities, a major limiting factor for deep learning techniques. Overall, the study reveals that DenseNet is more effective for urban tree species classification, as it outperforms the popular RF and SVM techniques when working with highly complex image scenes regardless of training sample size.
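    A minimal sketch of how a DenseNet can be adapted to such a fused, non-RGB input follows; the channel breakdown (8 WorldView-2 VNIR + 8 WorldView-3 SWIR + 1 LiDAR height + 1 PAN = 18 channels) and the patch size are our assumptions, not details from the study:

        # Adapting torchvision's DenseNet-121 to 18-channel fused patches
        # for 8-class individual tree species classification.
        import torch
        import torch.nn as nn
        from torchvision.models import densenet121

        num_channels, num_species = 18, 8
        model = densenet121(weights=None)  # train from scratch; RGB pretraining does not apply
        model.features.conv0 = nn.Conv2d(num_channels, 64, kernel_size=7,
                                         stride=2, padding=3, bias=False)
        model.classifier = nn.Linear(model.classifier.in_features, num_species)

        patches = torch.randn(4, num_channels, 64, 64)  # dummy batch of fused crown patches
        logits = model(patches)                         # shape: (4, 8)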

    Unmanned Aerial System (UAS)-Based Phenotyping of Soybean using Multi-sensor Data Fusion and Extreme Learning Machine

    Estimating crop biophysical and biochemical parameters with high accuracy at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral, and thermal sensors to rapid crop phenotyping. This is because (1) simultaneous collection of multi-sensor data from satellites is rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high spatial resolution RGB, multispectral, and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was generated, and a model to extract the vegetation fraction was developed. Spectral indices/features were then combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) for biochemical variable estimation, multispectral and thermal data fusion provided the best estimates of nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), while fusion of RGB color-based indices and multispectral data exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was obtained by fusing information from all three sensors, with an RMSE of 11.6%; (2) among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion, while multispectral and thermal data fusion was found to be best for biomass estimation; (3) for estimating the above-mentioned soybean traits from multi-sensor data fusion, ELR yielded promising results compared to PLSR and SVR in this study. This research indicates that fusing data from multiple low-cost sensors within a machine learning framework can provide relatively accurate estimates of plant traits and valuable insight for precision agriculture and plant stress assessment at high spatial resolution.
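    For illustration, a generic Extreme Learning Machine regressor (a sketch of the technique, not necessarily the exact ELR variant or hyperparameters used in the study) draws a fixed random hidden layer and solves a ridge-regularized least-squares readout:

        # Generic ELM regression: random nonlinear projection + linear readout.
        import numpy as np

        class ELMRegressor:
            def __init__(self, n_hidden=200, alpha=1e-3, seed=0):
                self.n_hidden, self.alpha = n_hidden, alpha
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                # random input weights and biases are drawn once and never trained
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)
                # ridge solution: beta = (H'H + alpha*I)^-1 H'y
                self.beta = np.linalg.solve(H.T @ H + self.alpha * np.eye(self.n_hidden), H.T @ y)
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        # toy usage: predict a trait (e.g., LAI) from hypothetical fused index features
        X = np.random.rand(120, 10)
        y = X @ np.random.rand(10) + 0.05 * np.random.rand(120)
        model = ELMRegressor().fit(X[:100], y[:100])
        rmse = np.sqrt(np.mean((model.predict(X[100:]) - y[100:]) ** 2))

    Since only the readout is solved in closed form, training is fast, which is one reason ELM-style regressors are attractive for repeated phenotyping campaigns.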

    Additional File 2:

    Proposed example of a peer-reviewed search strategy for measures of ICU capacity strain for OVID Medline (version 2, August 11, 2015). (DOCX 21 KB)