
    On the Use of Imaging Spectroscopy from Unmanned Aerial Systems (UAS) to Model Yield and Assess Growth Stages of a Broadacre Crop

    Snap bean production was valued at $363 million in 2018. Moreover, the increasing need for food production, driven by exponential population growth, makes this crop vitally important to study. Traditionally, harvest time determination and yield prediction are performed by collecting a limited number of samples. While this approach can work, it is inaccurate, labor-intensive, and based on a small sample size. The ambiguous nature of this approach furthermore leaves the grower with under-ripe and over-mature plants, decreasing the final net profit and the overall quality of the product. A more cost-effective method would be a site-specific approach that saves time and labor for farmers and growers, while providing them with exact detail as to when, where, and how much to harvest (while forecasting yield). In this study we used hyperspectral (i.e., point-based and image-based) as well as biophysical data to identify spectral signatures and biophysical attributes that could schedule harvest and forecast yield prior to harvest.

    Over the past two decades, there have been immense advances in the field of yield and harvest modeling using remote sensing data. Nevertheless, there still exists a wide gap in the literature covering yield and harvest assessment as a function of time using both ground-based and unmanned aerial systems. There is a need for a study focusing on crop-specific yield and harvest assessment using a rapid, affordable system. We hypothesize that a down-sampled multispectral system, tuned with spectral features identified from hyperspectral data, could address the mentioned gaps. Moreover, we hypothesize that the airborne data will contain noise that could negatively impact the performance and reliability of the utilized models. We therefore address these knowledge gaps with three objectives:

    1. Assess yield prediction of the snap bean crop using spectral and biophysical data and identify discriminating spectral features via statistical and machine learning approaches.
    2. Evaluate snap bean harvest maturity at both the plant growth stage and pod maturity level, by means of spectral and biophysical indicators, and identify the corresponding discriminating spectral features.
    3. Assess the feasibility of using a deep learning architecture for reducing noise in the hyperspectral data.

    In light of these objectives, we carried out a greenhouse study in the winter and spring of 2019, in which we studied the temporal change in spectra and physical attributes of the snap bean crop (cultivar Huntington) using a handheld spectrometer in the visible-to-shortwave-infrared domain (400-2500 nm). Chapter 3 of this dissertation focuses on yield assessment of the greenhouse study. Findings from this best-case-scenario yield study showed that the optimal time to assess yield is approximately 20-25 days prior to harvest, which gave the most accurate yield predictions. The proposed approach was able to explain variability as high as R2 = 0.72, with spectral features residing in absorption regions for chlorophyll, protein, lignin, and nitrogen, among others. The captured data from this study contained minimal noise, even in the detector fall-off regions.

    Turning to harvest maturity assessment, Chapter 4 presents findings from this objective in the greenhouse environment. Our findings showed that four stages of maturity, namely vegetative growth, budding, flowering, and pod formation, are distinguishable via the two introduced vegetation indices, the snap-bean growth index (SGI) and the normalized difference snap-bean growth index (NDSI), with 79% and 78% accuracy, respectively.
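    The NDSI follows the standard normalized-difference form shared by many vegetation indices, (a − b) / (a + b). The actual band positions for SGI and NDSI are defined in the dissertation and are not given in this abstract, so the wavelengths below are placeholders; a minimal sketch:

    ```python
    import numpy as np

    def normalized_difference(band_a, band_b):
        """Generic normalized-difference index: (a - b) / (a + b)."""
        band_a = np.asarray(band_a, dtype=float)
        band_b = np.asarray(band_b, dtype=float)
        return (band_a - band_b) / (band_a + band_b)

    # Hypothetical reflectance values at two wavelengths; the real
    # SGI/NDSI band choices come from the dissertation, not from here.
    r_green = np.array([0.12, 0.15])
    r_red_edge = np.array([0.40, 0.35])
    ndsi_like = normalized_difference(r_red_edge, r_green)
    ```

    The normalized-difference form keeps the index bounded in [-1, 1] and partially cancels illumination differences between plots, which is why it recurs across vegetation indices.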
    Moreover, pod-level maturity classification showed that ready-to-harvest and not-ready-to-harvest pods can be separated with 78% accuracy, with identified wavelengths residing in the green, red-edge, and shortwave-infrared regions.

    Chapters 5 and 6 focus on transitioning the concepts learned in the greenhouse scenario to the UAS domain. We moved from a handheld spectrometer in the visible-to-shortwave-infrared domain (400-2500 nm) to a UAS-mounted hyperspectral imager in the visible-to-near-infrared (VNIR) region (400-1000 nm). Two years' worth of data, at two different geographical locations in upstate New York, were collected and examined for the yield modeling and harvest scheduling objectives. For analysis of the collected data, we introduced a feature selection library in Python, named "Jostar", to identify the most discriminating wavelengths. The findings from the UAS yield modeling study show that pod weight and seed length, as two different yield indicators, can be explained with R2 as high as 0.93 and 0.98, respectively. Identified wavelengths resided in the blue, green, red, and red-edge regions, and 44-55 days after planting (DAP) proved to be the optimal window for yield assessment. Chapter 6, on the other hand, evaluates maturity assessment, in terms of pod classification, from the UAS perspective. Results from this study showed that the identified features resided in the blue, green, red, and red-edge regions, contributing to an F1 score as high as 0.91 for differentiating ready-to-harvest from not-ready-to-harvest pods. The identified features from this study are in line with those detected in the UAS yield assessment study. To enable a parallel comparison of the greenhouse study against the UAS studies, we adopted the methodology employed for the UAS studies and applied it to the greenhouse studies in Chapter 7.
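    Jostar's actual interface is not shown in this abstract, so the following is only a conceptual stand-in for the wavelength-selection step it performs: a NumPy-only sketch that ranks bands by the absolute Pearson correlation of their reflectance with a yield indicator. All function and variable names here are hypothetical, not Jostar's API.

    ```python
    import numpy as np

    def rank_wavelengths(spectra, yield_values, wavelengths):
        """Rank spectral bands by |Pearson correlation| with a yield indicator.

        spectra:      (n_samples, n_bands) reflectance matrix
        yield_values: (n_samples,) e.g. pod weight per plot
        wavelengths:  (n_bands,) band centers in nm
        Returns (wavelength, correlation) pairs, strongest band first.
        """
        x = spectra - spectra.mean(axis=0)
        y = yield_values - yield_values.mean()
        corr = (x * y[:, None]).sum(axis=0) / (
            np.sqrt((x ** 2).sum(axis=0)) * np.sqrt((y ** 2).sum())
        )
        order = np.argsort(-np.abs(corr))
        return [(wavelengths[i], corr[i]) for i in order]

    # Toy example with three bands; real data would span 400-1000 nm.
    rng = np.random.default_rng(0)
    spectra = rng.random((20, 3))
    pod_weight = 2.0 * spectra[:, 1] + 0.1 * rng.random(20)  # band 1 drives yield
    ranking = rank_wavelengths(spectra, pod_weight, np.array([450, 550, 710]))
    ```

    A univariate filter like this is only the simplest member of the feature-selection family; wrapper and metaheuristic search methods evaluate band subsets jointly rather than one band at a time.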
    Since the greenhouse data were captured in the visible-to-shortwave-infrared (400-2500 nm) domain and the UAS data were captured in the VNIR (400-1000 nm) domain, we truncated the spectral range of the greenhouse data to the VNIR domain. The comparison between the greenhouse study and the UAS studies for yield assessment, at two harvest stages (early and late), showed that spectral features in the 450-470, 500-520, 650, and 700-730 nm regions recurred on the days with the highest coefficients of determination. Moreover, 46-48 DAP exhibited high coefficients of determination for yield prediction in five out of six data sets (two early stages, three data sets each). The harvest maturity comparison between the greenhouse study and the UAS data sets, in turn, showed that similar identified wavelengths reside in the ∼450, ∼530, ∼715, and ∼760 nm regions, with F1 scores of 0.78, 0.84, and 0.9 for the greenhouse, 2019 UAS, and 2020 UAS data, respectively.

    However, the noise in the data captured in the UAS study, along with the high computational cost of the classical mathematical approaches employed for denoising hyperspectral data, inspired us to improve the computational performance of hyperspectral denoising by assessing the feasibility of transferring the learned concepts to deep learning models. In Chapter 8, we approached hyperspectral denoising in the spectral domain (in a 1D fashion) for two types of noise: integrated noise and non-independent, non-identically distributed (non-i.i.d.) noise. We utilized Memory Networks, given their power in image denoising, introduced a new loss function, and benchmarked the model against several data sets and models. The proposed model, HypeMemNet, ranked first, by up to 40% in terms of signal-to-noise ratio (SNR), for resolving integrated noise, and first or second, by a small margin, for resolving non-i.i.d. noise.
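    The SNR margins quoted above can be made concrete with one common definition of SNR for denoising: signal power relative to residual error power, in decibels. This formulation is an assumption for illustration; the dissertation's exact metric is not given in this abstract.

    ```python
    import numpy as np

    def snr_db(clean, denoised):
        """SNR in dB: power of the clean signal over power of the residual.

        This is one common definition; the dissertation's exact metric
        may differ in detail.
        """
        clean = np.asarray(clean, dtype=float)
        residual = clean - np.asarray(denoised, dtype=float)
        return 10.0 * np.log10((clean ** 2).sum() / (residual ** 2).sum())

    # Toy 1D "spectrum": a denoiser that removes exactly half the noise
    # amplitude raises SNR by 10*log10(4), about 6 dB.
    rng = np.random.default_rng(1)
    spectrum = np.sin(np.linspace(0.0, 4.0, 200))
    noisy = spectrum + 0.1 * rng.standard_normal(200)
    half_noise = spectrum + 0.5 * (noisy - spectrum)
    gain = snr_db(spectrum, half_noise) - snr_db(spectrum, noisy)
    ```

    Because the metric is logarithmic, a "40% improvement in SNR" corresponds to a substantially larger reduction in residual noise power.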
    Our findings showed that a proper receptive field and a suitable number of filters are crucial for denoising integrated noise, while parameter size proved to be of the highest importance for non-i.i.d. noise. Results from the conducted studies provide a comprehensive understanding encompassing yield modeling, harvest scheduling, and hyperspectral denoising. Our findings bode well for transitioning from an expensive hyperspectral imager to a multispectral imager tuned with the identified bands, as well as for employing a rapid deep learning model for hyperspectral denoising.