4 research outputs found

    DataSheet_1_Spectral Vegetation Indices to Track Senescence Dynamics in Diverse Wheat Germplasm.docx

    No full text
    The ability of a genotype to stay green affects the primary target traits grain yield (GY) and grain protein concentration (GPC) in wheat. High-throughput methods to assess senescence dynamics in large field trials will allow for (i) indirect selection in early breeding generations, when yield cannot yet be accurately determined, and (ii) mapping of the genomic regions controlling the trait. The aim of this study was to develop a robust method to assess senescence based on hyperspectral canopy reflectance. Measurements were taken in three years throughout the grain-filling phase on >300 winter wheat varieties in the spectral range from 350 to 2500 nm using a spectroradiometer. We compared the potential of spectral indices (SI) and full-spectrum models to infer visually observed senescence dynamics from repeated reflectance measurements. Parameters describing the dynamics of senescence were used to predict GY and GPC, and a feature selection algorithm was used to identify the most predictive features. The three-band plant senescence reflectance index (PSRI) approximated the visually observed senescence dynamics best, whereas full-spectrum models suffered from strong year specificity. Feature selection identified visual scorings as the most predictive features for GY; PSRI also ranked among the most predictive features, whereas adding further spectral features had little effect. Visually scored delayed senescence was positively correlated with GY, with correlations ranging from r = 0.173 in 2018 to r = 0.365 in 2016. It appears that visual scoring remains the gold standard to quantify leaf senescence in moderately large trials. However, using appropriate phenotyping platforms, the proposed index-based parameterization of the canopy reflectance dynamics offers the critical advantage of upscaling to very large breeding trials.
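
    For orientation, the plant senescence reflectance index mentioned above is commonly defined as PSRI = (R678 − R500) / R750; the exact band centers used in this study are not given in the abstract. A minimal Python sketch of computing PSRI from a spectroradiometer spectrum, with placeholder data, could look as follows:

```python
import numpy as np

def psri(reflectance: np.ndarray, wavelengths_nm: np.ndarray) -> float:
    """Plant senescence reflectance index, PSRI = (R678 - R500) / R750.

    Bands are taken as the spectrum samples closest to the nominal wavelengths;
    the study's exact band definition may differ.
    """
    def band(target_nm: float) -> float:
        return float(reflectance[np.argmin(np.abs(wavelengths_nm - target_nm))])

    return (band(678.0) - band(500.0)) / band(750.0)

# Placeholder spectrum spanning 350-2500 nm at 1 nm resolution
wavelengths = np.arange(350.0, 2501.0)
spectrum = np.random.default_rng(0).uniform(0.05, 0.5, wavelengths.size)
print(psri(spectrum, wavelengths))
```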

    Data_Sheet_1_Flower Mapping in Grasslands With Drones and Deep Learning.ZIP

    No full text
    Manual assessment of the flower abundance of different flowering plant species in grasslands is a time-consuming process. We present an automated approach to determine flower abundance in grasslands from drone-based aerial images using a deep learning (Faster R-CNN) object detection approach, which was trained and evaluated on data from five flights at two sites. Our deep learning network was able to identify and classify individual flowers. The method generated spatially explicit maps of flower abundance that met or exceeded the accuracy of the manual-count extrapolation method while being less labor-intensive. For some flower types the results were very good, with precision and recall close to or above 90%. Other flowers were detected poorly owing to insufficient training data, appearance changes during phenological development, or flowers being too small to be reliably distinguished in the aerial images. The method gave precise estimates of the abundance of many flowering plant species. Collecting more training data will allow better predictions for the flowers that are not yet well predicted. The developed pipeline can be applied to any kind of aerial object detection problem.
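
    As a rough illustration of the detection step (not the authors' exact pipeline, whose training details are not given here), a Faster R-CNN model from torchvision can be run on an aerial image tile; the file name, score threshold, and use of COCO-pretrained weights below are placeholder assumptions:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained Faster R-CNN; the study fine-tuned its own flower classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("drone_tile.jpg").convert("RGB")  # hypothetical image tile
with torch.no_grad():
    pred = model([to_tensor(image)])[0]

# Keep confident detections; counting boxes per class and tile, then mosaicking
# tiles over the site, yields a spatially explicit flower-abundance map.
keep = pred["scores"] > 0.5
print(f"{int(keep.sum())} detections above threshold")
```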

    Presentation_1_Outdoor Plant Segmentation With Deep Learning for High-Throughput Field Phenotyping on a Diverse Wheat Dataset.pdf

    No full text
    Robust and automated segmentation of leaves and other plant parts from the background is a core prerequisite of most approaches in high-throughput field phenotyping. So far, the potential of deep learning approaches for this purpose has not been explored adequately, partly due to a lack of publicly available, appropriate datasets. This study presents a workflow based on DeepLab v3+ and on a diverse annotated dataset of 190 RGB images (350 × 350 pixels). Images of winter wheat plants of 76 different genotypes and developmental stages were acquired at high resolution over multiple years in outdoor conditions using nadir view, encompassing a wide range of imaging conditions. Inconsistencies of human annotators in complex images were quantified, and metadata on camera settings was included. The proposed approach achieves an intersection over union (IoU) of 0.77 and 0.90 for plants and soil, respectively, outperforming the benchmarked machine learning methods based on a Support Vector Classifier and/or Random Forest. The results show that a small but carefully chosen and annotated set of images can provide a good basis for a powerful segmentation pipeline. Compared to earlier machine learning methods, the proposed method achieves better performance on the selected dataset despite the deep learning approach being trained with limited data. Increasing the amount of publicly available data with high human agreement on annotations, together with further development of deep neural network architectures, holds high potential for robust field-based plant segmentation in the near future. This, in turn, will be a cornerstone of data-driven improvement in crop breeding and agricultural practices of global benefit.
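
    The intersection over union (IoU) reported above is a per-class overlap measure between the predicted and the annotated mask. A small, self-contained Python sketch (the 0/1 label convention and the random masks below are only illustrative):

```python
import numpy as np

def class_iou(pred: np.ndarray, target: np.ndarray, class_id: int) -> float:
    """Intersection over union for one class in integer-labelled masks."""
    p, t = pred == class_id, target == class_id
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else float("nan")

# Illustrative labels: 0 = soil, 1 = plant, for 350 x 350 pixel masks
rng = np.random.default_rng(1)
pred_mask = rng.integers(0, 2, size=(350, 350))
true_mask = rng.integers(0, 2, size=(350, 350))
print("plant IoU:", class_iou(pred_mask, true_mask, 1))
print("soil IoU:", class_iou(pred_mask, true_mask, 0))
```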

    DataSheet_1_Assessment of Multi-Image Unmanned Aerial Vehicle Based High-Throughput Field Phenotyping of Canopy Temperature.pdf

    No full text
    Canopy temperature (CT) has been related to water use and yield formation in crops. However, environmental conditions that change both constantly (e.g., sun illumination angle, ambient temperature) and rapidly (e.g., clouds) make it difficult to compare measurements taken even at short time intervals. This poses a great challenge for high-throughput field phenotyping (HTFP). The aims of this study were to (i) set up a workflow for unmanned aerial vehicle (UAV) based HTFP of CT, (ii) investigate different data processing procedures to combine information from multiple images into orthomosaics, (iii) investigate the repeatability of the resulting CT in terms of heritability, and (iv) investigate the optimal timing for thermography measurements. Additionally, the approach was (v) compared with other methods for HTFP of CT. The study was carried out in a winter wheat field trial with 354 genotypes planted in two replications in a temperate climate, where a UAV captured CT in a time series of 24 flights during 6 weeks of the grain-filling phase. Custom-made thermal ground control points enabled accurate georeferencing of the data. The generated thermal orthomosaics had a high spatial resolution (mean ground sampling distance of 5.03 cm/pixel) and position accuracy [mean root-mean-square error (RMSE) = 4.79 cm] over all time points. An analysis of the impact of the measurement geometry revealed a gradient of apparent CT parallel to the principal plane of the sun and a hotspot around nadir. Averaging information from all available images (and all measurement geometries) for an area of interest provided the best results in terms of heritability. Corrections for spatial in-field heterogeneity as well as slight environmental changes during the measurements were performed with the R package SpATS. CT heritability ranged from 0.36 to 0.74, with the highest values found in the early afternoon. Since senescence was found to influence the results, it is recommended to measure CT in wheat after flowering and before the onset of senescence. Overall, low-altitude, high-resolution remote sensing proved suitable to assess the CT of crop genotypes in a large number of small field plots, as is required in crop breeding and variety testing experiments.
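
    The heritability values above were obtained after spatial correction with the R package SpATS; as a much simplified Python sketch (no spatial correction, balanced design assumed), broad-sense heritability on an entry-mean basis can be estimated from a one-way ANOVA of plot-level CT as H² = σ²g / (σ²g + σ²e / r), with σ²e = MS_error and σ²g = (MS_genotype − MS_error) / r for r replications. The column names and simulated data below are placeholders:

```python
import numpy as np
import pandas as pd

def broad_sense_heritability(df: pd.DataFrame, geno_col: str, value_col: str) -> float:
    """Entry-mean broad-sense heritability H^2 = s2_g / (s2_g + s2_e / r)
    from a balanced one-way ANOVA with r replications per genotype.
    Spatial/temporal corrections (e.g., SpATS in R) are omitted in this sketch."""
    groups = df.groupby(geno_col)[value_col]
    r = int(groups.size().iloc[0])            # replications per genotype (balanced)
    g = groups.ngroups
    grand_mean = df[value_col].mean()
    ms_geno = r * ((groups.mean() - grand_mean) ** 2).sum() / (g - 1)
    ss_error = ((df[value_col] - df[geno_col].map(groups.mean())) ** 2).sum()
    ms_error = ss_error / (g * (r - 1))
    s2_g = max((ms_geno - ms_error) / r, 0.0)
    return s2_g / (s2_g + ms_error / r)

# Hypothetical plot-level canopy temperatures: 354 genotypes x 2 replications
rng = np.random.default_rng(2)
genos = np.repeat([f"G{i:03d}" for i in range(354)], 2)
ct = rng.normal(25.0, 1.0, genos.size) + np.repeat(rng.normal(0.0, 0.8, 354), 2)
plots = pd.DataFrame({"genotype": genos, "CT": ct})
print("H2 =", round(broad_sense_heritability(plots, "genotype", "CT"), 2))
```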