23 research outputs found

    Principal variable selection to explain grain yield variation in winter wheat from features extracted from UAV imagery

    Background: Automated phenotyping technologies are continually advancing the breeding process. However, collecting various secondary traits throughout the growing season and processing massive amounts of data still takes great effort and time. Selecting the minimum number of secondary traits with the maximum predictive power has the potential to reduce phenotyping effort. The objective of this study was to select the principal features extracted from UAV imagery, and the critical growth stages, that contributed most to explaining winter wheat grain yield. Five dates of multispectral images and seven dates of RGB images were collected by a UAV system during the spring growing season in 2018. Two classes of features (variables), totaling 172 variables, were extracted for each plot from the vegetation index and plant height maps, including pixel statistics and dynamic growth rates. A parametric algorithm, LASSO regression (the least absolute shrinkage and selection operator), and a non-parametric algorithm, random forest, were applied for variable selection. The regression coefficients estimated by LASSO and the permutation importance scores provided by random forest were used to determine the ten most important variables influencing grain yield for each algorithm. Results: Both selection algorithms assigned the highest importance scores to variables related to plant height around the grain filling stage. Some vegetation-index-related variables were also selected, mainly at early to mid growth stages and during senescence. Compared with yield prediction using all 172 variables derived from the measured phenotypes, prediction using the selected variables performed comparably or even better. We also noticed that the prediction accuracy on the adapted NE lines (r = 0.58–0.81) was higher than on the other lines (r = 0.21–0.59) included in this study, which had different genetic backgrounds.
Conclusions: With the ultra-high-resolution plot imagery obtained by UAS-based phenotyping, we are now able to derive richer features, such as the within-plot variation of plant height or vegetation indices rather than just an averaged value, that are potentially very useful for breeding purposes. However, a great many features or variables can be derived in this way. The promising results of this study suggest that a selected subset of those variables can achieve grain yield prediction accuracy comparable to the full set, while possibly allowing a better allocation of effort and resources in phenotypic data collection and processing.
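The permutation-importance scoring used above can be sketched in a few lines: shuffle one feature column at a time and measure how much the model's error grows. This is a minimal, model-agnostic illustration on toy data with a hand-fixed predictor, not the study's fitted random forest over 172 UAV-derived variables:

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Score each feature by the average increase in mean squared error
    when that feature's column is randomly shuffled across plots."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            permuted = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            deltas.append(mse(permuted) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Toy data: "yield" depends only on feature 0 (say, plant height at grain
# filling); feature 1 is pure noise, so its importance should be ~0.
X = [[float(h), random.Random(h).random()] for h in range(20)]
y = [2.0 * row[0] for row in X]
predict = lambda row: 2.0 * row[0]  # stands in for a fitted model
imp = permutation_importance(predict, X, y)
```

Because the stand-in model ignores feature 1, shuffling that column leaves the error unchanged, which is exactly how an unimportant variable is identified.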

    Yield prediction by machine learning from UAS‑based multi‑sensor data fusion in soybean

    Nowadays, automated phenotyping of plants is essential for precise and cost-effective improvement of the efficiency of crop genetics. In recent years, machine learning (ML) techniques have shown great success in the classification and modelling of crop parameters. In this research, we consider the capability of ML to perform grain yield prediction in soybean by combining data from different optical sensors via RF (Random Forest) and XGBoost (eXtreme Gradient Boosting). During the 2018 growing season, a panel of 382 soybean recombinant inbred lines was evaluated in a yield trial at the Agronomy Center for Research and Education (ACRE) in West Lafayette (Indiana, USA). Images were acquired by the Parrot Sequoia multispectral sensor and the S.O.D.A. compact digital camera on board a senseFly eBee UAS (Unmanned Aircraft System) at the R4 and early R5 growth stages. Next, a standard photogrammetric pipeline was carried out with SfM (Structure from Motion). The multispectral imagery served to analyse the spectral response of the soybean end-member in 2D. In addition, the RGB images were used to reconstruct the study area in 3D, evaluating the physiological growth dynamics per plot via height variations and crop volume estimations. As ground truth, destructive grain yield measurements were taken at the end of the growing season. Part of the special issue "Development of Analytical Tools for Drone-based Canopy Phenotyping in Crop Breeding" (American Institute of Food and Agriculture).
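At the feature level, the multi-sensor fusion amounts to deriving spectral descriptors (e.g., NDVI from the multispectral bands) and structural descriptors (height and volume from the SfM reconstruction) for each plot and assembling them into one feature vector for the regressor. A minimal sketch with hypothetical per-plot reflectance and height values, not data from the study:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def plot_features(nir_px, red_px, heights, cell_area):
    """Fuse spectral and structural descriptors for one plot: mean NDVI
    from the multispectral pixels, plus mean height and canopy volume
    (per-cell height times cell footprint) from the 3D reconstruction."""
    vals = [ndvi(n, r) for n, r in zip(nir_px, red_px)]
    return {
        "mean_ndvi": sum(vals) / len(vals),
        "mean_height": sum(heights) / len(heights),
        "volume": sum(h * cell_area for h in heights),
    }

# hypothetical reflectances and per-cell canopy heights for a single plot
f = plot_features([0.6, 0.5], [0.1, 0.1], [0.8, 1.0, 0.9], cell_area=0.01)
```

One such dictionary per plot, paired with the destructive yield measurement, forms a training row for RF or XGBoost.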

    Unmanned Aerial Vehicle Data Analysis For High-throughput Plant Phenotyping

    The continually growing population is placing unprecedented demands on worldwide crop yield and quality. Improving genomic selection in the breeding process is one essential aspect of solving this dilemma. Benefiting from advances in high-throughput genotyping, researchers have already gained a better understanding of genetic traits. However, given the comparatively lower efficiency of current phenotyping techniques, the significance of phenotypic traits has still not been fully exploited in genomic selection. Therefore, improving high-throughput plant phenotyping (HTPP) efficiency has become an urgent task for researchers. As one of the platforms used for collecting HTPP data, the unmanned aerial vehicle (UAV) allows high-quality data to be collected within a short time and with less labor. There are currently many options for customized UAV systems on the market; however, data analysis efficiency is still a limitation for the full implementation of HTPP. To this end, the focus of this program was the analysis of UAV-acquired data. The specific objectives were two-fold: one was to investigate statistical correlations between UAV-derived phenotypic traits and manually measured sorghum biomass, nitrogen, and chlorophyll content; the other was to conduct variable selection on the phenotypic parameters calculated from UAV-derived vegetation index (VI) and plant height maps, aiming to find the principal parameters that contribute most to explaining winter wheat grain yield. Correspondingly, two studies were carried out. Good correlations between UAV-derived VI/plant height and sorghum biomass/nitrogen/chlorophyll in the first study suggested that UAV-based HTPP has great potential for facilitating genetic improvement. For the second study, variable selection results from the single-year data showed that plant height related parameters, especially from later in the season, contributed more to explaining grain yield. Advisor: Yeyin Sh
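The first objective, correlating UAV-derived traits with ground measurements, reduces to computing a correlation coefficient per trait pair. A minimal Pearson r implementation on hypothetical per-plot values (the trait numbers below are illustrative, not from the thesis):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two per-plot trait series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical UAV-derived VI vs. measured biomass (kg/plot) per plot
vi = [0.61, 0.55, 0.70, 0.48, 0.66]
biomass = [5.2, 4.9, 6.1, 4.1, 5.8]
r = pearson_r(vi, biomass)
```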

    Above-ground biomass estimation of arable crops using UAV-based SfM photogrammetry

    This is an Accepted Manuscript of an article published by Taylor & Francis in Geocarto International on 3 Dec 2018, available online: http://www.tandfonline.com/10.1080/10106049.2018.1552322. Methods of estimating the total amount of above-ground biomass (AGB) in crop fields are generally based on laborious, random, and destructive in situ sampling. This study proposes a methodology for estimating herbaceous crop biomass using conventional optical cameras and structure from motion (SfM) photogrammetry. The proposed method is based on the determination of volumes from the difference between a digital terrain model (DTM) and a digital surface model (DSM) of the vegetative cover. A density factor was calibrated on a subset of destructive random samples to relate volume and biomass and efficiently quantify the total AGB. In all cases, RMSE Z values of less than 0.23 m were obtained for the DTM-DSM coupling. Biomass field data confirmed the goodness of fit of the yield-biomass estimation (R2 = 0.88 and 1.12 kg/ha), mainly in plots with uniform vegetation coverage. Furthermore, the method was demonstrated to be scalable to multiple platform types and sensors. This work was supported by the LIFE project "Operation CO2: Integrated Agroforestry Practices and Nature Conservation Against Climate Change - LIFE+ 11 ENV/ES/535" and by Xunta de Galicia under the grant "Financial aid for the consolidation and structure of competitive units of investigation in the universities of the University Galician System (2016-18)", Ref. ED431B 2016/030 and Ref. ED341D R2016/023.
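The core computation, canopy volume from the DTM-DSM difference and biomass via a calibrated density factor, can be sketched as follows. The cell values and density factor below are hypothetical, not figures from the paper:

```python
def crop_volume(dsm, dtm, cell_area):
    """Canopy volume (m^3): per-cell DSM minus DTM height difference,
    clipped at zero, times the cell footprint area (m^2)."""
    return sum(max(s - t, 0.0) * cell_area for s, t in zip(dsm, dtm))

def agb_estimate(dsm, dtm, cell_area, density_factor):
    """Above-ground biomass (kg): calibrated density factor (kg per m^3
    of canopy volume) times the SfM-derived crop volume."""
    return density_factor * crop_volume(dsm, dtm, cell_area)

# toy 4-cell plot with 0.25 m^2 cells; the density factor is hypothetical
dsm = [1.2, 1.0, 0.9, 1.1]   # canopy surface elevations (m)
dtm = [0.2, 0.2, 0.1, 0.2]   # bare-terrain elevations (m)
vol = crop_volume(dsm, dtm, cell_area=0.25)
agb = agb_estimate(dsm, dtm, cell_area=0.25, density_factor=2.0)
```

In practice the density factor is fitted per crop from the destructive calibration subset, which is what makes the non-destructive volume estimate convertible to biomass.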

    Quantifying soybean phenotypes using UAV imagery and machine learning, deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to help solve the food crisis. Food production needs to grow at twice its current rate to feed the increasing population by 2050. Soybean is one of the major grain crops in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on conventional breeding strategies, which are mainly a 'trial and error' process. These constraints limit the expected progress of crop breeding programs. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs; both are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. The goal of this study was to investigate the potential of unmanned aerial vehicle (UAV)-based imagery with machine learning for assessing lodging condition, and with deep learning for assessing pubescence color, in soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Soybean lodging scores and pubescence scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging; pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregating. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging score of each breeding line.
Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated for classifying the soybean lodging classes. Five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. The results indicate that the pre-processing method SMOTE-ENN consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), the lowest misclassification, and the highest F1-score and Kappa coefficient. This suggests that Synthetic Minority Over-sampling with Edited Nearest Neighbor (SMOTE-ENN) may be an excellent pre-processing method for classification tasks on imbalanced datasets. Furthermore, an overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset with the ANN classifier. To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, and images of each plot were fed into the models. The data were augmented using two rotational and two scaling factors to enlarge the dataset. Among the seven pre-trained deep learning models, the ResNet50 and DenseNet121 classifiers showed the highest overall accuracy of 88 percent, along with higher precision, recall, and F1-score for all three classes of pubescence color. In conclusion, the developed UAV-based high-throughput phenotyping system can gather image features to estimate crucial soybean phenotypes and classify them, which will help breeders assess phenotypic variation in breeding trials. RGB-imagery-based classification could also be a cost-effective choice for breeders and associated researchers in identifying superior genotypes in plant breeding programs.
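The SMOTE half of SMOTE-ENN can be sketched as follows: each synthetic minority sample is interpolated between a minority point and one of its nearest minority neighbors. This toy version omits the ENN cleaning step, which removes samples misclassified by their nearest neighbors, and is not the imbalanced-learn implementation used in practice:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: each synthetic sample is a random point on
    the segment between a minority sample and one of its k nearest
    minority neighbors (squared Euclidean distance)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation weight in [0, 1)
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic

# hypothetical feature vectors for a rare lodging class
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new = smote(minority, n_new=4)
```

Balancing the rare lodging classes this way is what lets the downstream classifiers reach the reported accuracy on otherwise under-represented categories.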

    Transferability of vegetation recovery models based on remote sensing across different fire regimes

    Aim: To evaluate the transferability between fire recurrence scenarios of post-fire vegetation cover models calibrated with satellite imagery data at different spatial resolutions within two Mediterranean pine forest sites affected by large wildfires in 2012. Location: The northwest and east of the Iberian Peninsula. Methods: In each study site, we defined three fire recurrence scenarios for a reference period of 35 years. We used image texture derived from the surface reflectance channels of WorldView-2 and Sentinel-2 (at spatial resolutions of 2 m × 2 m and 20 m × 20 m, respectively) as predictors of post-fire vegetation cover in Random Forest regression analyses. Percentage vegetation cover was sampled in two sets of field plots with a size roughly equivalent to the spatial resolution of the imagery. The plots were distributed following a stratified design according to the fire recurrence scenarios. Model transferability was assessed within each study site by iteratively applying the vegetation cover model developed for a given fire recurrence scenario to predict vegetation cover in the other scenarios. Results: For both wildfires, the highest model transferability between fire recurrence scenarios was achieved for those holding the most similar vegetation community composition regarding the balance of species abundance according to their plant-regenerative traits (root mean square error [RMSE] around or lower than 15%). Model transferability was greatly improved by fine-grained remote-sensing data. Conclusions: Fire recurrence is a major driver of community structure and composition, so the framework proposed in this study would allow land managers to reduce post-fire decision-making efforts when assessing vegetation recovery within large burned landscapes with fire regime variability.
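The transferability assessment amounts to applying the model fitted in one scenario to the field data of another and reporting the RMSE. A minimal sketch with hypothetical stand-in models and plot data, not the study's Random Forest regressions:

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def transfer_rmse(models, data):
    """Cross-scenario matrix: RMSE of applying the model calibrated in
    scenario i to the field plots (inputs xs, observed covers ys) of
    scenario j, for every (i, j) pair."""
    return {(i, j): rmse([m(x) for x in xs], ys)
            for i, m in models.items()
            for j, (xs, ys) in data.items()}

# hypothetical texture-to-cover predictors per fire recurrence scenario
models = {"low": lambda x: 2 * x, "high": lambda x: 2 * x + 5}
data = {"low": ([1, 2], [2, 4]), "high": ([1, 2], [7, 9])}
tr = transfer_rmse(models, data)
```

The diagonal entries are ordinary within-scenario fit; the off-diagonal entries are the transferability scores compared against the ~15% RMSE threshold reported above.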

    Decision tree forests and linear regression to study the effects of subsoiling and agricultural drains on corn plant height and water tables in a soil with reduced permeability

    Subsoiling operations that improve internal drainage and loosen horizons rendered practically impermeable by deep compaction should benefit soils of low permeability. Deep subsoiling performed perpendicular to the drains with a bulldozer could be more effective at temporarily improving the drainage of these soils than a conventional subsoiler hitched to a tractor and operated parallel to the drains. However, the land improvements previously made to the surface and internal drainage of these soils complicate the evaluation of these practices in an experimental design. The main objective of this project was to compare decision tree forests (random forests, RF) with multiple linear regression (MLR) for detecting the effects of subsoiling and of the subsurface and surface drainage systems on plant height and mean water table depth during the growing season. A subsoiling trial was established in fall 2014 in a naturally poorly drained Kamouraska silty clay, shaped into rounded beds and suffering from severe compaction. The trial compared a control without subsoiling to four subsoiling treatments: a subsoiler mounted on a bulldozer or on a tractor, operated parallel or perpendicular to the drains. Each treatment was replicated three times and randomly assigned within three blocks. In spring 2016, 198 wells were dug to a depth of 60 cm to record the water table depth under each treatment between June and July 2016. Photogrammetry was used to estimate corn plant height. Both RF and MLR detected the main factors affecting corn plant height and mean water table depth, namely the earlier improvements made to the internal and surface drainage of the soils.
The coefficients of determination obtained with RF (R2 ≥ 0.94) were, however, higher than those obtained with MLR (R2 ≥ 0.28). No subsoiling treatment significantly improved internal drainage or corn plant height relative to the control without subsoiling. RF also made it easier to visualize the non-linear relationships between the predicted variables and the other variables, notably position on the bed and distance to the subsurface drains, and ultimately to determine the optimal distance to the subsurface drains (4 m), the optimal distance to the dead furrow (> 8 m), and the critical mean water table depth (< 0.25 m). RF thus predicted corn plant height and mean water table depth with greater precision than MLR.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, to reduce biodiversity loss, and to preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering that are leading to sustainable agricultural practices. Earth observation data and in situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.