
    3D machine vision system for robotic weeding and plant phenotyping

    The need for chemical-free food is increasing, and so is the demand for a larger supply to feed the growing global population. An autonomous weeding system should be capable of differentiating crop plants from weeds to avoid contaminating crops with herbicide or damaging them with mechanical tools. For the plant genetics industry, automated high-throughput phenotyping technology is critical to profiling seedlings at large scale to facilitate genomic research. This research applied 2D and 3D imaging techniques to develop an innovative crop plant recognition system and a 3D holographic plant phenotyping system. A 3D time-of-flight (ToF) camera was used to develop a crop plant recognition system for broccoli and soybean plants. The developed system overcame previously unsolved problems caused by occluded canopies and illumination variation. Both 2D and 3D features were extracted and utilized for the plant recognition task. Broccoli and soybean recognition algorithms were developed based on the characteristics of the plants. In field experiments, detection rates of over 88.3% and 91.2% were achieved for broccoli and soybean plants, respectively. The detection algorithm also ran at over 30 frames per second (fps), making it applicable to robotic weeding operations. Apart from applying 3D vision to plant recognition, a 3D reconstruction based phenotyping system was also developed for holographic 3D reconstruction and physical trait parameter estimation of corn plants. In this application, precise alignment of multiple 3D views is critical to the 3D reconstruction of a plant. Previously published research highlighted the need for high-throughput, high-accuracy, and low-cost 3D phenotyping systems capable of holographic plant reconstruction and characterization of plant morphology related traits. 
This research contributed to the realization of such a system by integrating a low-cost 2D camera, a low-cost 3D ToF camera, and a chessboard-pattern beacon array to track the 3D camera's position and attitude, thus accomplishing precise 3D point cloud registration from multiple views. Specifically, algorithms for beacon target detection, camera pose tracking, and spatial relationship calibration between the 2D and 3D cameras were developed. The phenotypic data obtained by this novel 3D reconstruction based phenotyping system were validated against instrument and manual measurements, showing that the system achieved measurement accuracy above 90% in most cases, with an average processing time of under five seconds per plant.
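The abstract does not give the registration algorithm itself. As a hedged illustration only, multi-view point cloud alignment of the kind described is commonly solved with a rigid Kabsch/SVD fit once corresponding 3D points (e.g. detected beacon corners) are known in both views; the function name and point sets below are hypothetical, not the authors' implementation:

```python
import numpy as np

def register_rigid(src, dst):
    """Estimate rotation R and translation t with dst ≈ src @ R.T + t,
    given corresponding 3D points (e.g. beacon corners seen in two views).
    Classic Kabsch/SVD rigid alignment."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applying the recovered (R, t) to each view's point cloud brings all views into a common frame, which is the registration step the abstract describes.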

    Quantifying soybean phenotypes using UAV imagery, machine learning, and deep learning methods

    Crop breeding programs aim to introduce new cultivars with improved traits to help solve the food crisis. Food production will need to grow at roughly twice its current rate to feed the increasing global population by 2050. Soybean is one of the major grains in the world, and the US alone contributes around 35 percent of world soybean production. To increase soybean production, breeders still rely on conventional breeding strategies, which are mainly a 'trial and error' process. These constraints limit the expected progress of crop breeding programs. Plant lodging and pubescence color are two of the most important phenotypes for soybean breeding programs; both are conventionally evaluated visually by breeders, which is time-consuming and subject to human error. The goal of this study was to investigate the potential of unmanned aerial vehicle (UAV)-based imagery with machine learning for assessing the lodging condition, and with deep learning for assessing the pubescence color, of soybean breeding lines. A UAV imaging system equipped with an RGB (red-green-blue) camera was used to collect imagery of 1,266 four-row plots in a soybean breeding field at the reproductive stage. Lodging and pubescence scores were visually assessed by experienced breeders. Lodging scores were grouped into four classes, i.e., non-lodging, moderate lodging, high lodging, and severe lodging, while pubescence color scores were grouped into three classes, i.e., gray, tawny, and segregation. UAV images were stitched to build orthomosaics, and soybean plots were segmented using a grid method. Twelve image features were extracted from the collected images to assess the lodging score of each breeding line. 
Four models, i.e., extreme gradient boosting (XGBoost), random forest (RF), K-nearest neighbor (KNN), and artificial neural network (ANN), were evaluated for classifying soybean lodging classes. Five data pre-processing methods were used to treat the imbalanced dataset and improve classification accuracy. Results indicate that the pre-processing method SMOTE-ENN (Synthetic Minority Over-sampling Technique combined with Edited Nearest Neighbours) consistently performs well for all four classifiers (XGBoost, RF, KNN, and ANN), achieving the highest overall accuracy (OA), the lowest misclassification rate, and the highest F1-score and Kappa coefficient. This suggests that SMOTE-ENN may be an excellent pre-processing method for classification tasks on imbalanced datasets. Furthermore, an overall accuracy of 96 percent was obtained using the SMOTE-ENN dataset and the ANN classifier. To classify soybean pubescence color, seven pre-trained deep learning models, i.e., DenseNet121, DenseNet169, DenseNet201, ResNet50, InceptionResNet-V2, Inception-V3, and EfficientNet, were used, and images of each plot were fed into the models. The data were augmented using two rotation and two scaling factors to enlarge the dataset. Among the seven pre-trained models, the ResNet50 and DenseNet121 classifiers showed the highest overall accuracy, 88 percent, along with higher precision, recall, and F1-score for all three classes of pubescence color. In conclusion, the developed UAV-based high-throughput phenotyping system can gather image features to estimate and classify crucial soybean phenotypes, which will help breeders assess phenotypic variation in breeding trials. RGB imagery-based classification could also be a cost-effective choice for breeders and associated researchers in identifying superior genotypes in plant breeding programs.
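The abstract names SMOTE-ENN without implementation detail. The sketch below illustrates the underlying idea (SMOTE interpolation followed by Edited Nearest Neighbours cleaning) in plain NumPy; it is a simplified stand-in for the `imbalanced-learn` implementation typically used in practice, and all function names and parameters here are illustrative:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """SMOTE: create n_new synthetic minority samples, each a random
    interpolation between a minority sample and one of its k nearest
    minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    nn = np.argsort(dists, axis=1)[:, 1:k + 1]   # skip self (column 0)
    new = np.empty((n_new, X_min.shape[1]))
    for m in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(nn.shape[1])]
        lam = rng.random()                       # interpolation factor in [0, 1)
        new[m] = X_min[i] + lam * (X_min[j] - X_min[i])
    return new

def enn_clean(X, y, k=3):
    """ENN: drop any sample whose k nearest neighbours' majority label
    disagrees with its own label, cleaning noisy class boundaries."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nn = np.argsort(dists, axis=1)[:, 1:k + 1]
    keep = np.array([np.bincount(y[nn[i]]).argmax() == y[i]
                     for i in range(len(X))])
    return X[keep], y[keep]
```

Running `smote` on the minority class and then `enn_clean` on the combined set yields a more balanced, cleaner training set before fitting any of the four classifiers.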

    Phenotypic and molecular characterization of root system architecture in diverse soybean (Glycine max L. Merr.) accessions

    Root system architecture (RSA), the spatial arrangement of the root system and its morphology, functions to anchor the plant, acquire water and nutrients, store nutrients, and facilitate plant-microbe interactions such as nodulation in legumes like soybean [Glycine max (L.) Merr.]. Root structure also correlates with environmental advantages such as nutrient acquisition, drought and flood tolerance, and lodging resistance. After centuries of indirect selection for RSA, there is now a focus on harnessing soybean RSA diversity for exploitation and implementation in cultivar development programs. Researchers have generally taken one of three strategies to root phenotyping: controlled laboratory, moderately controlled greenhouse, and minimally controlled field methods. In this study we developed a mobile, low-cost, and high-resolution root phenotyping system composed of an imaging platform with computer vision and machine learning (ML) based approaches to establish a seamless end-to-end pipeline. This system provides a high-throughput, cost-effective, non-destructive methodology that delivers biologically relevant time-series data on root growth and development for phenomics, genomics, and plant breeding applications. We customized a previous version of the Automatic Root Imaging Analysis (ARIA) root phenotyping software. New modifications to the workflow integrate time-series image capture with automated image processing that uses optical character recognition to identify barcodes, followed by segmentation using a convolutional neural network. The goal of this research was to study root trait genetic diversity in soybean using 292 soybean accessions from the USDA core collection, primarily in maturity groups II and III, and a subset of the soybean nested association mapping (NAM) parents. 
Combining 35,448 SNPs with a semi-automated phenotyping platform, these 292 accessions were studied for RSA traits to decipher their genetic diversity and to explore informative root (iRoot) categories based on the current literature on root shape. Genotype- and phenotype-based hierarchical clusters were identified in the diverse set, with significant correlations between them. Genotype-based clusters correlated with geographical origins, and genetic differentiation indicated that many US-origin genotypes possess little genetic diversity for RSA traits. Results show that superior root performance and root shape also correlate with specific genomic clusters. This combination of genetic and phenotypic analyses provides opportunities for targeted breeding efforts to maximize beneficial genetic diversity for future genetic gains. Further objectives of this study were to identify the genetic control of RSA within the diverse soybean landscape and to determine whether genomic prediction (GP) could be a viable strategy for breeding for root architecture traits. A genome-wide association study (GWAS) detected 30 SNPs that co-located with previously identified QTL for root traits and identified a number of root development gene candidates. The GP model is capable of predicting phenotypes from genomic data, allowing selection of individuals with root traits of interest within the core collection without using phenotypic data. Plant phenomics coupled with molecular technologies and statistical approaches can identify genotypes with favorable or unfavorable traits, allowing for inexpensive selection prior to field trial phenotyping. Employing these genomic and phenomic technologies will allow soybean breeders to vastly expand the scope of a breeding program.
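The abstract does not specify the GP model. A common baseline for predicting phenotypes from a SNP matrix is penalized (ridge) regression on marker effects, in the spirit of rrBLUP; the sketch below assumes that approach, and the function names, coding scheme, and data shapes are illustrative rather than the authors':

```python
import numpy as np

def fit_ridge_gp(M, y, lam=1.0):
    """Fit additive marker effects u for y ≈ mu + M @ u with an L2 penalty
    (ridge regression, an rrBLUP-style genomic prediction baseline).
    M: (n_lines, n_snps) genotype matrix, e.g. coded -1/0/1."""
    mu = y.mean()
    A = M.T @ M + lam * np.eye(M.shape[1])       # regularized normal equations
    u = np.linalg.solve(A, M.T @ (y - mu))
    return mu, u

def predict_gp(M_new, mu, u):
    """Predict phenotypes for unphenotyped lines from their SNP genotypes."""
    return mu + M_new @ u
```

Fitting on phenotyped lines and calling `predict_gp` on the remaining genotypes is what allows selection within the core collection without field phenotyping, as the abstract describes.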

    Computer vision and machine learning enabled soybean root phenotyping pipeline

    Background: Root system architecture (RSA) traits are of interest for breeding selection; however, measurement of these traits is difficult, resource-intensive, and subject to large variability. The advent of computer vision and machine learning (ML) enabled trait extraction and measurement has renewed interest in utilizing RSA traits for genetic enhancement to develop more robust and resilient crop cultivars. We developed a mobile, low-cost, and high-resolution root phenotyping system composed of an imaging platform with a computer vision and ML based segmentation approach to establish a seamless end-to-end pipeline, from obtaining large quantities of root samples through image-based trait processing and analysis. Results: This high-throughput phenotyping system, which has the capacity to handle hundreds to thousands of plants, integrates time-series image capture coupled with automated image processing that uses optical character recognition (OCR) to identify seedlings via barcode, followed by robust segmentation using a convolutional auto-encoder (CAE) prior to feature extraction. The pipeline includes an updated and customized version of the Automatic Root Imaging Analysis (ARIA) root phenotyping software. Using this system, we studied diverse soybean accessions from a wide geographical distribution and report genetic variability for RSA traits, including root shape, length, number, mass, and angle. Conclusions: This system provides a high-throughput, cost-effective, non-destructive methodology that delivers biologically relevant time-series data on root growth and development for phenomics, genomics, and plant breeding applications. The phenotyping platform is designed to quantify root traits and rank genotypes in a common environment, thereby serving as a selection tool for plant breeding. Root phenotyping platforms and image-based phenotyping are essential to mirror the current focus on shoot phenotyping in breeding efforts.

    Trait-Based Root Phenotyping as a Necessary Tool for Crop Selection and Improvement

    Most crop breeding effort has focused on the expression of aboveground traits, with the goals of increasing yield and disease resistance, decreasing height in grains, and improving nutritional qualities. The role of roots in supporting these goals has been largely ignored. With the increasing need to produce more food, feed, fiber, and fuel on less land and with fewer inputs, the next advance in plant breeding must include greater consideration of roots. Root traits are an untapped source of phenotypic variation that will prove essential for breeders working to increase yields and the provisioning of ecosystem services. Roots are dynamic: their structure and the composition of the metabolites they introduce to the rhizosphere change as the plant develops and in response to environmental, biotic, and edaphic factors. Assessment of the physical qualities of root system architecture will allow breeding for desired root placement in the soil profile, such as deeper roots in no-till production systems plagued by drought, or shallow root systems for accessing nutrients. Combining the assessment of physical characteristics with chemical traits, including enzyme and organic acid production, will provide a better understanding of the biogeochemical mechanisms by which roots acquire resources. Lastly, information on the structural and elemental composition of roots will help better predict root decomposition, their contribution to soil organic carbon pools, and the subsequent benefits provided to the following crop. Breeding can no longer continue with a narrow focus on aboveground traits, and breeding for belowground traits cannot focus only on root system architecture. Incorporating root biogeochemical traits into breeding will permit the creation of germplasm with the traits required to meet production needs in a variety of soil types and projected climate scenarios.

    On the Use of Unmanned Aerial Systems for Environmental Monitoring

    Environmental monitoring plays a central role in diagnosing climate and management impacts on natural and agricultural systems; enhancing the understanding of hydrological processes; optimizing the allocation and distribution of water resources; and assessing, forecasting, and even preventing natural disasters. Nowadays, most monitoring and data collection systems are based upon a combination of ground-based measurements, manned airborne sensors, and satellite observations. These data are utilized in describing both small- and large-scale processes, but have spatiotemporal constraints inherent to each respective collection system. Bridging the unique spatial and temporal divides that limit current monitoring platforms is key to improving our understanding of environmental systems. In this context, Unmanned Aerial Systems (UAS) have considerable potential to radically improve environmental monitoring. UAS-mounted sensors offer an extraordinary opportunity to bridge the existing gap between field observations and traditional air- and space-borne remote sensing, by providing high spatial detail over relatively large areas in a cost-effective way and an entirely new capacity for enhanced temporal retrieval. In addition to showcasing recent advances in the field, there is a need to identify and understand the potential limitations of UAS technology. For these platforms to reach their monitoring potential, a wide spectrum of unresolved issues and application-specific challenges require focused community attention. Indeed, to leverage the full potential of UAS-based approaches, sensing technologies, measurement protocols, postprocessing techniques, retrieval algorithms, and evaluation techniques need to be harmonized. 
The aim of this paper is to provide an overview of the existing research and applications of UAS in natural and agricultural ecosystem monitoring in order to identify future directions, applications, developments, and challenges.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of alimentation and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing results, among others.

    Vulnerability and Use of Ground and Surface Waters in the Southern Mississippi Valley Region

    There is concern in the Southern Mississippi River Valley of the United States over non-point source pollution of ground and surface waters resulting from activities associated with agricultural production. This agriculturally intensive region consists of two major land resource areas (MLRAs): the Southern Mississippi Valley Silty Uplands (MLRA 134) and the Southern Mississippi Valley Alluvium (MLRA 131). Both MLRAs have level to undulating and rolling topography, relatively fertile soils, and a climate particularly conducive to row crop production.