
    High-throughput phenotyping technology for corn ears

    The phenotype of an organism, in this case a plant, comprises traits or characteristics that can be measured using a technical procedure. Phenotyping is an important activity in plant breeding, since it gives breeders an observable representation of the plant’s genetic code, the genotype. The word phenotype originates from the Greek “phainein”, meaning “to show”, and “typos”, meaning “type”. Ideally, the development of phenotyping technologies would proceed in lockstep with genotyping technologies, but it does not; there currently exists a major discrepancy between the technological sophistication of genotyping and phenotyping, and the gap is widening. Whereas genotyping has become a high-throughput, low-cost, standardized procedure, phenotyping still involves ample manual measurements that are time-consuming, tedious, and error-prone. The project conducted here aims to alleviate this problem: to aid breeders, a method was devised that allows for high-throughput phenotyping of corn ears, based on an existing imaging arrangement that produces frontal views of the ears. This thesis describes the development of machine vision algorithms that measure overall ear parameters such as ear length, ear diameter, and cap percentage (the proportion of the ear that features kernels versus the barren area). The main image processing functions used here were segmentation, skewness correction, morphological operations, and image registration. To obtain a kernel count, an “ear map” was constructed using both a morphological operation and a feature-matching operation. The main challenge for the morphological operation was to accurately select only the kernel rows that are frontally exposed in each single image; this was addressed by developing a shadow-recognition algorithm. The main challenge for the feature-matching operation was to detect and match image feature points; this was addressed by applying the Harris corner detector and the SIFT descriptor. Once the ear map is created, many other morphological kernel parameters (area, location, circumference, to name a few) can be determined. Remaining challenges in this research are pointed out, including sample choice, apparatus modification, and algorithm improvement. Suggestions and recommendations for future work are also provided.
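The feature matching described above starts from Harris corner detection. As a rough illustration of the underlying idea (not the thesis's implementation), the Harris response can be computed from image gradients and the structure tensor; the synthetic test image, box filter, and `k` value below are illustrative choices:

```python
import numpy as np

def harris_response(img, k=0.05):
    # Image gradients via central differences
    iy, ix = np.gradient(img.astype(float))
    # Structure-tensor entries, smoothed with a simple 3x3 box filter
    def box3(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3)) / 9.0
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    # Harris corner response: det(M) - k * trace(M)^2
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2

# Tiny synthetic image: a bright square on a dark background
img = np.zeros((15, 15))
img[4:11, 4:11] = 1.0
r = harris_response(img)
# The response is positive at the square's corners, negative along
# its edges, and zero in flat regions.
```

In the full pipeline, such corner points would then be described with SIFT descriptors and matched across overlapping frontal views to stitch the ear map.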

    Convolutional Neural Net-Based Cassava Storage Root Counting Using Real and Synthetic Images

    © Copyright © 2019 Atanbori, Montoya-P, Selvaraj, French and Pridmore. Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. 
Our system first predicts the age group ('young' or 'old' roots; pertinent to our image capture regime) in a given image, and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy in predicting the age of storage roots, and 86% and 71% overall percentage agreement in counting 'old' and 'young' storage roots, respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
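The two-stage design described above (classify age group, then dispatch to a group-specific count model) can be sketched as follows. The predictors here are hypothetical stand-ins; the paper's system uses trained CNN models for both stages:

```python
# Stand-in for the CNN age classifier: returns 'young' or 'old'
# based on a toy image summary (illustrative threshold).
def predict_age_group(image):
    return "old" if image["mean_intensity"] > 0.5 else "young"

# One count model per age group (stand-ins for the trained
# image-to-count regressors).
COUNT_MODELS = {
    "young": lambda img: round(img["root_signal"] * 5),
    "old":   lambda img: round(img["root_signal"] * 9),
}

def count_storage_roots(image):
    # Stage 1: classify the image's age group.
    group = predict_age_group(image)
    # Stage 2: dispatch to the count model trained for that group.
    return group, COUNT_MODELS[group](image)

group, n = count_storage_roots({"mean_intensity": 0.7, "root_signal": 0.8})
# group -> "old", n -> 7
```

The dispatch structure is the point of interest: because each regressor only ever sees images from its own age group, each can be trained on a narrower, better-covered distribution, which is what the synthetic GAN images are used to fill out.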

    Computer vision and machine learning enabled soybean root phenotyping pipeline

    Background: Root system architecture (RSA) traits are of interest for breeding selection; however, measurement of these traits is difficult, resource-intensive, and results in large variability. The advent of computer vision and machine learning (ML) enabled trait extraction and measurement has renewed interest in utilizing RSA traits for genetic enhancement to develop more robust and resilient crop cultivars. We developed a mobile, low-cost, and high-resolution root phenotyping system composed of an imaging platform with a computer vision and ML based segmentation approach to establish a seamless end-to-end pipeline, from obtaining large quantities of root samples through image-based trait processing and analysis.
    Results: This high-throughput phenotyping system, which has the capacity to handle hundreds to thousands of plants, integrates time-series image capture coupled with automated image processing that uses optical character recognition (OCR) to identify seedlings via barcode, followed by robust segmentation integrating a convolutional auto-encoder (CAE) method prior to feature extraction. The pipeline includes an updated and customized version of the Automatic Root Imaging Analysis (ARIA) root phenotyping software. Using this system, we studied diverse soybean accessions from a wide geographical distribution and report genetic variability for RSA traits, including root shape, length, number, mass, and angle.
    Conclusions: This system provides a high-throughput, cost-effective, non-destructive methodology that delivers biologically relevant time-series data on root growth and development for phenomics, genomics, and plant breeding applications. This phenotyping platform is designed to quantify root traits and rank genotypes in a common environment, thereby serving as a selection tool for use in plant breeding. Root phenotyping platforms and image-based phenotyping are essential to mirror the current focus on shoot phenotyping in breeding efforts.
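The staged pipeline described above (barcode OCR to identify the seedling, CAE segmentation, then trait extraction) can be sketched as a simple orchestration. All function bodies are illustrative placeholders; the real system uses OCR, a trained convolutional auto-encoder, and the ARIA software for each stage:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    barcode: str
    image: list                 # placeholder for pixel data
    mask: list = None           # filled in by segmentation
    traits: dict = field(default_factory=dict)

def read_barcode(sample):
    # OCR step: identify the seedling from its barcode label.
    return sample.barcode.upper()

def segment_roots(sample):
    # CAE step (placeholder): threshold stands in for the learned
    # foreground/background segmentation.
    sample.mask = [px > 0.5 for px in sample.image]
    return sample

def extract_traits(sample):
    # ARIA-style trait extraction (placeholder trait).
    sample.traits["root_pixels"] = sum(sample.mask)
    return sample

def run_pipeline(sample):
    sample.barcode = read_barcode(sample)
    return extract_traits(segment_roots(sample))

s = run_pipeline(Sample(barcode="gt0042", image=[0.1, 0.9, 0.8, 0.2]))
```

Keeping the stages as separate functions mirrors the paper's end-to-end design: any stage (e.g. the segmentation model) can be swapped or retrained without touching identification or trait extraction.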

    SeedGerm: a cost‐effective phenotyping platform for automated seed imaging and machine‐learning based phenotypic analysis of crop seed germination

    Efficient seed germination and establishment are important traits for field and glasshouse crops. Large-scale germination experiments are laborious and prone to observer errors, leading to the necessity for automated methods. We experimented with five crop species, namely tomato, pepper, Brassica, barley, and maize, and developed an approach for large-scale germination scoring. Here, we present the SeedGerm system, which combines cost-effective hardware and open-source software for seed germination experiments, automated seed imaging, and machine-learning based phenotypic analysis. The software can process multiple image series simultaneously and produce reliable analysis of germination- and establishment-related traits, in both comma-separated values (CSV) and processed image (PNG) formats. In this article, we describe the hardware and software design in detail. We also demonstrate that SeedGerm could match specialists’ scoring of radicle emergence. Germination curves were produced based on seed-level germination timing and rates rather than a fitted curve. In particular, by scoring germination across a diverse panel of Brassica napus varieties, SeedGerm implicates a gene important in abscisic acid (ABA) signalling in seeds. We compared SeedGerm with existing methods and concluded that it could have wide utility in large-scale seed phenotyping and testing, for both research and routine seed technology applications.
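Building the germination curve directly from per-seed germination times, rather than fitting a parametric curve, amounts to a cumulative count at each imaging timepoint. A minimal sketch (times in hours; `None` marks a seed that never germinated):

```python
def germination_curve(times, timepoints):
    # times: per-seed germination time, or None for non-germinating seeds
    germinated = [t for t in times if t is not None]
    n = len(times)
    # Cumulative percentage of all seeds germinated by each timepoint
    return [100.0 * sum(1 for t in germinated if t <= tp) / n
            for tp in timepoints]

times = [24, 30, 30, 48, None]          # five seeds, one non-germinating
curve = germination_curve(times, [12, 24, 36, 48])
# curve -> [0.0, 20.0, 60.0, 80.0]
```

Because every point is an observed count, the curve faithfully reflects seed-level timing, including seeds that never germinate (the curve plateaus below 100%).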

    Fully-automated root image analysis (faRIA)

    High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbations on plant root morphology, development, and function. To efficiently analyse large amounts of structurally complex soil-root images, advanced methods for automated image segmentation are required. Due to the often unavoidable overlap between the intensities of foreground and background regions, simple thresholding methods are generally not suitable for the segmentation of root regions. Higher-level cognitive models such as convolutional neural networks (CNNs) provide capabilities for segmenting roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model, which relies on an extension of the U-Net architecture. The developed CNN framework was designed to efficiently segment root structures of different size, shape, and optical contrast using low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools (e.g., SegRoot, Dice coefficient 0.67), applying not only to NIR images but also to other imaging modalities and plant species, such as barley and Arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to efficiently analyse soil-root images in an automated manner (i.e., without manual interaction with data and/or parameter tuning), providing quantitative plant scientists with a powerful analytical tool. © 2021, The Author(s)
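The Dice coefficient used above to compare segmentation quality measures the overlap between a predicted mask and the ground-truth mask: Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch over flattened binary masks:

```python
def dice(pred, truth):
    # Intersection: pixels marked as root in both masks
    inter = sum(p and t for p, t in zip(pred, truth))
    # Total root pixels across both masks
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
score = dice(pred, truth)
# score -> 2*2 / (3+3) = 0.666...
```

A score of 1.0 means perfect overlap and 0.0 means none, so the gap between 0.87 (faRIA) and 0.67 (SegRoot) reported above is substantial: roughly a third of SegRoot's predicted root area does not align with the ground truth.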

    Deep machine learning provides state-of-the-art performance in image-based plant phenotyping

    Deep learning is an emerging field that promises unparalleled results on many data analysis problems. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping, and demonstrate state-of-the-art results for root and shoot feature identification and localisation. We predict a paradigm shift in image-based phenotyping thanks to deep learning approaches.