
    High-Throughput System for the Early Quantification of Major Architectural Traits in Olive Breeding Trials Using UAV Images and OBIA Techniques

    The need for olive farm modernization has encouraged research into more efficient crop management strategies through cross-breeding programs that release new olive cultivars more suitable for mechanization and intensive orchards, with high-quality production and resistance to biotic and abiotic stresses. The advancement of breeding programs is hampered by the lack of efficient phenotyping methods to quickly and accurately acquire crop traits such as morphological attributes (tree vigor and vegetative growth habits), which are key to identifying desirable genotypes as early as possible. In this context, a UAV-based high-throughput system for olive breeding programs was developed to extract tree traits in large-scale phenotyping studies under field conditions. The system consisted of UAV flight configurations, in terms of flight altitude and image overlap, and a novel, automatic, and accurate object-based image analysis (OBIA) algorithm based on point clouds, which was evaluated in two experimental trials within a table olive breeding program with the aim of determining the earliest date at which tree architectural traits can be suitably quantified. Two training systems (intensive and hedgerow) were evaluated at two very early stages of tree growth: 15 and 27 months after planting. Digital Terrain Models (DTMs) were automatically and accurately generated by the algorithm, and every olive tree was identified, independently of the training system and tree age. The architectural traits, especially tree height and crown area, were estimated with high accuracy in the second flight campaign, i.e., 27 months after planting. Differences in the quality of 3D crown reconstruction were found for the growth patterns derived from each training system. These key phenotyping traits could be used in several olive breeding programs, as well as to address agronomic goals. In addition, the system is cost- and time-optimized, so that the requested architectural traits can be delivered on the same day as the UAV flights. This high-throughput system may resolve the current bottleneck of plant phenotyping, "linking genotype and phenotype," considered a major challenge for crop research in the 21st century, and bring forward the crucial time of decision making for breeders.
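    The per-tree trait extraction described above reduces, at its core, to differencing surface and terrain models and aggregating per tree. The sketch below illustrates that idea only; the array names, the 5 cm ground sampling distance, and the labeled tree mask are illustrative assumptions, not the paper's actual OBIA implementation.

```python
# Minimal sketch: derive tree height and crown area from a Digital Surface
# Model (DSM), a Digital Terrain Model (DTM), and a label image assigning a
# tree ID to each pixel. All inputs are assumed, co-registered numpy rasters.
import numpy as np

def tree_traits(dsm, dtm, labels, gsd_m=0.05):
    """Return {tree_id: (height_m, crown_area_m2)} from height rasters."""
    chm = dsm - dtm                      # canopy height model: vegetation above ground
    traits = {}
    for tree_id in np.unique(labels):
        if tree_id == 0:                 # 0 = background / bare soil
            continue
        mask = labels == tree_id
        height = float(chm[mask].max())          # tallest point of the crown
        area = float(mask.sum()) * gsd_m ** 2    # pixel count -> square metres
        traits[tree_id] = (height, area)
    return traits
```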

    Performances Evaluation of a Low-Cost Platform for High-Resolution Plant Phenotyping

    This study aims to test the performance of a low-cost, automatic phenotyping platform consisting of a commercial Red-Green-Blue (RGB) camera scanning objects on rotating plates, with the main plant phenotypic traits reconstructed via the structure-from-motion (SfM) approach. The precision of this platform was tested on three-dimensional (3D) models generated from images of potted maize, tomato, and olive trees, acquired at different frequencies (steps of 4°, 8° and 12°) and qualities (4.88, 6.52 and 9.77 µm/pixel). Plant and organ heights, angles, and areas were extracted from the 3D models generated for each combination of these factors. The coefficient of determination (R2), relative root mean square error (rRMSE), and Akaike Information Criterion (AIC) were used as goodness-of-fit indices to compare the simulated to the observed data. The results indicated that while the best performance in reproducing plant traits was obtained using 90 images at 4.88 µm/pixel (R2 = 0.81, rRMSE = 9.49% and AIC = 35.78), this corresponded to an unviable processing time (from 2.46 h for herbaceous plants to 28.25 h for olive trees). Conversely, 30 images at 4.88 µm/pixel provided a good compromise between a reliable reconstruction of the considered traits (R2 = 0.72, rRMSE = 11.92% and AIC = 42.59) and processing time (from 0.50 h for herbaceous plants to 2.05 h for olive trees). In any case, the results pointed out that this input combination may vary with the trait under analysis, which can be more or less demanding in terms of input images and time according to the complexity of its shape (R2 = 0.83, rRMSE = 10.15% and AIC = 38.78). These findings highlight the reliability of the developed low-cost platform for plant phenotyping and indicate the best combination of factors to speed up acquisition and processing while minimizing the bias between observed and simulated data.
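    A small worked example of the goodness-of-fit indices used above (R2, rRMSE, AIC) for comparing manually measured traits against traits extracted from the 3D models. The AIC form n*ln(RSS/n) + 2k is a common choice for least-squares comparisons; the paper's exact formulation and the sample values below are assumptions for illustration only.

```python
# Compute R2, relative RMSE (%) and AIC between observed and simulated traits.
import numpy as np

def fit_indices(observed, simulated, k=1):
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    n = obs.size
    rss = float(np.sum((obs - sim) ** 2))          # residual sum of squares
    r2 = 1.0 - rss / float(np.sum((obs - obs.mean()) ** 2))
    rrmse = 100.0 * np.sqrt(rss / n) / obs.mean()  # RMSE relative to mean, %
    aic = n * np.log(rss / n) + 2 * k              # lower = better trade-off
    return r2, rrmse, aic

# Hypothetical plant heights (cm): measured by hand vs. estimated from a model.
r2, rrmse, aic = fit_indices([52.0, 61.5, 48.2, 70.1], [50.8, 63.0, 47.5, 68.9])
```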

    Approaches to three-dimensional reconstruction of plant shoot topology and geometry

    There are currently 805 million people classified as chronically undernourished, and yet the world's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, destroying crops and reducing the amount of land available for agriculture. Recent studies show that without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to increased crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving an increasing amount of attention, particularly from computer vision researchers, and numerous vision approaches have been proposed, it still remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing the accuracy and repeatability of measurements.

    Development of an Automatic Maize Seedling Phenotyping Platform Using 3D Vision and Industrial Robot Arm

    Crop breeding plays an important role in modern agriculture, improving plant adaptability and increasing yield. Optimizing genes is the key step in discovering the genetic traits beneficial to crop production. Associating genes with their functions requires extensive observation and measurement of phenotypes, a tedious and error-prone job for humans. Automatic seedling phenotyping systems aim to replace manual measurement, reduce sampling time, and increase the allowable working time. In this research, we developed an automated maize seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. The ToF camera is mounted on the end-effector of the robot arm, which moves the camera to different viewpoints for acquiring 3D data. The camera-to-arm transformation matrix is calculated by hand-eye calibration and applied to transform each viewpoint into the arm base coordinate frame. Filters remove the background and noise in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method are used to segment the stem and leaves. Finally, the separated leaves are fitted with 3D curves for parameter measurement. In a test experiment, 60 maize plants at early growth stages (V2–V5) were sampled by this platform.
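    The viewpoint-merging step above amounts to chaining two rigid transforms: each ToF cloud captured in the camera frame is mapped into the arm base frame via the end-effector pose and the hand-eye (camera-to-end-effector) matrix. A minimal sketch under that assumption; the 4x4 matrices and cloud arrays are illustrative, not the platform's actual code.

```python
# Map point clouds from the ToF camera frame into the robot arm base frame
# and merge several viewpoints into a single cloud.
import numpy as np

def to_base_frame(points_cam, T_base_ee, T_ee_cam):
    """points_cam: (N, 3) cloud in camera frame -> (N, 3) in arm base frame."""
    T_base_cam = T_base_ee @ T_ee_cam              # chain the two rigid transforms
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homog @ T_base_cam.T)[:, :3]           # apply 4x4 transform row-wise

def merge_views(clouds, ee_poses, T_ee_cam):
    """Merge clouds from several viewpoints, given the arm pose at each shot."""
    return np.vstack([to_base_frame(c, T, T_ee_cam)
                      for c, T in zip(clouds, ee_poses)])
```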

    Development of a robotic platform for maize functional genomics research

    The food supply requirements of a growing global population lead to increasing demand for agricultural crops. Without enlarging the current cultivated area, the only way to meet increasing food demand is to improve the yield per acre. Production practices, fertilization, and choosing productive crops are feasible approaches. Picking beneficial genotypes is a genetic optimization problem, so a biological tool is needed to study the function of crop genes, particularly to identify genes important for agronomic traits. Virus-induced gene silencing (VIGS) can be used as such a tool by knocking down the expression of genes to test their functions. The use of VIGS and other functional genomics approaches in corn plants has increased the need to rapidly associate genes with traits. A significant amount of observation, comparison, and data analysis is required for such corn genetic studies. An autonomous maize functional genomics system with the capacity to collect data, measure parameters, and identify virus-infected plants was therefore needed. This research project established a system combining sensors with customized algorithms that can distinguish virus-infected plants and measure parameters of maize plants. An industrial robot arm was used to collect data from multiple views with 3D sensors. Hand-eye calibration between a 2D color camera and the robot arm was performed to transform camera coordinates into arm-based coordinates. TCP socket-based software written in Visual C++ was developed on both the robot arm side and the PC side to perform bidirectional real-time communication. A 3D time-of-flight (ToF) camera was used to reconstruct the corn plant model. The point clouds of corn plants from different views were merged into one representation through a homogeneous transformation matrix. A pass-through filter and a statistical outlier removal filter from the Point Cloud Library were used to remove background and random noise. An algorithm for leaf and stem segmentation based on the morphological characteristics of corn plants was developed. A least-squares method was used to fit the skeletons of leaves for computing parameters such as leaf length and number. After locating the leaf center, the arm positions the 2D camera for color imaging. Color-based segmentation was applied to select a rectangular area of interest on the leaf image. The algorithm computing the Gray-Level Co-occurrence Matrix (GLCM) of the leaf image was implemented using the OpenCV library. After training, Bayes classification was used to identify infected corn plant leaves based on GLCM values. The system user interface supports data collection commands, 3D reconstruction, parameter table output, color image acquisition control, specific leaf probing, and infected corn leaf diagnosis. The application was developed in a Qt cross-platform environment with multithreading between tasks, making the interface user-friendly and efficient.
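    The thesis implements the GLCM texture features in C++ with OpenCV; the sketch below re-creates the same idea in Python using scikit-image and scikit-learn (function names assume scikit-image >= 0.19). The chosen texture properties and the toy training data are illustrative assumptions, not the thesis's exact pipeline.

```python
# GLCM texture features from a leaf region of interest, fed to a Gaussian
# naive Bayes classifier to flag virus-infected leaves.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB

def glcm_features(leaf_patch_gray):
    """Texture features from an 8-bit grayscale leaf patch (the ROI)."""
    glcm = graycomatrix(leaf_patch_gray, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return [float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Toy demo on synthetic patches (real use: ROIs cropped from leaf images).
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(6)]
labels = [0, 0, 0, 1, 1, 1]                        # 0 = healthy, 1 = infected
clf = GaussianNB().fit([glcm_features(p) for p in patches], labels)
print(clf.predict([glcm_features(patches[0])]))    # predicted class for a patch
```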

    Fully-automated root image analysis (faRIA)

    High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbation on plant root morphology, development, and function. To efficiently analyse large amounts of structurally complex soil-root images, advanced methods for automated image segmentation are required. Because of the often unavoidable overlap between the intensities of foreground and background regions, simple thresholding methods are generally not suitable for segmenting root regions. Higher-level cognitive models such as convolutional neural networks (CNNs) can segment roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model that relies on an extension of the U-Net architecture. The CNN framework was designed to efficiently segment root structures of different sizes, shapes, and optical contrasts using low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools such as SegRoot (Dice coefficient 0.67), generalizing beyond NIR to other imaging modalities and plant species, including barley and Arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to analyse soil-root images in a fully automated manner (i.e., without manual interaction with the data or parameter tuning), providing quantitative plant scientists with a powerful analytical tool.
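    The Dice coefficient reported above measures the overlap between a predicted root mask and the manually segmented ground truth (1.0 = perfect overlap). A minimal sketch for binary numpy masks; the smoothing constant guarding against empty masks is an implementation assumption.

```python
# Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```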