
    Constraint-based automated reconstruction of grape bunches from 3D range data for high-throughput phenotyping

    With the global population increasing, the resources required by agriculture to feed the growing number of people are becoming scarce. Estimates suggest that by 2050, 60 % more food will be necessary. Today, 70 % of fresh water is used by agriculture, and experts see no potential for opening up new land for crop plants. This means that existing land has to be used efficiently and sustainably. To support this, plant breeders aim at improving the yield, quality, disease resistance, and other important characteristics of crops. Reports show that grapevine cultivation uses more than three times the amount of fungicides used in the cultivation of fruit trees or vegetables. This is because grapevine is prone to various fungal diseases and pests that spread quickly across fields. A loose grape bunch architecture is one of the most important physical barriers making the establishment of a fungal infection less likely, and this architecture is mostly defined by the inner stem skeleton. The phenotyping of grape bunches refers to the measurement of their phenotypes, i.e., the observable traits of a plant, such as berry diameters or stem lengths. Because of their perishable nature, grape bunches have to be processed in a relatively short time; on the other hand, genetic analyses require data from a large number of them. Manual phenotyping is error-prone and highly labor- and time-intensive, creating the need for automated, high-throughput methods.

    The objective of this thesis is to develop a completely automated pipeline that takes as input a 3D point cloud of a grape bunch and computes a 3D reconstruction of the complete bunch, including the inner stem skeleton. The result is a 3D estimation of the grape bunch that represents not only dimensions (e.g., berry diameters) or statistics (e.g., the number of berries), but the geometry and topology as well. All architectural (i.e., geometrical and topological) traits can be derived from this complete 3D reconstruction. We aim at high-throughput phenotyping by automating all steps and removing any need for user interaction, while still providing an interface for detailed visualization and possible parameter adjustments.

    This task poses several challenges: ripe grape bunches exhibit a high degree of self-occlusion, rendering a direct reconstruction of the stem skeleton impossible; the stem skeleton structure is complex, making the manual creation of training data hard; and because we aim at a cross-cultivar approach, the high variability between cultivars, and even between grape bunches of the same cultivar, means we cannot rely on statistical distributions for single plant organ dimensions. We employ geometrical and topological constraints to meet the challenge of cross-cultivar optimization and to foster efficient sampling of infinitely large hypothesis spaces, resulting in Pearson correlation coefficients between 0.7 and 0.9 for established traits traditionally used by breeders. The active working time is reduced by a factor of 12. We evaluate the pipeline on scans taken in a lab environment and in the field.
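    The reported agreement with breeders' reference traits can be reproduced with a standard correlation analysis. The sketch below is a minimal, hypothetical example of such an evaluation; the trait values and function name are illustrative and not taken from the thesis.

```python
# Hypothetical evaluation sketch: correlating automated trait estimates
# with manual reference measurements, as in the thesis's evaluation
# (reported Pearson r between 0.7 and 0.9 for established traits).
import numpy as np
from scipy.stats import pearsonr

def evaluate_trait(auto_values, manual_values):
    """Pearson correlation between automated and manual measurements
    of one trait (e.g. mean berry diameter per bunch)."""
    auto = np.asarray(auto_values, dtype=float)
    manual = np.asarray(manual_values, dtype=float)
    r, p_value = pearsonr(auto, manual)
    return r, p_value

# Illustrative values only (not data from the thesis).
r, p = evaluate_trait([12.1, 13.4, 11.8, 14.0], [12.3, 13.1, 12.0, 14.4])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```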

    Machine Learning-Based Plant Detection Algorithms to Automate Counting Tasks Using 3D Canopy Scans

    This study tested whether machine learning (ML) methods can effectively separate individual plants from complex 3D canopy laser scans as a prerequisite to analyzing particular plant features. For this, we scanned mung bean and chickpea crops with PlantEye® laser scanners. First, we segmented the crop canopies from the background in 3D space using the Region Growing Segmentation algorithm. Then, Convolutional Neural Network (CNN) based ML algorithms were fine-tuned for plant counting. Application of the CNN-based processing architecture was possible only after we reduced the dimensionality of the data to 2D. This allowed for the identification of individual plants and their counting with an accuracy of 93.18% and 92.87% for mung bean and chickpea plants, respectively. These steps were connected to the phenotyping pipeline, which can now replace manual counting operations that are inefficient, costly, and error-prone. The use of CNNs in this study was made possible by dimensionality reduction, the addition of height information as color, and the subsequent application of a 2D CNN-based approach. We found there to be a wide gap in the use of ML on 3D information. This gap will have to be addressed, especially for more complex plant feature extractions, which we intend to implement through further research.
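    The enabling step described above, reducing the 3D scan to a 2D image in which height is encoded as pixel intensity, can be sketched roughly as follows. This is a minimal, assumed rasterisation; the grid resolution and function name are illustrative, not the study's implementation.

```python
# Minimal sketch: rasterise a 3D canopy point cloud into a top-down
# 2D height map so a standard 2D CNN can be applied downstream.
import numpy as np

def rasterise_top_down(points, cell_size=0.005):
    """points: (N, 3) array of x, y, z coordinates in metres.
    Returns a 2D height map (max z per cell, normalised to 0..255)."""
    xy = points[:, :2]
    z = points[:, 2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) / cell_size).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    height_map = np.zeros((h, w), dtype=float)
    # keep the highest point falling into each grid cell
    np.maximum.at(height_map, (idx[:, 1], idx[:, 0]), z - z.min())
    if height_map.max() > 0:
        height_map = height_map / height_map.max() * 255.0
    return height_map.astype(np.uint8)
```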

    Fruit sizing using AI: A review of methods and challenges

    Fruit size at harvest is an economically important variable for high-quality table fruit production in orchards and vineyards. In addition, knowing the number and size of the fruit on the tree is essential in the framework of precise production, harvest, and postharvest management. A prerequisite for analysis of fruit in a real-world environment is detection and segmentation from the background signal. In the last five years, deep learning convolutional neural networks have become the standard method for automatic fruit detection, achieving F1-scores higher than 90 % as well as real-time processing speeds. At the same time, different methods have been developed for estimating, mainly, fruit size and, more rarely, fruit maturity from 2D images and 3D point clouds. These sizing methods are focused on a few species like grape, apple, citrus, and mango, resulting in mean absolute error values of less than 4 mm for apple fruit. This review provides an overview of the most recent methodologies developed for in-field fruit detection/counting and sizing, as well as a few upcoming examples of maturity estimation. Challenges, such as sensor fusion, highly varying lighting conditions, occlusions in the canopy, shortage of public fruit datasets, and opportunities for research transfer, are discussed.

    This work was partly funded by the Department of Research and Universities of the Generalitat de Catalunya (grants 2017 SGR 646 and 2021 LLAV 00088) and by the Spanish Ministry of Science and Innovation / AEI/10.13039/501100011033 / FEDER (grants RTI2018-094222-B-I00 [PAgFRUIT project] and PID2021-126648OB-I00 [PAgPROTECT project]). The Secretariat of Universities and Research of the Department of Business and Knowledge of the Generalitat de Catalunya and the European Social Fund (ESF) are also thanked for financing Juan Carlos Miranda’s pre-doctoral fellowship (2020 FI_B 00586). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU.
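    Many of the 2D sizing methods surveyed in such reviews reduce, at their core, to back-projecting a pixel measurement to metric units once the fruit's depth is known. The following is a hedged sketch of that geometric step under a simple pinhole camera model; the focal length, numbers, and function name are illustrative assumptions, not values from any reviewed paper.

```python
# Sketch: convert an apparent (pixel) fruit diameter to millimetres
# using the pinhole camera model, given depth from an RGB-D sensor.
def fruit_diameter_mm(pixel_diameter, depth_mm, focal_length_px):
    """Back-project a pixel diameter to a metric diameter.
    Assumes the fruit is roughly fronto-parallel at the given depth."""
    return pixel_diameter * depth_mm / focal_length_px

# e.g. a fruit spanning 85 px at 1.2 m depth, focal length 1400 px
print(f"{fruit_diameter_mm(85, 1200.0, 1400.0):.1f} mm")
```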

    Deep neural networks for grape bunch segmentation in natural images from a consumer-grade camera

    Precision agriculture relies on the availability of accurate knowledge of crop phenotypic traits at the sub-field level. While visual inspection by human experts has traditionally been adopted for phenotyping estimations, sensors mounted on field vehicles are becoming valuable tools to increase accuracy on a narrower scale and to reduce execution time and labor costs as well. In this respect, automated processing of sensor data for accurate and reliable fruit detection and characterization is a major research challenge, especially when the data consist of low-quality natural images. This paper investigates the use of deep learning frameworks for automated segmentation of grape bunches in color images from a consumer-grade RGB-D camera placed on board an agricultural vehicle. A comparative study, based on the estimation of two image segmentation metrics, i.e., the segmentation accuracy and the well-known Intersection over Union (IoU), is presented to estimate the performance of four pre-trained network architectures, namely AlexNet, GoogLeNet, VGG16, and VGG19. Furthermore, a novel strategy aimed at improving the segmentation of bunch pixels is proposed. It is based on an optimal threshold selection on the bunch probability maps, as an alternative to the conventional minimization of the cross-entropy loss of mutually exclusive classes. Results obtained in field tests show that the proposed strategy improves the mean segmentation accuracy of the four deep neural networks by between 2.10% and 8.04%. Besides, the comparative study of the four networks demonstrates that the best performance is achieved by VGG19, which reaches a mean segmentation accuracy on the bunch class of 80.58%, with an IoU value for the bunch class of 45.64%.
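    The threshold-selection strategy described above can be sketched as a simple validation-set sweep: instead of taking the argmax over class probabilities, a threshold on the bunch probability map is chosen to maximise a segmentation metric. The code below is a minimal, assumed version of that idea (here maximising IoU); array names and the threshold grid are illustrative, not the paper's implementation.

```python
# Sketch: sweep thresholds over bunch probability maps on a validation
# set and keep the one that maximises IoU against ground-truth masks.
import numpy as np

def select_bunch_threshold(prob_maps, gt_masks,
                           thresholds=np.linspace(0.05, 0.95, 19)):
    """prob_maps: list of (H, W) bunch-probability arrays in [0, 1].
    gt_masks: matching list of (H, W) boolean ground-truth masks."""
    best_t, best_iou = 0.5, -1.0
    for t in thresholds:
        inter = union = 0
        for prob, gt in zip(prob_maps, gt_masks):
            pred = prob >= t
            inter += np.logical_and(pred, gt).sum()
            union += np.logical_or(pred, gt).sum()
        iou = inter / union if union else 0.0
        if iou > best_iou:
            best_t, best_iou = t, iou
    return best_t, best_iou
```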

    Data synthesis methods for semantic segmentation in agriculture : A Capsicum annuum dataset

    This paper provides synthesis methods for large-scale semantic image segmentation datasets of agricultural scenes, with the objective of bridging the gap between state-of-the-art computer vision performance and that of computer vision in the agricultural robotics domain. We propose a novel methodology to generate renders of random meshes of plants based on empirical measurements, including the automated generation of per-pixel class and depth labels for multiple plant parts. A running example is given for Capsicum annuum (sweet or bell pepper) in a high-tech greenhouse. A synthetic dataset of 10,500 images was rendered through Blender, using scenes with 42 procedurally generated plant models with randomised plant parameters. These parameters were based on 21 empirically measured plant properties at 115 positions on 15 plant stems. Fruit models were obtained by 3D scanning, and plant part textures were gathered photographically. As a reference dataset for modelling and for evaluating segmentation performance, 750 empirical images of 50 plants were collected in a greenhouse from multiple angles and distances using the image acquisition hardware of a sweet pepper harvest robot prototype. We hypothesised high similarity between synthetic and empirical images, which we showed by analysing and comparing both sets qualitatively and quantitatively. The sets and models are publicly released with the intention of allowing performance comparisons between agricultural computer vision methods, obtaining feedback for modelling improvements, and gaining further validation of the usability of synthetic bootstrapping and empirical fine-tuning. Finally, we provide a brief perspective on our hypothesis that related synthetic dataset bootstrapping and empirical fine-tuning can be used for improved learning.
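    The procedural randomisation described above amounts to drawing each plant's parameters from distributions fitted to the empirical measurements. A minimal, hypothetical sketch of that sampling step follows; the parameter names and distributions are invented placeholders, not the 21 measured Capsicum properties.

```python
# Sketch: draw one randomised parameter set per procedural plant model
# from per-property distributions fitted to empirical measurements.
import random

EMPIRICAL_PARAMS = {
    # (mean, standard deviation) per plant property; placeholder values
    "internode_length_cm": (5.2, 0.8),
    "leaf_width_cm": (9.1, 1.5),
    "stem_diameter_mm": (11.0, 1.2),
}

def sample_plant_parameters(seed=None):
    """Draw one randomised parameter set for a procedural plant model."""
    rng = random.Random(seed)
    return {name: max(0.0, rng.gauss(mu, sigma))
            for name, (mu, sigma) in EMPIRICAL_PARAMS.items()}

print(sample_plant_parameters(seed=42))
```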

    Field-based architectural traits characterisation of maize plant using time-of-flight 3D imaging

    Maize (Zea mays L.) is one of the most economically important cereal crops. Manually measuring phenotypic traits in the field, though time-consuming and labour-intensive, has been the common practice in maize breeding programs. This study presents a system for automated characterisation of several important architectural traits of maize plants under field conditions. An algorithm was developed to extract 3D plant skeletons from point cloud data acquired by side-viewing Time-of-Flight cameras. Plants were detected as 3D lines by a Hough transform of the skeleton nodes. By analysing the graph structure of the skeletons with respect to the 3D lines, the point cloud was partitioned into plant instances with the stems and leaves separated. Furthermore, plant height, plant orientation, leaf angle, and stem diameter were extracted for each plant. The image-derived trait estimates were compared to manual measurements at multiple growth stages. Satisfactory accuracies in terms of mean absolute error (MAE) and coefficient of determination (R²) were achieved for plant height (before flowering: MAE 0.15 m, R² 0.96; after flowering: MAE 0.054 m, R² 0.83), leaf angle (MAE 2.8°, R² 0.83), and plant orientation (MAE 13°); stem diameter could not be estimated reliably due to the limitations of the depth sensor. The results showed that the system was robust and accurate even when the plants were imaged from only one side and despite occlusions caused by leaves, and that the method was applicable to maize plants from an early growth stage to full maturity.
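    The accuracy figures quoted above combine two standard metrics, MAE and R². As a quick reference, here is a minimal sketch of how such a comparison between image-derived estimates and manual measurements can be computed; the example values are illustrative, not the study's data.

```python
# Sketch: mean absolute error and coefficient of determination between
# image-derived trait estimates and manual reference measurements.
import numpy as np

def mae_and_r2(estimates, reference):
    est = np.asarray(estimates, dtype=float)
    ref = np.asarray(reference, dtype=float)
    mae = np.abs(est - ref).mean()
    ss_res = ((ref - est) ** 2).sum()
    ss_tot = ((ref - ref.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return mae, r2

# Illustrative plant-height values in metres (not the study's data).
mae, r2 = mae_and_r2([1.52, 1.80, 2.05], [1.50, 1.85, 2.00])
print(f"MAE = {mae:.3f} m, R² = {r2:.2f}")
```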