18 research outputs found

    Grapevine yield prediction using image analysis - improving the estimation of non-visible bunches

    Yield forecast is an issue of utmost importance for the entire grape and wine sector. There are several methods for vineyard yield estimation; the ones based on estimating yield components are the most commonly used in commercial vineyards. Those methods are generally destructive and very labor intensive, and can provide inaccurate results as they are based on the assessment of a small sample of bunches. Recently, several attempts have been made to apply image analysis technologies for bunch and/or berry recognition in digital images. Nonetheless, the effectiveness of image analysis in predicting yield is strongly dependent on grape bunch visibility, which in turn depends on canopy density at the fruiting zone and on bunch number, density and dimensions. In this work, data on bunch occlusion obtained in a field experiment are presented. This work is set up in the frame of a research project aimed at the development of an unmanned ground vehicle to scout vineyards for non-intrusive estimation of canopy features and grape yield. The objective is to evaluate the use of explanatory variables to estimate the fraction of non-visible bunches (bunches occluded by leaves). In the future, this estimation can potentially improve the accuracy of a computer vision algorithm used by the robot to estimate total yield. In two vineyard plots with the Encruzado (white) and Syrah (red) varieties, several canopy segments of 1 meter length were photographed with an RGB camera against a blue background, close to harvest date. From these images, canopy gaps (porosity) and bunches' region of interest (ROI) files were computed in order to estimate the corresponding projected area. Vines were then defoliated at the fruiting zone in two steps, and new images were obtained before each step. Overall, the area of bunches occluded by leaves reached mean values between 67% and 73%, with Syrah presenting the larger variation.
A polynomial regression was fitted between canopy porosity (independent variable) and the percentage of bunches not occluded by leaves, which showed significant R2 values of 0.83 and 0.82 for the Encruzado and Syrah varieties, respectively. Our results show that the fraction of non-visible bunches can be estimated indirectly using canopy porosity as an explanatory variable, a trait that can be automatically obtained in the future using a laser range finder deployed on the mobile platform.
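As an illustrative sketch (not the authors' code; the porosity and visibility values below are invented, not the paper's measurements), a polynomial regression of this kind can be fitted and scored with NumPy:

```python
import numpy as np

def fit_porosity_model(porosity, visible_fraction, degree=2):
    """Fit a polynomial regression of visible-bunch percentage on canopy
    porosity, and return the coefficients together with the R^2 of the fit."""
    coeffs = np.polyfit(porosity, visible_fraction, degree)
    predicted = np.polyval(coeffs, porosity)
    ss_res = np.sum((visible_fraction - predicted) ** 2)
    ss_tot = np.sum((visible_fraction - np.mean(visible_fraction)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# Hypothetical data: canopy porosity (fraction of gaps) vs. percentage of
# bunch area not occluded by leaves.
porosity = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50])
visible = np.array([8.0, 14.0, 21.0, 27.0, 38.0, 52.0, 64.0])
coeffs, r2 = fit_porosity_model(porosity, visible)
```

With a monotonic relationship like the one above, a second-degree polynomial captures nearly all the variation, mirroring the high R2 reported in the study.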

    Development of a new non-invasive vineyard yield estimation method based on image analysis

    Doctorate in Agronomic Engineering / Instituto Superior de Agronomia, Universidade de Lisboa. Predicting vineyard yield with accuracy can provide several advantages to the whole vine and wine industry. Today this is mostly done using manual and sometimes destructive methods based on bunch samples. Yield estimation using computer vision and image analysis can potentially perform this task extensively, automatically, and non-invasively. In the present work this approach is explored in three main steps: image collection, occluded fruit estimation, and conversion of image traits to mass. In the first step, grapevine images were collected in field conditions along some of the main grapevine phenological stages. Visible yield components were identified in the images and compared to ground truth. When analyzing inflorescences and bunches, more than 50% were occluded by leaves or other plant organs across three cultivars. No significant differences were observed in bunch visibility after fruit set. Visible bunch projected area explained an average of 49% of vine yield variation between veraison and harvest. In the second step, vine images were collected, in field conditions, with different levels of defoliation intensity at the bunch zone. A regression model combining canopy porosity and visible bunch area, both obtained via image analysis, was computed, which explained 70-84% of bunch exposure variation. This approach allowed for an estimation of the occluded fraction of bunches with average absolute errors below 10%. No significant differences were found between the model's output at veraison and harvest. In the last step, the conversion of bunch image traits into mass was explored in laboratory and field conditions. In both cases, cultivar differences related to bunch architecture were found to affect weight estimation.
A combination of derived variables, which included visible bunch area, estimated total bunch area, visible bunch perimeter, visible berry number and bunch compactness, was used to estimate yield on undisturbed grapevines. The final model achieved R2 = 0.86 between actual and estimated yield (n = 213). If performed automatically, the final approach suggested in this work has the potential to provide a non-invasive method that can be applied accurately across whole vineyards.
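A model of this form can be sketched as an ordinary least-squares fit of yield on image-derived bunch traits. Everything below is a hypothetical illustration with synthetic numbers, not the thesis's actual model or data:

```python
import numpy as np

def fit_yield_model(features, yield_kg):
    """Least-squares fit of yield on a matrix of image-derived traits
    (e.g. visible area, estimated total area, perimeter, berry count,
    compactness). Returns coefficients (intercept first) and R^2."""
    X = np.hstack([np.ones((len(features), 1)), features])
    coef, *_ = np.linalg.lstsq(X, yield_kg, rcond=None)
    pred = X @ coef
    ss_res = np.sum((yield_kg - pred) ** 2)
    ss_tot = np.sum((yield_kg - yield_kg.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

# Synthetic example with a single trait for clarity.
traits = np.array([[1.0], [2.0], [3.0], [4.0]])
yields = np.array([5.0, 8.0, 11.0, 14.0])
coef, r2 = fit_yield_model(traits, yields)
```

In practice the feature matrix would hold one column per trait, and the fit would be validated on held-out vines as the thesis does.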

    Fruit detection and 3D location using instance segmentation neural networks and structure-from-motion photogrammetry

    The development of remote fruit detection systems able to identify and 3D-locate fruits provides opportunities to improve the efficiency of agriculture management. Most current fruit detection systems are based on 2D image analysis. Although the use of 3D sensors is emerging, precise 3D fruit location is still a pending issue. This work presents a new methodology for fruit detection and 3D location consisting of: (1) 2D fruit detection and segmentation using the Mask R-CNN instance segmentation neural network; (2) 3D point cloud generation of detected apples using structure-from-motion (SfM) photogrammetry; (3) projection of 2D image detections onto 3D space; (4) false positive removal using a trained support vector machine. This methodology was tested on 11 Fuji apple trees containing a total of 1455 apples. Results showed that, by combining instance segmentation with SfM, the system performance increased from an F1-score of 0.816 (2D fruit detection) to 0.881 (3D fruit detection and location) with respect to the total amount of fruits. The main advantages of this methodology are the reduced number of false positives and the higher detection rate, while the main disadvantage is the high processing time required for SfM, which makes it presently unsuitable for real-time work. From these results, it can be concluded that the combination of instance segmentation and SfM provides high-performance fruit detection with high 3D data precision. The dataset has been made publicly available and an interactive visualization of fruit detection results is accessible at http://www.grap.udl.cat/documents/photogrammetry_fruit_detection.html.
Primary data associated with the article: http://hdl.handle.net/10459.1/68505. This work was partly funded by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya (grant 2017 SGR646), the Spanish Ministry of Economy and Competitiveness (project AGL2013-48297-C2-2-R) and the Spanish Ministry of Science, Innovation and Universities (project RTI2018-094222-B-I00). Part of the work was also developed within the framework of the project TEC2016-75976-R, financed by the Spanish Ministry of Economy, Industry and Competitiveness and the European Regional Development Fund (ERDF). The Spanish Ministry of Education is thanked for Mr. J. Gené's pre-doctoral fellowship (FPU15/03355). We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) and Vicens Maquinària Agrícola S.A. for their support during data acquisition, and Ernesto Membrillo and Roberto Maturino for their support in dataset labelling.
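Step (3) of the pipeline, matching 3D points with 2D detections, can be sketched with a generic pinhole-camera projection. This is an illustration of the idea, not the authors' implementation; the camera matrix and points below are invented:

```python
import numpy as np

def points_in_detection(points_3d, P, bbox):
    """Project 3D points through a 3x4 camera matrix P and keep those whose
    image projection falls inside a 2D detection bounding box (x0, y0, x1, y1)."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ homog.T).T                 # rows: (u*w, v*w, w)
    uv = proj[:, :2] / proj[:, 2:3]        # perspective divide
    x0, y0, x1, y1 = bbox
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
             (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return points_3d[inside]

# Hypothetical camera: focal length 100 px, principal point at (50, 50).
P = np.array([[100.0, 0.0, 50.0, 0.0],
              [0.0, 100.0, 50.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
points = np.array([[0.0, 0.0, 2.0],   # projects to (50, 50)
                   [1.0, 0.0, 2.0]])  # projects to (100, 50)
kept = points_in_detection(points, P, bbox=(40, 40, 60, 60))
```

The paper additionally filters the resulting 3D candidates with a trained SVM to remove false positives; that step is omitted here.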

    Computer vision applied to agriculture.

    Introduction. Perception: pattern recognition in images (identification of plant diseases; detection of animals in pastures; detection and counting of fruits). Three-dimensional mapping and reconstruction. Combination of structure and recognition. Performance and intervention: field robotics. Final considerations.

    Overcoming the challenge of bunch occlusion by leaves for vineyard yield estimation using image analysis

    Accurate yield estimation is of utmost importance for the entire grape and wine production chain, yet it remains an extremely challenging process due to high spatial and temporal variability in vineyards. Recent research has focused on using image analysis for vineyard yield estimation, with one of the major obstacles being the high degree of occlusion of bunches by leaves. This work uses canopy features obtained from 2D images (canopy porosity and visible bunch area) as proxies for estimating the proportion of bunches occluded by leaves, to enable automatic yield estimation on non-disturbed canopies. Data were collected from three grapevine varieties, and images were captured from 1 m segments at two phenological stages (veraison and full maturation) in non-defoliated and partially defoliated vines. The percentage of visible bunches (bunch exposure; BE) varied between 16 and 64 %. This percentage was estimated using a multiple regression model that includes canopy porosity and visible bunch area as predictors, yielding an R2 between 0.70 and 0.84 on a training set composed of 70 % of all data, showing an explanatory power 10 to 43 % higher than when using the predictors individually. A model based on the combined data set (all varieties and phenological stages) was selected for BE estimation, achieving an R2 = 0.80 on the validation set. This model showed no differences in validation metrics when applied to data collected at veraison or at full maturation, suggesting that BE can be accurately estimated at any stage. Bunch exposure was then used to estimate total bunch area (tBA), showing low errors (< 10 %) except for the variety Arinto, which presents specific morphological traits such as large leaves and bunches. Finally, yield estimation computed from estimated tBA presented a very low error (0.2 %) on the validation data set with pooled data. However, when performed on every single variety, the simplified approach of area-to-mass conversion was less accurate for the variety Syrah.
The method demonstrated in this work is an important step towards a fully automated non-invasive yield estimation approach, as it offers a solution to estimate bunches that are not visible to imaging sensors.
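The final chain, from estimated bunch exposure to total bunch area to yield, can be illustrated with a minimal sketch. The function name and all numbers are hypothetical; the study's actual conversion factors are variety-specific and not reproduced here:

```python
def estimate_yield(visible_area_cm2, bunch_exposure, mass_per_cm2):
    """Estimate vine yield from image traits.

    visible_area_cm2: projected bunch area visible in the image
    bunch_exposure:   estimated fraction (0-1) of bunch area that is visible,
                      e.g. predicted from canopy porosity and visible bunch area
    mass_per_cm2:     area-to-mass conversion factor (variety-specific)
    """
    total_area_cm2 = visible_area_cm2 / bunch_exposure  # recover occluded area
    return total_area_cm2 * mass_per_cm2                # convert area to mass

# Hypothetical example: 120 cm^2 visible, 50 % exposure, 2 g per cm^2.
yield_g = estimate_yield(120.0, 0.5, 2.0)
```

The key point is that dividing the visible area by the estimated exposure recovers the area hidden by leaves before the area-to-mass conversion is applied.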

    Segmentation of field grape bunches via an improved pyramid scene parsing network

    With the continuous expansion of wine grape planting areas, the mechanization and intelligence of grape harvesting have gradually become the future development trend. In order to guide picking robots to pick grapes more efficiently in the vineyard, this study proposed a grape bunch segmentation method based on the Pyramid Scene Parsing Network (PSPNet) deep semantic segmentation network for different varieties of grapes in natural field environments. To this end, the Convolutional Block Attention Module (CBAM) attention mechanism and atrous convolution were first embedded in the backbone feature extraction network of the PSPNet model to improve its feature extraction capability. The proposed model also improved the PSPNet semantic segmentation model by fusing multiple feature layers (with more contextual information) extracted by the backbone network. The improved PSPNet was compared against the original PSPNet on a newly collected grape image dataset, and it was shown that the improved PSPNet model achieved an Intersection-over-Union (IoU) and Pixel Accuracy (PA) of 87.42% and 95.73%, respectively, an improvement of 4.36% and 9.95% over the original PSPNet model. The improved PSPNet was also compared against the state-of-the-art DeepLab-V3+ and U-Net in terms of IoU, PA, computation efficiency and robustness, and showed promising performance. It is concluded that the improved PSPNet can quickly and accurately segment grape bunches of different varieties in natural field environments, which provides a technical basis for intelligent harvesting by grape picking robots.
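The IoU and Pixel Accuracy metrics used in this comparison can be computed for a binary bunch/background mask as follows (a generic sketch, not the study's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute Intersection-over-Union (IoU) and Pixel Accuracy (PA) for a
    binary segmentation mask against a ground-truth mask of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = intersection / union if union else 1.0  # both empty: perfect match
    pa = (pred == truth).mean()                   # fraction of correct pixels
    return iou, pa

# Tiny 2x2 example: one false-positive pixel.
iou, pa = segmentation_metrics([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

For multi-class outputs such as PSPNet's, the same computation is typically done per class and then averaged.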

    Deep neural networks for grape bunch segmentation in natural images from a consumer-grade camera

    Precision agriculture relies on the availability of accurate knowledge of crop phenotypic traits at the sub-field level. While visual inspection by human experts has been traditionally adopted for phenotyping estimations, sensors mounted on field vehicles are becoming valuable tools to increase accuracy on a narrower scale, as well as to reduce execution time and labor costs. In this respect, automated processing of sensor data for accurate and reliable fruit detection and characterization is a major research challenge, especially when data consist of low-quality natural images. This paper investigates the use of deep learning frameworks for automated segmentation of grape bunches in color images from a consumer-grade RGB-D camera placed on board an agricultural vehicle. A comparative study, based on the estimation of two image segmentation metrics, i.e. the segmentation accuracy and the well-known Intersection over Union (IoU), is presented to assess the performance of four pre-trained network architectures, namely AlexNet, GoogLeNet, VGG16, and VGG19. Furthermore, a novel strategy aimed at improving the segmentation of bunch pixels is proposed. It is based on an optimal threshold selection over the bunch probability maps, as an alternative to the conventional minimization of cross-entropy loss of mutually exclusive classes. Results obtained in field tests show that the proposed strategy improves the mean segmentation accuracy of the four deep neural networks by between 2.10 and 8.04%. In addition, the comparative study of the four networks demonstrates that the best performance is achieved by VGG19, which reaches a mean segmentation accuracy on the bunch class of 80.58%, with an IoU value for the bunch class of 45.64%.
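The optimal-threshold strategy can be sketched as a search over cut-off values on the bunch probability map, keeping the threshold that maximises IoU against labelled data. This is a simplified illustration, not the paper's implementation; the probability map and mask below are invented:

```python
import numpy as np

def best_threshold(prob_map, truth, thresholds=np.linspace(0.1, 0.9, 17)):
    """Select the probability threshold maximising IoU against a labelled
    mask, as an alternative to the implicit 0.5 cut of an argmax over
    mutually exclusive class scores."""
    truth = np.asarray(truth, dtype=bool)

    def iou(pred):
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return intersection / union if union else 1.0

    scores = [iou(prob_map >= t) for t in thresholds]
    best = int(np.argmax(scores))
    return thresholds[best], scores[best]

# Hypothetical bunch probabilities for four pixels, with ground truth.
probs = np.array([0.12, 0.42, 0.62, 0.88])
truth = np.array([0, 1, 1, 1])
best_t, best_iou = best_threshold(probs, truth)
```

In practice the threshold would be tuned on a validation set and then applied unchanged at test time.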

    Potential phenotyping methodologies to assess inter- and intravarietal variability and to select grapevine genotypes tolerant to abiotic stress

    Review. Plant phenotyping is an emerging science that combines multiple methodologies and protocols to measure plant traits (e.g., growth, morphology, architecture, function, and composition) at multiple scales of organization. Manual phenotyping remains a major bottleneck to the advance of plant and crop breeding. This constraint fostered the development of high-throughput plant phenotyping (HTPP), which is largely based on imaging approaches and automated data retrieval and processing. Field phenotyping still poses major challenges, and the progress of HTPP for field conditions can be relevant to support selection and breeding of grapevine. The aim of this review is to discuss potential and current methods to improve field phenotyping of grapevine to support characterization of inter- and intravarietal diversity. Vitis vinifera has a large genetic diversity that needs characterization, and the availability of methods to support selection of plant material (polyclonal or clonal) able to withstand abiotic stress is paramount. Besides being time consuming, complex and expensive, field experiments are also affected by heterogeneous and uncontrolled climate and soil conditions, mostly due to the large areas of the trials and to the high number of traits to be observed in a number of individuals ranging from hundreds to thousands. Therefore, adequate field experimental design and data gathering methodologies are crucial to obtain reliable data. Some of the major challenges posed to grapevine selection programs for tolerance to water and heat stress are described herein. Useful traits for selection and related field phenotyping methodologies are described, and their adequacy for large-scale screening is discussed.

    Vineyard yield estimation using image analysis – a review

    Master's in Viticulture and Oenology Engineering (double degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto. Yield estimation is one of the main goals of the wine industry, because accurate yield estimation enables a significant reduction in production costs and better management of the wine industry. Traditional methods for yield estimation are laborious and time consuming; for these reasons, recent years have seen the development of new methodologies, most of which are based on image analysis. Thanks to the continuous updating and improvement of computer vision techniques and robotic platforms, image analysis applied to yield estimation is becoming more and more efficient. The results reported by different studies are very satisfactory, at least regarding the estimation of what is visible, while several procedures are under development with the objective of estimating what cannot be seen due to occlusion of bunches by leaves and by other clusters. In this work the different methodologies and approaches used for yield estimation are described, including both traditional methods and new approaches based on image analysis, in order to present the advantages and disadvantages of each of them.