
    Grapevine yield prediction using image analysis - improving the estimation of non-visible bunches

    Yield forecasting is an issue of utmost importance for the entire grape and wine sector. There are several methods for vineyard yield estimation; those based on estimating yield components are the most commonly used in commercial vineyards. Such methods are generally destructive and very labor intensive, and can provide inaccurate results as they are based on the assessment of a small sample of bunches. Recently, several attempts have been made to apply image analysis technologies for bunch and/or berry recognition in digital images. Nonetheless, the effectiveness of image analysis in predicting yield is strongly dependent on grape bunch visibility, which in turn depends on canopy density at the fruiting zone and on bunch number, density, and dimensions. In this work, data on bunch occlusion obtained in a field experiment are presented. The work is set up within a research project aimed at developing an unmanned ground vehicle to scout vineyards for non-intrusive estimation of canopy features and grape yield. The objective is to evaluate the use of explanatory variables to estimate the fraction of non-visible bunches (bunches occluded by leaves). In the future, this estimation can potentially improve the accuracy of a computer vision algorithm used by the robot to estimate total yield. In two vineyard plots with the Encruzado (white) and Syrah (red) varieties, several canopy segments of 1 meter length were photographed with an RGB camera against a blue background, close to the harvest date. From these images, canopy gaps (porosity) and bunches' regions of interest (ROI) were computed in order to estimate the corresponding projected area. Vines were then defoliated at the fruiting zone in two steps, and new images were obtained before each step. Overall, the area of bunches occluded by leaves reached mean values between 67% and 73%, with Syrah presenting the larger variation. A polynomial regression fitted between canopy porosity (independent variable) and the percentage of bunches not occluded by leaves showed significant R2 values of 0.83 and 0.82 for the Encruzado and Syrah varieties, respectively. Our results show that the fraction of non-visible bunches can be estimated indirectly using canopy porosity as an explanatory variable, a trait that can in the future be obtained automatically using a laser range finder deployed on the mobile platform.
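    As a rough illustration of the final regression step, the sketch below fits a polynomial between canopy porosity and the percentage of visible (non-occluded) bunches, then inverts it to the occluded fraction. The polynomial degree and all data values are invented for illustration; the abstract reports only the R2 of its fits (0.83 and 0.82).

```python
import numpy as np

# Hypothetical per-segment measurements: canopy porosity (%) and
# percentage of bunches not occluded by leaves. Invented values.
porosity = np.array([2.0, 5.5, 9.0, 14.0, 21.0, 30.0, 42.0])
visible = np.array([8.0, 15.0, 24.0, 36.0, 51.0, 68.0, 85.0])

# Fit a polynomial regression (degree 2 assumed; the paper does not
# state the degree used).
model = np.poly1d(np.polyfit(porosity, visible, deg=2))

# Coefficient of determination of the fit.
ss_res = np.sum((visible - model(porosity)) ** 2)
ss_tot = np.sum((visible - visible.mean()) ** 2)
print(f"R2 = {1 - ss_res / ss_tot:.2f}")

# Indirect estimate of the non-visible bunch fraction for a new
# porosity reading, as the robot would use it.
print(f"Occluded fraction at 18% porosity: {100 - model(18.0):.1f}%")
```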

    Digitasi Produktivitas Panen Padi Berbasis K-Means Clustering

    Paddy is a staple food ingredient for people in Indonesia, including East Java Province. Attention to paddy production in East Java is therefore necessary, as it reveals which regions produce paddy optimally and which less so. This study aims to cluster paddy production across the regions of East Java. Clustering was performed with the K-Means algorithm over five iterations, producing three clusters: high, medium, and low productivity. Six regions fall into the high-productivity cluster, 20 regions into the medium-productivity cluster, and 12 regions into the low-productivity cluster.
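    A minimal sketch of the clustering step with scikit-learn's KMeans, assuming one production figure per region; the numbers below are illustrative, not the study's data (which covered 38 East Java regions).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical paddy production figures (tonnes) for some regions;
# the real study clustered 38 East Java regions.
production = np.array([[812.0], [455.0], [1190.0], [240.0], [670.0],
                       [980.0], [150.0], [520.0], [1340.0], [390.0]])

# Three clusters (high, medium, low productivity), as in the study.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(production)

# Report cluster sizes ordered by center, low to high productivity.
for center, label in sorted(zip(kmeans.cluster_centers_.ravel(), range(3))):
    print(f"center {center:7.1f} t -> {np.sum(kmeans.labels_ == label)} regions")
```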

    Development of a new non-invasive vineyard yield estimation method based on image analysis

    Doctoral thesis in Agronomic Engineering (Doutoramento em Engenharia Agronómica), Instituto Superior de Agronomia, Universidade de Lisboa. Predicting vineyard yield with accuracy can provide several advantages to the whole vine and wine industry. Today this is mostly done using manual, and sometimes destructive, methods based on bunch samples. Yield estimation using computer vision and image analysis can potentially perform this task extensively, automatically, and non-invasively. In the present work this approach is explored in three main steps: image collection, occluded-fruit estimation, and conversion of image traits to mass. In the first step, grapevine images were collected in field conditions along some of the main grapevine phenological stages. Visible yield components were identified in the images and compared to ground truth. When analyzing inflorescences and bunches, more than 50% were occluded by leaves or other plant organs across the three cultivars studied. No significant differences were observed in bunch visibility after fruit set. Visible bunch projected area explained an average of 49% of vine yield variation between veraison and harvest. In the second step, vine images were collected, in field conditions, with different levels of defoliation intensity at the bunch zone. A regression model combining canopy porosity and visible bunch area, both obtained via image analysis, explained 70-84% of bunch exposure variation. This approach allowed an estimation of the occluded fraction of bunches with average absolute errors below 10%. No significant differences were found between the model's output at veraison and at harvest. In the last step, the conversion of bunch image traits into mass was explored in laboratory and field conditions. In both cases, cultivar differences related to bunch architecture were found to affect weight estimation. A combination of derived variables, including visible bunch area, estimated total bunch area, visible bunch perimeter, visible berry number, and bunch compactness, was used to estimate yield on undisturbed grapevines. The final model achieved R2 = 0.86 between actual and estimated yield (n = 213). If performed automatically, the final approach suggested in this work has the potential to provide a non-invasive method that can be applied accurately across whole vineyards.
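    The second step combines canopy porosity and visible bunch area into a regression for bunch exposure. Below is a minimal sketch under assumptions: a plain linear model and invented values, since the abstract does not give the model form or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical image-derived predictors per vine segment:
# canopy porosity (%) and visible bunch projected area (cm^2).
X = np.array([[4.0, 120.0], [9.0, 210.0], [15.0, 260.0],
              [22.0, 340.0], [30.0, 415.0], [41.0, 505.0]])
# Ground-truth bunch exposure (% of total bunch area visible).
y = np.array([12.0, 25.0, 33.0, 47.0, 61.0, 78.0])

reg = LinearRegression().fit(X, y)
print(f"R2 = {reg.score(X, y):.2f}")

# Occluded fraction for a new segment = 100% minus predicted exposure.
exposure = reg.predict(np.array([[18.0, 300.0]]))[0]
print(f"Estimated occluded fraction: {100 - exposure:.1f}%")
```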

    Efficient identification, localization and quantification of grapevine inflorescences and flowers in unprepared field images using Fully Convolutional Networks

    Yield and its prediction is one of the most important tasks in grapevine breeding and vineyard management. Commonly, this trait is estimated manually right before harvest by extrapolation, which is mostly labor-intensive, destructive, and inaccurate. In the present study, an automated image-based workflow was developed for quantifying inflorescences and single flowers in unprepared field images of grapevines, i.e. without artificial background or lighting. It is a novel approach for non-invasive, inexpensive, and objective phenotyping with high throughput. First, image regions depicting inflorescences were identified and localized by segmenting the images into the classes "inflorescence" and "non-inflorescence" using a Fully Convolutional Network (FCN). Efficient image segmentation is the most challenging step here, given the small size and dense distribution of single flowers (several hundred single flowers per inflorescence), the similar color of all plant organs in the fore- and background, and the circumstance that only approximately 5% of an image shows inflorescences. The trained FCN achieved a mean Intersection over Union (IoU) of 87.6% on the test data set. Finally, single flowers were extracted from the "inflorescence" areas using the Circular Hough Transform. The flower extraction achieved a recall of 80.3% and a precision of 70.7% using the segmentation derived by the trained FCN model. In summary, the presented approach is a promising strategy for predicting yield potential automatically in the earliest stage of grapevine development, applicable to objective monitoring and evaluation of breeding material, genetic repositories, or commercial vineyards.
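    A minimal sketch of the flower-extraction stage with OpenCV, assuming the FCN's binary "inflorescence" mask has already been produced and saved as inflorescence_mask.png (the file name and all Hough parameters are assumptions; the abstract does not publish them).

```python
import cv2

# Binary segmentation mask (0 = non-inflorescence, 255 = inflorescence),
# e.g. a thresholded FCN output. Loaded from disk for illustration.
mask = cv2.imread("inflorescence_mask.png", cv2.IMREAD_GRAYSCALE)

# Circular Hough Transform over the segmented regions to find
# single-flower candidates. Radii and thresholds are illustrative only.
circles = cv2.HoughCircles(
    mask, cv2.HOUGH_GRADIENT, dp=1, minDist=6,
    param1=50, param2=12, minRadius=2, maxRadius=8)

n_flowers = 0 if circles is None else circles.shape[1]
print(f"detected {n_flowers} single-flower candidates")
```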

    Evaluating the performance of a semi-automatic apple fruit detection in a high-density orchard system using low-cost digital RGB imaging sensor

    This study investigates the potential use of a close-range, low-cost terrestrial RGB imaging sensor for fruit detection in a high-density apple orchard of Fuji Suprema apple fruits (Malus domestica Borkh). The study area is a typical orchard located on a smallholder farm in Santa Catarina's southern plateau (Brazil); smallholder farms in that state are responsible for more than 50% of Brazil's apple fruit production. Traditional digital image processing approaches such as RGB color space conversion (e.g., rgb, HSV, CIE L*a*b*, Ohta [I1, I2, I3]) were applied to several terrestrial RGB images to highlight information present in the original dataset. Band combinations (e.g., rgb-r, HSV-h, Lab-a, I′2, I′3) were also generated as additional parameters (C1, C2, and C3) for fruit detection. Afterwards, optimal image binarization and segmentation parameters were chosen to detect the fruits efficiently, and the results were compared to both visual and in-situ fruit counting. Results show that some bands and combinations achieved hit rates above 75%, among which the following variables stood out as good predictors: rgb-r, Lab-a, I′2, I′3, and the combinations C2 and C3. The best band combination resulted from the Lab-a band, with commission, omission, and accuracy of 5%, 25%, and 75%, respectively. The fruit detection rate for Lab-a showed a coefficient of determination (R2) of 0.73, and the fruit recognition accuracy rate an R2 of 0.96. The proposed approach provides results with great applicability for smallholder farms and may support local harvest prediction.
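    As a sketch of the Lab-a route that performed best, the snippet below converts an RGB capture to CIE L*a*b*, binarizes the a* channel (the red-green axis, where red fruit stands out), and counts fruit-sized blobs. The file name, Otsu thresholding, and kernel size are assumptions, not the paper's exact parameters.

```python
import cv2
import numpy as np

bgr = cv2.imread("orchard_row.jpg")           # terrestrial RGB capture
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)    # CIE L*a*b* conversion
a_channel = lab[:, :, 1]                      # the Lab-a band

# One plausible "optimal binarization": Otsu's threshold on a*.
_, binary = cv2.threshold(a_channel, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Clean speckle, then count connected components as fruit candidates.
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                          np.ones((5, 5), np.uint8))
n_labels, _ = cv2.connectedComponents(binary)
print(f"candidate fruits: {n_labels - 1}")    # label 0 is background
```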

    Olive-Fruit Variety Classification by Means of Image Processing and Convolutional Neural Networks

    The automation of classification and grading of horticultural products according to different features is a major challenge in the food industry. Thus, focusing on the olive sector, which boasts a huge range of cultivars, a methodology for olive-fruit variety classification is proposed, approached as an image classification problem. To that purpose, 2,800 fruits belonging to seven different olive varieties were photographed. After processing these initial captures by means of image processing techniques, the resulting set of images of individual fruits was used to train, and subsequently to externally validate, implementations of six different Convolutional Neural Network architectures, in order to compute the classifiers that perform the variety categorization of the fruits. Remarkable hit rates were obtained when testing the classifiers on the corresponding external validation sets, with a top accuracy of 95.91% achieved using the Inception-ResnetV2 architecture. The results suggest that the proposed methodology, once integrated into industrial conveyor belts, promises to be an advanced solution for postharvest olive-fruit processing and classification.
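    A minimal transfer-learning sketch in Keras with the Inception-ResNetV2 architecture that gave the top accuracy, assuming 299x299 fruit images sorted into seven class subfolders under train_dir (the directory name, freezing scheme, and training settings are all assumptions).

```python
import tensorflow as tf

# Pretrained backbone, frozen; a 7-way softmax head for the varieties.
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # to [-1, 1]
    base,
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_dir", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```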

    Identificación y conteo de aceitunas en imágenes digitales tomadas en el olivar mediante morfología matemática y redes neuronales convolucionales

    Early and accurate yield estimation is a highly valued objective in modern agriculture. In the case of oliviculture, it is especially relevant due to the high economic value of its production. This paper presents a methodology aimed at achieving that end. Concretely, it comprises an artificial vision algorithm able to detect the olives visible in a digital image of an olive tree, captured directly in the field, at night and under artificial illumination. First, the image is preprocessed by means of mathematical morphology techniques and statistical filtering to generate, from this output, a set of subimages with a high probability of containing an olive. This preprocessing reduces the potential search space by three orders of magnitude (10³). Next, these subimages are classified by a convolutional neural network as 'olive' or 'discard'. From a total of 304,483 subimages, extracted from 21 images, the network correctly classified 98.23% of cases, and gave a coefficient of determination R2 of 0.9875 when comparing the number of detected olives with the manually obtained count. This accuracy indicates that the developed algorithm constitutes a solid step towards the implementation of a future system for early yield estimation of olive orchards.
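    A sketch of the candidate-generation stage using OpenCV mathematical morphology: a top-hat transform highlights small bright spots (olives under artificial light against a dark night background), and fixed-size windows are cropped around them for the CNN. The kernel size, threshold, window size, and file name are illustrative assumptions.

```python
import cv2

gray = cv2.imread("olive_tree_night.jpg", cv2.IMREAD_GRAYSCALE)

# White top-hat: bright structures smaller than the structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
_, binary = cv2.threshold(tophat, 40, 255, cv2.THRESH_BINARY)

# Crop a 32x32 subimage around each blob; each would then be classified
# by the CNN as 'olive' or 'discard'.
n, _, _, centroids = cv2.connectedComponentsWithStats(binary)
patches = []
for cx, cy in centroids[1:]:                 # skip background component
    x, y = int(cx) - 16, int(cy) - 16
    patch = gray[max(y, 0):y + 32, max(x, 0):x + 32]
    if patch.shape == (32, 32):
        patches.append(patch)

print(f"{len(patches)} candidate subimages for the CNN")
```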

    Enhanced faster region-based convolutional neural network for oil palm tree detection

    Oil palm trees are important economic crops in Malaysia. One of the audit procedures is counting the number of oil palm trees for plantation management, which helps the manager predict the plantation yield and the amount of fertilizer and labor needed. However, the current counting method for oil palm plantations is manual counting using GIS software, which is tedious and inefficient for large-scale plantations. To overcome this problem, researchers have proposed automatic counting methods based on machine learning and image processing. However, traditional machine learning and image processing methods rely on handcrafted feature extraction, which can only extract low- to mid-level features from the image and lacks generalization ability: it is applicable to a single application and requires reprogramming for others. The widely used feature extraction methods are local binary patterns (LBP), the scale-invariant feature transform (SIFT), and the histogram of oriented gradients (HOG), which usually achieve low accuracy because of their limited feature representation ability and lack of generalization capability. Hence, this research aims to close these gaps by exploring a deep learning-based object detection algorithm and a classical convolutional neural network (CNN) to build an automatic deep learning-based oil palm tree detection and counting framework. This study proposes a new deep learning method based on Faster R-CNN for oil palm tree detection and counting. To reduce overfitting during training, the training dataset is augmented by image processing: randomly flipping the images and increasing their contrast and brightness. A transfer learning model of ResNet50 was used as the CNN backbone, and the Faster R-CNN network was retrained to obtain the weights for automatic oil palm tree counting. To improve the performance of Faster R-CNN, feature concatenation was used to integrate high-level and low-level features from ResNet50. The proposed model was validated on a testing dataset covering three palm tree regions with mature, young, and mixed mature and young palm trees. The detection results were compared with two machine learning methods (ANN and SVM), an image processing-based TM method, and the original Faster R-CNN model. The proposed enhanced Faster R-CNN model shows promising results for oil palm tree detection and counting: it achieved an overall accuracy of 97% on the testing dataset, 97.2% in the mixed palm tree region, and 96.9% in the mature and young palm tree regions, while the traditional ANN, SVM, and TM methods stayed below 90%. This comparison reveals that the proposed EFRCNN model outperforms Faster R-CNN and the traditional ANN, SVM, and TM methods, and has the potential to be applied to counting over large areas of oil palm plantation.
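    A minimal sketch of the detection backbone using torchvision's stock Faster R-CNN (ResNet-50 with FPN), reheaded for two classes (background, palm tree). The paper's feature-concatenation enhancement and training data are not reproduced; in practice the new head would first be trained on labeled aerial tiles.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Stock Faster R-CNN with a ResNet-50 backbone, pretrained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT")

# Replace the box predictor head: background + palm tree = 2 classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Inference on one tile; the tree count is the number of confident boxes.
model.eval()
image = torch.rand(3, 800, 800)  # placeholder for a real aerial tile
with torch.no_grad():
    output = model([image])[0]
n_trees = int((output["scores"] > 0.5).sum())
print(f"detected {n_trees} palm trees in the tile")
```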