
    Applications of Image Processing in Viticulture: A Review

    The production of high-quality grapes for wine making is challenging. Significant progress has been made in the automated prediction of harvest yields from images, but the analysis of images to predict the quality of the harvest has yet to be fully addressed. The quality of the wine produced depends in part on the quality of the grapes harvested, and therefore on the presence of disease in the vineyard. There is potential for automated early detection of disease in grape crops through the development of accurate image processing techniques. This paper presents a review of current research and highlights the key challenges for geo-computation (image processing, computer vision and data mining techniques) to inform the management of vineyards, as well as the key challenges for in-field image capture and analysis. An exploration of potential applications for the knowledge generated by imaging techniques is then presented. This discussion is driven by current interest in the effect of rapid and dramatic climate change on wine production and focuses on how this information might be used to inform the design and validation of accurate predictive models.

    Assessing Berry Number for Grapevine Yield Estimation by Image Analysis: Case Study with the Red Variety “Syrah”

    Mestrado em Engenharia de Viticultura e Enologia (Double Degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto. Yield estimation provides information that helps growers make decisions to optimize crop growth and to organize harvest operations in the field and in the cellar. In most vineyard estates yield is forecast using manual methods. However, image analysis methods, which are less invasive, lower cost and more representative, are now being developed. The main objective of this work was to estimate yield from data obtained within the Vinbot project during the 2019 season. In this thesis, images of the grapevine variety Syrah taken in the laboratory and in the vineyards of the Instituto Superior de Agronomia in Lisbon were analyzed. In the laboratory the images were taken manually with an RGB camera, while in the field vines were imaged either manually or by the Vinbot robot. From these images, the number of visible berries was counted with MATLAB. From the laboratory data, the relationships between the number of visible berries and actual bunch weight and berry number were studied. From the data obtained in the field, the visibility of the berries at different levels of defoliation and the relationship between the visible bunch area and the visible berries were analyzed. Berry-by-berry occlusion was 6.4% at pea-size, 14.5% at veraison and 25% at maturation. In addition, high and significant coefficients of determination were obtained between actual yield and visible berries. Comparison of the yield estimated with the regression models against actual yield showed an underestimation at all three phenological stages. The low accuracy of the developed models shows that the use of algorithms based on the number of visible berries in images to estimate yield still needs further research.
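
    As a rough illustration of the regression step described in this abstract, the following Python sketch fits a linear model between visible berry counts and bunch weight and applies the stage-specific occlusion fractions reported above (6.4%, 14.5%, 25%) to correct field counts. The function names, toy data and the occlusion-correction formula are illustrative assumptions, not the thesis's actual pipeline.

```python
# Minimal sketch: yield estimation from visible berry counts, assuming a simple
# linear regression between counted berries and bunch weight. The occlusion
# fractions come from the abstract; everything else is illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

OCCLUSION = {"pea_size": 0.064, "veraison": 0.145, "maturation": 0.25}

def fit_berry_weight_model(visible_berries, bunch_weights):
    """Fit a linear model: bunch weight (g) as a function of visible berry count."""
    X = np.asarray(visible_berries, dtype=float).reshape(-1, 1)
    y = np.asarray(bunch_weights, dtype=float)
    return LinearRegression().fit(X, y)

def estimate_yield(model, visible_berries_field, stage):
    """Correct field counts for occlusion, then predict and sum bunch weights."""
    counts = np.asarray(visible_berries_field, dtype=float)
    corrected = counts / (1.0 - OCCLUSION[stage])   # recover hidden berries (assumed formula)
    return float(model.predict(corrected.reshape(-1, 1)).sum())

# Hypothetical usage with toy numbers (not data from the thesis):
model = fit_berry_weight_model([40, 55, 70, 90], [120, 160, 210, 260])
print(estimate_yield(model, [60, 75, 80], stage="veraison"))
```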

    Automated early plant disease detection and grading system: Development and implementation

    As the agriculture industry grows, many attempts have been made to ensure high quality of produce. Diseases and defects found in plants and crops affect the agriculture industry greatly. Hence, many techniques and technologies have been developed to help solve or reduce the impact of plant diseases. Imaging analysis tools and gas sensors are becoming more frequently integrated into smart systems for plant disease detection. Many disease detection systems incorporate imaging analysis tools and Volatile Organic Compound (VOC) profiling techniques to detect early symptoms of diseases and defects of plants, fruits and vegetative produce. These disease detection techniques can be divided into two main groups: preharvest and postharvest disease detection techniques. This thesis introduces the available disease detection techniques and compares them with the latest innovative smart systems that feature visible imaging, hyperspectral imaging, and VOC profiling. In addition, this thesis uses image analysis tools and k-means segmentation to implement a preharvest Offline and Online disease detection system. The Offline system is to be used by pathologists and agriculturists to measure plant leaf disease severity levels. K-means segmentation and triangle thresholding are used together to achieve good background segmentation of leaf images. Moreover, a Mamdani-type fuzzy logic classification technique is used to accurately categorize leaf disease severity level. Leaf images taken from a real field at varying resolutions were tested with the implemented system to observe the effect on disease grade classification. Background segmentation using k-means clustering and triangle thresholding proved effective, even under non-uniform lighting conditions. Integration of a fuzzy logic system for leaf disease severity level classification yielded classification accuracies of 98%. Furthermore, a robot was designed and implemented as a robotized Online system to provide field-based analysis of plant health using visible and near-infrared spectroscopy. Fusion of visible and near-infrared images is used to calculate the Normalized Difference Vegetation Index (NDVI) to measure and monitor plant health. The robot is designed to move along a specified path within an agricultural field and provide leaf health information as well as position data. The system was tested in a tomato greenhouse under real field conditions. The developed system proved effective in classifying plant health into one of three classes (underdeveloped, unhealthy, and healthy) with an accuracy of 83%. A map of plant health and locations is produced for farmers and agriculturists to monitor plant health across different areas. This system can provide early, vital health analysis of plants for immediate action and possible selective pesticide spraying.
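
    The NDVI step of the Online system can be sketched in a few lines; the formula (NIR - Red) / (NIR + Red) is standard, but the health-class thresholds and the synthetic arrays below are hypothetical placeholders rather than values used in the thesis.

```python
# Minimal sketch of the NDVI computation, assuming co-registered red and
# near-infrared channels supplied as float arrays.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def classify_health(mean_ndvi):
    """Map a mean NDVI value to one of three classes (thresholds are illustrative)."""
    if mean_ndvi < 0.2:
        return "underdeveloped"
    if mean_ndvi < 0.5:
        return "unhealthy"
    return "healthy"

# Hypothetical usage on a synthetic 2x2 leaf patch:
nir = np.array([[0.80, 0.70], [0.75, 0.82]])
red = np.array([[0.20, 0.25], [0.30, 0.22]])
print(classify_health(float(ndvi(nir, red).mean())))
```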

    A Review of the Challenges of Using Deep Learning Algorithms to Support Decision-Making in Agricultural Activities

    Deep Learning has been successfully applied to image recognition, speech recognition, and natural language processing in recent years. Therefore, there has been an incentive to apply it in other fields as well. Agriculture is one of the most important fields in which the application of deep learning still needs to be explored, as it has a direct impact on human well-being. In particular, there is a need to explore how deep learning models can be used as a tool for optimal planting, land use, yield improvement, production/disease/pest control, and other activities. The vast amount of data received from sensors in smart farms makes it possible to use deep learning as a model for decision-making in this field. In agriculture, no two environments are exactly alike, which makes testing, validating, and successfully implementing such technologies much more complex than in most other industries. This paper reviews some recent scientific developments in deep learning that have been applied to agriculture, and highlights some challenges and potential solutions to using deep learning algorithms in agriculture. The results in this paper indicate that by employing new deep learning methods, higher accuracy and lower inference time can be achieved, and the models can be made useful in real-world applications. Finally, some opportunities for future research in this area are suggested. This work is supported by the R&D Project BioDAgro—Sistema operacional inteligente de informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by Fundação La Caixa and Fundação para a Ciência e a Tecnologia, taking place at the C-MAST Centre for Mechanical and Aerospace Sciences and Technology, Department of Electromechanical Engineering of the University of Beira Interior, Covilhã, Portugal.
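
    As a minimal illustration of the kind of model the review discusses, the sketch below defines a tiny convolutional classifier for crop image patches in PyTorch; the architecture, class count and input size are assumptions made for illustration and are not taken from the paper.

```python
# A minimal sketch, assuming a small CNN classifier for crop image patches
# (e.g. healthy vs. diseased leaves). Everything here is illustrative.
import torch
import torch.nn as nn

class SmallCropCNN(nn.Module):
    """Tiny convolutional classifier kept small for low inference time."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

# Hypothetical usage: a batch of four 64x64 RGB patches.
model = SmallCropCNN(num_classes=2)
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```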

    3D crop monitoring and weed mapping with unmanned aerial vehicles for the sustainable use of plant protection products

    In this doctoral thesis, images acquired from a UAV were used to address the sustainability of plant protection product application by generating maps that enable site-specific treatment. Two different and complementary approaches were developed to achieve this objective: 1) reducing early post-emergence herbicide application by designing treatments targeted at weed-infested zones in several herbaceous crops; and 2) three-dimensional characterization (architecture and volume) of woody crops for the design of site-specific plant protection treatments directed at their canopy. To address site-specific herbicide control, the configuration and technical specifications of a UAV and its on-board sensors were studied for early weed detection, contributing to the generation of maps for site-specific control in three herbaceous crops: maize, wheat and sunflower. Next, the most accurate spectral indices for discriminating bare soil and vegetation (crop and weeds) in UAV images taken over these crops at an early stage were evaluated. To automate this discrimination, a threshold calculation method was implemented in an OBIA environment. Finally, an automatic and robust OBIA methodology was developed to discriminate crop, bare soil and weeds in the three crops studied, and the influence of different parameters related to UAV image acquisition (overlap, sensor type, flight altitude and flight scheduling, among others) on its performance was evaluated. In addition, to facilitate the design of plant protection treatments adjusted to the needs of woody crops, an automatic and robust OBIA methodology was developed for the three-dimensional characterization (architecture and volume) of woody crops using images and digital surface models generated from UAV imagery. The influence of different image acquisition parameters (overlap, sensor type, flight altitude) on the performance of the designed OBIA algorithm was also evaluated.
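
    A minimal sketch of the vegetation/bare-soil discrimination step described above follows, using the Excess Green index computed from an RGB UAV image and Otsu thresholding as a stand-in for the automatic threshold method implemented in the OBIA workflow; the index choice, threshold method and synthetic image are assumptions.

```python
# Minimal sketch: vegetation vs. bare-soil mask from an RGB frame using the
# Excess Green index and an automatic (Otsu) threshold. Illustrative only.
import numpy as np
from skimage.filters import threshold_otsu

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized channels."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b

def vegetation_mask(rgb):
    """Binary mask: True where the pixel is classified as vegetation."""
    exg = excess_green(rgb)
    return exg > threshold_otsu(exg)

# Hypothetical usage on a random 8-bit image standing in for a UAV frame:
frame = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
print(vegetation_mask(frame).mean())  # fraction of pixels flagged as vegetation
```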

    Detection and segmentation of vine canopy in ultra-high spatial resolution RGB imagery obtained from unmanned aerial vehicle (UAV): a case study in a commercial vineyard

    The use of Unmanned Aerial Vehicles (UAVs) in viticulture permits the capture of aerial Red-Green-Blue (RGB) images with ultra-high spatial resolution. Recent studies have demonstrated that RGB images can be used to monitor the spatial variability of vine biophysical parameters. However, estimating these parameters requires accurate and automated segmentation methods to extract relevant information from RGB images. Manual segmentation of aerial images is a laborious and time-consuming process. Traditional classification methods have shown satisfactory results in the segmentation of RGB images for diverse applications and surfaces. However, in the case of commercial vineyards, it is necessary to consider some particularities inherent to canopy size in vertical trellis systems (VSP), such as the shadow effect and the different soil conditions in the inter-rows (mixed information from soil and weeds). Therefore, the objective of this study was to compare the performance of four classification methods (K-means, Artificial Neural Networks (ANN), Random Forest (RForest) and Spectral Indices (SI)) for detecting the canopy in a vineyard trained on VSP. Six flights were carried out from post-flowering to harvest in a commercial vineyard cv. Carménère using a low-cost UAV equipped with a conventional RGB camera. The results show that the ANN and the simple SI method, complemented with Otsu thresholding, presented the best performance for detecting the vine canopy, with high overall accuracy values for all study days. Spectral indices presented the best performance in detecting the Plant class (vine canopy), with an overall accuracy of around 0.99. However, considering the pixel-by-pixel performance, the spectral indices are not able to discriminate between the Soil and Shadow classes. The best performance in classifying the three classes (Plant, Soil, and Shadow) in vineyard RGB images was obtained when the SI values were used as input data to the trained methods (ANN and RForest), reaching overall accuracy values of around 0.98 with high sensitivity values for all three classes.
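
    The best-performing configuration reported here (spectral index values fed into a trained classifier) can be sketched as follows; the particular indices (Excess Green, Green Leaf Index), the Random Forest settings and the synthetic labels are illustrative assumptions, not the study's actual features or data.

```python
# Minimal sketch: per-pixel RGB spectral indices as features for a trained
# Plant/Soil/Shadow classifier (Random Forest here). Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rgb_indices(rgb):
    """Per-pixel feature vector of simple RGB spectral indices."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b                              # Excess Green
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-6)   # Green Leaf Index
    return np.stack([exg, gli], axis=-1).reshape(-1, 2)

# Hypothetical training data: pixels labeled 0=Plant, 1=Soil, 2=Shadow.
train_img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
labels = np.random.randint(0, 3, size=64 * 64)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(rgb_indices(train_img), labels)

test_img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
pred = clf.predict(rgb_indices(test_img)).reshape(64, 64)
print(np.bincount(pred.ravel(), minlength=3))  # pixel counts per class
```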

    Grapevine yield estimation using image analysis for the variety Arinto

    Mestrado em Engenharia de Viticultura e Enologia (Double Degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto. Yield estimation can cause difficulties in the vineyard and winery if it is done inaccurately, by following wrong procedures, by non-representative sampling or through human error. Moreover, the traditional yield estimation methods are time consuming and destructive, because they require someone to go into the vineyard to count the yield components and to remove inflorescences or bunches in order to count and weigh the flowers and berries. To avoid these problems and the errors that can occur in this way, different research groups are studying the development and application of new and innovative techniques to estimate yield through the analysis of RGB images taken under field conditions. In our research work we studied yield estimation by counting the yield components in images throughout the growing season. Furthermore, we studied two different algorithms that, starting from a survey of canopy porosity and/or visible bunch area, can help to estimate yield. The most promising yield estimation based on counting the yield components through image analysis was found at the phenological stage of four leaves out, which showed a mean absolute percentage error (MA%E) of 32 ± 2% and a correlation coefficient (r Obs,Est) between observed and estimated shoots of 0.62. The two algorithms used different models: one to estimate the bunch area covered by leaves and one to estimate the bunch weight per linear canopy meter. When the bunch area without leaf occlusion was estimated, an average percentage of occlusion generated by the bunches on other bunches of 8%, 6% and 12%, respectively at pea size, veraison and maturation, was used to estimate the total bunch area. When the total bunch area per linear canopy meter was estimated, the two models to estimate grape weight were used. Finally, to estimate the weight at harvest, growth factors of 6.6 and 1.7 were used at pea size and veraison, respectively. The first algorithm showed a MA%E, between the estimated and observed yield values, of -33.59%, -9.24% and -11.25%, while the second algorithm showed a MA%E of -6.81%, -1.35% and 0.01%, respectively at pea size, veraison and maturation.
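
    The arithmetic of the second algorithm, as described above, can be sketched as follows; the occlusion percentages and growth factors come from the abstract, while the area-to-weight coefficient and the maturation growth factor of 1.0 are hypothetical placeholders.

```python
# Minimal sketch of the area-based yield estimation described above: correct the
# visible bunch area for bunch-on-bunch occlusion, convert area to weight, then
# scale to harvest with the stage growth factor. Illustrative only.
OCCLUSION = {"pea_size": 0.08, "veraison": 0.06, "maturation": 0.12}   # from the abstract
GROWTH_FACTOR = {"pea_size": 6.6, "veraison": 1.7, "maturation": 1.0}  # 1.0 at maturation assumed

def estimate_harvest_weight(visible_bunch_area_cm2, stage, g_per_cm2=1.2):
    """Estimated harvest weight (g) per linear canopy meter; g_per_cm2 is a placeholder."""
    total_area = visible_bunch_area_cm2 / (1.0 - OCCLUSION[stage])  # undo bunch occlusion (assumed formula)
    weight_now = total_area * g_per_cm2                             # area-to-weight model
    return weight_now * GROWTH_FACTOR[stage]                        # project to harvest

# Hypothetical usage: 500 cm^2 of visible bunch area per meter at veraison.
print(round(estimate_harvest_weight(500, "veraison"), 1))
```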