
    A survey of image processing techniques for agriculture

    Computer technologies have been shown to improve agricultural productivity in a number of ways. One technique which is emerging as a useful tool is image processing. This paper presents a short survey on using image processing techniques to assist researchers and farmers to improve agricultural practices. Image processing has been used to assist with precision agriculture practices, weed and herbicide technologies, monitoring plant growth, and plant nutrition management. This paper highlights the future potential of image processing for different agricultural industry contexts.

    An embedded real-time red peach detection system based on an OV7670 camera, ARM Cortex-M4 processor and 3D Look-Up Tables

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
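The 3D LUT classification described above can be sketched as follows. The table resolution and the toy "red dominant" colour predicate below are illustrative assumptions, not the authors' calibrated colour model; the point is that per-pixel classification reduces to a single O(1) table lookup, which is what makes it feasible on a Cortex-M4.

```python
# Illustrative 3D colour look-up table (LUT) for pixel classification.
# A 64x64x64 table (6 bits per channel; the paper's exact resolution is
# an assumption here) maps a quantised RGB value to a fruit/non-fruit flag.

BINS = 64                      # table resolution per channel (assumption)
STEP = 256 // BINS             # quantisation step per channel

def build_lut(is_fruit):
    """Precompute the LUT from any per-colour predicate."""
    lut = {}
    for r in range(BINS):
        for g in range(BINS):
            for b in range(BINS):
                lut[(r, g, b)] = is_fruit(r * STEP, g * STEP, b * STEP)
    return lut

def red_dominant(r, g, b):
    # Toy linear colour model: strongly red pixels count as fruit (assumption).
    return r > 120 and r > g + 40 and r > b + 40

LUT = build_lut(red_dominant)

def classify_pixel(lut, r, g, b):
    # One table lookup per pixel: cheap enough for an embedded processor.
    return lut[(r // STEP, g // STEP, b // STEP)]
```

For example, `classify_pixel(LUT, 200, 60, 50)` flags a reddish pixel as fruit, while a green pixel is rejected.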

    Computer Vision and Machine Learning Based Grape Fruit Cluster Detection and Yield Estimation Robot

    Estimation and detection of fruit play a crucial role in harvesting. Traditionally, fruit growers rely on manual methods, but nowadays they face rapidly increasing labour costs and labour shortages. Earlier techniques were developed using hyperspectral cameras, 3D imaging and colour-based segmentation, with which it was difficult to find and distinguish grape bunches. In this research, a novel computer vision based approach is implemented using the Open Source Computer Vision Library (OpenCV) and the Random Forest machine learning algorithm for counting, detecting and segmenting blue grape bunches. Fruit object segmentation is based on binary thresholding and the Otsu method. For training and testing, pixel-intensity-based classification samples were taken from a single image containing grape and non-grape regions. In validation, the random forest algorithm achieved an accuracy score of 97.5% and an F1-score of 90.7%, outperforming a Support Vector Machine (SVM). The presented pipeline for grape bunch detection, with noise removal, training, segmentation and classification, exhibits improved accuracy.
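Otsu's method, used above for the binary segmentation step, picks the grey-level threshold that maximises the between-class variance of the image histogram. The study itself uses OpenCV's implementation; the following is a minimal pure-Python sketch of the same idea:

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level maximising between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(levels):
        w_bg += hist[t]                     # background weight (pixels <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg                 # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal intensity distribution (e.g. dark fruit against a bright background), the returned threshold separates the two modes.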

    Deep neural networks for grape bunch segmentation in natural images from a consumer-grade camera

    Precision agriculture relies on the availability of accurate knowledge of crop phenotypic traits at the sub-field level. While visual inspection by human experts has been traditionally adopted for phenotyping estimations, sensors mounted on field vehicles are becoming valuable tools to increase accuracy on a narrower scale and reduce execution time and labor costs as well. In this respect, automated processing of sensor data for accurate and reliable fruit detection and characterization is a major research challenge, especially when data consist of low-quality natural images. This paper investigates the use of deep learning frameworks for automated segmentation of grape bunches in color images from a consumer-grade RGB-D camera placed on board an agricultural vehicle. A comparative study, based on the estimation of two image segmentation metrics, i.e. the segmentation accuracy and the well-known Intersection over Union (IoU), is presented to estimate the performance of four pre-trained network architectures, namely AlexNet, GoogLeNet, VGG16, and VGG19. Furthermore, a novel strategy aimed at improving the segmentation of bunch pixels is proposed. It is based on an optimal threshold selection of the bunch probability maps, as an alternative to the conventional minimization of cross-entropy loss of mutually exclusive classes. Results obtained in field tests show that the proposed strategy improves the mean segmentation accuracy of the four deep neural networks by between 2.10% and 8.04%. Besides, the comparative study of the four networks demonstrates that the best performance is achieved by VGG19, which reaches a mean segmentation accuracy on the bunch class of 80.58%, with IoU values for the bunch class of 45.64%.
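The threshold-selection strategy described above can be sketched as a grid search over cut-off values applied to the bunch probability map, keeping the cut-off that maximises a segmentation metric on validation data (IoU here; the function names and candidate grid are illustrative, not the paper's exact procedure):

```python
def iou(pred, truth):
    """Intersection over Union between two binary masks (flat lists)."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

def best_threshold(prob_map, truth, candidates=None):
    """Pick the probability cut-off maximising IoU on a validation set."""
    if candidates is None:
        candidates = [i / 100 for i in range(5, 100, 5)]
    scored = [(iou([p >= t for p in prob_map], truth), t) for t in candidates]
    return max(scored)[1]          # highest IoU; ties broken by larger cut-off
```

With a toy probability map `[0.9, 0.8, 0.4, 0.2]` and ground truth `[1, 1, 1, 0]`, any cut-off in (0.2, 0.4] achieves IoU = 1.0, so the search returns 0.4 rather than the conventional 0.5.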

    Segmentation of field grape bunches via an improved pyramid scene parsing network

    With the continuous expansion of wine grape planting areas, the mechanization and intelligence of grape harvesting have gradually become the future development trend. In order to guide the picking robot to pick grapes more efficiently in the vineyard, this study proposed a grape bunch segmentation method based on the Pyramid Scene Parsing Network (PSPNet) deep semantic segmentation network for different varieties of grapes in natural field environments. To this end, the Convolutional Block Attention Module (CBAM) attention mechanism and atrous convolution were first embedded in the backbone feature extraction network of the PSPNet model to improve its feature extraction capability. Meanwhile, the proposed model also improved the PSPNet semantic segmentation model by fusing multiple feature layers (with more contextual information) extracted by the backbone network. The improved PSPNet was compared against the original PSPNet on a newly collected grape image dataset, and it was shown that the improved PSPNet model had an Intersection-over-Union (IoU) and Pixel Accuracy (PA) of 87.42% and 95.73%, respectively, implying an improvement of 4.36% and 9.95% over the original PSPNet model. The improved PSPNet was also compared against the state-of-the-art DeepLab-V3+ and U-Net in terms of IoU, PA, computational efficiency and robustness, and showed promising performance. It is concluded that the improved PSPNet can quickly and accurately segment grape bunches of different varieties in natural field environments, which provides a certain technical basis for intelligent harvesting by grape picking robots.
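The atrous (dilated) convolution embedded in the backbone samples its input with gaps, enlarging the receptive field without adding parameters. A minimal 1-D illustration (kernel values and dilation rate are arbitrary; real PSPNet layers are 2-D and learned):

```python
def dilated_conv1d(signal, kernel, rate):
    """1-D convolution with input taps spaced `rate` apart (no zero padding).

    With rate=1 this is an ordinary convolution; larger rates widen the
    receptive field of each output from len(kernel) to (len(kernel)-1)*rate+1.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1          # receptive field of one output
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * rate] for j in range(k)))
    return out
```

A 3-tap kernel at rate 2 covers 5 input samples per output, which is how the network gathers wider context at the same cost.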

    Development of a new non-invasive vineyard yield estimation method based on image analysis

    Doctoral thesis (Doutoramento em Engenharia Agronómica), Instituto Superior de Agronomia, Universidade de Lisboa. Predicting vineyard yield with accuracy can provide several advantages to the whole vine and wine industry. Today this is mostly done using manual, and sometimes destructive, methods based on bunch samples. Yield estimation using computer vision and image analysis can potentially perform this task extensively, automatically, and non-invasively. In the present work this approach is explored in three main steps: image collection, occluded-fruit estimation, and conversion of image traits to mass. In the first step, grapevine images were collected in field conditions at some of the main grapevine phenological stages. Visible yield components were identified in the images and compared to ground truth. When analysing inflorescences and bunches, more than 50% were occluded by leaves or other plant organs across three cultivars. No significant differences were observed in bunch visibility after fruit set. Visible bunch projected area explained an average of 49% of vine yield variation between veraison and harvest. In the second step, vine images were collected, in field conditions, with different levels of defoliation intensity in the bunch zone. A regression model combining canopy porosity and visible bunch area, both obtained via image analysis, explained 70-84% of bunch exposure variation. This approach allowed the occluded fraction of bunches to be estimated with average absolute errors below 10%. No significant differences were found between the model's output at veraison and at harvest. In the last step, the conversion of bunch image traits into mass was explored in laboratory and field conditions. In both cases, cultivar differences related to bunch architecture were found to affect weight estimation.
    A combination of derived variables, including visible bunch area, estimated total bunch area, visible bunch perimeter, visible berry number and bunch compactness, was used to estimate yield on undisturbed grapevines. The final model achieved R² = 0.86 between actual and estimated yield (n = 213). If performed automatically, the final approach suggested in this work has the potential to provide a non-invasive yield estimation method that can be applied accurately across whole vineyards.
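The regression modelling above (image traits regressed against yield, reported as R²) boils down to ordinary least squares. A minimal single-predictor sketch with hypothetical data; the thesis's actual model combines several derived variables, which this does not reproduce:

```python
def ols_fit(x, y):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def r_squared(x, y, a, b):
    """Fraction of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

Here x would be a per-vine image trait (e.g. visible bunch area in cm²) and y the measured yield; R² close to 1 means the trait explains most of the yield variation, as with the thesis's reported R² = 0.86.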

    High-throughput phenotyping of yield parameters for modern grapevine breeding

    Grapevine is grown on about 1% of the German agricultural area, yet this comparatively small area receives one third of all fungicides used in German agriculture, a consequence of pathogens introduced in the 19th century. To make viticulture more sustainable, a reduction in fungicide use is necessary. This objective can be achieved by breeding and growing fungus-resistant grapevine cultivars. The development of new cultivars, however, is very time-consuming, taking 20 to 25 years. In recent years the efficiency of selection in grapevine breeding has been increased considerably by marker-assisted selection (MAS), and further improvements will come with the development of faster and more cost-efficient high-throughput (HT) genotyping methods. Compared to these genotyping techniques, the quality, objectivity and precision of current phenotyping methods are limited, and HT phenotyping methods need to be developed to further increase the efficiency of grapevine breeding through sensor-assisted selection. Many different sensor technologies are available, ranging from visible-light (Red Green Blue, RGB) cameras through multispectral, hyperspectral, thermal and fluorescence cameras to three-dimensional (3D) camera and laser-scanning approaches. Phenotyping can either be done under controlled environments (growth chamber, greenhouse) or take place in the field, with a decreasing level of standardisation. Except for young seedlings, grapevine as a perennial plant ultimately needs to be screened in the field.
    From a methodological point of view, a variety of challenges need to be considered, such as variable light conditions, the similarity of fore- and background, and traits hidden in the canopy. The assessment of phenotypic data in grapevine breeding is traditionally done directly in the field by visual estimation. In general, the BBCH scale is used to acquire and classify the stages of annual plant development, or OIV descriptors are applied to assign phenotypes to classes. Phenotyping is strongly limited by time, costs and the subjectivity of records. Therefore, only a comparably small set of genotypes is evaluated for certain traits within the breeding process. Given that limitation, automation, precision and objectivity of phenotypic data evaluation are crucial in order to (1) reduce the existing phenotyping bottleneck, (2) increase the efficiency of grapevine breeding, (3) assist further genetic studies and (4) ensure improved vineyard management. In this thesis, emphasis was put on the following aspects. Balanced and stable yields are important to ensure high-quality wine production and therefore play a key role in grapevine breeding. The main focus of this study is thus on phenotyping different yield parameters such as berry size, number of berries per cluster, and number of clusters per vine. Additionally, related traits like cluster architecture and vine balance (the relation between vegetative and generative growth) were considered. Quantifying yield parameters at the single-vine level is challenging: complex shapes and slight variations between genotypes make it difficult and very time-consuming. As a first step towards HT phenotyping of yield parameters, two fully automatic image interpretation tools were developed for application under controlled laboratory conditions to assess individual yield parameters.
    Using the Cluster Analysis Tool (CAT), four important phenotypic traits can be detected in one image: cluster length, cluster width, berry size and cluster compactness. The Berry Analysis Tool (BAT) provides information on the number, size (length and width) and volume of grapevine berries. Both tools offer a fast, user-friendly and cheap procedure to deliver several precise phenotypic features of berries and clusters at once, with dimensional units, in a shorter time than manual measurements. The similarity of fore- and background in an image captured under field conditions is especially problematic for image analysis at an early grapevine developmental stage, due to the missing canopy. To determine the dormant pruning-wood weight, which partly determines vine balance, a fast and non-invasive tool for objective data acquisition in the field was developed. In an innovative approach, it combines depth-map calculation and image segmentation to subtract the background behind the vine, obtaining the pruning-wood area visible in the image. For the implementation of HT field phenotyping in grapevine breeding, a phenotyping pipeline has been set up. It ranges from automated image acquisition directly in the field using the PHENObot, through data management and data analysis, to the interpretation of the obtained phenotypic data for grapevine breeding aims. The PHENObot consists of an automatically guided tracked vehicle, a calibrated multi-camera system, a Real-Time Kinematic GPS system and a computer for image data handling. Purpose-built software was applied to acquire geo-referenced images directly in the vineyard; the geo-reference is afterwards used for post-processing data management in a database. As phenotypic traits to be analysed within the phenotyping pipeline, the detection of berries and the determination of berry size and colour were considered.
    The high-throughput phenotyping pipeline was tested in the grapevine repository at Geilweilerhof to extract berry size and berry colour using the Berries In Vineyards (BIVcolor) tool. Image data acquisition took about 20 seconds per vine and was followed by automatic image analysis to extract objective and precise phenotypic data. It was possible to capture images of 2700 vines within 12 hours using the PHENObot, with subsequent automatic analysis of the images to extract berry size and berry colour. With this analysis, proof of principle was demonstrated. The pilot pipeline provides the basis for the development of additional evaluation modules as well as the integration of other sensors.
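The depth-map step used above to separate the vine row from the background can be sketched as a simple distance cut-off on each pixel's depth, after which the visible pruning-wood area is the mask area times the per-pixel ground area. The cut-off distance and the per-pixel area are illustrative assumptions; the thesis combines this with image segmentation, which is not reproduced here.

```python
def foreground_mask(depth_map, max_depth):
    """Keep pixels closer than `max_depth` (the vine row); drop background rows.

    `None` marks pixels with no valid depth reading and is treated as background.
    """
    return [[d is not None and d < max_depth for d in row] for row in depth_map]

def visible_area(mask, pixel_area):
    """Approximate visible area (e.g. pruning wood) from a binary mask."""
    return sum(sum(row) for row in mask) * pixel_area
```

For a 2x2 depth map with the vine at ~0.5-0.8 m and the next row beyond 1 m, a 1 m cut-off keeps only the near pixels.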

    A machine learning-remote sensing framework for modelling water stress in Shiraz vineyards

    Thesis (MA), Stellenbosch University, 2018. Water is a limited natural resource and a major environmental constraint for crop production in viticulture. The unpredictability of rainfall patterns, combined with the potentially catastrophic effects of climate change, further compounds water scarcity, presenting dire future scenarios of undersupplied irrigation systems. Major water shortages could lead to devastating losses in grape production, which would negatively affect job security and national income. It is, therefore, imperative to develop management schemes and farming practices that optimise water usage and safeguard grape production. Hyperspectral remote sensing techniques provide a solution for the monitoring of vineyard water status. Hyperspectral data, combined with the quantitative analysis of machine learning ensembles, enable the detection of water-stressed vines, thereby facilitating precision irrigation practices and ensuring quality crop yields. To this end, the thesis set out to develop a machine learning-remote sensing framework for modelling water stress in a Shiraz vineyard. The thesis comprises two components. Component one assesses the utility of terrestrial hyperspectral imagery and machine learning ensembles to detect water-stressed Shiraz vines. The Random Forest (RF) and Extreme Gradient Boosting (XGBoost) ensembles were employed to discriminate between water-stressed and non-stressed Shiraz vines. Results showed that both ensemble learners could effectively discriminate between water-stressed and non-stressed vines. When using all wavebands (p = 176), RF yielded a test accuracy of 83.3% (KHAT = 0.67), with XGBoost producing a test accuracy of 80.0% (KHAT = 0.6). Component two explores semi-automated feature selection approaches and hyperparameter value optimisation to improve the developed framework.
    The utility of the Kruskal-Wallis (KW) filter, the Sequential Floating Forward Selection (SFFS) wrapper, and a Filter-Wrapper (FW) approach was evaluated. When using optimised hyperparameter values, an increase in test accuracy ranging from 0.8% to 5.0% was observed for both RF and XGBoost. In general, RF was found to outperform XGBoost. In terms of predictive competency and computational efficiency, the developed FW approach was the most successful feature selection method implemented. The developed machine learning-remote sensing framework warrants further investigation to confirm its efficacy. However, the thesis answered key research questions, and the developed framework provides a point of departure for future studies.
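The KHAT statistic reported alongside the accuracies above is Cohen's kappa: classification agreement corrected for the agreement expected by chance. A minimal computation from predicted and true labels:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa (KHAT): (observed - expected agreement) / (1 - expected)."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    t_counts, p_counts = Counter(y_true), Counter(y_pred)
    # Chance agreement: product of per-class marginal frequencies, summed.
    expected = sum(t_counts[c] * p_counts[c] for c in t_counts) / (n * n)
    return (observed - expected) / (1 - expected)
```

For a balanced two-class problem, 75% raw agreement against 50% chance agreement gives kappa = 0.5, which is why the thesis's KHAT values (0.6-0.67) sit well below the corresponding raw accuracies (80.0-83.3%).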