70 research outputs found

    The Use of Agricultural Robots in Orchard Management

    Full text link
    Book chapter that summarizes recent research on agricultural robotics in orchard management, including robotic pruning, thinning, spraying, harvesting and fruit transportation, as well as future trends. Comment: 22 pages

    Grapevine yield estimation using image analysis for the variety Arinto

    Get PDF
    Mestrado em Engenharia de Viticultura e Enologia (Double Degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto.

Yield estimation can cause problems in the vineyard and the winery when it is done inaccurately, by following wrong procedures, taking a non-representative sample, or through human error. Moreover, traditional yield estimation methods are time-consuming and destructive, because they require someone to go into the vineyard to count the yield components and to remove inflorescences or bunches in order to count and weigh the flowers and berries. To avoid these problems and the associated errors, several research groups are studying new techniques that estimate yield through the analysis of RGB images taken under field conditions. In our work we studied the counting of yield components in images throughout the growing season. Furthermore, we studied two algorithms that, starting from surveys of canopy porosity and/or visible bunch area, can help estimate the yield.

The most promising yield estimation based on counting yield components through image analysis was obtained at the phenological stage of four leaves out, which showed a mean absolute percent error (MA%E) of 32 ± 2% and a correlation coefficient (r(Obs,Est)) between observed and estimated shoots of 0.62. The two algorithms used different models: one to estimate the area of the bunches covered by leaves and one to estimate the weight of the bunches per linear canopy meter. When the bunch area without leaf occlusion was estimated, an average bunch-on-bunch occlusion of 8%, 6% and 12% at pea size, veraison and maturation, respectively, was used to estimate the total bunch area. When the total bunch area per linear canopy meter was estimated, the two models were used to estimate the grape weight. Finally, to estimate the weight at harvest, growth factors of 6.6 and 1.7 were used at pea size and veraison, respectively. The first algorithm showed an MA%E between estimated and observed yield of -33.59%, -9.24% and -11.25%, whereas the second algorithm showed an MA%E of -6.81%, -1.35% and 0.01%, respectively, at pea size, veraison and maturation.
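    The second, area-based algorithm described in this abstract can be summarized as a short sketch. The bunch-on-bunch occlusion percentages (8%, 6%, 12%) and the growth factors (6.6 at pea size, 1.7 at veraison) are taken from the abstract; the area-to-weight coefficient and the function name are hypothetical placeholders, since the actual regression models are not reported here.

```python
# Minimal sketch of the area-based yield-estimation idea described above.
# Only the occlusion percentages and growth factors come from the abstract;
# KG_PER_M2 is a hypothetical placeholder for the unreported area-to-weight model.

# occlusion of bunches by other bunches, per phenological stage
BUNCH_OCCLUSION = {"pea_size": 0.08, "veraison": 0.06, "maturation": 0.12}
# growth factor from the given stage to harvest (1.0 = no further growth assumed)
GROWTH_FACTOR = {"pea_size": 6.6, "veraison": 1.7, "maturation": 1.0}
KG_PER_M2 = 1.0  # hypothetical coefficient: kg of grapes per m^2 of bunch area

def estimate_yield_per_meter(visible_bunch_area_m2: float, stage: str) -> float:
    """Estimate grape weight (kg) per linear canopy meter at harvest.

    visible_bunch_area_m2: bunch area visible in the image, per canopy meter.
    stage: 'pea_size', 'veraison' or 'maturation'.
    """
    # correct the visible area for bunches hidden behind other bunches
    total_area = visible_bunch_area_m2 / (1.0 - BUNCH_OCCLUSION[stage])
    # convert area to weight at the current stage (placeholder model)
    weight_now = total_area * KG_PER_M2
    # project the weight forward to harvest with the stage-specific growth factor
    return weight_now * GROWTH_FACTOR[stage]

if __name__ == "__main__":
    print(estimate_yield_per_meter(0.15, "veraison"))
```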

    High-throughput phenotyping of yield parameters for modern grapevine breeding

    Get PDF
    Grapevine is grown on about 1% of the German agricultural area, yet it requires one third of all fungicides used in German agriculture, a consequence of pathogens introduced in the 19th century. A reduction of this input is necessary for sustainable viticulture and can be achieved by growing fungus-resistant grapevine cultivars. The development of new cultivars, however, is very time-consuming, taking 20 to 25 years. In recent years the breeding process has been accelerated considerably by marker-assisted selection (MAS), and further improvements are expected from faster and more cost-efficient high-throughput (HT) genotyping methods. Compared with these genotyping techniques, the quality, objectivity and precision of current phenotyping methods are limited, so HT phenotyping methods need to be developed to further increase the efficiency of grapevine breeding through sensor-assisted selection.
Many different sensor technologies are available, ranging from visible-light (RGB), multispectral, hyperspectral, thermal and fluorescence cameras to three-dimensional (3D) camera and laser-scanning approaches. Phenotyping can be done either under controlled environments (growth chamber, greenhouse) or in the field, with a decreasing level of standardization. Except for young seedlings, grapevine as a perennial crop ultimately has to be screened in the field. From a methodological point of view, the main challenges are variable light conditions, the similarity of fore- and background, and traits hidden within the canopy. The assessment of phenotypic data in grapevine breeding is traditionally done directly in the field by visual estimation: the BBCH scale is used to classify the stages of annual plant development, and OIV descriptors are applied to assign phenotypes to classes. Phenotyping is strongly limited by time, costs and the subjectivity of the records, so only a comparatively small set of genotypes can be evaluated for selected traits within the breeding programme. Automation, precision and objectivity of phenotypic data acquisition are therefore crucial in order to (1) reduce the existing phenotyping bottleneck, (2) increase the efficiency of grapevine breeding, (3) assist further genetic studies and (4) improve vineyard management.
In this thesis, emphasis was put on yield traits: balanced and stable yields are important for high-quality wine production and therefore play a key role in grapevine breeding. The main focus of this study is on phenotyping yield parameters such as berry size, number of berries per cluster and number of clusters per vine; related traits like cluster architecture and vine balance (the relation between vegetative and generative growth) were considered as well. Quantifying yield parameters at the single-vine level is challenging, as complex shapes and slight variations between genotypes make it difficult and very time-consuming. As a first step towards HT phenotyping of yield parameters, two fully automatic image-interpretation tools were developed for use under controlled laboratory conditions. The Cluster Analysis Tool (CAT) extracts four important phenotypic traits from a single image: cluster length, cluster width, berry size and cluster compactness. The Berry Analysis Tool (BAT) provides the number, size (length and width) and volume of grapevine berries. Both tools offer a fast, user-friendly and inexpensive procedure that delivers several precise phenotypic features of berries and clusters at once, with dimensional units, in less time than manual measurements.
The similarity of fore- and background in images captured under field conditions is particularly problematic at early grapevine developmental stages because the canopy is missing. To determine the dormant pruning-wood weight, which partly determines vine balance, a fast and non-invasive tool for objective data acquisition in the field was developed. In an innovative approach it combines depth-map calculation and image segmentation to subtract the background and obtain the pruning-wood area visible in the image.
For the implementation of HT field phenotyping in grapevine breeding, a phenotyping pipeline was set up. It ranges from automated image acquisition directly in the field using the PHENObot, through data management and data analysis, to the interpretation of the obtained phenotypic data for grapevine breeding. The PHENObot consists of an automatically guided tracked vehicle, a calibrated multi-camera system, a Real-Time Kinematic (RTK) GPS system and a computer for image data handling. Purpose-built software acquires geo-referenced images directly in the vineyard; the geo-reference is afterwards used for post-processing and data management in a database. As a demonstration, the pipeline was used to detect berries and determine berry size and colour in the grapevine repository at Geilweilerhof with the Berries In Vineyards (BIVcolor) tool. Image acquisition took about 20 seconds per vine and was followed by automatic image analysis to extract objective and precise phenotypic data. It was possible to capture images of 2700 vines within 12 hours with the PHENObot and to subsequently extract berry size and berry colour from the images automatically, demonstrating proof of principle. The pilot pipeline provides the basis for the development of additional evaluation modules as well as the integration of further sensors.
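    As a rough illustration of the depth-map-plus-segmentation idea used for the pruning-wood measurement, the sketch below keeps only pixels that are both close to the camera and classified as wood, and converts the resulting pixel count into an area. The depth map, wood mask, depth threshold and pixel scale are all assumed inputs; the actual tool and its models are not reproduced here.

```python
# Sketch, under assumptions, of depth-based background subtraction for estimating
# the visible pruning-wood area in a field image. Inputs (depth map, wood mask,
# threshold, pixel scale) are assumed to come from earlier processing steps.
import numpy as np

def visible_pruning_area_cm2(depth_m: np.ndarray,
                             wood_mask: np.ndarray,
                             max_depth_m: float = 1.5,
                             cm2_per_pixel: float = 0.01) -> float:
    """Return the visible pruning-wood area in cm^2.

    depth_m: per-pixel depth in metres (e.g. from stereo matching).
    wood_mask: boolean mask of pixels classified as cane/wood.
    max_depth_m: pixels farther than this are treated as background rows.
    cm2_per_pixel: image scale, assumed known from the camera geometry.
    """
    foreground = depth_m < max_depth_m      # depth-based background subtraction
    visible_wood = foreground & wood_mask   # keep only wood pixels in the foreground
    return float(visible_wood.sum()) * cm2_per_pixel
```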

    Computer Vision Problems in 3D Plant Phenotyping

    Get PDF
    In recent years, there has been significant progress in Computer Vision based plant phenotyping (quantitative analysis of biological properties of plants) technologies. Traditional methods of plant phenotyping are destructive, manual and error prone. Due to non-invasiveness and non-contact properties as well as increased accuracy, imaging techniques are becoming state-of-the-art in plant phenotyping. Among several parameters of plant phenotyping, growth analysis is very important for biological inference. Automating the growth analysis can result in accelerating the throughput in crop production. This thesis contributes to the automation of plant growth analysis. First, we present a novel system for automated and non-invasive/non-contact plant growth measurement. We exploit the recent advancements of sophisticated robotic technologies and near infrared laser scanners to build a 3D imaging system and use state-of-the-art Computer Vision algorithms to fully automate growth measurement. We have set up a gantry robot system having 7 degrees of freedom hanging from the roof of a growth chamber. The payload is a range scanner, which can measure dense depth maps (raw 3D coordinate points in mm) on the surface of an object (the plant). The scanner can be moved around the plant to scan from different viewpoints by programming the robot with a specific trajectory. The sequence of overlapping images can be aligned to obtain a full 3D structure of the plant in raw point cloud format, which can be triangulated to obtain a smooth surface (triangular mesh), enclosing the original plant. We show the capability of the system to capture the well known diurnal pattern of plant growth computed from the surface area and volume of the plant meshes for a number of plant species. Second, we propose a technique to detect branch junctions in plant point cloud data. We demonstrate that using these junctions as feature points, the correspondence estimation can be formulated as a subgraph matching problem, and better matching results than state-of-the-art can be achieved. Also, this idea removes the requirement of a priori knowledge about rotational angles between adjacent scanning viewpoints imposed by the original registration algorithm for complex plant data. Before, this angle information had to be approximately known. Third, we present an algorithm to classify partially occluded leaves by their contours. In general, partial contour matching is a NP-hard problem. We propose a suboptimal matching solution and show that our method outperforms state-of-the-art on 3 public leaf datasets. We anticipate using this algorithm to track growing segmented leaves in our plant range data, even when a leaf becomes partially occluded by other plant matter over time. Finally, we perform some experiments to demonstrate the capability and limitations of the system and highlight the future research directions for Computer Vision based plant phenotyping
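    The diurnal growth analysis described above boils down to tracking the surface area and volume of the triangulated plant mesh over time. The sketch below shows the standard formulas for both quantities from a closed triangle mesh; the mesh itself (vertices and faces) is assumed to come from the scanning, registration and triangulation steps, which are not shown.

```python
# Minimal sketch: surface area and volume of a closed, consistently oriented
# triangular mesh, the two quantities tracked over time for growth analysis.
import numpy as np

def mesh_surface_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Sum of triangle areas; vertices is (N, 3), faces is (M, 3) vertex indices."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Enclosed volume via the divergence theorem (sum of signed tetrahedra)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

# Example check: a unit cube triangulated into 12 faces gives area 6.0 and
# volume 1.0; tracking these two values over a day reveals the diurnal pattern.
```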

    Proceedings of the 7th International Conference on Functional-Structural Plant Models, Saariselkä, Finland, 9 - 14 June 2013

    Get PDF