
    Precision Weed Management Based on UAS Image Streams, Machine Learning, and PWM Sprayers

    Weed populations in agricultural production fields are often scattered and unevenly distributed; however, herbicides are broadcast evenly across fields. Although effective, in the case of post-emergent herbicides, far more pesticide is applied than necessary. A novel weed detection and control workflow was evaluated targeting Palmer amaranth in soybean (Glycine max) fields. High spatial resolution (0.4 cm) unmanned aircraft system (UAS) image streams were collected, annotated, and used to train 16 object detection convolutional neural networks (CNNs; RetinaNet, Faster R-CNN, Single Shot Detector, and YOLO v3), each trained on imagery with 0.4, 0.6, 0.8, and 1.2 cm spatial resolutions. Models were evaluated on imagery from four production fields containing approximately 7,800 weeds. The highest performing model was Faster R-CNN trained on 0.4 cm imagery (precision = 0.86, recall = 0.98, and F1-score = 0.91). A site-specific workflow leveraging the highest performing trained CNN models was evaluated in replicated field trials. Weed control (%) was compared between a broadcast treatment and the proposed site-specific workflow, which was applied using a pulse-width modulated (PWM) sprayer. Results indicate no statistically significant (p < .05) difference in weed control measured one (M = 96.22%, SD = 3.90 and M = 90.10%, SD = 9.96), two (M = 95.15%, SD = 5.34 and M = 89.64%, SD = 8.58), and three weeks (M = 88.55%, SD = 11.07 and M = 81.78%, SD = 13.05) after application between broadcast and site-specific treatments, respectively. Furthermore, there was a significant (p < 0.05) 48% mean reduction in applied area (m²) between broadcast and site-specific treatments across both years. Equivalent post-application efficacy can be achieved with significant reductions in herbicides if weeds are targeted through site-specific applications. Site-specific weed maps can be generated and executed using accessible technologies like UAS, open-source CNNs, and PWM sprayers.
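Detector performance in studies like this is typically summarised by precision, recall, and F1-score. A minimal sketch of these metrics follows; the detection counts are illustrative placeholders, not values from the study, chosen so the resulting metrics round to the reported figures.

```python
def precision(tp, fp):
    """Fraction of reported detections that are true weeds."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual weeds that were detected."""
    return tp / (tp + fn)

def f1_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Hypothetical counts: 86 true positives, 14 false positives, 2 missed weeds.
p = precision(86, 14)   # 0.86
r = recall(86, 2)       # ~0.977, rounds to 0.98
f1 = f1_score(p, r)     # ~0.915, rounds to 0.91
```

Note that F1 weights precision and recall equally; a detector tuned for spot spraying may instead favour recall, since a missed weed survives while a false positive only costs a small extra sprayed area.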


    Prediction of Early Vigor from Overhead Images of Carinata Plants

    Breeding more resilient, higher yielding crops is an essential component of ensuring ongoing food security. Early season vigor is significantly correlated with yields and is often used as an early indicator of fitness in breeding programs. Early vigor can be a useful indicator of the health and strength of plants, with benefits such as improved light interception, reduced surface evaporation, and increased biological yield. However, vigor is challenging to measure analytically and is often rated using subjective visual scoring. This traditional method of breeder scoring becomes cumbersome as the size of breeding programs increases. In this study, we used hand-held cameras fitted on gimbals to capture images which were then used as the source for automated vigor scoring. We employed a novel image metric, the extent of plant growth from the row centerline, as an indicator of vigor. Along with this feature, additional features were used for training a random forest model and a support vector machine, both of which were able to predict expert vigor ratings with accuracies of 88.9% and 88%, respectively, providing the potential for more reliable, higher throughput vigor estimates.
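The centerline-extent feature described above can be sketched roughly as follows, assuming binary-segmented plant pixels and a known row centerline in image coordinates; the function name and the example pixels are hypothetical, not taken from the paper.

```python
def row_extent(plant_pixels, centerline_x):
    """Maximum lateral distance (in pixels) of segmented plant pixels
    from the row centerline, used here as a proxy for early-season vigor."""
    if not plant_pixels:
        return 0.0
    return max(abs(x - centerline_x) for x, _y in plant_pixels)

# Hypothetical segmented pixels (x, y) for one plot, centerline at x = 50:
pixels = [(44, 10), (50, 11), (57, 12), (48, 13)]
extent = row_extent(pixels, 50)  # widest pixel sits 7 px from the line
```

In practice a feature vector of several such per-plot statistics (extent, plant area, greenness indices) would be fed to the random forest or support vector machine.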

    Perception for context awareness of agricultural robots

    Context awareness is one key point for the realisation of robust autonomous systems in unstructured environments like agriculture. Robots need a precise description of their environment so that tasks can be planned and executed correctly. When using a robot system in a controlled, unchanging environment, the programmer may be able to model all possible circumstances to make the system reliable. However, the situation gets more complex when the environment and the objects change their shape, position, or behaviour. Perception for context awareness in agriculture means detecting and classifying objects of interest in the environment correctly and reacting to them. The aim of this cumulative dissertation was to apply different strategies to increase context awareness through perception in mobile agricultural robots. The objectives of this thesis were to address five aspects of environment perception: (I) test static local sensor communication with a mobile vehicle, (II) detect unstructured objects in a controlled environment, (III) describe the influence of growth stage on algorithm outcomes, (IV) use the gained sensor information to detect single plants, and (V) improve the robustness of algorithms under noisy conditions. First, the communication between a static wireless sensor network and a mobile robot was investigated. The wireless sensor nodes were able to send local data from sensors attached to the systems. The sensors were placed in a vineyard and the robot automatically followed the row structure to receive the data. It was possible to localize the individual nodes by triangulation, using only the exact robot position and an attenuation model of the received signal strength. The precision was 0.6 m, better than that of the available differential global navigation satellite system signal. The second research area focused on the detection of unstructured objects in point clouds.
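The node localisation step, combining known robot positions with an attenuation model of received signal strength, can be sketched as a log-distance path-loss inversion followed by a linearised trilateration. The function names and the path-loss parameters below are illustrative assumptions, not values from the thesis.

```python
import math

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_n=2.0):
    """Invert a log-distance path-loss model to turn an RSSI reading (dBm)
    into a range estimate (metres). Both parameters are illustrative."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_n))

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a node from three known robot positions and range estimates
    by linearising the three circle equations into a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Node at (3, 4), ranged from three robot waypoints with exact distances:
est = trilaterate((0, 0), (10, 0), (0, 10), 5.0, math.sqrt(65), math.sqrt(45))
```

With noisy RSSI-derived ranges, a least-squares fit over many robot positions along the row would replace the exact three-point solve, which is consistent with the 0.6 m precision reported above.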
For this, a low-cost sonar sensor was attached to a 3D frame with millimetre-level accuracy so that the sensor position was exactly known. From the sensor position and the sensor readings, a 3D point cloud was created. In the workspace, 10 individual plants were placed. They could be detected automatically with an accuracy of 2.7 cm. An attached valve was able to spray these specific plant positions, which resulted in a liquid saving of 72% compared to a conventional spraying method covering the whole crop row area. As plants are dynamic objects, the third objective, describing plant growth with adequate sensor data, was important for characterising the unstructured agricultural domain. To reference and test algorithms against the same data, maize rows were planted in a greenhouse. The exact positions of all plants were measured with a total station. Then a robot vehicle was guided through the crop rows and the data of the attached sensors were recorded. With the help of the total station, it was possible to track the vehicle position and to transform all data into the same coordinate frame. Data recording was performed 7 times over a period of 6 weeks. The resulting datasets could afterwards be used to assess different algorithms and to test them under different growth stages of the plants. It could be shown that a basic RANSAC line-following algorithm could not perform correctly under all growth stages without additional filtering. The fourth paper used these datasets to search for single plants with a sensor normally used for obstacle avoidance. A tilted laser scanner was used together with the exact robot position to create 3D point clouds, to which two different methods for single plant detection were applied. Both methods used plant spacing to detect single plants. The second method used the fixed plant spacing and the row beginning to resolve the plant positions iteratively.
The first method reached a detection rate of 73.7% with a root mean square error of 3.6 cm. The iterative second method reached a detection rate of 100% with an accuracy of 2.6 - 3.0 cm. For assessing the robustness of the plant detection, an algorithm was used to detect the plant positions in six different growth stages of the given datasets. A graph-cut based algorithm was used, which improved the results for single plant detection. As the algorithm was not sensitive to overlapping and noisy point clouds, a detection rate of 100% was realised, with the estimated plant height accurate to 1.55 cm. The stem position was resolved with an accuracy of 2.05 cm. This thesis presented different methods of perception for context awareness, which could help to improve the robustness of robots in agriculture. When the objects in the environment are known, it becomes possible to react to and interact with the environment more intelligently than is currently the case in agricultural robotics. Especially the detection of single plants before the robot reaches them could help to improve the navigation and interaction of agricultural robots.
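The iterative spacing-based approach can be sketched as follows, reduced to one dimension (position along the row). The snapping tolerance, function name, and example values are hypothetical; the thesis specifies only that the fixed plant spacing and the row beginning were used to resolve positions iteratively.

```python
def iterative_plant_positions(row_start, spacing, n_plants, detections, tol):
    """For each expected plant position (row_start + k * spacing), snap to the
    nearest detected candidate within tol metres; if no candidate is close
    enough (a missed plant), fall back to the expected position."""
    positions = []
    for k in range(n_plants):
        expected = row_start + k * spacing
        nearest = min(detections, key=lambda d: abs(d - expected), default=None)
        if nearest is not None and abs(nearest - expected) <= tol:
            positions.append(nearest)
        else:
            positions.append(expected)
    return positions

# Four plants sown 0.25 m apart from the row start; the third detection
# was missed by the sensor, so its position is filled from the sowing grid:
plants = iterative_plant_positions(0.0, 0.25, 4, [0.02, 0.26, 0.74], 0.1)
```

Exploiting the known sowing geometry in this way explains how the iterative method can reach a 100% detection rate even when individual plants are missing from the sensor data.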

    Artificial Neural Networks in Agriculture

    Modern agriculture needs to have high production efficiency combined with a high quality of obtained products. This applies to both crop and livestock production. To meet these requirements, advanced methods of data analysis are used more and more frequently, including those derived from artificial intelligence. Artificial neural networks (ANNs) are one of the most popular tools of this kind. They are widely used in solving various classification and prediction tasks, and for some time now also in the broadly defined field of agriculture. They can form part of precision farming and decision support systems. Artificial neural networks can replace classical methods of modelling many issues, and are one of the main alternatives to classical mathematical models. The spectrum of applications of artificial neural networks is very wide. For a long time now, researchers from all over the world have been using these tools to support agricultural production, making it more efficient and providing the highest-quality products possible.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agricultural practices. Earth observation data, together with in situ and proxy remote sensing data, are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.