
    A Novel Perception and Semantic Mapping Method for Robot Autonomy in Orchards

    In this work, we propose a novel framework for achieving robotic autonomy in orchards. It consists of two key steps: perception and semantic mapping. In the perception step, we introduce a 3D detection method that accurately identifies objects directly on point cloud maps. In the semantic mapping step, we develop a mapping module that constructs a visibility graph map by incorporating object-level information and terrain analysis. By combining these two steps, our framework improves the autonomy of agricultural robots in orchard environments. The accurate detection of objects and the construction of a semantic map enable the robot to navigate autonomously, perform tasks such as fruit harvesting, and acquire actionable information for efficient agricultural production.
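
    The visibility-graph construction described above is not published as code with the abstract, but the general idea is compact. The sketch below is a minimal illustration, not the authors' implementation: it connects candidate waypoints only when the straight segment between them keeps an assumed safety clearance from detected tree trunks, modelled here as 2D points; `clearance`, the point-obstacle model and all names are illustrative assumptions.

```python
import itertools
import math

def seg_point_dist(p, q, c):
    """Shortest distance from point c to the segment p-q."""
    px, py = p; qx, qy = q; cx, cy = c
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        return math.hypot(cx - px, cy - py)
    # Clamp the projection of c onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    return math.hypot(cx - (px + t * dx), cy - (py + t * dy))

def visibility_graph(waypoints, trunks, clearance=0.5):
    """Connect waypoint pairs whose line of sight stays `clearance`
    metres away from every detected trunk position."""
    edges = []
    for (i, a), (j, b) in itertools.combinations(enumerate(waypoints), 2):
        if all(seg_point_dist(a, b, t) > clearance for t in trunks):
            edges.append((i, j))
    return edges
```

    A shortest-path search (e.g. Dijkstra) over the resulting edges then yields navigable routes between rows; object-level labels and terrain costs, as in the paper, would be attached to the nodes and edges.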

    Localization, Navigation and Activity Planning for Wheeled Agricultural Robots – A Survey

    Source at: https://fruct.org/publications/volume-32/fruct32/
    High cost, time-intensive work, labor shortages and inefficient strategies have raised the need to employ mobile robotics to fully automate agricultural tasks and fulfil the requirements of precision agriculture. To perform an agricultural task, a mobile robot goes through a sequence of sub-operations supported by the integration of hardware and software systems. Starting with localization, an agricultural robot uses sensor systems to estimate its current position and orientation in the field, and employs algorithms to find optimal paths and reach target positions. It then uses techniques and models to perform feature recognition and finally executes the agricultural task through an end effector. This article, compiled by scrutinizing the current literature, presents a step-by-step account of the strategies by which these sub-operations are performed and integrated. The limitations of each sub-operation, the available solutions, and the ongoing research focus are also analysed.

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control, and discusses the potential benefits of integrating AI, soft robotics and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field. (Preprint: to appear in the Journal of Field Robotics.)

    Robots in Agriculture: State of Art and Practical Experiences

    The presence of robots in agriculture has grown significantly in recent years, overcoming some of the challenges and complications of this field. This chapter aims to provide a complete and recent state of the art on the application of robots in agriculture. The work addresses the topic from two perspectives. On the one hand, it covers the disciplines that drive the automation of agriculture, such as precision agriculture and greenhouse farming, and collects proposals for automating tasks like planting and harvesting, environmental monitoring, and crop inspection and treatment. On the other hand, it compiles and analyses the robots proposed to accomplish these tasks, e.g. manipulators, ground vehicles and aerial robots. Additionally, the chapter reports in more detail on practical experiences with robot teams applied to crop inspection and treatment in outdoor agriculture, as well as to environmental monitoring in greenhouse farming.

    Towards autonomous mapping in agriculture: A review of supportive technologies for ground robotics

    This paper surveys the supportive technologies currently available for ground mobile robots used for autonomous mapping in agriculture. Unlike previous reviews, we describe state-of-the-art approaches and technologies aimed at extracting information from agricultural environments, not only for navigation purposes but especially for mapping and monitoring. The state-of-the-art platforms and sensors, the modern localization techniques, the navigation and path planning approaches, as well as the potential of artificial intelligence for autonomous mapping in agriculture, are analyzed. According to the findings of this review, many recent mobile robots provide full navigation and autonomous mapping capability. Significant resources are currently devoted to this research area in order to further improve mobile robot capabilities in this complex and challenging field.

    Computer Vision and Machine Learning Based Grape Fruit Cluster Detection and Yield Estimation Robot

    Estimation and detection of fruit play a crucial role in harvesting. Traditionally, fruit growers have relied on manual methods, but they now face rapidly increasing labour costs and labour shortages. Earlier techniques based on hyperspectral cameras, 3D imaging and colour-based segmentation struggled to find and distinguish grape bunches. In this research, a computer-vision-based approach is implemented using the Open Source Computer Vision Library (OpenCV) and the Random Forest machine learning algorithm for counting, detecting and segmenting blue grape bunches. Fruit object segmentation is based on a binary threshold and the Otsu method. For training and testing, pixel-intensity classification was performed on a single image containing grape and non-grape regions. In validation, the random forest algorithm achieved an accuracy score of 97.5% and an F1-score of 90.7%, outperforming a Support Vector Machine (SVM). The presented pipeline for grape bunch detection, with its noise removal, training, segmentation and classification stages, exhibits improved accuracy.
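
    The abstract names two concrete ingredients, Otsu thresholding for segmentation and a random forest over pixel intensities for classification, so a minimal sketch of that style of pipeline is easy to give. This is an illustration under assumptions (OpenCV and scikit-learn APIs, per-pixel BGR features, a hand-labelled training mask), not the authors' code:

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def otsu_mask(bgr):
    """Binary foreground mask via a global Otsu threshold on grayscale."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def train_pixel_classifier(image, grape_mask):
    """Fit a random forest on per-pixel BGR intensities; `grape_mask`
    is a hand-labelled binary mask marking grape pixels."""
    X = image.reshape(-1, 3).astype(np.float32)
    y = (grape_mask.reshape(-1) > 0).astype(np.uint8)
    return RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)

def classify_pixels(clf, image):
    """Per-pixel grape / non-grape prediction, reshaped to image size."""
    X = image.reshape(-1, 3).astype(np.float32)
    return clf.predict(X).reshape(image.shape[:2])
```

    Connected-component analysis on the predicted mask would then give bunch counts, mirroring the counting step the abstract describes.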

    Robotic Crop Interaction in Agriculture for Soft Fruit Harvesting

    Autonomous tree crop harvesting has been a seemingly attainable, but elusive, robotics goal for the past several decades. Limiting grower reliance on uncertain seasonal labour is an economic driver of this, but the ability of robotic systems to treat each plant individually also has environmental benefits, such as reduced emissions and fertiliser use. Over the same time period, effective grasping and manipulation (G&M) solutions have been demonstrated for warehouse product handling and for more general robotic interaction. Despite research progress in general robotic interaction and in harvesting some specific crop types, a commercially successful robotic harvester has yet to be demonstrated. Most crop varieties, including soft-skinned fruit, have not yet been addressed. Soft fruit, such as plums, present problems for many of the techniques employed for their more robust relatives and require special focus when developing autonomous harvesters. Adapting existing robotics tools and techniques to new fruit types, including soft-skinned varieties, is not well explored. This thesis aims to bridge that gap by examining the challenges of autonomous crop interaction for the harvesting of soft fruit. Aspects known to be challenging include mixed obstacle planning with both hard and soft obstacles present, poor outdoor sensing conditions, and the lack of proven picking motion strategies. Positioning an actuator for harvesting requires solving these problems and others specific to soft-skinned fruit. Doing so effectively means addressing them in the sensing, planning and actuation areas of a robotic system. These areas are also highly interdependent for grasping and manipulation tasks, so solutions need to be developed at the system level. In this thesis, soft robotic actuators, with simplifying assumptions about hard obstacle planes, are used to solve mixed obstacle planning. Persistent target tracking and filtering are used to overcome challenging object detection conditions, while multiple stages of object detection are applied to refine the initial position estimates. Several picking motions are developed and tested for plums, with varying degrees of effectiveness. These techniques are integrated into a prototype system which is validated in lab testing and extensive field trials on a commercial plum crop. Key contributions of this thesis include:
    I. The examination of grasping and manipulation tools, algorithms, techniques and challenges for harvesting soft-skinned fruit.
    II. The design, development and field-trial evaluation of a harvester prototype to validate these concepts in practice, with specific design studies of the gripper type, object detector architecture and picking motion.
    III. The investigation of specific G&M module improvements, including:
        o Application of the autocovariance least squares (ALS) method to noise covariance matrix estimation for visual servoing tasks, where both simulated and real experiments demonstrated a 30% improvement in state estimation error using this technique.
        o Theory and experimentation showing that a single range measurement is sufficient for disambiguating scene scale in monocular depth estimation for some datasets.
        o Preliminary investigations of stochastic object completion and sampling for grasping, active perception for visual-servoing-based harvesting, and multi-stage fruit localisation from RGB-Depth data.
    Several field trials were carried out with the plum harvesting prototype. Testing on an unmodified commercial plum crop, in all weather conditions, showed promising results with a harvest success rate of 42%. While a significant gap between prototype performance and commercial viability remains, the use of soft robotics with carefully chosen sensing and planning approaches allows for robust grasping and manipulation under challenging conditions, with both hard and soft obstacles.
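
    One listed contribution, that a single range measurement suffices to disambiguate scene scale in monocular depth estimation, reduces to simple arithmetic under the usual assumption that the predicted depth map is correct up to one global multiplicative factor. The following sketch illustrates that assumption; it is not the thesis code:

```python
import numpy as np

def rescale_depth(pred_depth, u, v, measured_range):
    """Recover metric depth from a scale-ambiguous monocular prediction.

    Assumes true_depth = s * pred_depth for a single unknown scale s,
    so one absolute range reading at pixel (u, v) fixes s everywhere.
    """
    s = measured_range / pred_depth[v, u]
    return s * pred_depth

# A 2 m reading at a pixel predicted as 0.5 rescales the whole map by 4.
relative = np.array([[0.5, 1.0], [0.25, 2.0]])
print(rescale_depth(relative, u=0, v=0, measured_range=2.0))
```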

    World Models for Robust Robotic Systems


    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.
    In this thesis, lidar-based obstacle detection and recognition in agricultural environments have been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.
    For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal.
    The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
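
    The global fusion step described above, mapping detections into an occupancy grid, follows a standard probabilistic pattern. A minimal log-odds grid in textbook form is sketched below; the grid size, resolution and evidence weights are placeholder assumptions, not values from the thesis:

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid for fusing point-wise
    obstacle detections into a global map."""

    def __init__(self, shape=(500, 500), resolution=0.1,
                 l_occ=0.85, l_free=-0.4):
        self.log_odds = np.zeros(shape)
        self.resolution = resolution  # metres per cell
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, cells, occupied):
        """Accumulate evidence for (row, col) cells; clipping keeps the
        grid responsive when the scene changes."""
        delta = self.l_occ if occupied else self.l_free
        for r, c in cells:
            self.log_odds[r, c] = np.clip(self.log_odds[r, c] + delta, -10, 10)

    def probability(self):
        """Convert log-odds back to occupancy probabilities in [0, 1]."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))
```

    Runtime obstacle detection along the vehicle path, as in the thesis, would then threshold `probability()` in the cells the planned trajectory crosses.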

    Perception for context awareness of agricultural robots

    Context awareness is a key requirement for realising robust autonomous systems in unstructured environments like agriculture. Robots need a precise description of their environment so that tasks can be planned and executed correctly. When a robot system is used in a controlled, unchanging environment, the programmer may be able to model all possible circumstances and make the system reliable. The situation becomes more complex, however, when the environment and its objects change their shape, position or behaviour. Perception for context awareness in agriculture means correctly detecting and classifying objects of interest in the environment and reacting to them. The aim of this cumulative dissertation was to apply different strategies to increase the context awareness of mobile robots in agriculture through perception. The objectives were to address five aspects of environment perception: (I) test static local sensor communication with a mobile vehicle, (II) detect unstructured objects in a controlled environment, (III) describe the influence of growth stage on algorithm outcomes, (IV) use the gained sensor information to detect single plants and (V) improve the robustness of algorithms under noisy conditions.
    First, the communication between a static wireless sensor network and a mobile robot was investigated. The wireless sensor nodes were able to send data from locally attached sensors. The sensors were placed in a vineyard and the robot automatically followed the row structure to receive the data. It was possible to localize the individual nodes by triangulation, using only the exact robot position and an attenuation model of the received signal strength. The precision was 0.6 m, better than the available differential global navigation satellite system signal.
    The second research area focused on the detection of unstructured objects in point clouds. A low-cost sonar sensor was attached to a 3D frame with millimetre-level positioning accuracy, so the exact sensor position was known. From the sensor position and the sensor readings, a 3D point cloud was created. Ten individual plants were placed in the workspace; they could be detected automatically with an accuracy of 2.7 cm. An attached valve was able to spray these specific plant positions, which resulted in a liquid saving of 72% compared to a conventional spraying method covering the whole crop row area.
    As plants are dynamic objects, the third objective, describing plant growth with adequate sensor data, was important for characterising the unstructured agriculture domain. To reference and test algorithms against the same data, maize rows were planted in a greenhouse. The exact positions of all plants were measured with a total station. A robot vehicle was then guided through the crop rows and the data of the attached sensors were recorded. With the help of the total station, it was possible to track the vehicle position and to refer all data to the same coordinate frame. The data recording was performed 7 times over a period of 6 weeks. The resulting datasets could afterwards be used to assess different algorithms and to test them against different growth stages of the plants. It could be shown that a basic RANSAC line-following algorithm could not perform correctly across all growth stages without additional filtering.
    The fourth paper used these datasets to search for single plants with a sensor normally used for obstacle avoidance. A tilted laser scanner was used together with the exact robot position to create 3D point clouds, to which two different methods for single plant detection were applied. Both methods used plant spacing to detect single plants; the second additionally used the fixed plant spacing and the row beginning to resolve the plant positions iteratively. The first method reached a detection rate of 73.7% with a root mean square error of 3.6 cm; the iterative second method reached a detection rate of 100% with an accuracy of 2.6-3.0 cm. To assess the robustness of the plant detection, an algorithm was used to detect the plant positions in six different growth stages of the given datasets. A graph-cut based algorithm was used, which improved the results for single plant detection. As this algorithm is not sensitive to overlapping and noisy point clouds, a detection rate of 100% was achieved, with plant height estimated to an accuracy of 1.55 cm and the stem position resolved to an accuracy of 2.05 cm.
    This thesis presented different methods of perception for context awareness, which could help to improve the robustness of robots in agriculture. When the objects in the environment are known, robots may be able to react and interact with the environment more intelligently than is currently the case in agricultural robotics. Especially the detection of single plants before the robot reaches them could help to improve the navigation and interaction of agricultural robots.
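
    The node localization in the first study, from received signal strength and known robot positions, can be realised with a log-distance path-loss model and linearised least-squares trilateration. The sketch below shows that standard construction; the reference values `rssi0`, `n` and `d0` are assumed, not taken from the dissertation:

```python
import numpy as np

def rssi_to_distance(rssi, rssi0=-40.0, n=2.5, d0=1.0):
    """Invert the log-distance path-loss model
    rssi = rssi0 - 10 * n * log10(d / d0)."""
    return d0 * 10 ** ((rssi0 - rssi) / (10.0 * n))

def locate_node(robot_xy, ranges):
    """Least-squares 2D position of a static node from range estimates
    taken at several known robot positions (>= 3, not collinear)."""
    p = np.asarray(robot_xy, dtype=float)  # (k, 2) robot positions
    d = np.asarray(ranges, dtype=float)    # (k,) estimated ranges
    # Subtracting the first circle equation linearises ||x - p_i||^2 = d_i^2.
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

    With ranges derived from `rssi_to_distance` at each pose along the vineyard row, `locate_node` reproduces the kind of triangulation result reported above.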