68 research outputs found

    Localization, Navigation and Activity Planning for Wheeled Agricultural Robots – A Survey

    Source at: https://fruct.org/publications/volume-32/fruct32/

    High cost, time-intensive work, labour shortages and inefficient strategies have raised the need to employ mobile robotics to fully automate agricultural tasks and fulfil the requirements of precision agriculture. To perform an agricultural task, a mobile robot goes through a sequence of sub-operations and an integration of hardware and software systems. Starting with localization, an agricultural robot uses sensor systems to estimate its current position and orientation in the field, then employs algorithms to find optimal paths and reach target positions. It then uses techniques and models to perform feature recognition, and finally executes the agricultural task through an end effector. This article, compiled by scrutinizing the current literature, is a step-by-step account of the strategies by which these sub-operations are performed and integrated. An analysis is also given of the limitations of each sub-operation, the available solutions, and the ongoing research focus.

    Unmanned aerial vehicle based tree canopy characteristics measurement for precision spray applications

    The critical inputs for applying the correct amount of agrochemicals are fruit tree characteristics such as canopy height, canopy volume, and canopy coverage. An unmanned aerial vehicle (UAV)-based tree canopy characteristics measurement system was developed using image processing approaches. The UAV captured images using a high-resolution red-green-blue (RGB) camera. A digital surface model (DSM) and a digital terrain model (DTM) were generated from the captured images, and a tree canopy height map was generated by subtracting the DTM from the DSM. A total of 24 apple trees were randomly targeted to measure the canopy characteristics. A region of interest (ROI) was generated across the boundary of each targeted tree. The height of every pixel within each ROI was computed separately, and the maximum pixel height was taken as the height of the respective tree. For canopy volume, the sum of all pixel heights in each ROI was multiplied by the square of the ground sample distance (GSD) of 5.69 mm·pixel−1. A segmentation method was employed to calculate the canopy coverage of the individual trees: the segmented canopy pixel area was divided by the total pixel area within the ROI. The results showed an average relative error of 0.2 m (6.64%) when comparing automatically measured tree heights with ground measurements. For tree canopy volume, a mean absolute error of 0.25 m³ and a root mean square error of 0.33 m³ were achieved. The study estimated the agrochemical requirement for spraying the fruit trees, ranging from 0.1 to 0.32 L depending on tree canopy volume. The overall investigation suggests that UAV-based tree canopy characteristics measurement could be a practical tool for calculating the pesticide requirement in precision spraying applications in tree fruit orchards.
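    The per-tree computation described above (height from the tallest pixel, volume from summed pixel heights times GSD², coverage from the segmented fraction of the ROI) can be sketched as below. The function name, the array layout, and the simple height threshold standing in for the paper's segmentation step are illustrative assumptions, not the authors' code.

```python
import numpy as np

def canopy_metrics(height_map, roi_mask, gsd_m=0.00569):
    """Tree height, canopy volume and coverage from a canopy height map.

    height_map : 2-D array of canopy heights in metres (DSM minus DTM)
    roi_mask   : boolean array marking pixels inside the tree's ROI
    gsd_m      : ground sample distance in metres per pixel
                 (5.69 mm/pixel in the study)
    """
    heights = height_map[roi_mask]
    tree_height = heights.max()                # tallest pixel in the ROI
    volume = heights.sum() * gsd_m ** 2        # sum of pixel heights x GSD^2
    canopy_pixels = (heights > 0.0).sum()      # height threshold as a stand-in
    coverage = canopy_pixels / roi_mask.sum()  # for the segmentation step
    return tree_height, volume, coverage
```

    A usage sketch: for a 2×2 ROI with heights [[2, 0], [1, 3]] m, the tree height is 3 m, the volume is 6 × GSD², and the coverage is 3/4.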

    Fruit Detection and Tree Segmentation for Yield Mapping in Orchards

    Accurate information gathering and processing is critical for precision horticulture, as growers aim to optimise their farm management practices. An accurate inventory of the crop that details its spatial distribution along with health and maturity can help farmers efficiently target processes such as chemical and fertiliser spraying, crop thinning, harvest management, labour planning and marketing. Growers have traditionally obtained this information by using manual sampling techniques, which tend to be labour intensive, spatially sparse, expensive, inaccurate and prone to subjective biases. Recent advances in sensing and automation for field robotics allow key measurements to be made for individual plants throughout an orchard in a timely and accurate manner. Farmer-operated machines or unmanned robotic platforms can be equipped with a range of sensors to capture a detailed representation over large areas. Robust and accurate data processing techniques are therefore required to extract the high-level information needed by the grower to support precision farming.

    This thesis focuses on yield mapping in orchards using image and light detection and ranging (LiDAR) data captured using an unmanned ground vehicle (UGV). The contribution is the framework and algorithmic components for orchard mapping and yield estimation that are applicable to different fruit types and orchard configurations. The framework includes detection of fruits in individual images and tracking them over subsequent frames. The fruit counts are then associated to individual trees, which are segmented from image and LiDAR data, resulting in a structured spatial representation of yield.

    The first contribution of this thesis is the development of a generic and robust fruit detection algorithm. Images captured in the outdoor environment are susceptible to highly variable external factors that lead to significant appearance variations. Specifically in orchards, variability is caused by changes in illumination, target pose, tree types, etc. The proposed techniques address these issues by using state-of-the-art feature learning approaches for image classification, while investigating the utility of orchard domain knowledge for fruit detection. Detection is performed using both pixel-wise classification of images followed by instance segmentation, and bounding-box regression approaches. The experimental results illustrate the versatility of complex deep learning approaches over a multitude of fruit types.

    The second contribution of this thesis is a tree segmentation approach to detect the individual trees that serve as a standard unit for structured orchard information systems. The work focuses on trellised trees, which present unique challenges for segmentation algorithms due to their intertwined nature. LiDAR data are used to segment the trellis face and to generate proposals for individual tree trunks. Additional trunk proposals are provided using pixel-wise classification of the image data. The multi-modal observations are fine-tuned by modelling trunk locations using a hidden semi-Markov model (HSMM), within which prior knowledge of tree spacing is incorporated.

    The final component of this thesis addresses the visual occlusion of fruit within geometrically complex canopies by using a multi-view detection and tracking approach. Single-image fruit detections are tracked over a sequence of images and associated to individual trees or farm rows, with the spatial distribution of the fruit counts forming a yield map over the farm. The results show the advantage of using multi-view imagery (instead of single-view analysis) for fruit counting and yield mapping. This thesis includes extensive experimentation in almond, apple and mango orchards, with data captured by a UGV spanning a total of 5 hectares of farm area, over 30 km of vehicle traversal and more than 7,000 trees.

    The validation of the different processes is performed using manual annotations, which include fruit and tree locations in the image and LiDAR data respectively. Additional evaluation of yield mapping is performed by comparison against fruit counts on trees at the farm and counts made by the growers post-harvest. The framework developed in this thesis is demonstrated to be accurate compared to ground truth at all scales of the pipeline, including fruit detection and tree mapping, leading to accurate yield estimation, per tree and per row, for the different crops. Through the multitude of field experiments conducted over multiple seasons and years, the thesis presents key practical insights necessary for commercial development of an information gathering system in orchards.
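    As an illustration of the association step (tracked fruit detections assigned to individual trees along a row), the following is a minimal one-dimensional sketch that assigns each fruit to its nearest trunk and tallies per-tree counts. The function name and the nearest-trunk rule are assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np

def counts_per_tree(fruit_x, trunk_x):
    """Assign each tracked fruit to its nearest trunk along the row
    and return a per-tree count (hypothetical 1-D simplification of
    the image/LiDAR association step described in the thesis).

    fruit_x : along-row positions (m) of tracked fruit detections
    trunk_x : along-row positions (m) of segmented tree trunks
    """
    fruit_x = np.asarray(fruit_x, dtype=float)
    trunk_x = np.asarray(trunk_x, dtype=float)
    # index of the nearest trunk for every fruit position
    nearest = np.abs(fruit_x[:, None] - trunk_x[None, :]).argmin(axis=1)
    # histogram of assignments, padded so empty trees still appear
    return np.bincount(nearest, minlength=len(trunk_x))
```

    For example, fruits at 0.1, 0.2, 1.9, 2.1 and 4.0 m with trunks at 0, 2 and 4 m yield per-tree counts of 2, 2 and 1.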

    New strategies for row-crop management based on cost-effective remote sensors

    Agricultural technology can be an excellent antidote to resource scarcity. Its growth has led to the extensive study of spatial and temporal in-field variability. The challenge of accurate management has been addressed in recent years through researchers' use of accurate, high-cost measurement instruments. However, low rates of technological adoption by farmers motivate the development of alternative technologies based on affordable sensors, in order to improve the sustainability of agricultural biosystems. The main objective of this doctoral thesis is the development and evaluation of systems based on affordable sensors that address two of the main aspects affecting producers: the need for an accurate characterization of plant water status to perform proper irrigation management, and precise weed control. To address the first objective, two data acquisition methodologies based on aerial platforms were developed, comparing the use of infrared thermometry and thermal imaging to determine the water status of two of the most relevant row crops in the region, sugar beet and super-high-density olive orchards. From the data obtained, the use of an airborne low-cost infrared sensor to determine canopy temperature has been validated, and the reliability of sugar beet canopy temperature as an indicator of its water status has been confirmed. The empirical development of the Crop Water Stress Index (CWSI) has also been carried out from aerial thermal imaging combined with infrared temperature sensors and ground measurements of factors such as water potential and stomatal conductance, validating its usefulness as an indicator of water status in super-high-density olive orchards.

    To contribute to the development of precise weed control systems, a system for detecting tomato plants and measuring the space between them has been developed, aiming to perform intra-row treatments in a localized and precise way. To this end, low-cost optical sensors have been used and compared with a commercial LiDAR laser scanner. Correct detection rates close to 95% show that these sensors can lead to promising advances in the automation of weed control. The micro-level field data collected from the evaluated affordable sensors can help farmers to target operations precisely before plant stress sets in or weed infestation occurs, paving the path to wider adoption of Precision Agriculture techniques.
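    The CWSI mentioned above is conventionally computed from the canopy temperature and wet (non-stressed) and dry (fully stressed) reference temperatures. Below is a minimal sketch of that classic two-reference form; the thesis derives its baselines empirically from thermal imagery and ground measurements, so this is only the textbook formulation, not the thesis's calibration.

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index from canopy temperature (deg C) and
    wet/dry reference temperatures: 0 = well watered, 1 = fully
    stressed.  Classic two-reference formulation."""
    if t_dry <= t_wet:
        raise ValueError("t_dry must exceed t_wet")
    return (t_canopy - t_wet) / (t_dry - t_wet)
```

    For instance, a canopy at 30 °C between references of 25 °C (wet) and 35 °C (dry) gives a CWSI of 0.5, i.e. moderate water stress.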

    Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion

    Agricultural mobile robots have great potential to effectively implement different agricultural tasks. They can save human labour costs, avoid the need for people to perform risky operations and increase productivity. Automation and advanced sensing technologies can provide up-to-date information that helps farmers in orchard management. Data collected from on-board sensors on a mobile robot provide information that can help the farmer detect tree or fruit diseases or damage, measure tree canopy volume and monitor fruit development. In orchards, trees are natural landmarks providing suitable cues for mobile robot localisation and navigation, as trees are nominally planted in straight and parallel rows.

    This thesis presents a novel tree trunk detection algorithm that detects trees and discriminates between trees and non-tree objects in the orchard using a camera and 2D laser scanner data fusion. A local orchard map of the individual trees was developed, allowing the mobile robot to navigate to a specific tree in the orchard to perform a specific task such as tree inspection. Furthermore, this thesis presents a localisation algorithm that does not rely on GPS positions and depends only on the on-board sensors of the mobile robot, without adding any artificial landmarks, reflective tapes or tags to the trees. The novel tree trunk detection algorithm combined the features extracted from a low-cost camera's images and 2D laser scanner data to increase the robustness of the detection. The developed algorithm used a new method to detect the edge points and determine the width of the tree trunks and non-tree objects from the laser scan data. A projection of the edge points from the laser scanner coordinates to the image plane was then implemented to construct a region of interest with the required features for tree trunk colour and edge detection. The camera images were used to verify the colour and the parallel edges of the tree trunks and non-tree objects. The algorithm automatically adjusted the colour detection parameters after each test, which was shown to increase the detection accuracy.

    The orchard map was constructed based on tree trunk detection and consisted of the 2D positions of the individual trees and non-tree objects. The map of the individual trees was used as an a priori map for mobile robot localisation. A data fusion algorithm based on an Extended Kalman filter was used for pose estimation of the mobile robot on different paths (midway between rows, close to the rows and moving around trees in the row) and different turns (semi-circle and right-angle turns) required for tree inspection tasks. The 2D positions of the individual trees were used in the correction step of the Extended Kalman filter to enhance localisation accuracy.

    Experimental tests were conducted in a simulated environment and a real orchard to evaluate the performance of the developed algorithms. The tree trunk detection algorithm was evaluated under two broad illumination conditions (sunny and cloudy). The algorithm was able to detect the tree trunks (regular and thin) and discriminate between trees and non-tree objects with a detection accuracy of 97%, showing that the fusion of vision and 2D laser scanner technologies produced robust tree trunk detection. The mapping method successfully localised all the trees and non-tree objects of the tested tree rows in the orchard environment. The mapping results indicated that the constructed map can be reliably used for mobile robot localisation and navigation. The localisation algorithm was evaluated against logged RTK-GPS positions for different paths and headland turns. The averages of the RMS position errors in the x and y coordinates and the Euclidean distance were 0.08 m, 0.07 m and 0.103 m respectively, whilst the average RMS heading error was 3.32°. These results were considered acceptable while driving along the rows and when executing headland turns for the target application of autonomous mobile robot navigation and tree inspection tasks in orchards.
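    The width of a candidate trunk can be recovered from the two edge points of its laser-scan segment via the law of cosines on the two polar returns (range and bearing). The sketch below is an assumed simplification for illustration, not the thesis's exact edge-point method.

```python
import math

def trunk_width(r1, a1, r2, a2):
    """Approximate width of a candidate trunk from the two edge
    points of its 2D laser-scan segment, each given as a
    (range in m, bearing in rad) pair from the scanner origin.
    The chord between the two returns follows the law of cosines.
    """
    return math.sqrt(r1 ** 2 + r2 ** 2 - 2.0 * r1 * r2 * math.cos(a2 - a1))
```

    As a sanity check, two returns at 1 m range separated by 90° give a chord of √2 ≈ 1.414 m; for a real trunk the angular separation would be a few degrees and the chord correspondingly small.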

    Proceedings of the European Conference on Agricultural Engineering AgEng2021

    This proceedings book results from the AgEng2021 Agricultural Engineering Conference, held under the auspices of the European Society of Agricultural Engineers in an online format hosted by the University of Évora, Portugal, from 4 to 8 July 2021. The book contains the full papers of a selection of abstracts that formed the basis of the oral presentations and posters given at the conference. Presentations were distributed across eleven thematic areas: Artificial Intelligence, data processing and management; Automation, robotics and sensor technology; Circular Economy; Education and Rural development; Energy and bioenergy; Integrated and sustainable Farming systems; New application technologies and mechanisation; Post-harvest technologies; Smart farming / Precision agriculture; Soil, land and water engineering; Sustainable production in Farm buildings.

    Crop plant reconstruction and feature extraction based on 3-D vision

    3-D imaging is increasingly affordable and offers new possibilities for more efficient agricultural practice through the use of highly advanced technological devices. Some reasons contributing to this possibility include the continuous increase in computer processing power, the decrease in cost and size of electronics, the increase in solid-state illumination efficiency and the need for greater knowledge and care of individual crops. The implementation of 3-D imaging systems in agriculture has been impeded by the difficulty of economically justifying expensive devices for producing relatively low-cost seasonal products. However, this may no longer be true, since low-cost 3-D sensors with advanced technical capabilities, such as the one used in this work, are already available.

    The aim of this cumulative dissertation was to develop new methodologies to reconstruct the 3-D shape of the agricultural environment in order to recognize and quantitatively describe structures, in this case maize plants, for agricultural applications such as plant breeding and precision farming. To fulfil this aim, a comprehensive review of 3-D imaging systems in agricultural applications was carried out to select a sensor that was affordable and had not been fully investigated in agricultural environments. A low-cost time-of-flight (TOF) sensor was selected to obtain 3-D data of maize plants, and a new adaptive methodology was proposed for rigid point cloud registration and stitching. The resulting maize 3-D point clouds were highly dense and generated in a cost-effective manner. The validation of the methodology showed that the plants were reconstructed with high accuracy, and the qualitative analysis showed the visual variability of the plants depending on the 3-D perspective view. The generated point cloud was used to obtain information about the plant parameters (stem position and plant height) in order to quantitatively describe the plant. The resulting plant stem positions were estimated with an average mean error and standard deviation of 27 mm and 14 mm, respectively. Additionally, meaningful information about the plant height profile was also provided, with an average overall mean error of 8.7 mm. Since the maize plants considered in this research were highly heterogeneous in height, some of them had folded leaves, and they were planted with standard deviations that emulate the real performance of a seeder, the experimental maize setup can be considered a difficult scenario. Therefore, better performance for both plant stem position and height estimation could be expected for a maize field in better conditions. Finally, having a 3-D reconstruction of the maize plants using a cost-effective sensor mounted on a small electric-motor-driven robotic platform means that the cost (whether economic, energetic or temporal) of generating every point in the point cloud is greatly reduced compared with previous research.
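    The extraction of the two plant parameters named above (stem position and plant height) from a registered point cloud can be sketched as follows, assuming the cloud has already been clustered into one array per plant. The function name, the 5 cm base slice and the centroid rule are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def plant_stats(points):
    """Plant height and stem position from a single plant's 3-D
    point cloud (N x 3 array with z pointing up).

    Height is the vertical extent of the cloud; the stem position is
    taken as the x-y centroid of the lowest 5 cm slice of points,
    where the stem meets the ground.
    """
    z = points[:, 2]
    height = z.max() - z.min()
    base = points[z < z.min() + 0.05]   # lowest 5 cm slice of the cloud
    stem_xy = base[:, :2].mean(axis=0)  # centroid of the base points
    return height, stem_xy
```

    On a toy cloud with points at (0, 0, 0), (0.01, 0.01, 0.02) and (0.1, 0.1, 1.0), this returns a height of 1.0 m and a stem position of (0.005, 0.005).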