
    Real-time Stereo Visual Servoing for Rose Pruning with Robotic Arm

    The paper presents a working pipeline that integrates hardware and software in an automated robotic rose cutter. To the best of our knowledge, this is the first robot able to prune rose bushes in a natural environment. Unlike similar approaches such as tree-stem cutting, the proposed method does not require scanning the full plant, placing multiple cameras around the bush, or assuming that a stem does not move. It relies on a single stereo camera mounted on the end-effector of the robot and real-time visual servoing to navigate to the desired cutting location on the stem. An evaluation of the whole pipeline shows good performance in a garden under unconstrained conditions, where finding and approaching a specific location on a stem is challenging due to occlusions caused by other stems and dynamic changes caused by the wind.
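The image-based visual servoing described above can be illustrated with a minimal sketch: a proportional control law that drives the detected cutting point toward a target image location. The gain, coordinates and convergence behaviour are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ibvs_step(current_px, target_px, gain=0.5):
    """One image-based visual servoing step (hypothetical sketch):
    return a 2-D image-space correction that moves the tracked
    cutting point toward the target image location."""
    error = np.asarray(current_px, dtype=float) - np.asarray(target_px, dtype=float)
    return -gain * error  # proportional control: reduce the image error each step

# Simulated servo loop: the tracked feature converges toward the target.
feature = np.array([320.0, 100.0])   # detected cutting point (px), illustrative
target = np.array([256.0, 256.0])    # desired location in the image, illustrative
for _ in range(20):
    feature += ibvs_step(feature, target)
```

With a proportional gain below 1 the image error shrinks geometrically each iteration, which is the basic convergence property such a servo loop relies on.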

    Looking behind occlusions: A study on amodal segmentation for robust on-tree apple fruit size estimation

    The detection and sizing of fruits with computer vision methods are of interest because they provide relevant information for improving the management of orchard farming. However, the presence of partially occluded fruits limits the performance of existing methods, making reliable fruit sizing a challenging task. While previous fruit-segmentation works limit segmentation to the visible region of fruits (known as modal segmentation), in this work we propose an amodal segmentation algorithm to predict the complete shape of each fruit, including its visible and occluded regions. To do so, an end-to-end convolutional neural network (CNN) for simultaneous modal and amodal instance segmentation was implemented. The predicted amodal masks were used to estimate the fruit diameters in pixels. Modal masks were used to identify the visible region and to measure the distance between the apples and the camera using the depth image. Finally, the fruit diameters in millimetres (mm) were computed by applying the pinhole camera model. The method was developed with a Fuji apple dataset consisting of 3925 RGB-D images acquired at different growth stages, with a total of 15,335 annotated apples, and was subsequently tested in a case study to measure the diameter of Elstar apples at different growth stages. Fruit detection results showed an F1-score of 0.86, and the fruit diameter results reported a mean absolute error (MAE) of 4.5 mm and R2 = 0.80 irrespective of fruit visibility. Besides the diameter estimation, modal and amodal masks were used to automatically determine the percentage of visibility of measured apples. This feature was used as a confidence value, improving the diameter estimation to MAE = 2.93 mm and R2 = 0.91 when limiting the size estimation to fruits detected with a visibility higher than 60%. The main advantages of the present methodology are its robustness for measuring partially occluded fruits and the capability to determine the visibility percentage.
The main limitation is that depth images were generated by means of photogrammetry, which limits the efficiency of data acquisition. To overcome this limitation, future work should consider the use of commercial RGB-D sensors. The code and the dataset used to evaluate the method have been made publicly available at https://github.com/GRAP-UdL-AT/Amodal_Fruit_Sizing

This work was partly funded by the Departament de Recerca i Universitats de la Generalitat de Catalunya (grant 2021 LLAV 00088) and the Spanish Ministry of Science, Innovation and Universities (grants RTI2018-094222-B-I00 [PAgFRUIT project], PID2021-126648OB-I00 [PAgPROTECT project] and PID2020-117142GB-I00 [DeeLight project] by MCIN/AEI/10.13039/501100011033 and by “ERDF, a way of making Europe”, by the European Union). The work of Jordi Gené Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU. We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) for their support during data acquisition, and Pieter van Dalfsen and Dirk de Hoog from Wageningen University & Research for additional data collection used in the case study.
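The pinhole-model diameter computation and the visibility-based filtering described in the abstract can be sketched as follows; the focal length, depth and visibility values are illustrative, not taken from the dataset.

```python
def fruit_diameter_mm(diameter_px, depth_mm, focal_px):
    """Pinhole camera model: a length of diameter_px pixels observed at
    distance depth_mm projects from a real-world length of
    diameter_px * depth_mm / focal_px millimetres."""
    return diameter_px * depth_mm / focal_px

def confident_detections(detections, min_visibility=0.6):
    """Keep only fruits whose visibility (modal/amodal area ratio)
    exceeds the threshold, as in the paper's >60% confidence filter."""
    return [d for d in detections if d["visibility"] > min_visibility]

# Example with assumed values: a 120 px amodal diameter at 1.5 m depth
# with a 2400 px focal length gives a 75 mm fruit.
size_mm = fruit_diameter_mm(120, 1500, 2400)
```

The visibility filter mirrors the reported improvement from MAE = 4.5 mm to MAE = 2.93 mm when size estimation is limited to fruits more than 60% visible.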

    TrimBot2020: an outdoor robot for automatic gardening

    Robots are increasingly present in modern industry and in everyday life. Their applications range from health-related situations, such as assistance to elderly people or surgical operations, to automatic driverless vehicles (on wheels or flying) and driving assistance. Recently, interest in robotics applied to agriculture and gardening has arisen, with applications in automatic seeding, cropping and plant disease control. Autonomous lawn mowers are a successful market application of gardening robotics. In this paper, we present a novel robot developed within the TrimBot2020 project, funded by the EU H2020 programme. The project aims at prototyping the first outdoor robot for automatic bush trimming and rose pruning.

    Comment: Accepted for publication at the International Symposium on Robotics 201

    Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation

    A modular software framework design that allows flexible implementation of eye-in-hand sensing and motion control for agricultural robotics in dense vegetation is reported. Harvesting robots in cultivars with dense vegetation require multiple viewpoints and on-line trajectory adjustments in order to reduce the number of false negatives and to correct for fruit movement. In contrast to specialised software, the proposed framework aims to support a wide variety of agricultural use cases, hardware and extensions. A set of Robot Operating System (ROS) nodes was created to ensure modularity and separation of concerns, implementing functionalities for application control, robot motion control, image acquisition, fruit detection, visual servo control, and simultaneous localisation and mapping (SLAM) for monocular relative depth estimation and scene reconstruction. Coordination functionality was implemented in the application control node with a finite state machine. To provide visual servo control and SLAM functionalities, the off-the-shelf libraries Visual Servoing Platform (ViSP) and Large-Scale Direct SLAM (LSD-SLAM) were wrapped in ROS nodes. The capabilities of the framework are demonstrated by an example implementation for a sweet-pepper crop, combined with hardware consisting of a Baxter robot and a colour camera placed on its end-effector. Qualitative tests were performed under laboratory conditions using an artificial dense-vegetation sweet-pepper crop. Results indicated the framework can be implemented for sensing and robot motion control in sweet-pepper harvesting using visual information from the end-effector. Future research to apply the framework to other use cases and validate the performance of its components in servo applications under real greenhouse conditions is suggested.
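The finite-state-machine coordination mentioned above can be sketched minimally as a transition table; the state and event names here are hypothetical, not those of the actual framework.

```python
# Hypothetical transition table for an application-control node.
# Keys are (state, event) pairs; values are the next state.
TRANSITIONS = {
    ("search", "fruit_found"): "approach",
    ("approach", "target_reached"): "grasp",
    ("approach", "fruit_lost"): "search",
    ("grasp", "grasp_ok"): "retract",
    ("retract", "done"): "search",
}

def step(state, event):
    """Advance the machine; unknown (state, event) pairs keep the state."""
    return TRANSITIONS.get((state, event), state)

# Walk a plausible harvesting episode, including a lost-fruit recovery.
state = "search"
for event in ["fruit_found", "fruit_lost", "fruit_found",
              "target_reached", "grasp_ok", "done"]:
    state = step(state, event)
```

Keeping the coordination logic in one explicit table is what lets the other ROS nodes (detection, servoing, SLAM) stay independent of application flow.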

    Precisietechnologie Tuinbouw: PPS Autonoom onkruid verwijderen: Eindrapportage (Precision Technology Horticulture: PPS Autonomous Weed Removal: Final Report)

    Work package 2 of the Precision Technology Horticulture programme focuses on autonomous weed removal. This final report covers four deliverables: D2.1, a module for recognition of red lettuce; D2.2, a vision-based crop-row guidance module; D2.3, a machine for hoeing more than 8 crop rows simultaneously; and D2.6, an actuator for controlling weeds in full-field crops. D2.1 reports on the software for an extra colour-segmentation algorithm that has been added to the Steketee IC-cultivator. With this algorithm it is possible to adequately detect and distinguish non-green plants, such as red lettuce, from weeds. For D2.2, a standalone module for crop-row guidance for hoeing between the rows was developed. D2.3 describes the extension in hardware and software that makes it possible to hoe up to 24 crop rows simultaneously. For D2.6, research was conducted into different robotic arms for moving the end-effector to the right spot for weed control in full-field crops. Different arms were compared and the maximum possible driving speed was calculated. A test device for full-field weed control was built based on an x-z positioning unit. The chapter on publications and media provides an overview of the dissemination of the project results throughout the duration of the project.
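The colour-segmentation idea behind D2.1 (detecting non-green crops such as red lettuce among green weeds) can be sketched as a simple channel-comparison rule; the rule and margin are illustrative assumptions, not the Steketee algorithm.

```python
import numpy as np

def red_plant_mask(rgb, margin=20):
    """Hypothetical rule in the spirit of D2.1: a pixel belongs to a
    non-green plant (e.g. red lettuce) when its red channel exceeds
    both green and blue by at least `margin` counts."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r - g > margin) & (r - b > margin)

# Tiny synthetic image: a red-lettuce pixel, a green-weed pixel, a soil pixel.
img = np.array([[[180, 60, 70], [60, 160, 60], [120, 110, 100]]], dtype=np.uint8)
mask = red_plant_mask(img)
```

A real system would work in a less illumination-sensitive colour space (e.g. HSV) and add morphological filtering, but the crop/weed decision reduces to a per-pixel colour rule of this kind.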

    Angle estimation between plant parts for grasp optimisation in harvest robots

    For many robotic harvesting applications, the position of and angle between plant parts are required to optimally position the end-effector before attempting to approach, grasp and cut the product. A method for estimating the angle between plant parts, e.g. stem and fruit, is presented to support the optimisation of grasp pose for harvest robots. The hypothesis is that this angle in the horizontal plane can be accurately derived from colour images under unmodified greenhouse conditions. It was hypothesised that the locations of a fruit and stem could be inferred in the image plane from sparse semantic segmentations. The paper focussed on 4 sub-tasks for a sweet-pepper harvesting robot. Each task was evaluated under 3 conditions: laboratory, simplified greenhouse and unmodified greenhouse. The requirements for each task were based on an end-effector design that required a 25° positioning accuracy. In Task I, colour image segmentation into the classes background, fruit, and stem plus wire was performed, meeting the requirement of an intersection-over-union > 0.58. In Task II, the stem pose was estimated from the segmentations. In Task III, the centres of the fruit and stem were estimated from the output of the previous tasks. Both centre estimations, in Tasks II and III, met the requirement of 25-pixel accuracy on average. In Task IV, the centres were used to estimate the angle between the fruit and stem, meeting the accuracy requirement of 25° in 73% of the cases. The work impacted harvest performance by increasing its success rate from a theoretical 14% to 52% in practice under unmodified conditions.
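The Task IV computation, estimating an angle from the fruit and stem centres, reduces to basic trigonometry in the image plane; the coordinates and axis convention below are illustrative, not the paper's exact formulation.

```python
import math

def fruit_stem_angle_deg(fruit_centre, stem_centre):
    """Angle (degrees) of the fruit relative to the stem in the image
    plane, measured from the horizontal image axis. Centres are (x, y)
    pixel coordinates; names are illustrative, not the paper's notation."""
    dx = fruit_centre[0] - stem_centre[0]
    dy = fruit_centre[1] - stem_centre[1]
    return math.degrees(math.atan2(dy, dx))

# A fruit directly to the right of the stem sits at 0° from the
# horizontal axis in this convention.
angle = fruit_stem_angle_deg((420, 300), (300, 300))
```

An end-effector planner could then compare this estimate against the 25° positioning tolerance before committing to an approach direction.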