52 research outputs found

    3D-vision based detection, localization, and sizing of broccoli heads in the field

    This paper describes a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors, which was developed and evaluated using sensory data collected under real-world field conditions in both the UK and Spain. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning, and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, a Support Vector Machine classifier, and a temporal filter to track the detected heads results in a system that detects broccoli heads with high precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field. Additionally, we present methods for automatically estimating the size of the broccoli heads, to determine when a head is ready for harvest. All of the methods were evaluated using ground-truth data from both the UK and Spain, which we also make available to the research community for subsequent algorithm development and result comparison. Cross-evaluation of the system trained on the UK dataset against the Spanish dataset, and vice versa, indicated good generalization capabilities of the system, confirming the strong potential of low-cost 3D imaging for commercial broccoli harvesting.
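
    As a rough illustration of the classification and temporal-filtering stages only (not the authors' implementation), the Python sketch below trains an SVM on precomputed 3D shape descriptors, e.g. 308-dimensional Viewpoint Feature Histograms exported from PCL, and smooths per-frame detection scores for one tracked candidate with a simple exponential filter. The descriptor files, labels, and the filter itself are assumptions.

    # Hedged sketch: SVM over precomputed 3D descriptors with simple temporal smoothing.
    # The descriptor arrays (e.g. 308-D Viewpoint Feature Histograms exported from PCL)
    # and their file names are assumptions for illustration, not the paper's code.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # X: one descriptor per candidate region, y: 1 = broccoli head, 0 = background
    X = np.load("vfh_descriptors.npy")   # shape (n_samples, 308), assumed file
    y = np.load("labels.npy")            # shape (n_samples,), assumed file

    clf = SVC(kernel="rbf", probability=True)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    clf.fit(X, y)

    def smooth_scores(frame_scores, alpha=0.7):
        """Exponentially smooth per-frame detection scores for one tracked candidate."""
        smoothed, s = [], frame_scores[0]
        for p in frame_scores:
            s = alpha * s + (1 - alpha) * p
            smoothed.append(s)
        return smoothed

    # Example: probability of 'head' over consecutive frames for one tracked region
    track = clf.predict_proba(X[:10])[:, 1]
    print(smooth_scores(track))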

    Model-based 3D point cloud segmentation for automated selective broccoli harvesting

    Segmentation of 3D objects in cluttered scenes is a highly relevant problem. Given a 3D point cloud produced by a depth sensor, the goal is to separate objects of interest in the foreground from other elements in the background. We research 3D imaging methods to accurately segment and identify broccoli plants in the field. The ability to separate parts into different sets of sensor readings is an important task towards this goal. Our research is focused on the broccoli head segmentation problem as a first step towards size estimation of each broccoli crop, in order to establish whether or not it is suitable for cutting.
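
    The paper's approach is model-based; as a generic, hedged alternative illustrating the same foreground/background separation task, the sketch below removes the dominant ground plane with RANSAC and clusters the remaining points using Open3D. The input file name and the thresholds are assumptions.

    # Hedged sketch of generic foreground/background separation in a 3D point cloud:
    # remove the dominant ground plane with RANSAC, then cluster the remaining points.
    # This is a standard Open3D recipe, not the model-based method of the paper;
    # the input file name and thresholds are assumptions.
    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("broccoli_scene.pcd")  # assumed file

    # Fit and remove the dominant plane (the soil surface)
    plane, inliers = pcd.segment_plane(distance_threshold=0.02,
                                       ransac_n=3, num_iterations=1000)
    foreground = pcd.select_by_index(inliers, invert=True)

    # Cluster the remaining points; large clusters are candidate plants/heads
    labels = np.array(foreground.cluster_dbscan(eps=0.03, min_points=50))
    for k in range(labels.max() + 1):
        cluster = foreground.select_by_index(np.where(labels == k)[0])
        print(f"cluster {k}: {len(cluster.points)} points")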

    Segmentation and detection from organised 3D point clouds: a case study in broccoli head detection

    Autonomous harvesting is becoming an important challenge and necessity in agriculture, because of labour shortages and the growth of the population needing to be fed. Perception is a key aspect of autonomous harvesting and is very challenging due to difficult lighting conditions, limited sensing technologies, occlusions, plant growth, etc. 3D vision approaches can bring several benefits in addressing these challenges, such as localisation, size estimation, occlusion handling and shape analysis. In this paper, we propose a novel approach using 3D information for detecting broccoli heads based on Convolutional Neural Networks (CNNs), exploiting the organised nature of the point clouds originating from RGB-D sensors. The proposed algorithm, tested on real-world datasets, achieves better performance than the state of the art, with better accuracy and generalisation in unseen scenarios, whilst significantly reducing inference time, making it better suited for real-time in-field applications.
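
    The key idea exploited here is that an organised point cloud from an RGB-D sensor lies on the image grid, so ordinary 2D CNN machinery applies directly. The sketch below is a minimal, hypothetical PyTorch network consuming a 6-channel (XYZ + RGB) organised cloud and producing a per-pixel head/background map; it is not the architecture proposed in the paper.

    # Hedged sketch: because an organised point cloud from an RGB-D sensor is laid out
    # on the image grid, a standard 2-D CNN can consume it directly as a multi-channel
    # image (here XYZ + RGB = 6 channels). The tiny architecture below only illustrates
    # that idea and is not the network described in the paper.
    import torch
    import torch.nn as nn

    class OrganisedCloudNet(nn.Module):
        def __init__(self, in_channels=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            # 1x1 convolution gives a per-pixel head / background score map
            self.classifier = nn.Conv2d(64, 1, 1)

        def forward(self, x):          # x: (B, 6, H, W) organised cloud tensor
            return torch.sigmoid(self.classifier(self.features(x)))

    # Example forward pass on a dummy 480x640 organised cloud
    net = OrganisedCloudNet()
    dummy = torch.randn(1, 6, 480, 640)
    print(net(dummy).shape)            # -> torch.Size([1, 1, 480, 640])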

    Accurate Crop Spraying with RTK and Machine Learning on an Autonomous Field Robot

    The agriculture sector requires a lot of labor and resources. Hence, farmers are under constant pressure to adopt technology and automation in order to remain cost-effective. In this context, autonomous robots can play a very important role in carrying out agricultural tasks such as spraying, sowing, inspection, and even harvesting. This paper presents one such autonomous robot that is able to identify plants and spray agro-chemicals precisely. The robot uses machine vision to find plants and RTK-GPS to navigate along a predetermined path. The experiments were conducted in a field of potted plants, in which successful results were obtained.
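
    As a hedged sketch of the kind of spray-decision logic such a robot might combine with RTK positioning (not the robot's actual software), the snippet below projects RTK-GPS fixes into local metres and reports when a detected plant lies within nozzle range; the coordinates, range threshold, and function names are illustrative assumptions.

    # Hedged sketch: convert an RTK-GPS fix to local metres, check the distance to a
    # detected plant, and decide whether to open the spray nozzle. All values below
    # are illustrative assumptions.
    import math

    def gps_to_local(lat, lon, lat0, lon0):
        """Small-area equirectangular projection: degrees -> metres from (lat0, lon0)."""
        R = 6371000.0
        x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
        y = math.radians(lat - lat0) * R
        return x, y

    def should_spray(robot_fix, plant_fix, origin, nozzle_range_m=0.15):
        """True when the plant lies within the nozzle's working range of the robot."""
        rx, ry = gps_to_local(*robot_fix, *origin)
        px, py = gps_to_local(*plant_fix, *origin)
        return math.hypot(px - rx, py - ry) <= nozzle_range_m

    origin = (53.2300, -0.5400)              # assumed local datum for the field
    robot = (53.230001, -0.540002)           # assumed RTK fix of the sprayer
    plant = (53.2300005, -0.5400015)         # assumed position of a detected plant
    print("open nozzle:", should_spray(robot, plant, origin))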

    Proof-of-concept modular robot platform for cauliflower harvesting

    This paper presents a proof-of-concept platform for demonstrating robotic harvesting of summer varieties of cauliflower, and early tests performed under laboratory conditions. The platform is designed to be modular and has two dexterous robotic arms with variable-stiffness technology. The bi-manual configuration enables the separation of grasping and cutting behaviours into separate robot manipulators. By exploiting the passive compliance of the variable-stiffness arms, the system can operate with both the grasping and cutting tools close to the ground. Multiple 3D vision cameras are used to track the cauliflowers in real time and to attempt to assess their maturity. Early experiments in the laboratory highlight both the potential of the platform and the challenges that remain.

    Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture

    Agricultural robots rely on semantic segmentation for distinguishing between crops and weeds in order to perform selective treatments, increasing yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling effort required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining effort required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
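
    As a minimal sketch of the general transfer-learning recipe (assuming a stock torchvision segmentation model and an assumed checkpoint rather than the networks and data used in the paper), the snippet below loads weights trained on one crop, freezes the feature extractor, and fine-tunes only the classification head on data from a new crop.

    # Hedged sketch of the transfer-learning recipe in general terms: start from a
    # segmentation network trained on crop A, freeze the feature extractor, and
    # fine-tune only the classification head on (possibly weakly labelled) crop-B data.
    # The torchvision model and the checkpoint file are assumptions for illustration.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import fcn_resnet50

    model = fcn_resnet50(num_classes=3)              # e.g. soil / crop / weed
    model.load_state_dict(torch.load("crop_A.pth"))  # assumed checkpoint from crop A

    for p in model.backbone.parameters():            # freeze the shared features
        p.requires_grad = False

    optimiser = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def finetune_step(images, masks):
        """One fine-tuning step on crop-B images with (possibly partial) masks."""
        optimiser.zero_grad()
        out = model(images)["out"]                   # (B, 3, H, W) logits
        loss = criterion(out, masks)                 # masks: (B, H, W) class indices
        loss.backward()
        optimiser.step()
        return loss.item()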

    Robot for harvesting cauliflower, and the cutting of cauliflowers

    The human arm, with its soft tissues and muscles, is capable of fast movement with high precision. Robots can now perform many human tasks, and robot arms are also becoming softer, making them safer to use around humans in real-world environments. This study focuses on a robot for harvesting cauliflower, and on the cutting of cauliflowers in particular. The robotic platform is designed to reuse modular robotic components across other crops and different cauliflower varieties. The platform has two robot arms with variable-stiffness technology: the first cuts the cauliflower at its stem, and the second picks it. The GummiArm, an open-source 7+1 DOF robot arm, is fitted here with a cauliflower-specific 3D-printed cutting end-effector, while the second arm carries a gripping end-effector. The bi-manual configuration allows the separation of grasping and cutting behaviours into separate robot manipulators, enabling flexibility to adapt to different varieties. The focus here was on cutting, and on its control through the Robot Operating System (ROS). Several experiments were performed, including a force analysis of the cutting behaviour, both during teleoperation and when using a controller that exploits the passive compliance of the GummiArm. These early experiments with the laboratory platform demonstrate the platform's promise, but also a set of challenges to tackle. The resulting data can be used to compare against human labour performance, develop operational concepts and business plans, and drive future design decisions.
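
    As a hedged sketch of how such a grasp-then-cut sequence might be coordinated through ROS, the snippet below publishes to three hypothetical topics for the two arms; the topic names and message types are assumptions for illustration and not the GummiArm's actual interface.

    #!/usr/bin/env python
    # Hedged sketch of coordinating a bi-manual grasp-then-cut sequence through ROS.
    # Topic names and message choices are hypothetical, not the GummiArm's interface;
    # they only illustrate separating the two behaviours across two manipulators.
    import rospy
    from std_msgs.msg import Float64, Bool

    rospy.init_node("cauliflower_harvest_sequence")
    grip_pub = rospy.Publisher("/grasp_arm/gripper_close", Bool, queue_size=1)   # hypothetical topic
    stiff_pub = rospy.Publisher("/cut_arm/stiffness", Float64, queue_size=1)     # hypothetical topic
    cut_pub = rospy.Publisher("/cut_arm/execute_cut", Bool, queue_size=1)        # hypothetical topic

    rospy.sleep(1.0)                 # give the publishers time to connect

    grip_pub.publish(Bool(True))     # 1. secure the head with the grasping arm
    rospy.sleep(2.0)
    stiff_pub.publish(Float64(0.3))  # 2. soften the cutting arm near the ground (passive compliance)
    cut_pub.publish(Bool(True))      # 3. drive the 3D-printed cutter through the stem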