441 research outputs found

    Rapid tree model reconstruction for fruit harvesting robot system based on binocular stereo vision

    Get PDF
    In this paper, a method for extracting spatial information about tree branches was studied. A region matching method was used to obtain the disparity map of the stereo image pair, feature points were extracted by combining the branch skeleton image with a multi-segment approximation method, and the spatial coordinates and radii of the branch feature points were calculated using binocular stereo vision. Real-time model reconstruction of the fruit tree was also investigated: each branch module was constructed as a 12-sided prism at the coordinate origin, then rotated twice and translated once to obtain the correct posture, and finally combined with the other modules to form the fruit tree model. The tests optimized the branch-region extraction and matching algorithms, which improved the matching rate, reduced matching errors, avoided matching confusion, accurately extracted branch spatial information, and improved the success rate of the robot's path planning for obstacle avoidance.
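
    As a rough illustration of the binocular geometry involved, the sketch below computes a disparity map by block (region) matching and triangulates the 3D coordinates of a branch feature point. This is a minimal sketch, not the authors' implementation: the OpenCV matcher settings, focal length, baseline and file names are all assumptions.

import cv2
import numpy as np

FOCAL_PX = 700.0    # assumed focal length in pixels (from calibration)
BASELINE_M = 0.12   # assumed distance between the two cameras, in metres

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Region (block) matching gives a disparity map; StereoBM returns
# fixed-point disparities scaled by 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

def point_xyz(u, v, cx, cy):
    """Triangulate a branch feature point at pixel (u, v):
    Z = f * B / d, X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    d = disparity[v, u]
    if d <= 0:   # no reliable match at this pixel
        return None
    z = FOCAL_PX * BASELINE_M / d
    return ((u - cx) * z / FOCAL_PX, (v - cy) * z / FOCAL_PX, z)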

    A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm

    Get PDF
    This paper proposes the development of an automatic fruit harvesting system that combines a low-cost stereovision camera, placed in the gripper tool, with a robotic arm. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low-cost stereovision system was tested in laboratory conditions with a small reference object, an apple and a pear at 10 different distances from the camera. The average distance error ranged from 4% to 5%, and the average diameter error was up to 30% for the small object and from 2% to 6% for the pear and the apple. The stereovision system was attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm to place the gripper tool in front of the fruit, and a final pickup approach that iteratively adjusts the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system was tested in controlled laboratory conditions with uniform illumination of the fruits. As future work, the system will be tested and improved under conventional outdoor farming conditions.
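
    The final pickup approach can be read as a simple proportional visual servo. The following schematic sketch of such a closed visual loop uses hypothetical names: detect_fruit_offset and move_gripper stand in for the paper's camera processing and arm controller, and the gain and tolerance are illustrative values.

TOL_PX = 5       # assumed alignment tolerance, in pixels
GAIN = 0.0005    # assumed proportional gain, metres per pixel

def approach_fruit(detect_fruit_offset, move_gripper, max_steps=50):
    """Iteratively adjust the gripper until the fruit is centred
    in the image, then report success."""
    for _ in range(max_steps):
        dx_px, dy_px = detect_fruit_offset()  # fruit offset from image centre
        if abs(dx_px) < TOL_PX and abs(dy_px) < TOL_PX:
            return True                       # aligned: ready to pick
        # proportional correction of the horizontal/vertical gripper position
        move_gripper(dx=GAIN * dx_px, dy=GAIN * dy_px)
    return False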

    Single-shot convolution neural networks for real-time fruit detection within the tree

    Get PDF
    Image/video processing for fruit detection in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches are computationally intensive even on high-end hardware and too slow for real-time systems. This paper details the use of a deep convolutional neural network architecture based on single-stage detectors. Using deep learning eliminates the need to hard-code specific features for particular fruit shapes, colors or other attributes. The architecture takes the input image and divides it into an AxA grid, where A is a configurable hyper-parameter that defines the fineness of the grid. A detection and localization algorithm is applied to each grid cell, and each cell is responsible for predicting bounding boxes and a confidence score for fruit (apple and pear in this study) detected in that cell. The confidence score should be high if a fruit exists in a cell and zero otherwise. More than 100 images of apple and pear trees were taken, each containing approximately 50 fruits, which ultimately yielded more than 5000 images of apple and pear fruits each. Labeling the images for training consisted of manually specifying the bounding boxes for fruits, where (x, y) are the center coordinates of the box and (w, h) are its width and height. The architecture achieved a fruit detection accuracy of more than 90%. Based on the correlation between the number of visible fruits, the number of fruits detected in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing speed is higher than 20 FPS, which is fast enough for any grasping/harvesting robotic arm or other real-time application. HIGHLIGHTS: Using new convolutional deep learning techniques based on single-shot detectors to detect and count fruits (apple and pear) within the tree canopy.
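
    To make the grid scheme concrete, the sketch below encodes ground-truth (x, y, w, h) boxes into A x A grid targets, assigning each fruit to the cell containing its box centre. This is a minimal sketch assuming normalised coordinates and at most one box per cell, not the paper's training code.

import numpy as np

def encode_targets(boxes, A):
    """Encode normalised (x, y, w, h) fruit boxes into an A x A grid.
    Each box is assigned to the cell containing its centre; that cell
    predicts the centre offsets, the box size and a confidence of 1."""
    target = np.zeros((A, A, 5), dtype=np.float32)  # [cx, cy, w, h, conf]
    for x, y, w, h in boxes:                        # all values in [0, 1]
        col = min(int(x * A), A - 1)                # responsible cell column
        row = min(int(y * A), A - 1)                # responsible cell row
        target[row, col] = [x * A - col,            # centre offset within cell
                            y * A - row,
                            w, h,
                            1.0]                    # confidence: fruit present
    return target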

    Sensor development for estimation of biomass yield applied to Miscanthus Giganteus

    Get PDF
    Precision Agriculture technologies such as yield monitoring have been available for traditional field crops for decades. However, none are currently available for energy crops such as Miscanthus giganteus (MxG), switchgrass, and sugar cane. The availability of yield monitors would allow better organization and scheduling of harvesting operations, and real-time yield data would allow adaptive speed control of a harvester to optimize performance. A yield monitor estimates the total amount of biomass per coverage area, in kg/m2, as a function of location. However, for herbaceous crops such as MxG and switchgrass, directly measuring the biomass entering a harvester in the field is complicated and impractical. Therefore, a novel yield monitoring system was proposed. The approach taken was an indirect measure: determining the volume of biomass entering the harvester as a function of time. This volume is obtained by multiplying the diameter-related cross-sectional area, the height, and the crop density of the MxG, and is then multiplied by an assumed constant material density of the crop, which yields a mass flow per unit of time. The coverage area per unit of time is obtained by multiplying the width of the cutting device by the machine speed. The ratio between the mass flow and the coverage area is the yield per area, and adding GPS geo-references the yield.

    To measure the height of MxG stems, a light detection and ranging (LIDAR) based height measurement approach was developed, in which the LIDAR scans the MxG vertically. Two measurement modes, static and dynamic, were designed and tested. A geometrical MxG height measurement model was developed and analyzed to obtain the resolution of the height measurement, and an inclination correction method was proposed to correct errors caused by the uneven ground surface. The relationship between yield and stem height was analyzed and found to be linear.

    To estimate the MxG stem diameter, two types of sensors were developed and evaluated. First, a LIDAR based diameter sensor was designed and tested, in which the LIDAR scans the MxG stems horizontally. A measurement geometry model of the LIDAR was developed to determine the region of interest, and an angle-continuity-based pre-grouping algorithm was applied to group the raw LIDAR data. Based on an analysis of how MxG stems appear in the LIDAR data, a fuzzy clustering technique was developed to identify the stems within the clusters, and the diameter was estimated from the clustering result. Four clustering techniques were compared and, based on their performance, the Gustafson-Kessel clustering algorithm was selected. A drawback of the LIDAR based diameter sensor was that it could only be used for static diameter measurement.

    An alternative, machine vision based diameter sensor supporting dynamic measurement was therefore applied. A binocular stereo vision based diameter sensor and a structured-lighting-based monocular vision diameter estimation system were developed and evaluated in sequence. Both systems used structured lighting from a downward slanted laser sheet to provide detectable features in the images; an image segmentation algorithm was developed to detect these features, which were used to identify the MxG stems in both systems. A horizontally-covered-length-per-pixel model was built and validated to extract the diameter information from the images. The key difference between the binocular and monocular systems was the approach to estimating depth: in the binocular system, depth was obtained from the disparities of matched features in image pairs, with features matched by pixel similarity in both one-dimensional and two-dimensional image matching algorithms; in the monocular system, depth was obtained from a geometric perspective model of the diameter sensor unit. The relationship between yield and stem diameter was analyzed; the results showed that yield depends more strongly on stem height than on diameter, and that the relationship between yield and stem volume is linear.

    The crop density estimation was also based on the monocular vision system. To predict the crop density, the geometric perspective model of the sensor unit was further analyzed to calculate the coverage area of the sensor, and a Monte Carlo model based method was designed to predict the number of occluded MxG stems from the number of visible stems in the images. The results indicated that yield has a linear relationship with the number of stems, with a zero intercept and the average individual stem mass as the coefficient.

    All sensors were evaluated in the field during the growing seasons of 2009, 2010 and 2011, using manually measured parameters (height, diameter and crop density) as references. The LIDAR based height sensor achieved accuracies of 92% (0.3 m error) to 98.2% (0.06 m error) in static height measurements and 93.5% (0.22 m error) to 98.5% (0.05 m error) in dynamic height measurements. For the diameter measurements, the machine vision based sensors were more accurate than the LIDAR based sensor: the binocular and monocular systems achieved accuracies of 93.1% and 93.5% for individual stem diameter estimation and 99.8% and 99.9% for average stem diameter estimation, while the LIDAR based sensor achieved 92.5% for average stem diameter estimation. Among the three stem diameter sensors, the monocular vision based sensor was recommended due to its higher accuracy and lower cost in both hardware and computation. The machine vision based crop density measurement achieved an accuracy of 92.2%.
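
    The mass-flow chain described above can be written out directly. The sketch below strings the sensor estimates together under assumed units; the function and its arguments are illustrative, not the thesis implementation.

import math

def yield_per_area(stem_diameter_m, stem_height_m, stems_per_m2,
                   material_density_kg_m3, cutter_width_m, speed_m_s):
    """Indirect yield estimate in kg/m^2: volume from diameter, height
    and crop density, mass flow via an assumed material density, then
    division by the coverage area per unit of time."""
    stem_area = math.pi * (stem_diameter_m / 2.0) ** 2      # cross-section, m^2
    volume_per_m2 = stem_area * stem_height_m * stems_per_m2
    coverage_rate = cutter_width_m * speed_m_s              # m^2 per second
    mass_flow = volume_per_m2 * material_density_kg_m3 * coverage_rate  # kg/s
    return mass_flow / coverage_rate                        # kg per m^2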

    Actuators and sensors for application in agricultural robots: A review

    Get PDF
    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. Not only is labor input reduced, but production efficiency is also improved, which contributes to the development of smart agriculture. This paper reviews the core technologies used for agricultural robots in non-structured environments. In addition, we review the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. This research shows that in a non-structured agricultural environment, using cameras, light detection and ranging (LiDAR), ultrasonic sensors, and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be innovatively designed and developed to advance agricultural robots and to meet the delicate and complex requirements of agricultural products as operational objects, so that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. This paper concludes with a summary of the main existing technologies and challenges in the development of actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Fruit sizing using AI: A review of methods and challenges

    Get PDF
    Fruit size at harvest is an economically important variable for high-quality table fruit production in orchards and vineyards. In addition, knowing the number and size of the fruit on the tree is essential for precise production, harvest, and postharvest management. A prerequisite for analysis of fruit in a real-world environment is its detection and segmentation from the background. In the last five years, deep convolutional neural networks have become the standard method for automatic fruit detection, achieving F1-scores higher than 90% as well as real-time processing speeds. At the same time, different methods have been developed, mainly for fruit size and, more rarely, fruit maturity estimation from 2D images and 3D point clouds. These sizing methods focus on a few species such as grape, apple, citrus, and mango, with mean absolute errors below 4 mm for apple fruit. This review provides an overview of the most recent methodologies developed for in-field fruit detection/counting and sizing, as well as a few emerging examples of maturity estimation. Challenges, such as sensor fusion, highly varying lighting conditions, occlusions in the canopy, shortage of public fruit datasets, and opportunities for research transfer, are discussed.

    This work was partly funded by the Department of Research and Universities of the Generalitat de Catalunya (grants 2017 SGR 646 and 2021 LLAV 00088) and by the Spanish Ministry of Science and Innovation / AEI/10.13039/501100011033 / FEDER (grants RTI2018-094222-B-I00 [PAgFRUIT project] and PID2021-126648OB-I00 [PAgPROTECT project]). The Secretariat of Universities and Research of the Department of Business and Knowledge of the Generalitat de Catalunya and the European Social Fund (ESF) are also thanked for financing Juan Carlos Miranda's pre-doctoral fellowship (2020 FI_B 00586). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU.
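
    Many of the sizing methods surveyed ultimately reduce to back-projecting an apparent fruit width through the pinhole camera model. The sketch below shows that step; the function and the numbers are illustrative assumptions, not values taken from any reviewed paper.

def fruit_diameter_mm(pixel_width, depth_mm, focal_px):
    """Back-project an apparent fruit width in pixels to a physical
    diameter via the pinhole model, D = w_px * Z / f, assuming a
    roughly spherical fruit and a depth estimate from a 3D sensor."""
    return pixel_width * depth_mm / focal_px

# e.g. a detection 60 px wide at 800 mm depth with a 700 px focal
# length gives an estimated diameter of about 68.6 mm
print(fruit_diameter_mm(60, 800, 700.0))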

    Automatic plant features recognition using stereo vision for crop monitoring

    Get PDF
    Machine vision and robotic technologies have the potential to accurately monitor plant parameters that reflect plant stress and water requirements, for use in farm management decisions. However, autonomous identification of individual leaves on a growing plant under natural conditions is a challenging task for vision-guided agricultural robots, due to the complexity of data relating to various stages of growth and ambient environmental conditions. Numerous machine vision studies have described the shape of leaves individually presented to a camera, either to identify plant species or to autonomously detect multiple leaves on small seedlings under greenhouse conditions. Machine vision based detection of individual leaves on a developed plant canopy, including the challenges presented by overlapping leaves, using depth perception under natural outdoor conditions has yet to be reported. Stereo vision has recently emerged in a variety of agricultural applications and is expected to provide an accurate and robust method for plant segmentation and identification that can benefit from depth properties. This thesis presents a plant leaf extraction algorithm using a stereo vision sensor. The algorithm performs multiple-leaf segmentation and separation of overlapping leaves using a combination of image features, specifically colour, shape and depth. The separation of connected and overlapping leaves relies on measuring discontinuities in the depth gradient of the disparity maps; two techniques, based on global and local measurements, were developed to implement this task. A geometrical plane can be extracted from each segmented leaf and used to parameterise a 3D model of the plant image and to measure the inclination angle of each individual leaf. A stem and branch segmentation and counting method was developed based on a vesselness measure and the Hough transform. Furthermore, a method for reconstructing the segmented parts of hibiscus plants is presented, generating a 2.5D model of the plant. Experimental tests were conducted with two selected plants, cotton of different sizes and hibiscus, in an outdoor environment under varying light conditions. The proposed algorithm was evaluated using 272 cotton and hibiscus plant images. The results show an observed enhancement in leaf detection when utilising depth features: many leaves in various positions and shapes (single, touching and overlapping) were detected successfully. Depth properties were particularly effective in separating occluded and overlapping leaves, with a high separation rate of 84%, achieved automatically without adding artificial tags to the leaf boundaries. The results exhibit an acceptable segmentation rate of 78% for individual plant leaves, differentiating the leaves from their complex backgrounds and from each other, with almost identical performance for both species under various lighting and environmental conditions. For the stem and branch detection algorithm, experimental tests were conducted on 64 colour images of both species under different environmental conditions; the results show higher stem and branch segmentation rates for hibiscus indoor images (82%) compared to hibiscus outdoor images (49.5%) and cotton images (21%). The segmentation and counting of plant features could provide accurate estimates of plant growth parameters, which can be beneficial for many agricultural tasks and applications.
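
    The depth-gradient discontinuity measure can be illustrated compactly. The sketch below flags pixels where the disparity gradient jumps sharply, which is where overlapping leaves at different depths meet; the Sobel operator and the threshold are assumptions, not the thesis' exact global/local formulation.

import cv2
import numpy as np

def leaf_boundaries(disparity, grad_thresh=4.0):
    """Return a boolean mask of depth discontinuities in a disparity
    map; sharp jumps in the depth gradient separate overlapping leaves
    sitting at different depths. The threshold is a tuning value."""
    gx = cv2.Sobel(disparity, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(disparity, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return grad_mag > grad_thresh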

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    Full text link
    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs can increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control, and highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control. It also discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.

    Comment: Preprint, to appear in the Journal of Field Robotics

    A review on the application of computer vision and machine learning in the tea industry

    Get PDF
    Tea is rich in polyphenols, vitamins, and protein, is good for health, and tastes great. As a result, tea is very popular and has become the second most popular beverage in the world after water. For this reason, it is essential to improve the yield and quality of tea. In this paper, we review the application of computer vision and machine learning in the tea industry over the last decade, covering three crucial stages: cultivation, harvesting, and processing of tea. We found that many advanced artificial intelligence algorithms and sensor technologies have been applied, resulting in vision-based tea harvesting equipment and disease detection methods. However, these applications focus on the identification of tea buds, the detection of several common diseases, and the classification of tea products. Clearly, the current applications have limitations and are insufficient for the intelligent and sustainable development of the tea field. The current fruitful developments in technologies related to UAVs, vision navigation, soft robotics, and sensors have the potential to provide new opportunities for vision-based tea harvesting machines, intelligent tea garden management, and multimodal tea processing monitoring. Therefore, research and development combining computer vision and machine learning is undoubtedly a future trend in the tea industry.