26 research outputs found

    Plants Detection, Localization and Discrimination using 3D Machine Vision for Robotic Intra-row Weed Control

    Get PDF
    Weed management is vitally important in crop production systems. However, conventional herbicide-based weed control can lead to negative environmental impacts, and manual weed control is laborious and impractical for large-scale production. Robotic weeding offers a possibility of controlling weeds precisely, particularly for weeds growing close to or within crop rows. The fusion of two-dimensional textural images and three-dimensional spatial images to recognize and localize crop plants at different growth stages was investigated. Images of different crop plants at different growth stages, with weeds present, were acquired. Feature extraction algorithms were developed, and different features were extracted and used to train plant and background classifiers; these algorithms also addressed the problems of canopy occlusion and leaf damage. The efficacy and accuracy of the proposed classification methods were then demonstrated by experiments. So far, the algorithms have been developed and tested only for broccoli and lettuce. For broccoli, the crop plant detection true positive rate was 93.1% and the false discovery rate was 1.1%, with an average crop plant localization error of 15.9 mm. For lettuce, the true positive rate was 92.3% and the false discovery rate was 4.0%, with an average localization error of 8.5 mm. The results show that 3D-imaging-based plant recognition algorithms are effective and reliable for crop/weed differentiation.
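    As a rough illustration of the color-depth fusion idea described above, the following Python sketch combines an excess-green color mask with a height-above-soil depth mask. The function name, thresholds, and the assumption of pre-aligned images are illustrative only, not the paper's actual implementation.

        # Hypothetical sketch: fuse a color mask (excess-green index) with a
        # depth mask to segment vegetation, assuming pre-aligned RGB and depth.
        import numpy as np

        def vegetation_mask(rgb, depth, height_min_mm=10.0, exg_thresh=0.05):
            """Return a boolean mask of likely vegetation pixels.

            rgb   : (H, W, 3) float array in [0, 1], registered to depth
            depth : (H, W) array of heights above the soil plane, in mm
            """
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            total = r + g + b + 1e-6                     # avoid divide-by-zero
            exg = (2 * g - r - b) / total                # excess-green index
            color_mask = exg > exg_thresh                # greenish pixels
            depth_mask = depth > height_min_mm           # pixels above the soil
            return color_mask & depth_mask               # fusion: both must agree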

    The use of agricultural robots in weed management and control

    Get PDF
    Weed management and control are essential for the production of high-yielding and high-quality crops, and advances in weed control technology have had a huge impact on agricultural productivity. Any effective weed control technology needs to be both robust and adaptable. Robust weed control technology will successfully control weeds in spite of variability in field conditions. Adaptable weed control technology has the capacity to change its strategy in the context of evolving weed populations, genetics, and climatic conditions. This chapter focuses on key work in the development of robotic weeders, including weed perception systems and weed control mechanisms. Following an extensive introduction, the chapter addresses the challenges of robotic weed control, covering both perception systems, which detect and distinguish weed plants from crop plants, and weed control mechanisms, both chemical and mechanical. A case study of an automated weeding system is provided.

    Plant Recognition through the Fusion of 2D and 3D Images for Robotic Weeding

    Get PDF
    In crop production systems, weed management is vitally important. However, both manual weeding and herbicide-based weed control are problematic due to concerns about cost, operator health, the emergence of herbicide-resistant weed species, and environmental impact. Automated robotic weeding offers a possibility of controlling weeds precisely, particularly for weeds growing near crops or within crop rows. However, identification and localization of plants have not yet been fully automated. The goal of this project was to develop a high-throughput plant recognition and localization algorithm by fusing 2D color and textural data with 3D point cloud data. Plant morphological models were developed and applied to recognize crop plants against different weed species at different growth stages.
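    The morphological models themselves are not detailed in the abstract; the sketch below shows the kind of shape descriptors (area, solidity, aspect ratio) such a model might use, computed with OpenCV. The specific feature choices are assumptions.

        # Illustrative sketch: simple morphological descriptors of a segmented
        # plant blob, of the kind a plant/weed classifier might be trained on.
        import cv2

        def shape_features(mask):
            """Compute area, solidity, and aspect ratio of the largest blob
            in a binary mask (uint8, 0/255)."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            c = max(contours, key=cv2.contourArea)
            area = cv2.contourArea(c)
            hull_area = cv2.contourArea(cv2.convexHull(c))
            x, y, w, h = cv2.boundingRect(c)
            return {
                "area": area,
                "solidity": area / (hull_area + 1e-6),  # 1.0 = convex canopy
                "aspect_ratio": w / float(h),
            }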

    Plant Localization and Discrimination using 2D+3D Computer Vision for Robotic Intra-row Weed Control

    Get PDF
    Weed management is vitally important in crop production systems. However, conventional herbicide-based weed control can lead to negative environmental impacts, and manual weed control is laborious and impractical for large-scale production. Robotic weed control offers a possibility of controlling weeds precisely, particularly for weeds growing near or within crop rows. A computer vision system based on the Kinect v2 sensor was developed, using the fusion of two-dimensional textural data and three-dimensional spatial data to recognize and localize crop plants at different growth stages. Images of different plant species, such as broccoli, lettuce, and corn, were acquired at different growth stages, and a database system was developed to organize them. Several feature extraction algorithms were developed that addressed the problems of canopy occlusion and damaged leaves. With the proposed algorithms, different features were extracted and used to train plant and background classifiers. Finally, the efficiency and accuracy of the proposed classification methods were demonstrated and validated by experiments.
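    To make the classifier-training step concrete, here is a minimal sketch using a support vector machine on precomputed feature vectors. The SVM choice, the file names, and the feature layout are assumptions, not the system's actual design.

        # Hedged sketch: training a plant-vs-background classifier on
        # extracted feature vectors (hypothetical files and labels).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        X = np.load("features.npy")   # (n_samples, n_features), hypothetical
        y = np.load("labels.npy")     # 1 = crop plant, 0 = background/weed

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))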

    Development of a Mobile Robotic Phenotyping System for Growth Chamber-based Studies of Genotype x Environment Interactions

    Get PDF
    Large amounts of phenotypic data are needed to better understand genotype x environment interactions and thereby improve crop performance. Studying plants of a given strain under multiple environments can greatly help to reveal these interactions. To collect the labor-intensive data required for such experiments, a Mecanum-wheeled, magnetic-tape-following indoor rover was developed to move accurately and autonomously between and inside growth chambers. The motor controllers, a robot arm, and a Microsoft Kinect v2 3D sensor were integrated in a customized C++ program. Detecting and segmenting plants in a multi-plant environment is a challenging task that can be aided by integrating depth data into the segmentation algorithms. Image-processing functions were implemented to filter the depth image, minimizing noise and removing undesired surfaces, which reduced the memory requirement and allowed plants to be reconstructed at a higher resolution in real time. Three-dimensional meshes representing plants inside the chamber were reconstructed using the Kinect SDK's KinectFusion. After transforming user-selected points from camera coordinates to robot-arm coordinates, the robot arm is used in conjunction with the rover to probe desired leaves, simulating the future use of sensors such as a fluorimeter and a Raman spectrometer. This paper reports the system architecture and some preliminary results.
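    The depth-filtering step described above might look roughly like the following sketch, which clamps the depth image to a working volume and median-filters speckle noise. The thresholds and filter size are illustrative, not the system's actual parameters.

        # Minimal sketch of depth filtering before 3D reconstruction:
        # keep only the working volume and suppress speckle noise.
        import numpy as np
        from scipy.ndimage import median_filter

        def filter_depth(depth_mm, near=500, far=1200):
            """Zero out depths outside [near, far] mm (floor, walls, chamber
            fixtures) and remove isolated noise spikes with a 3x3 median."""
            d = depth_mm.astype(np.float32)
            d[(d < near) | (d > far)] = 0.0
            return median_filter(d, size=3)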

    Techno-economic analysis of future precision field robots

    Get PDF
    Precision agriculture (PA) technology provides a means to increase equipment productivity, field efficiency, and input efficiency; however, the potential of PA technologies to enable smaller, autonomous machines has not yet been realized in the marketplace. In developed countries, the size of tractors and implements continues to increase. Such a trend cannot continue indefinitely because of size, technical, and cost constraints. A long operating life for agricultural equipment yields a greater benefit relative to the high initial cost and investment. However, long equipment life can lead to technologically obsolete machines with potential incompatibility and sub-optimality, since machinery and PA technology should evolve together and be used as a package. Similarly, power system technologies with potential application in agricultural machines are evolving quickly, and issues of renewability and sustainability are becoming common priorities, with demands for standardization and certification. The concept of small, modular, and scalable intelligent machines addresses the challenge of gaining higher productivity with reduced costs and power. In this paper, different weeding technologies were compared using performance metrics including work rate and energy density. Conventional processes using common tractors were compared with robotic weeder designs to evaluate performance, size, and energy requirements. Forecasts of possible future trends in agricultural machine size, PA technology integration, and power system technology deployment were derived from this work.
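    To make the two named metrics concrete, the sketch below computes work rate (effective field capacity) and energy density for a hypothetical tractor and robot. All numbers are placeholders, not figures from the paper.

        # Illustrative comparison by work rate and energy density.
        def work_rate_ha_per_h(width_m, speed_kmh, field_efficiency):
            """Effective field capacity: width x speed x efficiency / 10,
            since 1 m x 1 km/h = 0.1 ha/h."""
            return width_m * speed_kmh * field_efficiency / 10.0

        def energy_density_kwh_per_ha(power_kw, work_rate):
            """Energy consumed per hectare treated."""
            return power_kw / work_rate

        tractor = work_rate_ha_per_h(6.0, 8.0, 0.85)   # wide conventional rig
        robot   = work_rate_ha_per_h(1.5, 4.0, 0.90)   # small autonomous unit
        print(energy_density_kwh_per_ha(90.0, tractor),  # placeholder powers
              energy_density_kwh_per_ha(5.0, robot))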

    Automated crop plant detection based on the fusion of color and depth images for robotic weed control

    Get PDF
    Robotic weeding enables weed control near or within crop rows automatically, precisely, and effectively. A computer-vision system was developed for detecting crop plants at different growth stages for robotic weed control. Fusion of color images and depth images was investigated as a means of enhancing the detection accuracy of crop plants under conditions of high weed population. In-field images of broccoli and lettuce were acquired 3–27 days after transplanting with a Kinect v2 sensor. The image processing pipeline included data preprocessing, vegetation pixel segmentation, plant extraction, feature extraction, feature-based localization refinement, and crop plant classification. For the detection of broccoli and lettuce, the color-depth fusion algorithm produced high true-positive detection rates (91.7% and 90.8%, respectively) and low average false discovery rates (1.1% and 4.0%, respectively). Mean absolute localization errors of the crop plant stems were 26.8 mm for broccoli and 7.4 mm for lettuce. The fusion of color and depth proved beneficial to segmenting crop plants from the background, improving the average segmentation success rate from 87.2% (depth-based) and 76.4% (color-based) to 96.6% for broccoli, and from 74.2% (depth-based) and 81.2% (color-based) to 92.4% for lettuce. The fusion-based algorithm had reduced performance in detecting crop plants at early growth stages.
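    The reported evaluation metrics can be stated compactly. The following sketch computes the true-positive rate, false discovery rate, and mean absolute localization error from detections already matched to ground truth; the matching step itself is assumed and not shown.

        # Sketch of the reported detection and localization metrics.
        import numpy as np

        def detection_metrics(tp, fp, fn):
            """True-positive rate and false discovery rate from counts of
            matched (tp), spurious (fp), and missed (fn) crop plants."""
            tpr = tp / (tp + fn)
            fdr = fp / (fp + tp)
            return tpr, fdr

        def mean_abs_localization_error(pred_xy, true_xy):
            """Mean Euclidean distance between matched predicted and
            ground-truth stem positions, as (N, 2) numpy arrays."""
            return float(np.mean(np.linalg.norm(pred_xy - true_xy, axis=1)))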

    Navigation control of a robotic vehicle for field-based phenotyping

    Get PDF
    Field-based phenotyping relies heavily on in-field manual measurement, which is labor-intensive, repetitive, and time-consuming. With the rapid advancement of robotic technology, automated in-field phenotyping can significantly increase data throughput and reduce labor demand. A robotic mobile platform, PhenoBot 3.0, was designed by our research group to traverse between crop rows and acquire phenotypic data automatically. However, field-based navigation control is a critical and challenging task due to the complex, unstructured or semi-structured field environment. This dissertation documents our investigation of a field-based navigation control system for an agricultural field robotic vehicle. Different functional modules were developed and implemented for the system, including a motion control module based on the robot kinematic model, a robot localization module using a single RTK-GPS receiver, a path tracking module running different tracking algorithms, and a computer-vision-based row mapping and in-field localization module using different sensor setups. Path tracking based on GPS localization is the most common navigation strategy for agricultural robotic vehicles. Three path tracking algorithms, the Linear-Quadratic Regulator (LQR), Pure Pursuit Control (PPC), and Timed Elastic Band (TEB), were implemented. The performance of the proposed navigation control system was assessed on our PhenoBot 3.0 platform under both simulated and real field conditions. Satisfactory accuracy in terms of mean absolute tracking error (MATE) was achieved with the LQR controller in both simulation and field tests. The results showed that the proposed navigation control system is capable of guiding the PhenoBot 3.0 robot along predefined paths between crop rows on uneven terrain. For situations where global localization is denied or a predefined path is not available, computer vision was applied to detect the crop rows in order to locate the robot, create field maps, and navigate the robot through row guidance. A vision-based system using a Time-of-Flight (ToF) camera was developed for under-canopy navigation, specifically for crop row mapping and robot localization under the canopies of the crop rows. The potential and limitations of using ToF cameras for under-canopy navigation were investigated through field tests. Since agronomically spaced crop rows are laid out in parallel and give rise to distinctive features in the frequency domain, the Discrete Fourier Transform (DFT) can potentially be used to solve crop row detection problems for robot navigation in agriculture. A novel image processing pipeline was developed to detect crop rows from top-view color images using frequency-domain analysis, and a Linear Quadratic Gaussian (LQG) controller was used with the proposed algorithm for robot navigation between crop rows. Field tests showed that the proposed crop row detection algorithm was capable of detecting crop rows with plants at different growth stages and under variable illumination conditions, and that the algorithm is feasible for navigation control using a row-tracking strategy.
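    The frequency-domain row-detection idea can be sketched as follows: in a top-view vegetation mask with rows aligned to the image columns, the column sums form a periodic signal whose dominant DFT component gives the row spacing, while its phase encodes the lateral offset of the rows. This is a simplified sketch of the idea, not the dissertation's full pipeline.

        # Hedged sketch: estimate crop row spacing and phase via the DFT.
        import numpy as np

        def row_period_and_phase(veg_mask):
            """veg_mask: (H, W) boolean array, rows parallel to columns."""
            signal = veg_mask.sum(axis=0).astype(np.float64)
            signal -= signal.mean()                       # remove DC component
            spectrum = np.fft.rfft(signal)
            k = int(np.argmax(np.abs(spectrum[1:])) + 1)  # dominant bin
            period_px = len(signal) / k                   # row spacing, pixels
            phase = float(np.angle(spectrum[k]))          # lateral row offset
            return period_px, phase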
