    Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection

    Full text link
    Selective weeding is one of the key challenges in agricultural robotics. To accomplish this task, a farm robot must be able to accurately detect plants and distinguish crops from weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which in practice penalizes the use of data-driven techniques. In this paper, we address this problem by proposing a novel and effective approach that aims to drastically minimize the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters and exploiting a few real-world textures, it is possible to render a large number of realistic views of an artificial agricultural scenario with little effort. The generated data can be used directly to train the model or to supplement real-world images. We validate the proposed methodology using a modern deep-learning-based image segmentation architecture as a testbed, and compare the classification results obtained using real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach. (Comment: To appear in IEEE/RSJ IROS 2017)
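
    The authors' pipeline renders full synthetic scenes from a procedural model; as a rough illustration of the underlying idea only, the sketch below composites a few real-world plant cutouts onto randomized soil backgrounds and derives the pixel-level label mask from the cutouts' alpha channels. All asset file names, class labels, and parameter ranges are hypothetical placeholders, not the paper's actual assets.

```python
# Minimal sketch (not the authors' renderer): composite plant cutouts onto
# randomized soil backgrounds and emit a pixel-level label mask at the same time.
# Asset paths, class labels, and parameter ranges are illustrative placeholders.
import random
import numpy as np
from PIL import Image

SOIL_TEXTURES = ["soil_dry.png", "soil_wet.png"]                   # hypothetical soil textures
PLANT_CUTOUTS = {"crop": ["beet_0.png"], "weed": ["weed_0.png"]}   # hypothetical RGBA cutouts
LABELS = {"background": 0, "crop": 1, "weed": 2}

def render_synthetic_sample(size=(512, 512), n_plants=12):
    """Return (RGB image, label mask) for one randomized synthetic scene."""
    soil = Image.open(random.choice(SOIL_TEXTURES)).convert("RGB").resize(size)
    # Randomize the lighting by scaling the overall brightness.
    bright = np.clip(np.asarray(soil, np.float32) * random.uniform(0.6, 1.3), 0, 255)
    soil = Image.fromarray(bright.astype(np.uint8))
    mask = np.zeros((size[1], size[0]), dtype=np.uint8)
    for _ in range(n_plants):
        cls = random.choice(["crop", "weed"])
        cutout = Image.open(random.choice(PLANT_CUTOUTS[cls])).convert("RGBA")
        scale = random.uniform(0.5, 1.5)                           # randomize plant size
        cutout = cutout.resize((int(cutout.width * scale), int(cutout.height * scale)))
        cutout = cutout.rotate(random.uniform(0, 360), expand=True)
        if cutout.width >= size[0] or cutout.height >= size[1]:
            continue                                               # skip cutouts that do not fit
        x = random.randint(0, size[0] - cutout.width)
        y = random.randint(0, size[1] - cutout.height)
        soil.paste(cutout, (x, y), cutout)                         # alpha-composite the plant
        alpha = np.asarray(cutout)[:, :, 3] > 0                    # silhouette -> label pixels
        mask[y:y + cutout.height, x:x + cutout.width][alpha] = LABELS[cls]
    return soil, mask
```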

    Real-time Semantic Segmentation of Crop and Weed for Precision Agriculture Robots Leveraging Background Knowledge in CNNs

    Full text link
    Precision farming robots, which aim to reduce the amount of herbicides that must be applied to fields, need the ability to identify crops and weeds in real time in order to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields, separating sugar beet plants, weeds, and background based solely on RGB data. We propose a CNN that exploits existing vegetation indices and provides a classification in real time. Furthermore, it can be effectively re-trained on previously unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20 Hz, and is suitable for online operation in the field. (Comment: Accepted for publication at the IEEE International Conference on Robotics and Automation 2018 (ICRA 2018))
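
    The network described above consumes hand-crafted vegetation indices alongside the raw RGB channels. As a small sketch of that input-encoding idea (not the paper's architecture), the snippet below computes the classic Excess Green index from normalized RGB and stacks it as a fourth input channel; the tensor layout and the mention of a downstream segmentation network are assumptions.

```python
# Sketch of feeding a hand-crafted vegetation index to a segmentation CNN as an
# extra input channel, assuming a float RGB tensor in [0, 1] with shape (N, 3, H, W).
# The Excess Green index (ExG = 2g - r - b on chromatic coordinates) is one classic
# index; the exact set of indices used in the paper may differ.
import torch

def add_excess_green_channel(rgb: torch.Tensor) -> torch.Tensor:
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    total = (r + g + b).clamp(min=1e-6)                  # avoid division by zero
    exg = 2 * (g / total) - (r / total) - (b / total)    # Excess Green index
    return torch.cat([rgb, exg], dim=1)                  # (N, 4, H, W) network input

# A segmentation backbone would then simply declare 4 input channels, e.g.
# net = SomeSegNet(in_channels=4, num_classes=3)   # hypothetical network class
```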

    Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture

    Get PDF
    Agricultural robots rely on semantic segmentation to distinguish between crops and weeds, allowing them to perform selective treatments and increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on large amounts of training data and a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling effort required for a new crop. We examine the classification performance on three datasets with different crop types and containing a variety of weeds, and compare the performance and retraining effort required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
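
    As a rough sketch of the transfer-learning setup discussed above (not the authors' exact network or training protocol), the snippet below loads a segmentation model pretrained on a source crop, freezes its feature-extraction backbone, and fine-tunes only the head on a small, possibly coarsely labelled target-crop dataset. The torchvision model, checkpoint path, and data loader are stand-ins for the paper's own components.

```python
# Transfer-learning sketch: reuse the backbone trained on crop A and fine-tune only
# the classification head on a small (possibly imperfectly labelled) crop B dataset.
# fcn_resnet50 is a stand-in for the paper's own segmentation architecture.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

def finetune_on_new_crop(cropB_loader, source_ckpt=None, num_classes=3, lr=1e-4):
    """Fine-tune only the head of a source-crop segmentation net on a new crop."""
    model = fcn_resnet50(num_classes=num_classes)           # background / crop / weed
    if source_ckpt is not None:
        model.load_state_dict(torch.load(source_ckpt))      # weights learned on the source crop
    for p in model.backbone.parameters():                   # freeze the shared features
        p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in cropB_loader:                     # labels may be coarsely annotated
        optimizer.zero_grad()
        logits = model(images)["out"]                        # (N, C, H, W) logits
        loss = criterion(logits, labels)                     # labels: (N, H, W) class indices
        loss.backward()
        optimizer.step()
    return model
```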

    Precision Weed Management Based on UAS Image Streams, Machine Learning, and PWM Sprayers

    Get PDF
    Weed populations in agricultural production fields are often scattered and unevenly distributed; however, herbicides are broadcast evenly across fields. Although effective, in the case of post-emergent herbicides this means far more pesticide is applied than necessary. A novel weed detection and control workflow was evaluated targeting Palmer amaranth in soybean (Glycine max) fields. High-spatial-resolution (0.4 cm) unmanned aircraft system (UAS) image streams were collected, annotated, and used to train 16 object detection convolutional neural networks (CNNs; RetinaNet, Faster R-CNN, Single Shot Detector, and YOLO v3), each trained on imagery with 0.4, 0.6, 0.8, and 1.2 cm spatial resolutions. Models were evaluated on imagery from four production fields containing approximately 7,800 weeds. The highest performing model was Faster R-CNN trained on 0.4 cm imagery (precision = 0.86, recall = 0.98, and F1-score = 0.91). A site-specific workflow leveraging the highest performing trained CNN models was evaluated in replicated field trials. Weed control (%) was compared between a broadcast treatment and the proposed site-specific workflow, which was applied using a pulse-width modulated (PWM) sprayer. Results indicate no statistically significant (p < .05) difference in weed control between broadcast and site-specific treatments measured one (M = 96.22%, SD = 3.90 and M = 90.10%, SD = 9.96), two (M = 95.15%, SD = 5.34 and M = 89.64%, SD = 8.58), and three weeks (M = 88.55%, SD = 11.07 and M = 81.78%, SD = 13.05) after application, respectively. Furthermore, there was a significant (p < 0.05) 48% mean reduction in applied area (m²) between broadcast and site-specific treatments across both years. Equivalent post-application efficacy can be achieved with significant reductions in herbicide use if weeds are targeted through site-specific applications. Site-specific weed maps can be generated and executed using accessible technologies such as UAS, open-source CNNs, and PWM sprayers.
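
    The site-specific workflow turns CNN detections into spray commands for a PWM sprayer. As a simplified, hypothetical sketch of that step (not the study's actual software), the snippet below rasterizes detected weed boxes, assumed to be in field coordinates in metres, onto a spray grid whose cell size would match the nozzle footprint, so only cells containing weeds are treated.

```python
# Hypothetical sketch: convert weed detections into a site-specific spray map.
# Detections are (x_min, y_min, x_max, y_max) boxes in field coordinates (metres);
# cell_m would correspond to the PWM sprayer nozzle footprint. Values are illustrative.
import numpy as np

def build_spray_map(detections, field_w_m, field_h_m, cell_m=0.5):
    """Return a boolean grid where True means 'spray this cell'."""
    cols = int(np.ceil(field_w_m / cell_m))
    rows = int(np.ceil(field_h_m / cell_m))
    spray = np.zeros((rows, cols), dtype=bool)
    for x0, y0, x1, y1 in detections:
        c0, c1 = int(x0 // cell_m), int(x1 // cell_m)
        r0, r1 = int(y0 // cell_m), int(y1 // cell_m)
        spray[r0:r1 + 1, c0:c1 + 1] = True          # mark every cell a weed box touches
    return spray

# Example: two detections in a 10 m x 10 m plot (made-up coordinates).
spray_map = build_spray_map([(1.2, 3.4, 1.5, 3.8), (7.0, 2.0, 7.3, 2.4)], 10, 10)
print(f"treated area fraction: {spray_map.mean():.2%}")   # vs. 100% for a broadcast pass
```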

    Development of Modified CNN Algorithm for Agriculture Product: A Research Review

    Get PDF
    Nowadays, with the increase in world population, the demand for agricultural products has also increased. Modern electronic technologies combined with machine vision techniques have become a valuable resource for precise weed and crop detection in the field, and are becoming prominent in precision agriculture and site-specific weed management. Many different weed detection algorithms have already been applied to weed removal in agriculture; based on a comparative study of research papers on weed detection, this review suggests advanced and improved algorithms that address most of the limitations of previous work. The main goal of this review is to study the different types of algorithms used to detect weeds in crops for automated agricultural systems. The paper uses a convolutional neural network model, VGG16, to identify images of weeds: as a base network, VGG16 has very good classification performance and is relatively easy to modify. The weed dataset used contains 15,336 image segments: 3,249 of soil, 7,376 of soybean, 3,520 of grass, and 1,191 of broadleaf weeds. Our model freezes the parameters of the first 16 layers of VGG16 for layer-by-layer automatic feature extraction, and adds an average pooling layer, a convolution layer, a dropout layer, a fully connected layer, and a softmax classifier. The results show that the final model performs well on the 4-class classification task, with an accuracy of 97.76%, which we compare against a baseline CNN model. It provides an accurate and reliable basis for quantitative chemical pesticide spraying. The results of this study provide an overview of the use of CNN-based techniques for weed detection.
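
    The modified VGG16 setup described in the review (frozen early layers plus a small pooling/convolution/dropout/dense head with a 4-way softmax) could be assembled roughly as in the sketch below, using torchvision. The head sizes and layer choices here are illustrative assumptions, not the reviewed model's exact configuration.

```python
# Sketch of a VGG16 transfer-learning classifier for the four classes in the weed
# dataset (soil, soybean, grass, broadleaf weeds). Head sizes are illustrative.
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)

# Freeze the first 16 layers of the pretrained feature extractor.
for p in model.features[:16].parameters():
    p.requires_grad = False

# Replace the default head: extra conv + global average pooling + dropout + 4-way output.
model.avgpool = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d((1, 1)),
)
model.classifier = nn.Sequential(
    nn.Dropout(0.5),
    nn.Linear(256, 4),                 # soil / soybean / grass / broadleaf weed
)
# Softmax is applied implicitly by nn.CrossEntropyLoss during training.
```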

    Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming

    Full text link
    An effective perception system is a fundamental component of farming robots, as it enables them to properly perceive the surrounding environment and to carry out targeted operations. The most recent approaches make use of state-of-the-art machine learning techniques to learn an effective model for the target task. However, those methods need a large amount of labelled data for training. A recent approach to dealing with this issue is data augmentation through Generative Adversarial Networks (GANs), where entire synthetic scenes are added to the training data, thus enlarging and diversifying their informative content. In this work, we propose an alternative to the common data augmentation techniques, applying it to the fundamental problem of crop/weed segmentation in precision farming. Starting from real images, we create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts. To do so, we employ a conditional GAN (cGAN), where the generative model is trained by conditioning on the shape of the generated object. Moreover, in addition to RGB data, we also take into account near-infrared (NIR) information, generating four-channel multi-spectral synthetic images. Quantitative experiments, carried out on three publicly available datasets, show that (i) our model is capable of generating realistic multi-spectral images of plants and (ii) the use of such synthetic images in the training process improves the segmentation performance of state-of-the-art semantic segmentation convolutional networks. (Comment: Submitted to Robotics and Autonomous Systems)
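
    As a toy illustration of the shape-conditioned generation idea (far simpler than the paper's cGAN, and without the adversarial training loop), the generator below takes a binary plant silhouette plus noise and produces a four-channel RGB+NIR patch that could be masked back into the real image in place of the original plant. All layer sizes are arbitrary stand-ins.

```python
# Toy stand-in for the shape-conditioned generator: mask + noise in, RGB+NIR patch out.
# This is not the paper's architecture and omits the discriminator and GAN training.
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    def __init__(self, noise_ch=8, out_ch=4):
        super().__init__()
        self.net = nn.Sequential(                        # small conv stack over mask + noise
            nn.Conv2d(1 + noise_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mask, noise):
        return self.net(torch.cat([mask, noise], dim=1))

gen = MaskConditionedGenerator()
mask = torch.zeros(1, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 1.0                           # toy plant silhouette
patch = gen(mask, torch.randn(1, 8, 64, 64))             # (1, 4, 64, 64) RGB+NIR patch
fake_object = patch * mask                               # keep pixels inside the conditioned shape
```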

    WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming

    Full text link
    We present a novel weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color-only or color and near-infrared (NIR) channels. Computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a standard sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and radiometrically calibrated across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3-channel RGB inputs), which yields an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive qualitative and quantitative analysis of 20 trained models in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics. (Comment: 25 pages, 14 figures, MDPI Remote Sensing)
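
    The tiling strategy described above runs the network on orthomosaic windows matching its input size and reassembles the per-tile predictions into a full map. The NumPy sketch below shows that mechanic only; `predict_tile` stands in for the trained DNN, and the tile size and channel count are illustrative.

```python
# Sketch of tile-based inference over a large, channel-aligned orthomosaic:
# slide a window the size of the network input, predict each tile, and write the
# per-pixel classes back into a full-map prediction. predict_tile is a stand-in
# for the trained DNN; tile size is illustrative.
import numpy as np

def segment_orthomosaic(ortho, predict_tile, tile=480):
    """ortho: (H, W, C) multispectral orthomosaic -> (H, W) class map."""
    h, w, c = ortho.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = ortho[y:y + tile, x:x + tile]
            ph, pw = patch.shape[:2]
            padded = np.zeros((tile, tile, c), dtype=ortho.dtype)  # pad edge tiles
            padded[:ph, :pw] = patch
            pred = predict_tile(padded)                            # (tile, tile) class indices
            out[y:y + tile, x:x + tile] = pred[:ph, :pw]
    return out
```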

    AICropCAM: Deploying classification, segmentation, detection, and counting deep-learning models for crop monitoring on the edge

    Get PDF
    Precision Agriculture (PA) promises to meet the future demands for food, feed, fiber, and fuel while keeping their production sustainable and environmentally friendly. PA relies heavily on sensing technologies to inform site-specific decision support for planting, irrigation, fertilization, spraying, and harvesting. Traditional point-based sensors produce small data volumes but are limited in their capacity to measure plant and canopy parameters. Imaging sensors, on the other hand, can measure a wide range of these parameters, especially when coupled with Artificial Intelligence. The challenge, however, is the lack of computing, electric power, and connectivity infrastructure in agricultural fields, preventing the full utilization of imaging sensors. This paper reports AICropCAM, a field-deployable imaging framework that integrates edge image processing, the Internet of Things (IoT), and LoRaWAN for low-power, long-range communication. The core component of AICropCAM is a stack of four Deep Convolutional Neural Network (DCNN) models running sequentially: CropClassiNet for crop type classification, CanopySegNet for canopy cover quantification, PlantCountNet for plant and weed counting, and InsectNet for insect identification. These DCNN models were trained and tested with more than 43,000 field crop images collected offline. AICropCAM was implemented on a distributed wireless sensor network, with each sensor node consisting of an RGB camera for image acquisition, a Raspberry Pi 4B single-board computer for edge image processing, and an Arduino MKR1310 for LoRa communication and power management. Our testing showed that the time to run the DCNN models ranged from 0.20 s for InsectNet to 20.20 s for CanopySegNet, and power consumption ranged from 3.68 W for InsectNet to 5.83 W for CanopySegNet. The classification model CropClassiNet reported 94.5% accuracy, and the segmentation model CanopySegNet reported 92.83% accuracy. The two object detection models, PlantCountNet and InsectNet, reported mean average precision of 0.69 and 0.02, respectively, on the test images. Predictions from the DCNN models were transmitted to the ThingSpeak IoT platform for visualization and analytics. We conclude that AICropCAM successfully implements image processing on the edge, drastically reduces the amount of data being transmitted, and can satisfy the real-time needs of decision-making in PA. AICropCAM can be deployed on moving platforms such as center pivots or drones to increase its spatial coverage and resolution to support crop monitoring and field operations.
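
    As a hypothetical sketch of the edge-side loop (not AICropCAM's actual code), the snippet below captures an image, runs a model stack, and transmits only a compact JSON summary over LoRa, which is what keeps the payload within low-bandwidth limits. The camera, model, and radio objects are placeholders for the components named in the abstract, and the capture interval is an assumption.

```python
# Hypothetical edge-processing loop: run the model stack on each captured image and
# transmit only a compact summary instead of the raw image. All callables passed in
# are placeholders for the camera, DCNN models, and LoRa radio described above.
import json
import time

def monitoring_loop(camera, crop_classifier, canopy_segmenter, plant_counter, lora_send,
                    interval_s=600):
    while True:
        frame = camera.capture()                               # RGB image from the sensor node
        summary = {
            "crop": crop_classifier(frame),                    # e.g. "soybean"
            "canopy_cover_pct": canopy_segmenter(frame),       # fraction of vegetated pixels
            "plants": plant_counter(frame),                    # (crop_count, weed_count)
            "ts": int(time.time()),
        }
        lora_send(json.dumps(summary))                         # a few hundred bytes, not megabytes
        time.sleep(interval_s)                                 # assumed capture interval
```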