674 research outputs found

    Real-time Semantic Segmentation of Crop and Weed for Precision Agriculture Robots Leveraging Background Knowledge in CNNs

    Full text link
    Precision farming robots, which aim to reduce the amount of herbicide that needs to be applied in the field, must be able to identify crops and weeds in real time in order to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields, separating sugar beet plants, weeds, and background solely from RGB data. We propose a CNN that exploits existing vegetation indexes and provides a classification in real time. Furthermore, it can be effectively re-trained for so far unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20 Hz, and is suitable for online operation in the field. Comment: Accepted for publication at the IEEE International Conference on Robotics and Automation 2018 (ICRA 2018).
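
    The abstract mentions feeding existing vegetation indexes to the CNN alongside RGB. Below is a minimal sketch of one way such an index could be computed and stacked as an extra input channel; the function name, the choice of the Excess Green (ExG) index, and the dummy image are illustrative assumptions, not the paper's exact pipeline.

        # Sketch: derive an Excess Green (ExG) channel from RGB and append it
        # as a fourth input channel for a segmentation CNN.
        import numpy as np

        def add_exg_channel(rgb: np.ndarray) -> np.ndarray:
            """rgb: HxWx3 uint8 image -> HxWx4 float32 array (R, G, B, ExG)."""
            x = rgb.astype(np.float32) / 255.0
            r, g, b = x[..., 0], x[..., 1], x[..., 2]
            denom = r + g + b + 1e-6                      # avoid division by zero
            rn, gn, bn = r / denom, g / denom, b / denom  # chromatic coordinates
            exg = 2.0 * gn - rn - bn                      # Excess Green index
            return np.concatenate([x, exg[..., None]], axis=-1)

        # Usage: a random array stands in for a field photo.
        dummy = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
        print(add_exg_channel(dummy).shape)  # (480, 640, 4)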

    Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection

    Full text link
    Selective weeding is one of the key challenges in the field of agricultural robotics. To accomplish this task, a farm robot should be able to accurately detect plants and to distinguish between crop and weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which in practice penalizes the usage of data-driven techniques. In this paper, we face this problem by proposing a novel and effective approach that aims to dramatically minimize the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters and exploiting a few real-world textures, it is possible to render a large number of realistic views of an artificial agricultural scenario with no effort. The generated data can be directly used to train the model or to supplement real-world images. We validate the proposed methodology by using as a testbed a modern deep learning based image segmentation architecture. We compare the classification results obtained using both real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach. Comment: To appear in IEEE/RSJ IROS 2017.
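
    To make the procedural-generation idea concrete, here is a toy sketch that composites randomly placed "plant" patches onto a soil background and emits the matching pixel-wise label mask. The flat-colour blobs, class IDs, and the synth_sample name are purely illustrative assumptions; the paper renders textured scenes, not coloured circles.

        # Sketch: procedurally generate an (image, label-mask) pair with
        # randomized plant count, position, size, and class (1 crop, 2 weed).
        import numpy as np

        rng = np.random.default_rng(0)

        def synth_sample(h=256, w=256, n_plants=12):
            image = np.full((h, w, 3), (120, 85, 60), dtype=np.uint8)   # brown soil
            labels = np.zeros((h, w), dtype=np.uint8)                   # 0 = soil
            for _ in range(n_plants):
                cls = rng.integers(1, 3)                                # 1 crop, 2 weed
                cy, cx = rng.integers(0, h), rng.integers(0, w)
                radius = rng.integers(6, 20)
                yy, xx = np.ogrid[:h, :w]
                mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
                colour = (40, 160, 60) if cls == 1 else (90, 140, 40)
                image[mask] = colour
                labels[mask] = cls
            return image, labels

        img, lab = synth_sample()
        print(img.shape, np.unique(lab))  # (256, 256, 3) [0 1 2]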

    An effective identification of crop diseases using faster region based convolutional neural network and expert systems

    Get PDF
    The majority of research is moving towards cognitive computing, ubiquitous computing, and the Internet of Things (IoT), which focus on real-time applications such as smart cities, smart agriculture, and wearable smart devices. The objective of the research in this paper is to integrate image processing strategies with smart agriculture techniques so that farmers can use the latest technological innovations to resolve crop problems such as infections or diseases, which may be caused by pests, climatic conditions, or soil consistency. As IoT plays a crucial role in smart agriculture, infection recognition based on object detection can greatly help farmers without requiring them to learn much about the technology, and helps them sort out issues with their crops. In this paper, an attempt is made to integrate the Kissan application with expert systems and image processing in order to give farmers an immediate solution for a problem identified in a crop.
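
    As a rough illustration of the detection stage named in the title, the sketch below runs a generic pre-trained Faster R-CNN over an image. It assumes torchvision >= 0.13, uses COCO weights as a stand-in (a real system would be fine-tuned on a crop-disease dataset), and the thresholding and "map detections to expert-system rules" step are assumptions, not the paper's code.

        # Sketch: object detection with a pre-trained Faster R-CNN as a
        # placeholder for a fine-tuned disease-region detector.
        import torch
        from torchvision.models.detection import (
            fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

        weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
        model = fasterrcnn_resnet50_fpn(weights=weights).eval()

        image = torch.rand(3, 512, 512)           # stand-in for a crop photo
        with torch.no_grad():
            detections = model([image])[0]        # dict: boxes, labels, scores

        # Keep confident detections; an expert system would map each detected
        # class to a recommended treatment for the farmer.
        keep = detections["scores"] > 0.5
        print(detections["boxes"][keep].shape)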

    Early detection of weed in sugarcane using convolutional neural network

    Get PDF
    Weed infestation is a significant factor in sugarcane productivity loss. The use of remote sensing data in conjunction with Artificial Intelligence (AI) techniques can take sugarcane cultivation to a new level in terms of weed control. For this purpose, an algorithm based on Convolutional Neural Networks (CNN) was developed to detect, quantify, and map weeds in sugarcane areas located in the state of Alagoas, Brazil. PlanetScope satellite images were subdivided, separated, trained in different scenarios, classified, and georeferenced, producing a map with weed information included. Scenario one of the CNN training and testing presented an overall accuracy of 0.983, and it was used to produce the final mapping of forest areas, sugarcane, and weed infestation. The quantitative analysis of the area (ha) infested by weeds indicated a high probability of a negative impact on sugarcane productivity. It is recommended that the CNN algorithm be adapted for Remotely Piloted Aircraft (RPA) images, aiming at the differentiation between weed species, as well as its application to detection in areas with different crops.
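
    A minimal sketch of the tile-and-classify workflow described above: split a scene into fixed-size patches, classify each patch, and assemble the predictions into a coarse class map. The classify_patch placeholder, tile size, class IDs, and the 4-band random array are assumptions; PlanetScope ingestion and georeferencing are omitted.

        # Sketch: tile a satellite scene and build a coarse forest/sugarcane/weed map.
        import numpy as np

        def classify_patch(patch: np.ndarray) -> int:
            """Placeholder for a trained CNN; returns 0 forest, 1 sugarcane, 2 weed."""
            return int(patch.mean() * 3) % 3

        def weed_map(scene: np.ndarray, tile: int = 32) -> np.ndarray:
            h, w = scene.shape[:2]
            rows, cols = h // tile, w // tile
            out = np.zeros((rows, cols), dtype=np.uint8)
            for i in range(rows):
                for j in range(cols):
                    patch = scene[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                    out[i, j] = classify_patch(patch)
            return out

        scene = np.random.rand(256, 256, 4)   # stand-in for a 4-band PlanetScope tile
        print(weed_map(scene).shape)          # (8, 8) coarse class map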

    Semantic Segmentation based deep learning approaches for weed detection

    Get PDF
    Global increase in herbicide use to control weeds has led to issues such as the evolution of herbicide-resistant weeds and off-target herbicide movement. Precision agriculture advocates Site-Specific Weed Management (SSWM) to achieve precise application of the right amount of herbicide and to reduce off-target herbicide movement. Recent advancements in Deep Learning (DL) have opened possibilities for adaptive and accurate weed recognition for field-based SSWM applications with traditional and emerging spraying equipment; however, challenges exist in identifying the DL model structure and training the model appropriately for accurate and rapid application over varying crop/weed growth stages and environments. In our study, an encoder-decoder based DL architecture was proposed that performs pixel-wise Semantic Segmentation (SS) classification of crop, soil, and weed patches in the field. The objective of this study was to develop a robust weed detection algorithm using DL techniques that can accurately and reliably locate weed infestations in low-altitude Unmanned Aerial Vehicle (UAV) imagery with acceptable application speed. Two different encoder-decoder based SS models, LinkNet and UNet, were developed using transfer learning techniques. We applied various measures, such as backpropagation optimization and refinement of the training dataset, to address the class-imbalance problem, which is a common issue in developing weed detection models. It was found that the LinkNet model with ResNet18 as the encoder section and the 'Focal loss' loss function achieved the highest mean and class-wise Intersection over Union scores for the different class categories when performing predictions on an unseen dataset. The developed state-of-the-art model did not require a large amount of training data, and the techniques used to develop the model in our study provide a propitious approach that performs better than existing SS-based weed detection models. The proposed model integrates a futuristic approach that could be used for weed detection on aerial imagery from UAVs and perform real-time SSWM applications. Advisor: Yeyin Shi
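
    A minimal sketch of the model/loss pairing named in the abstract: a LinkNet with a ResNet18 encoder trained with a multiclass focal loss to soften the crop/soil/weed class imbalance. The use of the third-party segmentation_models_pytorch package, the hand-rolled focal loss, and all shapes and hyper-parameters are assumptions, not the thesis implementation.

        # Sketch: LinkNet (ResNet18 encoder) + focal loss for 3-class segmentation.
        import torch
        import torch.nn.functional as F
        import segmentation_models_pytorch as smp

        model = smp.Linknet(encoder_name="resnet18",
                            encoder_weights="imagenet",
                            in_channels=3, classes=3)

        def focal_loss(logits, target, gamma=2.0):
            """Multiclass focal loss: down-weight easy pixels, focus on rare classes."""
            ce = F.cross_entropy(logits, target, reduction="none")   # per-pixel CE
            pt = torch.exp(-ce)                                      # prob of true class
            return ((1.0 - pt) ** gamma * ce).mean()

        x = torch.rand(2, 3, 256, 256)              # dummy UAV image batch
        y = torch.randint(0, 3, (2, 256, 256))      # dummy per-pixel labels
        loss = focal_loss(model(x), y)
        loss.backward()
        print(float(loss))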

    Development of Modified CNN Algorithm for Agriculture Product: A Research Review

    Get PDF
    Nowadays, with the increase in the world population, the demand for agricultural products has also increased. Modern electronic technologies combined with machine vision techniques have become a good resource for precise weed and crop detection in the field, are becoming prominent in precision agriculture, and also support site-specific weed management. Many different weed detection algorithms have already been used in weed removal and in agriculture more broadly, and we carry out a comparative study of research papers on weed detection. In this paper, we suggest advanced and improved algorithms that address most of the limitations of previous work. The main goal of this review is to study the different types of algorithms used to detect weeds present in crops for automated systems in agriculture. This paper uses a method based on a convolutional neural network model, VGG16, to identify images of weeds. As the base network, VGG16 has very good classification performance and is relatively easy to modify. The weed image dataset used has 15336 segments: 3249 of soil, 7376 of soybean, 3520 of grass, and 1191 of broadleaf weeds. Our model freezes the parameters of the first 16 layers of VGG16 for layer-by-layer automatic feature extraction, and adds an average pooling layer, a convolution layer, a Dropout layer, a fully connected layer, and a softmax classifier. The results show that the final model performs well on the 4-class classification task, with an accuracy of 97.76%. We compare our result with a baseline CNN model. It provides an accurate and reliable basis for quantitative chemical pesticide spraying. The results of this study provide an overview of the use of CNN-based techniques for weed detection.
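
    The sketch below shows one plausible reading of the modified-VGG16 recipe: freeze the early convolutional layers of a pre-trained VGG16 and attach a small head (average pooling, dropout, fully connected layer, softmax) for the four classes soil / soybean / grass / broadleaf. The split at 16 feature layers, the omission of the extra convolution layer mentioned in the abstract, and all sizes are assumptions.

        # Sketch: transfer learning with a partially frozen VGG16 backbone.
        import torch
        import torch.nn as nn
        from torchvision.models import vgg16, VGG16_Weights

        backbone = vgg16(weights=VGG16_Weights.DEFAULT).features
        for param in backbone[:16].parameters():      # freeze early feature layers
            param.requires_grad = False

        model = nn.Sequential(
            backbone,                                 # frozen + trainable VGG16 features
            nn.AdaptiveAvgPool2d(1),                  # global average pooling
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(512, 4),                        # 4 classes
        )

        x = torch.rand(1, 3, 224, 224)                # dummy crop image
        probs = torch.softmax(model(x), dim=1)
        print(probs.shape)                            # torch.Size([1, 4])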

    Weeds detection efficiency through different convolutional neural networks technology

    Get PDF
    The preservation of the environment has become a priority and a subject that is receiving more and more attention. This is particularly important in the field of precision agriculture, where pesticide and herbicide use has become more controlled. In this study, we evaluate the ability of deep learning (DL) and convolutional neural network (CNN) technology to detect weeds in several types of crops using perspective and proximity images, in order to enable localized and ultra-localized herbicide spraying in the region of Beni Mellal in Morocco. We studied weed detection with six recent CNNs known for their speed and precision, namely VGGNet (16 and 19), GoogLeNet (Inception V3 and V4), and MobileNet (V1 and V2). The first experiment was performed with the CNN architectures trained from scratch, and the second with their pre-trained versions. The results showed that Inception V4 achieved the highest precision, with rates of 99.41% and 99.51% on the mixed image sets for its from-scratch and pre-trained versions respectively, and that MobileNet V2 was the fastest and lightest with a size of 14 MB.
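
    A minimal sketch of the from-scratch vs. pre-trained comparison for one of the networks studied (MobileNetV2); the other architectures would follow the same pattern. The two-class crop/weed head, image size, and torchvision >= 0.13 are assumptions.

        # Sketch: build the same classifier twice, randomly initialized vs. pre-trained.
        import torch
        import torch.nn as nn
        from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

        def build(pretrained: bool, num_classes: int = 2) -> nn.Module:
            weights = MobileNet_V2_Weights.DEFAULT if pretrained else None
            model = mobilenet_v2(weights=weights)
            # Replace the ImageNet classifier with a small crop/weed head.
            model.classifier[1] = nn.Linear(model.last_channel, num_classes)
            return model

        scratch = build(pretrained=False)        # experiment 1: train from scratch
        finetune = build(pretrained=True)        # experiment 2: fine-tune ImageNet weights

        x = torch.rand(1, 3, 224, 224)
        print(scratch(x).shape, finetune(x).shape)   # both torch.Size([1, 2])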