
    Real-time Semantic Segmentation of Crop and Weed for Precision Agriculture Robots Leveraging Background Knowledge in CNNs

    Precision farming robots, which aim to reduce the amount of herbicide that must be applied in the field, need to identify crops and weeds in real time in order to trigger weeding actions. In this paper, we address the problem of CNN-based semantic segmentation of crop fields, separating sugar beet plants, weeds, and background solely from RGB data. We propose a CNN that exploits existing vegetation indexes and provides a classification in real time. Furthermore, it can be effectively re-trained for previously unseen fields with a comparably small amount of training data. We implemented and thoroughly evaluated our system on a real agricultural robot operating in different fields in Germany and Switzerland. The results show that our system generalizes well, can operate at around 20 Hz, and is suitable for online operation in the field.
    Comment: Accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA 2018).
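    The "existing vegetation indexes" this abstract refers to are simple arithmetic combinations of colour channels. As a minimal sketch (not the paper's actual architecture), the widely used Excess Green index can be computed and stacked onto the RGB input as an extra channel for a CNN:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green (ExG) vegetation index, 2G - R - B, computed from an
    RGB image normalized to [0, 1]. Vegetation pixels score high."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - r - b

def stack_index_channel(rgb):
    """Append the ExG index as a fourth channel, one way vegetation-index
    information could be fed to a segmentation CNN alongside raw RGB."""
    exg = excess_green(rgb)[..., None]
    return np.concatenate([rgb, exg], axis=-1)

# A 2x2 toy image: one green "plant" pixel, three soil-like pixels.
img = np.array([[[0.1, 0.8, 0.1], [0.4, 0.3, 0.2]],
                [[0.5, 0.4, 0.3], [0.3, 0.2, 0.1]]])
four_ch = stack_index_channel(img)
# four_ch has shape (2, 2, 4); the plant pixel's ExG is 2*0.8 - 0.1 - 0.1 = 1.4
```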

    Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming

    An effective perception system is a fundamental component of farming robots, as it enables them to properly perceive the surrounding environment and to carry out targeted operations. The most recent approaches make use of state-of-the-art machine learning techniques to learn an effective model for the target task. However, those methods need a large amount of labelled data for training. A recent approach to this issue is data augmentation through Generative Adversarial Networks (GANs), where entire synthetic scenes are added to the training data, enlarging and diversifying their informative content. In this work, we propose an alternative to common data augmentation techniques, applying it to the fundamental problem of crop/weed segmentation in precision farming. Starting from real images, we create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts. To do so, we employ a conditional GAN (cGAN), where the generative model is trained by conditioning on the shape of the generated object. Moreover, in addition to RGB data, we also take near-infrared (NIR) information into account, generating four-channel multi-spectral synthetic images. Quantitative experiments, carried out on three publicly available datasets, show that (i) our model is capable of generating realistic multi-spectral images of plants and (ii) using such synthetic images in the training process improves the segmentation performance of state-of-the-art semantic segmentation Convolutional Networks.
    Comment: Submitted to Robotics and Autonomous Systems.
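    The final compositing step described here (replacing only crop and weed pixels of a real image with synthesized counterparts) can be sketched independently of the cGAN itself. The following is a minimal illustration, assuming the synthesized content is already available as a four-channel array aligned with the real image:

```python
import numpy as np

def composite_synthetic(real, synth, mask):
    """Paste synthesized object pixels (crop or weed) into a real
    four-channel RGB+NIR image, leaving the background untouched.
    real, synth: (H, W, 4) float arrays; mask: (H, W) boolean giving
    the object's shape (the same shape the cGAN was conditioned on)."""
    out = real.copy()
    out[mask] = synth[mask]
    return out

# Toy example: a 2x2 background of zeros, a synthetic object of ones,
# and a one-pixel object mask.
real = np.zeros((2, 2, 4))
synth = np.ones((2, 2, 4))
mask = np.array([[True, False], [False, False]])
semi_artificial = composite_synthetic(real, synth, mask)
```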

    Simulation of near infrared sensor in unity for plant-weed segmentation classification

    Weed spotting through image classification is one of the methods applied in precision agriculture to reduce crop damage more efficiently. These classifications are nowadays typically based on deep machine learning with convolutional neural networks (CNNs), where a main difficulty is gathering the large amounts of labeled data required to train these networks. Synthetic dataset sources have therefore been developed, including simulations based on graphics engines; however, some data inputs that can improve the performance of CNNs, such as the near infrared (NIR), have not been considered in these simulations. This paper presents a simulation in the Unity game engine that builds fields of sugar beets with weeds. Images are generated to create datasets that are ready to train CNNs for semantic segmentation. The dataset is tested by comparing classification results from the Bonnet CNN trained with synthetic images and trained with real images, both with RGB and RGBN (RGB + near infrared) as inputs. The preliminary results suggest that adding the NIR channel to the simulation can be effectively exploited for plant-weed segmentation: including the NIR data in the Unity-generated dataset yields a difference of 5.75% in global mean IoU over 820 classified images.
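    The mean IoU metric used to compare the RGB and RGBN results is standard for semantic segmentation. A minimal sketch of how it is computed over integer label maps (class indices for background, crop, and weed) might look like this:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label
    maps. Classes absent from both prediction and ground truth are
    skipped rather than counted as zero."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with classes 0 (background), 1 (crop), 2 (weed).
pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
score = mean_iou(pred, target, 3)  # (1 + 0.5 + 0.5) / 3 = 2/3
```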

    Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture

    Agricultural robots rely on semantic segmentation to distinguish between crops and weeds, in order to perform selective treatments and increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling effort required for a new crop. We examine the classification performance on three datasets with different crop types and a variety of weeds, and compare the performance and retraining effort required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible and reduces training times by up to 80%. Furthermore, we show that even when the data used for re-training is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
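    The core transfer-learning idea (reuse feature extraction learned on the source crop, retrain only the classifier on a small labelled set for the new crop) can be illustrated with a deliberately tiny stand-in model. Everything below is hypothetical: a fixed random projection plays the role of the frozen pretrained encoder, and a logistic-regression head plays the role of the retrained classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen "encoder" pretrained on the source crop. Here it is just a
# fixed random projection with a ReLU, standing in for convolutional
# features; the real networks in the paper are far larger.
W_enc = rng.normal(size=(2, 8))

def encode(x):
    h = np.maximum(x @ W_enc, 0.0)               # frozen ReLU features
    return np.hstack([h, np.ones((len(h), 1))])  # plus a bias feature

def train_head(feats, labels, steps=500, lr=0.1):
    """Retrain only a logistic-regression head on the target crop's
    small labelled set; the encoder weights are never updated."""
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(feats @ w, -30, 30)))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Tiny synthetic "new crop" dataset: two well-separated clusters.
x = np.vstack([rng.normal(-1, 0.3, size=(20, 2)),
               rng.normal(1, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
w = train_head(encode(x), y)
acc = ((encode(x) @ w > 0).astype(int) == y).mean()
```

    Only the head's weights are updated, which is why retraining for a new crop needs far less data and time than training from scratch.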

    Design of an Autonomous Agriculture Robot for Real Time Weed Detection using CNN

    Agriculture has always been an integral part of the world. As the human population rises, the demand for food increases, and so does the dependency on the agriculture industry. In today's scenario, however, low yields, scarce rainfall, and similar factors have created a dearth of manpower in the agricultural sector, as people move to the cities and villages become increasingly urbanized. On the other hand, the field of robotics has seen tremendous development in the past few years. Concepts like Deep Learning (DL), Artificial Intelligence (AI), and Machine Learning (ML) are being incorporated into robotics to create autonomous systems for sectors such as automotive, agriculture, and assembly line management. Deploying such autonomous systems in the agricultural sector helps in many respects, such as reducing manpower and improving the yield and nutritional quality of crops. In this paper, the system design of an autonomous agricultural robot that primarily focuses on weed detection is described, and a modified deep-learning model for weed detection is proposed. The primary objective of this robot is real-time weed detection without any human involvement, but the design can also be extended to robots for various other farming applications such as weed removal, plowing, and harvesting, in turn making the farming industry more efficient. Source code and other details can be found at https://github.com/Dhruv2012/Autonomous-Farm-Robot
    Comment: Published at the AVES 2021 conference.
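    A system of this kind is structured around a capture-infer-act loop. The sketch below is purely illustrative and not taken from the linked repository: the `detect_weeds` stand-in, the toy frame dicts, and the box coordinates are all hypothetical placeholders for the camera feed and the trained CNN:

```python
from collections import namedtuple

Detection = namedtuple("Detection", "label box")

def detect_weeds(frame):
    """Stand-in for the CNN inference step. A real system would run the
    trained weed-detection model on the camera frame; here a flag on a
    toy frame dict simulates whether a weed is visible."""
    if frame.get("has_weed"):
        return [Detection("weed", (10, 20, 40, 60))]  # hypothetical box
    return []

def control_loop(frames, actuate):
    """Minimal capture -> infer -> act pipeline with no human in the
    loop: every detected weed triggers an actuation callback (e.g. a
    sprayer or a mechanical weeder)."""
    for frame in frames:
        for det in detect_weeds(frame):
            actuate(det)

triggered = []
control_loop([{"has_weed": True}, {"has_weed": False}], triggered.append)
```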

    Using Semi-Supervised Learning to Predict Weed Density and Distribution for Precision Farming

    If weed growth is not controlled, it can have a devastating effect on the size and quality of a harvest, yet unrestrained pesticide use for weed management can have severe consequences for ecosystem health and contribute to environmental degradation. If problem spots can be identified, however, those areas can be treated with herbicide more precisely. Recent advances in the analysis of farm images have produced techniques for reliably identifying weed plants. However, these methods mostly use supervised learning strategies, which require a huge set of hand-labelled images. Such supervised systems are therefore impracticable for the individual farmer, given the vast variety of plant species being cultivated. In this paper, we propose a semi-supervised deep learning method that uses a small number of colour photos taken by unmanned aerial vehicles to accurately predict the number and location of weeds in farmland. Knowing the number and location of weeds is helpful for a site-specific weed management system in which only afflicted areas are treated by autonomous robots. In this research, the foreground vegetation pixels (including crops and weeds) are first identified using an unsupervised segmentation method based on a Convolutional Neural Network (CNN). A trained CNN is then used to pinpoint weed-infested locations, so no manually constructed features are needed. The approach is tested on carrot plants from the Crop Weed Field Image Dataset (CWFID) and sugar beet plants from the Sugar Beets dataset. The proposed method achieves a maximum recall of 0.9 and an accuracy of 85%, making it well suited to locating weed hotspots, and it is shown that the proposed strategy can be applied to many kinds of plants without collecting a huge quantity of labelled data.
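    The two stages of this pipeline, an unsupervised vegetation/soil split followed by evaluation via recall, can be sketched as follows. The colour-index thresholding here is a common stand-in for the unsupervised foreground step, not the paper's CNN-based method, and the 0.1 threshold is an illustrative choice:

```python
import numpy as np

def vegetation_mask(rgb, thresh=0.1):
    """Unsupervised foreground step: threshold the Excess Green index
    (2G - R - B) to separate vegetation (crops and weeds) from soil.
    The paper uses a CNN-based unsupervised segmentation instead; this
    colour-index rule is only a simple stand-in."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2.0 * g - r - b) > thresh

def recall(pred, target):
    """Recall of predicted weed pixels against ground-truth weed pixels,
    the metric the abstract reports (maximum recall of 0.9)."""
    tp = np.logical_and(pred, target).sum()
    return tp / max(target.sum(), 1)

# One green pixel and one soil-coloured pixel (RGB in [0, 1]).
img = np.array([[[0.1, 0.8, 0.1], [0.4, 0.3, 0.3]]])
mask = vegetation_mask(img)
```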