Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming
An effective perception system is a fundamental component for farming robots,
as it enables them to properly perceive the surrounding environment and to
carry out targeted operations. The most recent approaches make use of
state-of-the-art machine learning techniques to learn an effective model for
the target task. However, those methods need a large amount of labelled data
for training. A recent approach to deal with this issue is data augmentation
through Generative Adversarial Networks (GANs), where entire synthetic scenes
are added to the training data, thus enlarging and diversifying their
informative content. In this work, we propose an alternative to common
data augmentation techniques, applying it to the fundamental problem of
crop/weed segmentation in precision farming. Starting
from real images, we create semi-artificial samples by replacing the most
relevant object classes (i.e., crop and weeds) with their synthesized
counterparts. To do that, we employ a conditional GAN (cGAN), where the
generative model is trained by conditioning the shape of the generated object.
Moreover, we complement the RGB data with near-infrared (NIR) information,
generating four-channel multi-spectral synthetic images.
Quantitative experiments, carried out on three publicly available datasets,
show that (i) our model is capable of generating realistic multi-spectral
images of plants and (ii) the usage of such synthetic images in the training
process improves the segmentation performance of state-of-the-art semantic
segmentation Convolutional Networks.
Comment: Submitted to Robotics and Autonomous Systems
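The semi-artificial sample creation described above, where crop and weed pixels of a real image are replaced by synthesized counterparts, can be sketched as follows. This is a minimal illustration assuming 4-channel (RGB + NIR) images stored as NumPy arrays; the function name and inputs are hypothetical, and the synthesized image here stands in for the output of the shape-conditioned cGAN:

```python
import numpy as np

def composite_semi_artificial(real_img, synth_img, mask):
    """Replace pixels of the target classes (crop/weed) in a real
    multi-spectral image with their synthesized counterparts.

    real_img, synth_img: (H, W, 4) arrays holding RGB + NIR channels.
    mask: (H, W) boolean array marking crop/weed pixels (the object
    shape that conditions the generator in the paper).
    """
    out = real_img.copy()
    out[mask] = synth_img[mask]  # swap only the masked plant pixels
    return out

# Usage sketch with toy data: background pixels stay real,
# masked pixels come from the synthesized image.
real = np.zeros((2, 2, 4))
synth = np.ones((2, 2, 4))
mask = np.array([[True, False], [False, True]])
semi = composite_semi_artificial(real, synth, mask)
```

Because only the plant regions are swapped, the surrounding soil and lighting context of the real image is preserved, which is what distinguishes this approach from generating entire synthetic scenes.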
Crop and Weed Classification Using Pixel-wise Segmentation on Ground and Aerial Images
Mulham Fawakherji, Ali Youssef, Domenico D. Bloisi, Alberto Pretto, Daniele Nardi
Crop and Weeds Classification for Precision Agriculture Using Context-Independent Pixel-Wise Segmentation
Precision agriculture is gaining increasing attention because of the possible reduction of agricultural inputs (e.g., fertilizers and pesticides) that can be obtained by using high-tech equipment, including robots. In this paper, we focus on an agricultural robotics system that addresses the weeding problem by means of selective spraying or mechanical removal of the detected weeds. In particular, we describe a deep learning based method that allows a robot to perform an accurate weed/crop classification using a sequence of two Convolutional Neural Networks (CNNs) applied to RGB images. The first network, based on an encoder-decoder segmentation architecture, performs a pixel-wise, plant-type agnostic segmentation between vegetation and soil that enables the extraction of a set of connected blobs representing plant instances. We show that such a network can also be trained using external, ready-to-use pixel-wise labelled datasets coming from different contexts. Each plant is then classified as crop or weed by using the second network. Quantitative experimental results, obtained on real-world data, demonstrate that the proposed approach can achieve good classification results even on challenging images.
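The intermediate step between the two networks, extracting connected blobs (plant instances) from the stage-1 vegetation/soil mask so that each can be fed to the stage-2 crop/weed classifier, can be sketched as below. This is a simplified pure-NumPy connected-component pass with 4-connectivity; the function name is hypothetical and the paper does not specify the exact blob-extraction implementation:

```python
from collections import deque

import numpy as np

def extract_blobs(veg_mask):
    """Label connected components in a binary vegetation mask.

    veg_mask: (H, W) boolean array, the stage-1 segmentation output.
    Returns (labels, count): an int array where each blob (plant
    instance) gets a distinct positive id, and the number of blobs.
    """
    h, w = veg_mask.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(veg_mask)):
        if labels[y, x]:
            continue  # pixel already assigned to a blob
        count += 1
        queue = deque([(y, x)])
        labels[y, x] = count
        while queue:  # BFS flood fill over 4-neighbours
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < h and 0 <= nx < w
                        and veg_mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count
```

Each labelled blob would then be cropped from the RGB image and passed to the second CNN for the crop/weed decision; in practice a library routine such as `scipy.ndimage.label` could replace this hand-rolled version.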
Robotics for Precision Agriculture @DIAG
Flourish is a recent H2020 project whose aim was to develop a multi-platform robotic solution for precision agriculture, combining a micro UAV and
a UGV. The aim of this document is to sketch the contribution of Sapienza Univ. of Rome in the context of the Flourish project, as well as the current
follow-up activities in precision agriculture.
Weakly and semi-supervised detection, segmentation and tracking of table grapes with limited and noisy data
Detection, segmentation and tracking of fruits and vegetables are three fundamental tasks for precision agriculture, enabling robotic harvesting and yield estimation applications. However, modern algorithms are data-hungry, and it is not always possible to gather enough data to apply the best-performing supervised approaches. Since data collection is an expensive and cumbersome task, the enabling technologies for using computer vision in agriculture are often out of reach for small businesses. Following previous work in this context (Ciarfuglia et al., 2022), where we proposed an initial weakly supervised solution to reduce the data needed to reach state-of-the-art detection and segmentation in precision agriculture applications, here we improve that system and explore the problem of tracking fruits in orchards. We present the case of vineyards of table grapes in southern Lazio (Italy), since grapes are a difficult fruit to segment due to occlusion, colour and general illumination conditions. We consider the case in which there is some initial labelled data that could serve as source data (e.g., wine grape data), but it is considerably different from the target data (e.g., table grape data). To improve detection and segmentation on the target data, we propose to train the segmentation algorithm with weak bounding-box labels, while for tracking we leverage 3D Structure from Motion algorithms to generate new labels from already labelled samples. Finally, the two systems are combined in a full semi-supervised approach. Comparisons with state-of-the-art supervised solutions show how our methods are able to train new models that achieve high performance with few labelled images and very simple labelling.
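The label-generation idea above, transferring an existing bounding-box annotation to a new frame via a geometric mapping between views, can be sketched as follows. The paper uses 3D Structure from Motion for this; here a planar homography stands in as a greatly simplified stand-in for that reprojection step, and the function name and interface are hypothetical:

```python
import numpy as np

def propagate_box(box, H):
    """Transfer a labelled bounding box from a source frame to a
    target frame through a 3x3 homography H (a simplified proxy for
    the SfM-based reprojection used in the paper).

    box: (x_min, y_min, x_max, y_max) in the source frame.
    Returns the axis-aligned box enclosing the mapped corners.
    """
    x0, y0, x1, y1 = box
    # Homogeneous coordinates of the four box corners.
    corners = np.array([[x0, y0, 1.0], [x1, y0, 1.0],
                        [x0, y1, 1.0], [x1, y1, 1.0]])
    mapped = corners @ H.T
    mapped = mapped[:, :2] / mapped[:, 2:3]  # perspective divide
    return (mapped[:, 0].min(), mapped[:, 1].min(),
            mapped[:, 0].max(), mapped[:, 1].max())

# Usage sketch: a pure x-translation between frames shifts the box.
H = np.eye(3)
H[0, 2] = 5.0
new_box = propagate_box((0.0, 0.0, 10.0, 20.0), H)
```

Boxes propagated this way become pseudo-labels on otherwise unlabelled frames, which is how a small labelled set can be stretched into semi-supervised training data for the tracker.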