14 research outputs found

    A Tracked Mobile Robotic Lab for Monitoring the Plants Volume and Health

    Precision agriculture has been increasingly recognized for its potential to improve agricultural productivity, reduce production costs, and minimize damage to the environment. In this work, the current stage of our research in developing a mobile platform equipped with different sensors for orchard monitoring and sensing is presented. In particular, the mobile platform is conceived to monitor and assess both the geometric and volumetric conditions and the health state of the canopy. To this end, different sensors have been integrated and efficient data-processing algorithms implemented for reliable crop monitoring. Experimental tests have been performed, yielding both a precise volume reconstruction of several plants and an NDVI map suitable for evaluating vegetation state.
    Bietresato, M.; Carabin, G.; D’Auria, D.; Gallo, R.; Gasparetto, A.; Ristorto, G.; Mazzetto, F.; Vidoni, R.; Scalera, L.
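The NDVI map mentioned in this abstract is a standard per-pixel index computed from near-infrared and red reflectance. A minimal sketch in Python (the array names and the small epsilon guard are illustrative, not taken from the platform's actual pipeline):

```python
import numpy as np

def ndvi_map(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red) from reflectance arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps avoids division by zero on pixels with no signal in either band
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR, so its NDVI is close to 1.
nir = np.array([[0.50, 0.60], [0.30, 0.10]])
red = np.array([[0.08, 0.05], [0.20, 0.09]])
ndvi = ndvi_map(nir, red)
```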

    Acquiring plant features with optical sensing devices in an organic strip-cropping system

    The SUREVEG project focuses on improving biodiversity and soil fertility in organic agriculture through strip-cropping systems. To counter the additional workforce these systems require, a robotic tool is proposed. Within the project, a modular proof-of-concept (POC) version will be produced that combines detection technologies with actuation at the single-plant level in the form of a robotic arm. This article focuses on the detection of crop characteristics through point clouds obtained with two lidars. Segregation into soil and plants was achieved without additional data from other sensor types: weighted sums were calculated, yielding a dynamically obtained threshold criterion. This method was able to extract the vegetation from the point cloud in strips with varying vegetation coverage and sizes. The resulting vegetation clouds were compared with drone imagery, confirming that they matched the green areas in the image. By dividing the remaining clouds of overlapping plants by the nominal planting distance, the number of plants, their volumes, and thereby the expected yields per row could be determined.
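The abstract does not specify the weighted-sum rule, but the idea of deriving a soil/vegetation threshold dynamically from the data, rather than fixing it a priori, can be illustrated generically. The sketch below refines a height threshold as the midpoint of the two class means (an ISODATA-style stand-in for the paper's criterion):

```python
import numpy as np

def split_soil_vegetation(points, iters=10):
    """Split an (N, 3) lidar point array into soil and vegetation by height.

    The threshold starts at the mean z and is refined iteratively as the
    midpoint of the below- and above-threshold class means. This is a
    generic stand-in for the paper's weighted-sum criterion.
    """
    z = points[:, 2]
    t = z.mean()
    for _ in range(iters):
        low, high = z[z <= t], z[z > t]
        if len(low) == 0 or len(high) == 0:
            break  # degenerate split; keep the current threshold
        t = 0.5 * (low.mean() + high.mean())
    return points[z <= t], points[z > t], t
```

Because the threshold adapts to each strip's height distribution, the same code handles strips with different vegetation coverage and plant sizes.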

    An analysis of the effects of water regime on grapevine canopy status using a UAV and a mobile robot

    In this paper, we propose a novel approach for analyzing the effects of water regime on grapevine canopy status using robotics as an aid for monitoring and mapping. Data from an unmanned aerial vehicle (UAV) and a ground mobile robot are used to obtain multispectral images and multiple vegetation indexes, and the 3D reconstruction of the canopy, respectively. Unlike previous works, sixty vegetation indexes are computed precisely by using the projected area of the vineyard point cloud as a mask. Extensive experimental tests on repeated plots of Pinot gris vines show that the GDVI, PVI, and TGI vegetation indexes are positively correlated with the water potential: GDVI (R2 = 0.90 and 0.57 for the stem and pre-dawn water potential, respectively), PVI (R2 = 0.90 and 0.57), TGI (R2 = 0.87 and 0.77). Furthermore, the canopy volume and the canopy area projected on the ground are affected by the water status, consistent with the stem and pre-dawn water potential measurements. The results obtained in this work demonstrate the feasibility of the proposed approach and the potential of robotic technologies in supporting precision viticulture.
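The reported R2 values quantify how well a vegetation index predicts water potential under a simple linear fit. A generic sketch of that computation (the authors' exact regression setup is not given in the abstract):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)          # least-squares line
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# e.g. r_squared(gdvi_values, stem_water_potential) would give a value
# comparable to the R2 figures quoted in the abstract.
```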

    Deep neural networks for grape bunch segmentation in natural images from a consumer-grade camera

    Precision agriculture relies on the availability of accurate knowledge of crop phenotypic traits at the sub-field level. While visual inspection by human experts has been traditionally adopted for phenotyping estimations, sensors mounted on field vehicles are becoming valuable tools to increase accuracy on a narrower scale and reduce execution time and labor costs as well. In this respect, automated processing of sensor data for accurate and reliable fruit detection and characterization is a major research challenge, especially when data consist of low-quality natural images. This paper investigates the use of deep learning frameworks for automated segmentation of grape bunches in color images from a consumer-grade RGB-D camera placed on board an agricultural vehicle. A comparative study, based on the estimation of two image segmentation metrics, i.e., the segmentation accuracy and the well-known Intersection over Union (IoU), is presented to estimate the performance of four pre-trained network architectures, namely AlexNet, GoogLeNet, VGG16, and VGG19. Furthermore, a novel strategy aimed at improving the segmentation of bunch pixels is proposed. It is based on an optimal threshold selection over the bunch probability maps, as an alternative to the conventional minimization of cross-entropy loss of mutually exclusive classes. Results obtained in field tests show that the proposed strategy improves the mean segmentation accuracy of the four deep neural networks by between 2.10% and 8.04%. Moreover, the comparative study of the four networks demonstrates that the best performance is achieved by VGG19, which reaches a mean segmentation accuracy on the bunch class of 80.58%, with an IoU value for the bunch class of 45.64%.
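The IoU metric and the idea of selecting an optimal threshold over a probability map can be illustrated generically. The sketch below scans candidate thresholds and keeps the one maximizing IoU against a validation mask; the authors' actual selection criterion and data are not reproduced here:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def best_threshold(prob_map, truth, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick the probability threshold that maximizes IoU on a labeled mask.

    Binarizing at a tuned threshold, instead of the implicit 0.5 of an
    argmax over mutually exclusive classes, can recover more of the
    minority (bunch) class.
    """
    return max((iou(prob_map >= t, truth), t) for t in candidates)
```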

    From the extraction of currently fulfilled requirements to value curves: a case-study in the field of harvesting machines for shell fruits and lessons learnt in engineering design

    The market for agricultural machinery is characterized by products with a high degree of maturity in the product life cycle. Consequently, current improvements in new machinery are predominantly incremental, and new projects largely reuse solutions that are already consolidated. This makes the domain appropriate for benchmarking existing systems and envisioning new value propositions. The present paper deals primarily with the former and uses value curves as a means to structure the comparison among different families of technical systems; in particular, machines that harvest shell fruits, e.g., chestnuts, walnuts, and hazelnuts, from the ground surface were investigated here. The process of building value curves requires the identification of currently fulfilled requirements. Despite the attention paid by engineering design research to requirements, a structured process is lacking to extract relevant information and create value curves or other representations useful for benchmarking. The present paper approaches this problem and presents how the authors identified relevant knowledge for characterizing different categories of harvesting machines. Namely, after an extensive search of the scientific literature and patents, a critical review of existing machines was performed, aimed at identifying their functioning principles, architecture, and ability to fulfill specific design requirements. Then, existing machines were classified into 8 main categories, and their strengths and weaknesses were identified with reference to 11 competing factors. The consequent construction of value curves enabled the identification of possible points of intervention by hypothesizing possible future evolutions of such machinery, from both a structural and a value-based perspective. Limitations regarding the repeatability of the followed approach and possible repercussions on design research are discussed.

    LiDARPheno – A Low-Cost LiDAR-Based 3D Scanning System for Leaf Morphological Trait Extraction

    The ever-growing world population poses a challenge for food security. Gene-modification tools have opened a new era of fast-paced research on the identification and development of new crops. However, a bottleneck in plant phenotyping technology restricts progress in linking genotype to phenotype, as phenotyping is the key to identifying potential crops with improved yield and resistance to a changing environment. Various attempts have been made to make plant phenotyping "high-throughput" using existing sensors and technology. However, the demand for "good" phenotypic information that can be linked to the genome for understanding gene-environment interactions remains unmet. Moreover, the available technologies and instruments are often inaccessible, expensive, and bulky. This work attempts to address some of these critical problems through the exploration and development of a low-cost LiDAR-based platform for phenotyping plants in the lab and in the field. A low-cost LiDAR-based system design, LiDARPheno, is introduced in this work to assess the feasibility of an inexpensive LiDAR sensor for leaf trait (length, width, and area) extraction. A detailed design of the LiDARPheno, based on low-cost and off-the-shelf components and modules, is presented, together with the firmware that controls the hardware setup and a user-level Python script for data acquisition. The software part of the system relies on publicly available libraries and Application Programming Interfaces (APIs), making the system easy for a non-technical user to implement. The LiDAR data analysis methods are presented, and algorithms for processing the data and extracting the leaf traits are developed. The processing includes conversion, cleaning/filtering, segmentation, and trait extraction from the LiDAR data.
Experiments on indoor plants and canola plants were performed for the development and validation of the trait-estimation methods. The results of the LiDARPheno-based trait extraction are compared with those of a SICK LMS400 (a commercial 2D LiDAR) to assess the performance of the developed system.
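The filtering and trait-extraction steps can be illustrated with a generic sketch: a simple distance-based outlier filter followed by length/width estimates from the principal-axis extents of a leaf point cloud. This is a stand-in under those assumptions, not the authors' algorithm:

```python
import numpy as np

def filter_outliers(points, k=2.0):
    """Drop points farther than mean + k*std from the centroid (simple cleaning)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return points[d <= d.mean() + k * d.std()]

def leaf_traits(points):
    """Approximate leaf length and width from PCA extents of an (N, 3) cloud.

    The principal axes are obtained via SVD of the centered points; the
    peak-to-peak extent along the first axis approximates leaf length,
    along the second axis leaf width.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T               # coordinates in the principal frame
    return np.ptp(proj[:, 0]), np.ptp(proj[:, 1])
```

Leaf area could then be estimated from the filtered projection (e.g., via a convex hull), which is why cleaning precedes trait extraction in the pipeline described above.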