
    Eye-safe lidar system for pesticide spray drift measurement

    Spray drift is one of the main sources of pesticide contamination. For this reason, an accurate understanding of this phenomenon is necessary in order to limit its effects. Nowadays, spray drift is usually studied using in situ collectors, which only allow time-integrated sampling at specific points of the pesticide clouds. Previous research has demonstrated that the light detection and ranging (lidar) technique can be an alternative for spray drift monitoring, enabling remote measurement of pesticide clouds with high temporal and range resolution. Despite these advantages, the lack of a lidar instrument suited to this application has appreciably limited its practical use. This work presents the first eye-safe lidar system specifically designed for the monitoring of pesticide clouds. The parameter design of the system was carried out via signal-to-noise ratio simulations. The instrument is based on a 3 mJ pulse-energy erbium-doped glass laser, an 80 mm diameter telescope, an APD optoelectronic receiver and optomechanically adjustable components. In first test measurements, the lidar system was able to measure a topographic target located over 2 km away. The instrument has also been used in spray drift studies, demonstrating its capability to monitor the temporal and range evolution of several pesticide clouds emitted by air-assisted sprayers at distances between 50 and 100 m.
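    The kind of signal-to-noise ratio simulation mentioned above can be sketched with the single-scattering lidar equation. The pulse energy (3 mJ) and telescope diameter (80 mm) come from the abstract; the backscatter, extinction, efficiency and receiver-noise values below are illustrative assumptions, not the authors' design figures.

```python
import numpy as np

E = 3e-3                        # pulse energy (J), as reported for the Er:glass laser
c = 3e8                         # speed of light (m/s)
A = np.pi * (0.080 / 2) ** 2    # receiving area of the 80 mm telescope (m^2)
beta = 1e-6                     # assumed volume backscatter coefficient (m^-1 sr^-1)
alpha = 1e-4                    # assumed atmospheric extinction coefficient (m^-1)
eta = 0.5                       # assumed overall system efficiency
noise_power = 1e-9              # assumed noise-equivalent power of the APD receiver (W)

def received_power(R):
    """Single-scattering lidar equation: P(R) = (E*c/2) * eta * A * beta * T(R)^2 / R^2."""
    transmission = np.exp(-2 * alpha * R)   # two-way atmospheric transmission
    return E * c / 2 * eta * A * beta * transmission / R**2

def snr(R):
    """Crude SNR estimate: received power over noise-equivalent power."""
    return received_power(R) / noise_power

for R in (50.0, 100.0, 2000.0):
    print(f"R = {R:6.0f} m  ->  SNR = {snr(R):.1f}")
```

The 1/R² dependence is why the instrument is characterised both at short range (50–100 m drift clouds) and against a hard target at 2 km.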

    LiDAR simulation in modelled orchards to optimise the use of terrestrial laser scanners and derived vegetative measures

    Light detection and ranging (LiDAR) technology is beginning to have an impact on agriculture. Canopy volume and/or fruit tree leaf area can be estimated using terrestrial laser sensors based on this technology. However, these devices can be operated with different configurations of resolution and scanning mode. As a consequence, data accuracy and LiDAR-derived parameters are affected by the sensor configuration, and may vary according to the vegetative characteristics of the tree crops. Given this scenario, users and suppliers of these devices need to know how to use the sensor in each case. This paper presents a computer program to determine the best configuration, allowing simulation and evaluation of different LiDAR configurations in various tree structures (or training systems). The ultimate goal is to optimise the use of laser scanners in field operations. The software presented generates a virtual orchard and then allows the scanning to be simulated with a laser sensor. Trees are created using a hidden Markov tree (HMT) model. Varying the foliar structure of the orchard, the LiDAR simulation was applied to twenty different artificially created orchards, with and without leaves, from two positions (lateral and zenithal). To validate the laser sensor configuration, the leaf surface of the simulated trees was compared with the parameters obtained from the LiDAR measurements: the impacted leaf area, the impacted total area (leaves and wood), and the impacted area in the three outer layers of leaves.

    LiDAR system for spray drift assessment. Measurements with different nozzle types and sizes

    Eduard Gregorio1, Xavier Torrent1, Santiago Planas1, Joan R. Rosell-Polo1. 1 Research Group on AgroICT and Precision Agriculture (GRAP), Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL) – Agrotecnio Center, Lleida, Spain. ([email protected]) Abstract: Spray drift is one of the main problems associated with the application of plant protection products, since it entails serious risks for human and animal health and is a major source of pollution. The methods usually employed to assess drift in the field are very costly in terms of both human resources and time, and additionally require a posteriori chemical analyses. Given the need for more efficient assessment methods, an eye-safe LiDAR (light detection and ranging) system has been developed, specifically designed for the detection and measurement of spray drift in the field. It is an active remote sensing instrument based on a pulsed laser emitter with a wavelength of 1.5 µm and a pulse energy of 3 mJ, and a receiving aperture of 80 mm diameter. The system can monitor drift clouds in real time and with high range resolution. This work presents 23 spraying trials performed with the sprayer in static operation, in which the LiDAR system was used to measure the drift generated. The LiDAR measurements clearly distinguished conventional nozzles from drift-reducing nozzles, the latter providing a drift reduction of between 88.6% and 93.6%. It was also possible to rank conventional nozzles of different sizes according to the reduction potentials determined with the LiDAR. Keywords: laser, sprayer, remote sensing, plant protection products, drift-reducing nozzles.
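    The drift-reduction figures in the 88.6–93.6% band can be sketched with the usual reduction formula, taking range- and time-integrated LiDAR backscatter as the drift proxy. The formula reflects the text; the numerical values below are arbitrary illustrative inputs, not measurements from the trials.

```python
def drift_reduction_pct(drift_candidate: float, drift_reference: float) -> float:
    """Percentage drift reduction of a candidate nozzle relative to a
    conventional reference nozzle: DR% = (1 - candidate/reference) * 100."""
    return (1.0 - drift_candidate / drift_reference) * 100.0

# Illustrative integrated-backscatter values (arbitrary units):
conventional = 1000.0
low_drift = 90.0
print(f"Drift reduction: {drift_reduction_pct(low_drift, conventional):.1f}%")
# -> Drift reduction: 91.0%, i.e. within the reported 88.6-93.6% band
```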

    Deciduous tree reconstruction algorithm based on cylinder fitting from mobile terrestrial laser scanned point clouds

    Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods to build three-dimensional models of orchards. The cylinder fitting method for woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs the reconstruction in a single step. The most time-consuming part of the algorithm is the computation of the cylinder direction, which must be recalculated each time a point is added to the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee a unique convergence, and the reconstruction parameter values must be carefully chosen. A balanced processing of clusters has also been defined which has proven to be very efficient in terms of processing time by following the hierarchy of branches, predecessors and successors. The algorithm was applied to simulated MTLS data of virtual orchard models and to MTLS data of real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic time.
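    The direction-estimation step that dominates the running time can be sketched as follows. This is an assumed PCA/SVD-based formulation of "cylinder direction from the points currently assigned to it", not necessarily the paper's exact update rule; the synthetic branch segment is made up for illustration.

```python
import numpy as np

def axis_direction(points: np.ndarray) -> np.ndarray:
    """Estimate a cylinder's axis as the dominant principal direction of an
    (N, 3) array of points: the first right singular vector of the centred data."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit vector (sign is arbitrary)

# Synthetic branch segment: points scattered around the z axis.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=(200, 1))
points = t * np.array([0.0, 0.0, 1.0]) + 0.01 * rng.standard_normal((200, 3))

d = axis_direction(points)
print("estimated axis:", np.round(np.abs(d), 2))  # close to [0, 0, 1]
```

Recomputing this for every added point is what makes the step expensive; incremental covariance updates are a common way to amortise it.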

    Use of convolutional neural networks for remote fruit detection with RGB-D cameras

    Remote fruit detection will be an indispensable tool for the optimised and sustainable agronomic management of the fruit plantations of the future, with applications in yield forecasting, harvesting robotics and production mapping. This work proposes the use of RGB-D depth cameras for fruit detection and subsequent 3D localisation. The data acquisition equipment consists of a self-propelled ground platform fitted with two Microsoft Kinect v2 sensors and an RTK-GNSS positioning system, both connected to a field computer that communicates with the sensors through ad hoc software. With this equipment, 3 rows of Fuji apple trees in a commercial orchard were scanned. The acquired dataset comprises 110 captures containing a total of 12,838 Fuji apples. Fruit detection was performed on the RGB data (colour images provided by the sensor) by implementing and training a Faster R-CNN object detection convolutional neural network. The depth data (depth image provided by the sensor) were used to generate the 3D point clouds, while the position data allowed each capture to be georeferenced. Test results show a detection rate of 91.4% of the fruits with 15.9% false positives (F1-score = 0.876). Qualitative evaluation of the detections shows that the false positives correspond to image regions with a pattern very similar to an apple, where even the human eye finds it difficult to determine whether an apple is present. The undetected apples, in turn, correspond to fruit almost entirely occluded by other vegetative organs (leaves or branches), to apples cut off at the image margins, or to human errors in the dataset labelling process.
    The average processing rate was 17.3 images per second, which allows real-time application. From the experimental results it is concluded that the Kinect v2 sensor has great potential for fruit detection and 3D localisation. The main limitation of the system is that the performance of the depth sensor is degraded under high-illumination conditions. Keywords: depth cameras, RGB-D, fruit detection, convolutional neural networks, agricultural robotics.
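    The reported F1-score follows directly from the two quoted rates: 91.4% of fruits detected is the recall, and 15.9% false positives implies a precision of 1 − 0.159. A quick consistency check:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = 1.0 - 0.159   # 15.9% of detections were false positives
recall = 0.914            # 91.4% of fruits were detected
print(f"F1 = {f1_score(precision, recall):.3f}")  # -> F1 = 0.876, as reported
```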

    Advanced technologies for the improvement of spray application techniques in Spanish viticulture: an overview

    Spraying techniques have been undergoing continuous evolution in recent decades. This paper presents part of the research work carried out in Spain in the field of sensors for characterizing vineyard canopies and monitoring spray drift in order to improve vineyard spraying and make it more sustainable. Some methods and geostatistical procedures for mapping vineyard parameters are proposed, and the development of a variable rate sprayer is described. All these technologies are interesting in terms of adjusting the amount of pesticides applied to the target canopy.

    Looking behind occlusions: A study on amodal segmentation for robust on-tree apple fruit size estimation

    The detection and sizing of fruits with computer vision methods is of interest because it provides relevant information to improve the management of orchard farming. However, the presence of partially occluded fruits limits the performance of existing methods, making reliable fruit sizing a challenging task. While previous fruit segmentation works limit segmentation to the visible region of fruits (known as modal segmentation), in this work we propose an amodal segmentation algorithm to predict the complete shape, which includes its visible and occluded regions. To do so, an end-to-end convolutional neural network (CNN) for simultaneous modal and amodal instance segmentation was implemented. The predicted amodal masks were used to estimate the fruit diameters in pixels. Modal masks were used to identify the visible region and measure the distance between the apples and the camera using the depth image. Finally, the fruit diameters in millimetres (mm) were computed by applying the pinhole camera model. The method was developed with a Fuji apple dataset consisting of 3925 RGB-D images acquired at different growth stages with a total of 15,335 annotated apples, and was subsequently tested in a case study to measure the diameter of Elstar apples at different growth stages. Fruit detection results showed an F1-score of 0.86 and the fruit diameter results reported a mean absolute error (MAE) of 4.5 mm and R² = 0.80 irrespective of fruit visibility. Besides the diameter estimation, modal and amodal masks were used to automatically determine the percentage of visibility of measured apples. This feature was used as a confidence value, improving the diameter estimation to MAE = 2.93 mm and R² = 0.91 when limiting the size estimation to fruits detected with a visibility higher than 60%. The main advantages of the present methodology are its robustness for measuring partially occluded fruits and the capability to determine the visibility percentage.
    The main limitation is that the depth images were generated by means of photogrammetry methods, which limits the efficiency of data acquisition. To overcome this limitation, future works should consider the use of commercial RGB-D sensors. The code and the dataset used to evaluate the method have been made publicly available at https://github.com/GRAP-UdL-AT/Amodal_Fruit_Sizing. This work was partly funded by the Departament de Recerca i Universitats de la Generalitat de Catalunya (grant 2021 LLAV 00088), the Spanish Ministry of Science, Innovation and Universities (grants RTI2018-094222-B-I00 [PAgFRUIT project], PID2021-126648OB-I00 [PAgPROTECT project] and PID2020-117142GB-I00 [DeeLight project] by MCIN/AEI/10.13039/501100011033 and by “ERDF, a way of making Europe”, by the European Union). The work of Jordi Gené Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU. We would also like to thank Nufri (especially Santiago Salamero and Oriol Morreres) for their support during data acquisition, and Pieter van Dalfsen and Dirk de Hoog from Wageningen University & Research for additional data collection used in the case study.
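    The pinhole-camera step described above (pixel diameter plus depth to physical diameter) reduces to a one-line similarity relation. The function below is a minimal sketch of that relation; the focal length and example values are illustrative assumptions, not the study's calibration.

```python
def diameter_mm(diameter_px: float, depth_mm: float, focal_px: float) -> float:
    """Pinhole camera model: size_world = size_image * depth / focal_length,
    with the focal length expressed in pixels."""
    return diameter_px * depth_mm / focal_px

# Example: an apple spanning 60 px, measured at 1.2 m depth from its visible
# (modal) region, imaged by a camera with an assumed focal length of 900 px.
print(f"{diameter_mm(60, 1200.0, 900.0):.1f} mm")  # -> 80.0 mm
```

This also shows why depth accuracy matters: any relative error in the depth of the visible region propagates one-to-one into the estimated diameter.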

    Discriminating Crop, Weeds and Soil Surface with a Terrestrial LIDAR Sensor

    In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor were evaluated for detecting and discriminating maize plants and weeds from the soil surface using distance and reflection measurements. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor that used a single index, the height profile; the current system uses a combination of the two indexes (distance and reflection). The experiment was carried out in a maize field at growth stage 12–14, at 16 different locations selected to represent the widest possible density range of four weed species: Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing to the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), the actual heights of the plants were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured heights and actual plant heights (R² = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor. This permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a canonical discriminant analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection, which in combination with other principles, such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
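    The vegetation-versus-soil classification step can be sketched as a two-feature binary logistic regression. The data below are synthetic (assumed class means and spreads, not the study's measurements), and the plain gradient-descent fit is an illustrative stand-in for whatever statistics package the authors used.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Gradient-descent fit of w, b for P(y=1 | x) = sigmoid(x.w + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

rng = np.random.default_rng(1)
# Feature 1: measured height (m); feature 2: reflection intensity (normalised).
# Soil returns: near-zero height, dimmer; vegetation: taller and brighter.
soil = np.column_stack([rng.normal(0.00, 0.02, 100), rng.normal(0.3, 0.05, 100)])
veg = np.column_stack([rng.normal(0.25, 0.08, 100), rng.normal(0.6, 0.05, 100)])
X = np.vstack([soil, veg])
y = np.r_[np.zeros(100), np.ones(100)]   # 0 = soil, 1 = vegetation

w, b = fit_logistic(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

With well-separated synthetic classes the fit is nearly perfect; the study's 95% figure reflects the harder, real-field overlap between short weeds and bare soil.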