
    Special issue on 'Terrestrial laser scanning': editors' notes

    In this editorial, we provide an overview of the content of the special issue on 'Terrestrial Laser Scanning'. The aim of this Special Issue is to bring together innovative developments and applications of terrestrial laser scanning (TLS), understood in a broad sense. Thus, although most contributions mainly involve laser-based systems, alternative technologies that also allow 3D point clouds to be obtained for the measurement and 3D characterization of terrestrial targets, such as photogrammetry, are also considered. The 15 published contributions focus mainly on three topics: TLS performance and point cloud processing, applications to civil engineering, and applications to plant characterization.

    Fuji-SfM dataset: A collection of annotated images and point clouds for Fuji apple detection and location using structure-from-motion photogrammetry

    The present dataset contains colour images acquired in a commercial Fuji apple orchard (Malus domestica Borkh. cv. Fuji) to reconstruct the 3D model of 11 trees by using structure-from-motion (SfM) photogrammetry. The data provided in this article are related to the research article entitled “Fruit detection and 3D location using instance segmentation neural networks and structure-from-motion photogrammetry” [1]. The Fuji-SfM dataset includes: (1) a set of 288 colour images and the corresponding annotations (apple segmentation masks) for training instance segmentation neural networks such as Mask R-CNN; (2) a set of 582 images defining a motion sequence of the scene, which was used to generate the 3D model of 11 Fuji apple trees containing 1455 apples by using SfM; (3) the 3D point cloud of the scanned scene with the corresponding apple-position ground truth in global coordinates. This is the first fruit detection dataset containing images acquired in a motion sequence to build the 3D model of the scanned trees with SfM, together with the corresponding 2D and 3D apple location annotations. These data allow the development, training, and testing of fruit detection algorithms based on RGB images, on coloured point clouds, or on a combination of both types of data. Primary data associated with the article: http://hdl.handle.net/10459.1/68505. This work was partly funded by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya (grant 2017 SGR 646), the Spanish Ministry of Economy and Competitiveness (project AGL2013-48297-C2-2-R) and the Spanish Ministry of Science, Innovation and Universities (project RTI2018-094222-B-I00). Part of the work was also developed within the framework of project TEC2016-75976-R, financed by the Spanish Ministry of Economy, Industry and Competitiveness and the European Regional Development Fund (ERDF). The Spanish Ministry of Education is thanked for Mr. J. Gené's pre-doctoral fellowship (FPU15/03355).
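
    A minimal sketch of how the three parts of the dataset might be loaded together in Python is given below; the directory names, file names and the plain-text format assumed for the 3D apple positions are illustrative assumptions, not the published layout of the Fuji-SfM dataset.

```python
# Sketch only: pair a training image with its apple segmentation mask and
# read the 3D apple ground-truth positions. The folder layout and file
# formats below are assumptions for illustration.
from pathlib import Path

import numpy as np
from PIL import Image

DATASET_ROOT = Path("Fuji-SfM")  # hypothetical root folder


def load_training_pair(stem: str):
    """Return an RGB image and its binary apple mask as numpy arrays."""
    image = np.asarray(Image.open(DATASET_ROOT / "images" / f"{stem}.jpg"))
    mask = np.asarray(Image.open(DATASET_ROOT / "masks" / f"{stem}.png")) > 0
    return image, mask


def load_apple_positions():
    """Read ground-truth apple centres (x, y, z) in global coordinates.

    Assumes a plain-text file with one 'x y z' triplet per line.
    """
    return np.loadtxt(DATASET_ROOT / "ground_truth" / "apple_positions.txt")


if __name__ == "__main__":
    img, msk = load_training_pair("example_0001")  # hypothetical file stem
    positions = load_apple_positions()
    print(img.shape, msk.sum(), positions.shape)
```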

    AKFruitYield: Modular benchmarking and video analysis software for Azure Kinect cameras for fruit size and fruit yield estimation in apple orchards

    AKFruitYield is modular software that allows orchard data from RGB-D Azure Kinect cameras to be processed for fruit size and fruit yield estimation. Specifically, two modules have been developed: i) AK_SW_BENCHMARKER, which makes it possible to apply different sizing algorithms and allometric yield prediction models to manually labelled color and depth tree images; and ii) AK_VIDEO_ANALYSER, which analyses videos to automatically detect apples, estimate their size and predict yield at the plot or per-hectare scale using the appropriate algorithms. Both modules have easy-to-use graphical interfaces and provide reports that can subsequently be used by other analysis tools. This work was partly funded by the Department of Research and Universities of the Generalitat de Catalunya (grant 2017 SGR 646) and by the Spanish Ministry of Science and Innovation/AEI/10.13039/501100011033/ERDF (grants RTI2018-094222-B-I00 [PAgFRUIT project] and PID2021-126648OB-I00 [PAgPROTECT project]). The Secretariat of Universities and Research of the Department of Business and Knowledge of the Generalitat de Catalunya and the European Social Fund (ESF) are also thanked for financing Juan Carlos Miranda's pre-doctoral fellowship (2020 FI_B 00586). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU. The authors would also like to thank the Institut de Recerca i Tecnologia Agroalimentàries (IRTA) for allowing the use of their experimental fields, and in particular Dr. Luís Asín and Dr. Jaume Lordán, who contributed to the success of this work.
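
    The sketch below illustrates the allometric step referred to above: per-fruit diameter estimates are converted to predicted weights and then scaled to a per-hectare yield. The model form and the coefficients are illustrative placeholders, not the models shipped with AKFruitYield.

```python
# Sketch only: convert detected fruit diameters to a yield estimate via a
# linear allometric weight model. Coefficients a and b are placeholders.
from typing import Sequence


def allometric_weight(diameter_mm: float, a: float = -120.0, b: float = 3.2) -> float:
    """Linear allometric model: weight (g) as a function of fruit diameter (mm)."""
    return a + b * diameter_mm


def yield_per_hectare(diameters_mm: Sequence[float], sampled_area_m2: float) -> float:
    """Scale the summed predicted weight of detected fruit to kg per hectare."""
    total_weight_g = sum(allometric_weight(d) for d in diameters_mm)
    return (total_weight_g / 1000.0) * (10_000.0 / sampled_area_m2)


if __name__ == "__main__":
    detected = [68.0, 72.5, 80.1, 75.3]  # diameters from video detections (mm)
    print(f"{yield_per_hectare(detected, sampled_area_m2=12.0):.1f} kg/ha")
```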

    Eye-safe lidar system for pesticide spray drift measurement

    Spray drift is one of the main sources of pesticide contamination. For this reason, an accurate understanding of this phenomenon is necessary in order to limit its effects. Nowadays, spray drift is usually studied using in situ collectors, which only allow time-integrated sampling at specific points of the pesticide clouds. Previous research has demonstrated that the light detection and ranging (lidar) technique can be an alternative for spray drift monitoring. This technique enables remote measurement of pesticide clouds with high temporal and distance resolution. Despite these advantages, the lack of a lidar instrument suitable for this application has appreciably limited its practical use. This work presents the first eye-safe lidar system specifically designed for the monitoring of pesticide clouds. The parameter design of this system was carried out via signal-to-noise ratio simulations. The instrument is based on a 3-mJ pulse-energy erbium-doped glass laser, an 80-mm diameter telescope, an APD optoelectronic receiver and optomechanically adjustable components. In initial test measurements, the lidar system was able to measure a topographic target located more than 2 km away. The instrument has also been used in spray drift studies, demonstrating its capability to monitor the temporal and distance evolution of several pesticide clouds emitted by air-assisted sprayers at distances between 50 and 100 m.
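
    A minimal sketch of the kind of signal-to-noise-oriented calculation behind such a parameter design is shown below, based on the single-scattering elastic lidar equation. The 3-mJ pulse energy and 80-mm telescope diameter are taken from the text; the pulse duration, system efficiency, backscatter and extinction values are illustrative assumptions.

```python
# Sketch only: return-signal power vs. range from the elastic lidar equation,
# for a homogeneous atmosphere. Values marked "assumed" are placeholders.
import numpy as np

E_PULSE = 3e-3   # pulse energy (J), from the text
D_TEL = 0.080    # telescope diameter (m), from the text
C = 3e8          # speed of light (m/s)
TAU = 10e-9      # pulse duration (s), assumed
ETA = 0.1        # overall system efficiency, assumed
BETA = 1e-6      # aerosol backscatter coefficient (m^-1 sr^-1), assumed
ALPHA = 1e-4     # extinction coefficient (m^-1), assumed


def received_power(r: np.ndarray) -> np.ndarray:
    """Return-signal power P(R) for a homogeneous atmosphere."""
    area = np.pi * (D_TEL / 2) ** 2
    p0 = E_PULSE / TAU  # peak transmitted power
    return p0 * ETA * (C * TAU / 2) * area * BETA / r**2 * np.exp(-2 * ALPHA * r)


if __name__ == "__main__":
    ranges = np.array([50.0, 100.0, 2000.0])
    for r, p in zip(ranges, received_power(ranges)):
        print(f"R = {r:6.0f} m -> P = {p:.3e} W")
```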

    Assessing automatic data processing algorithms for RGB-D cameras to predict fruit size and weight in apples

    Data acquired using an RGB-D Azure Kinect DK camera were used to assess different automatic algorithms to estimate the size and predict the weight of non-occluded and occluded apples. The programming of the algorithms included: (i) the extraction of images of regions of interest (ROI) using manual delimitation of bounding boxes or binary masks; (ii) estimation of the lengths of the major and minor geometric axes for the purpose of apple sizing; and (iii) prediction of the final weight by allometric modelling. In addition to the use of bounding boxes, the algorithms also allowed other post-mask settings (circles, ellipses and rotated rectangles) to be implemented, as well as different depth options (distance between the RGB-D camera and the detected fruit) for subsequent sizing through the application of thin lens theory. Both linear and nonlinear allometric models demonstrated the ability to predict apple weight with a high degree of accuracy (R2 greater than 0.942 and RMSE < 16 g). With respect to non-occluded apples, the best weight predictions were achieved using a linear allometric model that included both the major and minor axes of the apples as predictors. The mean absolute percentage error (MAPE) ranged from 5.1% to 5.7%, with RMSE between 11.09 g and 13.02 g, depending on whether circles, ellipses or bounding boxes were used to adjust fruit shape. The results are therefore promising and open up the possibility of implementing reliable in-field apple measurements in real time. Importantly, the final weight prediction error and the intermediate size estimation errors (from the sizing algorithms) interact, but in a way that is not easily quantifiable when allometric weight models with implicit prediction error are used. In addition, the allometric models should be reviewed when applied to other apple cultivars, fruit development stages or different fruit growth conditions depending on canopy management. This work was partly funded by the Department of Research and Universities of the Generalitat de Catalunya (grants 2017 SGR 646 and 2021 LLAV 00088), by the Spanish Ministry of Science and Innovation/AEI/10.13039/501100011033/ERDF (grants RTI2018-094222-B-I00 [PAgFRUIT project] and PID2021-126648OB-I00 [PAgPROTECT project]) and by the Spanish Ministry of Science and Innovation/AEI/10.13039/501100011033/European Union NextGenerationEU/PRTR (grant TED2021-131871B-I00 [DIGIFRUIT project]). We would also like to thank the Secretariat of Universities and Research of the Department of Business and Knowledge of the Generalitat de Catalunya and the European Social Fund (ESF) for financing Juan Carlos Miranda's pre-doctoral fellowship (2020 FI_B 00586). The work of Jordi Gené-Mola was supported by the Spanish Ministry of Universities through a Margarita Salas postdoctoral grant funded by the European Union - NextGenerationEU.
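
    The two modelling steps described above can be sketched as follows: a thin-lens (pinhole) back-projection converts an axis length in pixels and the camera-to-fruit depth into a metric axis length, and a linear allometric model predicts weight from the major and minor axes. The focal length and the model coefficients below are illustrative assumptions, not the fitted values reported in the study.

```python
# Sketch only: thin-lens sizing followed by a linear allometric weight model.
# Focal length and coefficients a, b, c are placeholders.
FOCAL_LENGTH_PX = 600.0  # RGB-D colour camera focal length (pixels), assumed


def axis_length_mm(axis_px: float, depth_mm: float,
                   focal_px: float = FOCAL_LENGTH_PX) -> float:
    """Thin-lens back-projection: real size = pixel extent * depth / focal length."""
    return axis_px * depth_mm / focal_px


def apple_weight_g(major_mm: float, minor_mm: float,
                   a: float = -180.0, b: float = 2.1, c: float = 2.0) -> float:
    """Linear allometric weight model on both geometric axes (coefficients assumed)."""
    return a + b * major_mm + c * minor_mm


if __name__ == "__main__":
    major = axis_length_mm(axis_px=95.0, depth_mm=520.0)  # ~82 mm
    minor = axis_length_mm(axis_px=86.0, depth_mm=520.0)  # ~75 mm
    print(f"major {major:.1f} mm, minor {minor:.1f} mm, "
          f"predicted weight {apple_weight_g(major, minor):.0f} g")
```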

    Characterisation of the LMS200 laser beam under the influence of blockage surfaces. Influence on 3D scanning of tree orchards

    The geometric characterisation of tree orchards is a high-precision activity comprising the accurate measurement and knowledge of the geometry and structure of the trees. Different types of sensors can be used to perform this characterisation. In this work, a terrestrial LIDAR sensor (SICK LMS200) whose emission source was a 905-nm pulsed laser diode was used. Given the known dimensions of the laser beam cross-section (diameters ranging from 12 mm at the point of emission to 47.2 mm at a distance of 8 m) and the known dimensions of the elements that make up the crops under study (flowers, leaves, fruits, branches, trunks), it was anticipated that, for much of the time, the laser beam would only partially hit a foreground target/object, with the consequent problem of mixed pixels or edge effects. Understanding what happens in such situations was the principal objective of this work. With this in mind, a series of tests was set up to determine the geometry of the emitted beam and the response of the sensor to different beam blockage scenarios. The main conclusions drawn from the results were: (i) in a partial beam blockage scenario, the distance value given by the sensor depends more on the blocked radiant power than on the blocked surface area; (ii) there is a zone of influence on the measurements, dependent on the percentage of blockage, that extends from 1.5 to 2.5 m beyond the foreground target/object; if the laser beam impacts a second target/object located within this range, the measurement given by the sensor is affected. To interpret the information obtained from the point clouds provided by LIDAR sensors, such as the occupied volume and the enclosing area, it is necessary to know the resolution and the process used to obtain this mesh of points, and to be aware of the problem associated with mixed pixels. This research was funded by FEDER (Fondo Europeo de Desarrollo Regional) and CICYT (Comisión Interministerial de Ciencia y Tecnología, Spain) under agreements AGL2002-04260-C04-02 and AGL2010-22304-C04-03. LMS200 and SICK are trademarks of SICK AG, Germany.
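
    The footprint arithmetic underlying the mixed-pixel discussion can be sketched as follows, interpolating the beam diameter linearly between the reported 12 mm at emission and 47.2 mm at 8 m and estimating the fraction of the footprint intercepted by a straight foreground edge. The uniform-irradiance assumption is a simplification, since the study found the sensor response to depend on blocked radiant power rather than blocked area.

```python
# Sketch only: beam footprint at range and the geometric fraction of that
# footprint cut off by a foreground edge, assuming uniform irradiance.
import math

D0_MM = 12.0   # beam diameter at emission (mm), from the text
D8_MM = 47.2   # beam diameter at 8 m (mm), from the text


def beam_diameter_mm(range_m: float) -> float:
    """Linearly interpolated beam diameter at a given range."""
    return D0_MM + (D8_MM - D0_MM) * range_m / 8.0


def blocked_fraction(range_m: float, edge_offset_mm: float) -> float:
    """Fraction of the circular footprint cut off by a straight edge.

    edge_offset_mm is the signed distance from the beam centre to the edge
    (positive means the edge lies past the centre, so more of the beam is blocked).
    """
    r = beam_diameter_mm(range_m) / 2.0
    x = max(-r, min(r, edge_offset_mm))
    # area of the circular segment on the blocked side, normalised by the circle area
    seg = r * r * math.acos(-x / r) + x * math.sqrt(r * r - x * x)
    return seg / (math.pi * r * r)


if __name__ == "__main__":
    print(f"footprint at 4 m: {beam_diameter_mm(4.0):.1f} mm")
    print(f"blocked fraction, edge at beam centre: {blocked_fraction(4.0, 0.0):.2f}")
```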