
    Ground Profile Recovery from Aerial 3D LiDAR-based Maps

    The paper presents the study and implementation of a ground-detection methodology that filters and removes forest points from a LiDAR-based 3D point cloud using the Cloth Simulation Filtering (CSF) algorithm. The methodology recovers the terrestrial relief and creates a landscape map of a forested region. As a proof of concept, we performed an outdoor flight experiment, flying a hexacopter over a mixed forest region with sharp ground changes near Innopolis (Russia), which demonstrated encouraging results for both ground detection and methodology robustness.
    Comment: 8 pages, FRUCT-2019 conference
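The core idea of ground filtering can be illustrated with a much simpler stand-in than CSF: bin the cloud into an XY grid and treat points near each cell's lowest return as ground. This sketch is not the CSF algorithm (which drapes a simulated cloth over the inverted cloud); the cell size, tolerance, and sample points are invented for illustration.

```python
# Simplified ground/non-ground separation: a crude stand-in for CSF.
# Bin points into an XY grid; points within `tol` of the cell's lowest
# return are labelled ground, the rest (e.g. canopy hits) are removed.

def extract_ground(points, cell=1.0, tol=0.3):
    """points: list of (x, y, z); returns (ground, non_ground) lists."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    ground, non_ground = [], []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        (ground if z - lowest[key] <= tol else non_ground).append((x, y, z))
    return ground, non_ground

# Invented sample: low returns are terrain, high z values are canopy hits.
pts = [(0.2, 0.3, 0.0), (0.4, 0.6, 0.1), (0.5, 0.5, 4.2),
       (1.3, 0.2, 0.05), (1.6, 0.8, 6.0)]
ground, canopy = extract_ground(pts)
print(len(ground), len(canopy))  # -> 3 2
```

A real CSF pass adds cloth rigidity and iterative settling, which handles slopes and sharp ground changes far better than a per-cell minimum.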

    Seamless integration of above- and under-canopy unmanned aerial vehicle laser scanning for forest investigation

    Background: Current automated forest investigation faces a dilemma: how to achieve high tree- and plot-level completeness while maintaining high cost and labor efficiency. This study tackles the challenge by exploring a new concept that enables an efficient fusion of aerial and terrestrial perspectives for digitizing and characterizing individual trees in forests through an Unmanned Aerial Vehicle (UAV) that flies above and under canopies in a single operation. The advantage of such a concept is that the aerial perspective from the above-canopy UAV and the terrestrial perspective from the under-canopy UAV can be seamlessly integrated in one flight, thus granting access to simultaneously high completeness, high efficiency, and low cost.
    Results: In the experiment, an approximately 0.5 ha forest was covered in ca. 10 min from takeoff to landing. The GNSS-IMU-based positioning supports a geometric accuracy of the produced point cloud equivalent to that of mobile mapping systems, which leads to a 2-4 cm RMSE for the diameter-at-breast-height estimates and a 4-7 cm RMSE for the stem-curve estimates.
    Conclusions: Results of the experiment suggested that the integrated flight is capable of combining the high completeness of upper canopies from the above-canopy perspective with the high completeness of stems from the terrestrial perspective. Thus, it is a solution that combines the advantages of terrestrial static, mobile, and above-canopy UAV observations, which is a promising step toward a fully autonomous in situ forest inventory. Future studies should aim to further improve the platform positioning and to automate the UAV operation.
    Peer reviewed
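The reported 2-4 cm diameter RMSE is a straightforward statistic to reproduce once paired field and point-cloud measurements exist. The sketch below shows the computation on invented values; the function name and the sample diameters are not from the paper.

```python
import math

def rmse(estimates, references):
    """Root-mean-square error between estimated and reference values."""
    assert len(estimates) == len(references)
    return math.sqrt(sum((e - r) ** 2
                         for e, r in zip(estimates, references)) / len(estimates))

# Hypothetical diameter-at-breast-height values in metres:
# field-tape references vs. point-cloud estimates.
field = [0.30, 0.25, 0.42, 0.18]
lidar = [0.32, 0.24, 0.45, 0.17]
print(round(rmse(lidar, field), 3))  # -> 0.019 (i.e. ~2 cm)
```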

    Enhancing methods for under-canopy unmanned aircraft system based photogrammetry in complex forests for tree diameter measurement

    The application of Unmanned Aircraft Systems (UAS) beneath the forest canopy provides a potentially valuable alternative to ground-based measurement techniques in areas of dense canopy cover and undergrowth. This research presents results from a study of a consumer-grade UAS flown under the forest canopy in challenging forest and terrain conditions. The UAS was deployed to assess under-canopy UAS photogrammetry as an alternative to field measurements for obtaining stem diameters as well as ultra-high-resolution (~400,000 points/m²) 3D models of forest study sites. A total of 378 tape-based diameter measurements were collected from 99 stems in a native, unmanaged Eucalyptus pulchella forest with mixed understory conditions and steep terrain. These measurements were used as a baseline to evaluate the accuracy of diameter measurements from under-canopy UAS-based photogrammetric point clouds. The diameter measurement accuracy was evaluated without the influence of a digital terrain model using an innovative tape-based method. A practical and detailed methodology is presented for the creation of these point clouds. Lastly, a metric called the Circumferential Completeness Index (CCI) was defined to address the absence of a clearly defined measure of point coverage when measuring stem diameters from forest point clouds. The measurement of the mean CCI is suggested for use in future studies to enable a consistent comparison of the coverage of forest point clouds using different sensors, point densities, trajectories, and methodologies. Root-mean-squared errors of diameter measurements were 0.011 m at Site 1 and 0.021 m at the more challenging Site 2. The point clouds in this study had a mean validated CCI of 0.78 for Site 1 and 0.70 for Site 2, with a mean unvalidated CCI of 0.86 for Site 1 and 0.89 for Site 2.
    The results of this study demonstrate that under-canopy UAS photogrammetry shows promise as a practical alternative to traditional field measurements; however, these results currently rely on the operator's knowledge of photogrammetry and their ability to fly manually in object-rich environments. Future work should pursue solutions for autonomous operation, more complete point clouds, and a method for providing scale to point clouds when global navigation satellite systems are unavailable.
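One plausible reading of the Circumferential Completeness Index is the fraction of angular sectors around a stem's cross-section centre that contain at least one point. The sketch below follows that reading; the sector count (36) and the half-scanned stem data are assumptions for illustration, not the paper's exact definition.

```python
import math

def cci(points, center, sectors=36):
    """Fraction of angular sectors around `center` that contain at least
    one point -- a simplified reading of the Circumferential
    Completeness Index. points: iterable of (x, y); center: (cx, cy)."""
    occupied = set()
    cx, cy = center
    for x, y in points:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        occupied.add(int(angle / (2 * math.pi / sectors)))
    return len(occupied) / sectors

# A half-scanned stem of radius 0.15 m: points only span about half the
# circumference (roughly -86 deg to +86 deg), as seen from one side.
pts = [(math.cos(a) * 0.15, math.sin(a) * 0.15)
       for a in [i * 0.1 - 1.5 for i in range(31)]]
print(round(cci(pts, (0.0, 0.0)), 2))  # -> 0.5
```

A stem scanned from all sides would approach a CCI of 1.0, which is why mean CCI serves as a comparable coverage score across sensors and trajectories.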

    The Use of Agricultural Robots in Orchard Management

    A book chapter that summarizes recent research on agricultural robotics in orchard management, including robotic pruning, thinning, spraying, harvesting, fruit transportation, and future trends.
    Comment: 22 pages

    Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion

    Agricultural mobile robots have great potential to effectively implement different agricultural tasks. They can save human labour costs, avoid the need for people to perform risky operations, and increase productivity. Automation and advanced sensing technologies can provide up-to-date information that helps farmers in orchard management. Data collected from on-board sensors on a mobile robot can help the farmer detect tree or fruit diseases or damage, measure tree canopy volume, and monitor fruit development. In orchards, trees are natural landmarks providing suitable cues for mobile robot localisation and navigation, as trees are nominally planted in straight and parallel rows. This thesis presents a novel tree trunk detection algorithm that detects trees and discriminates between trees and non-tree objects in the orchard using camera and 2D laser scanner data fusion. A local orchard map of the individual trees was developed, allowing the mobile robot to navigate to a specific tree in the orchard to perform a specific task such as tree inspection. Furthermore, this thesis presents a localisation algorithm that does not rely on GPS positions and depends only on the on-board sensors of the mobile robot, without adding any artificial landmarks, reflective tapes, or tags to the trees. The novel tree trunk detection algorithm combined features extracted from a low-cost camera's images with 2D laser scanner data to increase the robustness of the detection. The developed algorithm used a new method to detect the edge points and determine the width of the tree trunks and non-tree objects from the laser scan data. Then a projection of the edge points from the laser scanner coordinates to the image plane was implemented to construct a region of interest with the required features for tree trunk colour and edge detection. The camera images were used to verify the colour and the parallel edges of the tree trunks and non-tree objects.
    The algorithm automatically adjusted the colour detection parameters after each test, which was shown to increase the detection accuracy. The orchard map was constructed based on tree trunk detection and consisted of the 2D positions of the individual trees and non-tree objects. The map of the individual trees was used as an a priori map for mobile robot localisation. A data fusion algorithm based on an Extended Kalman Filter was used for pose estimation of the mobile robot along different paths (midway between rows, close to the rows, and moving around trees in the row) and through different turns (semi-circle and right-angle turns) required for tree inspection tasks. The 2D positions of the individual trees were used in the correction step of the Extended Kalman Filter to enhance localisation accuracy. Experimental tests were conducted in a simulated environment and in a real orchard to evaluate the performance of the developed algorithms. The tree trunk detection algorithm was evaluated under two broad illumination conditions (sunny and cloudy). The algorithm was able to detect the tree trunks (regular and thin) and discriminate between trees and non-tree objects with a detection accuracy of 97%, showing that the fusion of vision and 2D laser scanner technologies produced robust tree trunk detection. The mapping method successfully localised all the trees and non-tree objects of the tested tree rows in the orchard environment. The mapping results indicated that the constructed map can be reliably used for mobile robot localisation and navigation. The localisation algorithm was evaluated against logged RTK-GPS positions for different paths and headland turns. The averages of the RMS position errors in the x and y coordinates and in Euclidean distance were 0.08 m, 0.07 m, and 0.103 m respectively, whilst the average RMS heading error was 3.32°.
    These results were considered acceptable for driving along the rows and executing headland turns in the target application of autonomous mobile robot navigation and tree inspection tasks in orchards.
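The edge-point idea behind the trunk-width step can be sketched simply: split a 2D scan into segments wherever consecutive ranges jump, then take the chord between each segment's two edge points as the object width. This is a minimal stand-in, not the thesis's algorithm; the jump threshold and the synthetic scan are invented.

```python
import math

def detect_objects(ranges, angle_step, jump=0.5):
    """Split a 2D laser scan into segments at range discontinuities and
    return (width, mean_range) per segment -- a simplified take on
    edge-point detection for trunk-width estimation.
    ranges: one range (m) per beam, scanning in equal angular steps."""
    segments, start = [], 0
    for i in range(1, len(ranges) + 1):
        if i == len(ranges) or abs(ranges[i] - ranges[i - 1]) > jump:
            a0, a1 = start * angle_step, (i - 1) * angle_step
            p0 = (ranges[start] * math.cos(a0), ranges[start] * math.sin(a0))
            p1 = (ranges[i - 1] * math.cos(a1), ranges[i - 1] * math.sin(a1))
            width = math.dist(p0, p1)  # chord between the two edge points
            segments.append((width, sum(ranges[start:i]) / (i - start)))
            start = i
    return segments

# Synthetic scan (1 deg beam spacing): far background, then a trunk
# subtending ~4 deg at ~2 m (chord ~0.14 m), then background again.
scan = [8.0] * 5 + [2.0, 1.97, 1.96, 1.97, 2.0] + [8.0] * 5
segs = detect_objects(scan, angle_step=math.radians(1.0))
# segs[1] is the trunk: width ~0.14 m at mean range 1.98 m
```

In the thesis, widths like these are fused with camera colour and edge cues to reject non-tree objects; width alone cannot distinguish a trunk from a post.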

    Methods and Applications of 3D Ground Crop Analysis Using LiDAR Technology: A Survey

    Light Detection and Ranging (LiDAR) technology is positioning itself as one of the most effective non-destructive methods for collecting accurate information on ground crop fields, as the analysis of the three-dimensional models it can generate allows several key parameters to be measured quickly (such as yield estimation, aboveground biomass, vegetation index estimation, plant phenotyping, and automatic control of agricultural robots or machinery, among others). In this survey, we systematically analyze 53 research papers published between 2005 and 2022 that involve significant use of LiDAR technology applied to the three-dimensional analysis of ground crops. Different dimensions are identified for classifying the surveyed papers (including application areas, crop species under study, LiDAR scanner technologies, mounting platform technologies, and the use of additional instrumentation and software tools). From our survey, we draw relevant conclusions about the use of LiDAR technologies, such as identifying a hierarchy of scanning platforms and their frequency of use, and establishing the trade-off between the economic cost of deploying LiDAR and the agronomically relevant information that can effectively be acquired. We also conclude that none of the approaches under analysis tackles the problem of working with multiple species using the same setup and configuration, which shows the need for instrument calibration and algorithmic fine-tuning for an effective application of this technology.
    Authors: Micheletto, Matías Javier (CONICET, Centro de Investigaciones y Transferencia Golfo San Jorge, Argentina); Chesñevar, Carlos Iván (Universidad Nacional del Sur / CONICET, Argentina); Santos, Rodrigo Martín (Universidad Nacional del Sur / CONICET, Argentina)

    Sub-Canopy Path Planning for Snow Depth Remote Sensing Using Autonomous Multi-UAVs

    NASA studies indicate that 68% of the Earth's fresh water exists in the form of snow and ice. As such, analyzing global snowfall patterns is a useful tool with which scientists can extract the quantity of fresh water present in both the atmosphere and on the ground at any given time. The goal of this research is to leverage autonomous Unpiloted Aerial Vehicles (UAVs) to measure snow depth on the forest floor via sub-canopy flight. To enable such remote sensing missions, overhead Light Detection and Ranging (LiDAR) scans are used to aid in pre-determined UAV flight path planning. This results in autonomous sub-canopy missions that are able to avoid obstacles (e.g., trees, branches, and flora) and provide optimal LiDAR-based snow depth measurements. The A-star (A*) algorithm is the chosen path planning method for this research and is used to determine appropriate flight plans for multi-UAV missions.
    Ox Bow Farm, Kingman Farm, and Thompson Farm are evaluated and have sub-canopy tree densities of 60-90%, 20-42%, and 25-55% respectively. Proof-of-concept testing is performed at the University of New Hampshire Kingman Farm in Madbury, New Hampshire. Field tests show that this method is viable for under-canopy snow depth measurements when tree density is below 20%. In addition to the added efficiency of an autonomous multi-UAV mission (as opposed to a single, remotely operated UAV), the resulting sub-canopy photogrammetry results, from which snow depth measurements can be extracted, are shown to provide an improved ability to capture snow compared to above-canopy flights.
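The A* planning named in the abstract can be sketched on a 2D occupancy grid derived from an overhead scan, where cells occupied by trunks are blocked. The grid values, connectivity, and heuristic below are illustrative assumptions, not the study's actual planner.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid (1 = tree/obstacle, 0 = free).
    Returns the path as a list of (row, col) cells, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# Toy canopy map: 1s mark trunks detected in the overhead LiDAR scan.
canopy = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0]]
path = a_star(canopy, (0, 0), (2, 2))  # detours around the trunk cluster
```

A multi-UAV mission would run a planner like this once per vehicle, with each UAV's goal cells assigned so coverage of the snow field does not overlap.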