
    An Image-Based Real-Time Georeferencing Scheme for a UAV Based on a New Angular Parametrization

    Simultaneous localization and mapping (SLAM) with a monocular projective camera installed on an unmanned aerial vehicle (UAV) is a challenging task in photogrammetry, computer vision, and robotics. This paper presents a novel real-time monocular SLAM solution for UAV applications. It is based on two steps: consecutive construction of the UAV path, and adjacent strip connection. Consecutive construction rapidly estimates the UAV path by sequentially connecting incoming images to a network of connected images. A multilevel pyramid matching scheme is proposed for this step, which includes sub-window matching on high-resolution images. The sub-window matching increases the frequency of tie points by propagating the locations of matched sub-windows, yielding high-frequency tie points while keeping execution time relatively low. A sparse bundle block adjustment (BBA) is employed to optimize the initial path while accounting for nuisance parameters. System calibration parameters with respect to the global navigation satellite system (GNSS) and inertial navigation system (INS) can optionally be included in the BBA model for direct georeferencing. Ground control points and checkpoints can likewise be included for georeferencing and quality control. Adjacent strip connection is enabled by an overlap analysis that further improves the connectivity of local networks. A novel angular parametrization based on a spherical rotation coordinate system is presented to address the gimbal-lock singularity of the BBA. Our results suggest that the proposed scheme is a precise real-time monocular SLAM solution for a UAV.
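    To make the gimbal-lock singularity concrete, here is a minimal Python sketch (using SciPy, and not the paper's spherical parametrization) showing how Euler angles become ambiguous at a 90° pitch, while a quaternion representation round-trips without loss:

```python
# Illustrative only: demonstrates the gimbal-lock singularity that the
# paper's spherical angular parametrization is designed to avoid.
import numpy as np
from scipy.spatial.transform import Rotation as R

# A camera pitched to exactly +90 degrees: yaw and roll become coupled.
r = R.from_euler("zyx", [30.0, 90.0, 10.0], degrees=True)

# Recovering Euler angles at the singularity is ambiguous; SciPy warns
# ("Gimbal lock detected") and returns one of many equivalent solutions.
print(r.as_euler("zyx", degrees=True))

# A quaternion (one common singularity-free alternative) round-trips
# cleanly: the reconstructed rotation matrix matches the original.
q = r.as_quat()
print(np.allclose(R.from_quat(q).as_matrix(), r.as_matrix()))  # True
```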

    Assessing Structural Complexity of Individual Scots Pine Trees by Comparing Terrestrial Laser Scanning and Photogrammetric Point Clouds

    Structural complexity of trees is related to various ecological processes and ecosystem services. To support management for complexity, the level of structural complexity needs to be assessed objectively. The fractal-based box dimension (Db) provides a holistic measure of the structural complexity of individual trees. This study compared the structural complexity of Scots pine (Pinus sylvestris L.) trees assessed with Db generated from point cloud data from terrestrial laser scanning (TLS) and from aerial imagery acquired with an unmanned aerial vehicle (UAV). UAV imagery was converted into point clouds with structure from motion (SfM) and dense matching techniques. Db-values measured with TLS and UAV were found to differ significantly (TLS: 1.51 ± 0.11, UAV: 1.59 ± 0.15). UAV-measured Db-values were 5% higher, and their range was wider (TLS: 0.81–1.81, UAV: 0.23–1.88). The divergence between TLS and UAV measurements was explained by differences in the number and distribution of points, and by differences in the estimated tree heights and in the number of boxes in the Db-method. The average point density was 15 times higher with TLS than with UAV (TLS: 494,000 points/tree, UAV: 32,000 points/tree), and TLS captured more points below the midpoint of tree height (65% below, 35% above), while UAV did the opposite (22% below, 78% above). Compared to the field measurements, UAV underestimated tree heights more than TLS (TLS: 34 cm, UAV: 54 cm), so that 4–64% more boxes were needed in the Db-method, depending on box size. Forest structure (two thinning intensities, three thinning types, and a control group) significantly affected the variation of both TLS- and UAV-measured Db-values, and the divergence between the two approaches remained in all treatments. Nevertheless, TLS- and UAV-measured Db-values were consistent with each other, and the correlation between them was 75%.
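    As a rough illustration of the Db idea, the following Python sketch computes a box-counting dimension for a single tree's point cloud by repeatedly halving the box edge and fitting the slope of log(box count) against log(1/box size); the study's exact procedure (for example, how box sizes relate to tree height) may differ:

```python
# Hedged sketch of fractal box-dimension (Db) estimation by box counting.
import numpy as np

def box_dimension(points: np.ndarray, n_scales: int = 6) -> float:
    """points: (N, 3) array holding one tree's point cloud."""
    pts = points - points.min(axis=0)           # shift into positive octant
    extent = pts.max()                          # edge of the bounding cube
    sizes, counts = [], []
    for k in range(n_scales):
        size = extent / 2 ** k                  # halve the box edge each step
        idx = np.floor(pts / size).astype(int)  # box index of every point
        counts.append(len(np.unique(idx, axis=0)))
        sizes.append(size)
    # Db is the slope of log(occupied boxes) vs log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a dense volume-filling cloud gives Db close to 3.
rng = np.random.default_rng(0)
print(box_dimension(rng.random((50_000, 3))))
```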

    Direct reflectance transformation methodology for drone-based hyperspectral imaging

    Multi- and hyperspectral cameras on drones can be valuable tools in environmental monitoring. A significant shortcoming complicating their use in quantitative remote sensing applications is the lack of robust radiometric calibration methods. In a direct reflectance transformation method, the drone is equipped with a camera and an irradiance sensor, allowing transformation of image pixel values to reflectance factors without ground reference data. This method requires the sensors to be calibrated with higher accuracy than is usually required by the empirical line method (ELM), but in return it offers benefits in robustness, ease of operation, and suitability for beyond visual line-of-sight (BVLOS) flights. The objective of this study was to develop and assess a drone-based workflow for direct reflectance transformation and to implement it on our hyperspectral remote sensing system. A novel atmospheric correction method is also introduced; it uses two reference panels but, unlike the ELM, is not directly affected by changes in illumination. The sensor system consists of a hyperspectral camera (Rikola HSI, by Senop) and an onboard irradiance spectrometer (FGI AIRS), both of which were given thorough radiometric calibrations. In laboratory tests and in a flight experiment, the FGI AIRS tilt-corrected irradiances had an accuracy better than 1.9% at solar zenith angles up to 70°. The system's low-altitude reflectance factor accuracy was assessed in a flight experiment using reflectance reference panels: the normalized root mean square errors (NRMSE) were less than ±2% for the light panels (25% and 50%) and less than ±4% for the dark panels (5% and 10%). In the high-altitude images, taken at 100–150 m altitude, the NRMSEs without atmospheric correction were within 1.4%–8.7% for the VIS bands and 2.0%–18.5% for the NIR bands. Significant atmospheric effects appeared already at 50 m flight altitude. The proposed atmospheric correction was found to be practical, and it decreased the high-altitude NRMSEs to 1.3%–2.6% for the VIS bands and to 2.3%–5.3% for the NIR bands. Overall, the workflow was found to be efficient and to provide accuracies similar to those of the ELM, while offering operational advantages in challenging scenarios such as forest monitoring, large-scale autonomous mapping tasks, and real-time applications. Tests in varying illumination conditions showed that the reflectance factors of the gravel and vegetation targets varied by up to 8% between sunny and cloudy conditions due to reflectance anisotropy effects, while the direct reflectance workflow retained better accuracy. This suggests that varying illumination conditions have to be further accounted for in drone-based quantitative remote sensing applications.
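    The core of the direct reflectance transformation can be sketched in a few lines: calibrated pixel radiance L and band-matched onboard irradiance E give the reflectance factor R = πL/E without ground reference panels. The linear calibration model and every constant below are illustrative assumptions, not the actual calibrations of the Rikola HSI or FGI AIRS sensors:

```python
# Hedged sketch of a direct reflectance transformation for one band.
import numpy as np

def dn_to_radiance(dn: np.ndarray, gain: float, offset: float,
                   exposure_s: float) -> np.ndarray:
    """Hypothetical linear radiometric calibration: DN -> radiance."""
    return (gain * dn + offset) / exposure_s    # W m^-2 sr^-1 nm^-1

def reflectance_factor(radiance: np.ndarray, irradiance: float) -> np.ndarray:
    """R = pi * L / E, with E the tilt-corrected downwelling irradiance
    (W m^-2 nm^-1) from an onboard spectrometer for the same band."""
    return np.pi * radiance / irradiance

dn = np.array([[1200.0, 1350.0], [900.0, 1100.0]])   # toy image patch
L = dn_to_radiance(dn, gain=5e-5, offset=0.0, exposure_s=0.005)
print(reflectance_factor(L, irradiance=85.0))        # reflectance factors
```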

    Multispectral Imagery Provides Benefits for Mapping Spruce Tree Decline Due to Bark Beetle Infestation When Acquired Late in the Season

    Climate change is increasing pest insects' ability to reproduce as temperatures rise, resulting in vast tree mortality globally. Early information on pest infestation is urgently needed for timely decisions to mitigate the damage. We investigated the mapping of trees in decline due to European spruce bark beetle infestation using multispectral unmanned aerial vehicle (UAV) imagery collected in spring and fall in four study areas in Helsinki, Finland. We used the Random Forest machine learning algorithm to classify trees based on their symptoms on both occasions. Our approach achieved overall classification accuracies of 78.2% and 84.5% for healthy, declined, and dead trees with the spring and fall datasets, respectively. The results suggest that fall, or the end of summer, provides the most accurate tree vitality classification. We also investigated the transferability of Random Forest classifiers between different areas, which resulted in overall classification accuracies ranging from 59.3% to 84.7%. The findings of this study indicate that multispectral UAV-based imagery is capable of classifying tree decline in Norway spruce trees during a bark beetle infestation.
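    A minimal sketch of the classification step, with toy per-tree features standing in for the paper's actual spectral features (the feature set and labels below are fabricated for illustration):

```python
# Hedged sketch: Random Forest classification of trees into
# healthy / declined / dead from per-tree multispectral features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((300, 6))        # toy features, e.g. band means and indices
y = rng.integers(0, 3, 300)     # 0 = healthy, 1 = declined, 2 = dead

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Transferability can be probed the same way: fit on one study area's
# trees and evaluate on another area's trees instead of a random split.
```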

    Paikkatiedon opetus ja koulumaailman käytännön tarpeet (Geographic information education and the practical needs of the school world)

    Theme: Geography of education - Geography of youth

    Visual-Inertial Odometry Using High Flying Altitude Drone Datasets

    Positioning of unoccupied aerial systems (UAS, drones) is predominantly based on Global Navigation Satellite Systems (GNSS). Due to potential signal disruptions, redundant positioning systems are needed for reliable operation. The objective of this study was to implement and assess a redundant positioning system for high flying altitude drone operation based on visual-inertial odometry (VIO). A new sensor suite with stereo cameras and an inertial measurement unit (IMU) was developed, and a state-of-the-art VIO algorithm, VINS-Fusion, was used for localisation. Empirical testing of the system was carried out at flying altitudes of 40–100 m, which cover the common altitude range of outdoor drone operations. The performance of various implementations was studied, including stereo visual odometry (stereo-VO), monocular visual-inertial odometry (mono-VIO), and stereo visual-inertial odometry (stereo-VIO). The stereo-VIO provided the best results; flight altitudes of 40–60 m were optimal for the 30 cm stereo baseline. The best positioning accuracy was 2.186 m for an 800 m long trajectory. The performance of the stereo-VO degraded with increasing flight altitude due to the degrading base-to-height ratio. The mono-VIO provided acceptable results, although it did not reach the performance level of the stereo-VIO. This work presented new hardware and research results on localisation algorithms for high flying altitude drones. These are of great importance because autonomous drone use and beyond visual line-of-sight flying are increasing, and both will require redundant positioning solutions that compensate for potential disruptions in GNSS positioning. The data collected in this study are published for analysis and further studies.
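    The reported degradation of stereo-VO with altitude follows from standard stereo geometry: for a fixed baseline, depth uncertainty grows quadratically with range. A back-of-the-envelope Python sketch (the focal length and disparity error are assumed values, not the study's):

```python
# sigma_Z = Z**2 * sigma_d / (B * f): stereo depth uncertainty versus
# altitude. B matches the study's 30 cm baseline; f and sigma_d are
# assumptions for illustration.
B = 0.30         # stereo baseline, metres (from the study)
f = 1500.0       # assumed focal length, pixels
sigma_d = 0.5    # assumed disparity matching error, pixels

for Z in (40, 60, 80, 100):                    # flight altitudes, metres
    sigma_Z = Z ** 2 * sigma_d / (B * f)
    print(f"altitude {Z:>3} m -> depth std ~ {sigma_Z:5.2f} m")
```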

    A New Approach for Feeding Multispectral Imagery into Convolutional Neural Networks Improved Classification of Seedlings

    Tree species information is important for forest management, especially in seedling stands. To mitigate the spectral admixture of understory reflectance with small and sparsely foliaged seedling canopies, we proposed an image pre-processing step based on a canopy threshold (Cth) applied to drone-based multispectral images before they are fed to classifiers. This study focused on (1) improving the classification of seedlings by applying the introduced technique; (2) comparing the classification accuracies of the convolutional neural network (CNN) and random forest (RF) methods; and (3) improving classification accuracy by fusing vegetation indices with the multispectral data. A classification of 5417 field-located seedlings from 75 sample plots showed that applying the Cth technique improved the overall accuracy (OA) of species classification from 75.7% to 78.5% on the Cth-affected subset of the test dataset with the CNN method (1). The OA was higher with the CNN (79.9%) than with the RF (68.3%) (2). Moreover, fusing vegetation indices with the multispectral data improved the OA from 75.1% to 79.3% with the CNN (3). Further analysis revealed that shorter seedlings and tensors with a higher proportion of Cth-affected pixels had negative impacts on the OA in seedling forests. Based on the obtained results, the proposed method could be used to improve the species classification of single-tree-detected seedlings in operational forest inventory.
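    A hedged sketch of the canopy-threshold idea: pixels whose canopy height falls below Cth are zeroed out so understory reflectance does not mix into the seedling's spectrum before the patch is fed to a classifier. The array shapes and threshold value below are illustrative assumptions:

```python
# Hedged sketch of Cth pre-processing for one seedling image patch.
import numpy as np

def apply_canopy_threshold(image: np.ndarray, chm: np.ndarray,
                           cth: float = 0.5) -> np.ndarray:
    """image: (H, W, bands) multispectral patch centred on one seedling.
    chm:   (H, W) canopy height model aligned with the patch, in metres."""
    mask = chm >= cth                       # keep canopy pixels only
    return image * mask[..., np.newaxis]    # zero out understory pixels

rng = np.random.default_rng(1)
patch = rng.random((32, 32, 5))             # toy 5-band patch
chm = rng.random((32, 32)) * 2.0            # toy canopy heights, 0-2 m
masked = apply_canopy_threshold(patch, chm)
print(masked.shape, float((masked.sum(axis=-1) == 0).mean()))
```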