
    COMPARISON OF LOW COST PHOTOGRAMMETRIC SURVEY WITH TLS AND LEICA PEGASUS BACKPACK 3D MODELS

    This paper considers Leica Pegasus backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, this paper considers a different approach: the work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low-cost Pozyx UWB devices are used to estimate camera positions during image acquisition. Then, in order to obtain a metric reconstruction, the scale factor of the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the photogrammetric model of the bastion results in an RMSE of 21.9 cm, average error of 13.4 cm, and standard deviation of 13.5 cm. Excluding the final part of the bastion's left wing, where the presence of several poles makes reconstruction more difficult, the (RMSE) fitting error is 17.3 cm, average error 11.5 cm, and standard deviation 9.5 cm. Instead, comparison of the Leica backpack and TLS surveys leads to an average error of 4.7 cm and standard deviation of 0.6 cm (4.2 cm and 0.3 cm, respectively, excluding the final part of the left wing).
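The scale-from-UWB idea above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name and the synthetic numbers are assumptions, and a real pipeline would also solve for the rotation and translation (e.g. a full Umeyama alignment). The key point is that the scale factor can be recovered from the ratio of RMS spreads of the two sets of camera centres about their centroids, which is invariant to the unknown rigid transform between the frames.

```python
import numpy as np

def estimate_scale(p_photo, p_uwb):
    """Scale factor between camera centres from a photogrammetric
    reconstruction (arbitrary scale) and the same centres measured by
    UWB ranging (metres). The ratio of RMS spreads about the centroids
    is invariant to the unknown rotation/translation between frames."""
    d_photo = np.asarray(p_photo, float) - np.mean(p_photo, axis=0)
    d_uwb = np.asarray(p_uwb, float) - np.mean(p_uwb, axis=0)
    return np.sqrt((d_uwb ** 2).sum() / (d_photo ** 2).sum())

# Synthetic check: a reconstruction shrunk to 1/4 size and shifted
# should still yield a recovered scale of 4.
rng = np.random.default_rng(0)
metric = rng.uniform(0.0, 10.0, size=(8, 3))       # "UWB" camera positions (m)
photo = metric / 4.0 + np.array([2.0, -1.0, 0.5])  # same geometry, wrong scale
print(round(float(estimate_scale(photo, metric)), 3))  # → 4.0
```

Multiplying every point of the photogrammetric model by the recovered factor then yields a metric reconstruction.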

    COMPARING ACCURACY OF ULTRA-DENSE LASER SCANNER AND PHOTOGRAMMETRY POINT CLOUDS

    Abstract. Massive point clouds have now become a common product of surveys using passive (photogrammetry) or active (laser scanning) technologies. A common question is what the difference is, in terms of accuracy and precision, between different technologies and processing options. In this work four ultra-dense point clouds (PCs) from drone surveys are compared. Two PCs were created from imagery using a photogrammetric workflow, with and without ground control points. The laser scanning PCs were created from two drone flights: one with a Riegl MiniVUX-3 lidar sensor, resulting in a point cloud with ~300 million points, and one with a Riegl VUX-120 lidar sensor, leading to a point cloud with ~1 billion points. Relative differences between pairs from permutations of the four PCs are analysed by calculating point-to-point distances over nearest neighbours. Eleven clipped PC subsets are used for this task. Ground control points (GCPs) are also used to assess residuals in the two photogrammetric point clouds, in order to quantify the improvement from using GCPs vs. not using GCPs when processing the images. Results from comparing the two photogrammetric point clouds, with and without GCPs, show an improvement of the average absolute position error from 0.12 m to 0.05 m and of the RMSE from 0.03 m to 0.01 m. Point-to-point distances over the PC pairs show that the closest point clouds are the two lidar clouds, with mean absolute distance (MAD), median absolute distance (MdAD) and standard deviation of distances of 0.031 m, 0.025 m and 0.019 m respectively; the largest difference involves the photogrammetric PC with GCPs, with 0.208 m, 0.206 m and 0.116 m, the Z component providing most of the difference. Photogrammetry without GCPs was more consistent with the lidar point clouds, with a MAD of 0.064 m, MdAD of 0.048 m and RMSE value of 0.114 m.
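The point-to-point comparison described above (MAD, MdAD and spread of nearest-neighbour distances) can be sketched as follows. This is an illustrative brute-force version, not the authors' code: with clouds of 10^8 to 10^9 points a KD-tree or octree index would be used instead of a full pairwise distance matrix.

```python
import numpy as np

def cloud_to_cloud_stats(pc_a, pc_b):
    """Nearest-neighbour point-to-point distances from cloud A to cloud B.
    Returns mean absolute distance (MAD), median absolute distance (MdAD)
    and the standard deviation of the distances."""
    pc_a = np.asarray(pc_a, float)
    pc_b = np.asarray(pc_b, float)
    # (Na, Nb) pairwise Euclidean distances, minimum over cloud B
    d = np.linalg.norm(pc_a[:, None, :] - pc_b[None, :, :], axis=2).min(axis=1)
    return d.mean(), np.median(d), d.std()

# Toy example: cloud B is cloud A shifted 0.1 m along Y.
a = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float)
b = a + [0.0, 0.1, 0.0]
mad, mdad, sd = cloud_to_cloud_stats(a, b)
print(round(mad, 6), round(mdad, 6), round(sd, 6))  # → 0.1 0.1 0.0
```

Note that the statistic is directional (A to B), so for a symmetric comparison the two directions are usually computed and combined.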

    Planning harvesting operations in forest environment: Remote sensing for decision support

    The goal of this work is to assess a method for supporting decisions regarding the identification of the most suitable areas for two types of harvesting approaches in forestry: skyline vs. forwarder. The innovative aspect consists in simulating the choices made during the planning of forestry operations. To do so, remote sensing data from an aerial laser scanner were used to create a digital terrain model (DTM) of the ground surface under the vegetation cover. Features extracted from the DTM are used as input for several machine learning predictors. The features are slope, distance from the nearest roadside, relative height from the nearest roadside and a roughness index. Training and validation are done using areas defined by experts in the study area. Results show a K value of almost 0.92 for the best-performing classifier, random forest. The sensitivity of each feature is assessed, showing that both distance and height difference from the nearest roadside are more significant than the overall DTM value.
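The K value reported above is Cohen's kappa, i.e. classifier-vs-reference agreement corrected for chance. A minimal self-contained implementation is sketched below; the toy skyline/forwarder labels are invented for illustration and are not from the study.

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, computed from the confusion matrix."""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    idx = {lab: i for i, lab in enumerate(labels)}
    n = len(y_true)
    cm = np.zeros((len(labels), len(labels)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

# Toy example with the two harvesting classes (labels are illustrative).
truth = ["skyline", "skyline", "forwarder", "forwarder"]
pred = ["skyline", "skyline", "forwarder", "skyline"]
print(cohens_kappa(truth, pred))  # → 0.5
```

A kappa near 0.92, as obtained by the random forest in the study, indicates almost perfect agreement with the expert-defined areas.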

    COMPARISON OF VEGETATION INDICES FROM RPAS AND SENTINEL-2 IMAGERY FOR DETECTING PERMANENT PASTURES

    Permanent pastures (PP) are defined as grasslands which are not subjected to any tillage, but only to natural growth. They are important for local economies in the production of fodder and pasture (Ali et al. 2016). Under these definitions, a pasture is permanent when it is not under any crop rotation, and its production is related only to irrigation, fertilization and mowing. Subsidy payments to landowners require monitoring activities to determine which sites can be considered PP. These activities are mainly carried out through visual field surveys by experienced personnel or, lately, also using remote sensing techniques. The regional agency for SPS subsidies, the Agenzia Veneta per i Pagamenti in Agricoltura (AVEPA), takes care of monitoring and control on behalf of the Veneto Region using remote sensing techniques. The investigation integrates a temporal series of Sentinel-2 imagery with RPAS imagery. Indeed, the testing area is a specific region where the agricultural land is intensively cultivated for hay production, with harvesting four times every year between May and October. The goal of this study is to monitor vegetation presence and amount using the Normalized Difference Vegetation Index (NDVI), the Soil-adjusted Vegetation Index (SAVI), the Normalized Difference Water Index (NDWI), and the Normalized Difference Built-up Index (NDBI). The overall objective is to define for each index a set of thresholds to determine whether a pasture can be classified as PP or not and to recognize mowing events.
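The four indices named above are simple band ratios. The sketch below uses their standard textbook definitions (McFeeters' formulation for NDWI); the abstract itself does not give the formulas, and the reflectance values in the example are invented to resemble a healthy pasture pixel.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted Vegetation Index with soil-brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters)."""
    return (green - nir) / (green + nir)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index."""
    return (swir - nir) / (swir + nir)

# Illustrative reflectances for a vigorous pasture pixel
# (Sentinel-2 bands: Green = B3, Red = B4, NIR = B8, SWIR = B11).
red, green, nir, swir = 0.05, 0.08, 0.45, 0.20
print(round(ndvi(nir, red), 3))  # → 0.8
print(round(savi(nir, red), 3))  # → 0.6
```

Thresholding such index values over the Sentinel-2 time series is what allows a drop after each of the four yearly mowings to be detected.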

    ISPRS-SHY – OPEN DATA COLLECTOR FOR SUPPORTING GROUND TRUTH REMOTE SENSING ANALYSIS

    Abstract. The 2021 Scientific Initiatives of ISPRS funded this project, called ISPRS-SHY from "SHare mY ground truth". It was intended as a collector of geographic data to support image analysis by sharing the ground truth data needed for rigorous analysis. Regression and classification tasks that use remote sensing imagery necessarily require some control on the ground. The rationale behind this project is that data on the ground are often collected during projects, but are not valued by sharing across projects and teams globally. The Internet has improved the way data are shared, but there are still limitations related to the discoverability of the data and their integrity. In other words, data are usually kept in local storage or, if on an accessible server, they are not documented and therefore will not be picked up during a search. In this initiative we created a portal using the GeoNode environment to provide a hub for sharing data between research groups and openly with the community. The portal was then tested within the framework of three projects, with several participants each. The data that were uploaded and shared covered all types of geographic data formats and sizes. Further sharing was done in the context of teaching activities in higher education. The results show the importance of creating easy means to find data and share them across stakeholders. Qualitative results are discussed, and future steps will focus on quantitative assessment of the portal's usage, e.g. the number of registered users over time, the number of visits, and other key performance indicators. The results of this project should also be considered in light of the effort in the scientific community to make research data available, i.e. FAIR: Findability, Accessibility, Interoperability, and Reuse of digital assets.

    INITIAL EVALUATION OF 3D RECONSTRUCTION OF CLOSE OBJECTS WITH SMARTPHONE STEREO VISION

    The worldwide spread of relatively low-cost mobile devices embedded with dual rear cameras enables the possibility of exploiting smartphone stereo vision for producing 3D models. Although this idea is quite attractive, the small baseline between the two cameras restricts the depth discrimination ability of this kind of stereo vision system. This paper presents the results obtained with a smartphone stereo vision system using two rear cameras with different focal lengths: this operating condition clearly reduces the matchable area. Nevertheless, 3D reconstruction is still possible, and the obtained results are evaluated for several camera-object distances.
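The baseline limitation mentioned above can be quantified with the standard pinhole stereo relations; the numbers below (a 3000 px focal length and a 1 cm baseline) are hypothetical smartphone-like values, not measurements from the paper.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_err_px=1.0):
    """First-order depth uncertainty dZ ≈ Z^2 / (f B) * dd.
    It grows quadratically with distance, which is why a centimetre-scale
    smartphone baseline only discriminates depth for close objects."""
    return z_m ** 2 / (f_px * baseline_m) * disparity_err_px

f, B = 3000.0, 0.01  # hypothetical focal length (px) and baseline (m)
print(round(depth_error(f, B, 0.5), 4))  # → 0.0083  (≈8 mm at 0.5 m)
print(round(depth_error(f, B, 5.0), 2))  # → 0.83    (≈0.8 m at 5 m)
```

This matches the paper's focus on close objects: at half a metre the depth resolution is usable, while a few metres away it degrades to tens of centimetres.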

    A Comparison Between UWB and Laser-based Pedestrian Tracking

    Although the availability of GNSS on consumer devices has enabled personal navigation for most of the world population in most outdoor conditions, the problem of precise pedestrian positioning is still quite challenging indoors or, more generally, in GNSS-challenged working conditions. Furthermore, the COVID-19 pandemic also raised the need for pedestrian tracking in any environment, but in particular indoors, where GNSS typically does not ensure sufficient accuracy for checking the distance between people. Motivated by these needs, this paper investigates the potential of UWB and LiDAR for pedestrian positioning and tracking. The two methods are compared in an outdoor case study; nevertheless, both are usable indoors as well. The obtained results show that the positioning performance of the LiDAR-based approach exceeds that of the UWB one when the pedestrians are not obstructed by other objects in the LiDAR view. Nevertheless, the presence of obstructions causes gaps in the LiDAR-based tracking: the combination of LiDAR and UWB can instead be used to reduce outages in the LiDAR-based solution, while the LiDAR solution, when available, usually improves the UWB-based results.
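The complementary behaviour described above can be reduced to its simplest form: take the LiDAR fix when the pedestrian is visible, fall back to UWB during occlusions. This is only a switching sketch under that assumption; a real system would fuse the two sensors in a filter (e.g. a Kalman filter) rather than hard-switch.

```python
def fuse_tracks(lidar, uwb):
    """Per-epoch gap filling: use the LiDAR position when available
    (None marks an occluded epoch), otherwise fall back to UWB, so the
    fused track has no outages."""
    return [l if l is not None else u for l, u in zip(lidar, uwb)]

# Toy 2D tracks, one position per epoch; epoch 2 is occluded for LiDAR.
lidar = [(0.0, 0.0), (1.0, 0.1), None, (3.1, 0.0)]
uwb = [(0.1, 0.2), (1.2, 0.3), (2.1, 0.2), (3.0, 0.3)]
print(fuse_tracks(lidar, uwb))
```

The fused list keeps the more accurate LiDAR fixes everywhere except the occluded epoch, which is bridged by UWB.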

    BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since the land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layer perceptron, multi-layer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset which consists of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validation subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset with k-fold cross-validation (kfold) and (iii) using all pixels from the control dataset (full). Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over these three sets of data, with ten folds used for the k-fold cross-validation.
    Results from validation of predictions on the whole dataset (full) show that the random forests method achieves the highest values, with the kappa index ranging from 0.55 to 0.42 with the most and the least pixels used for training, respectively. The two neural networks (multi-layer perceptron and its ensemble) and the support vector machines, with the default radial basis function kernel, follow closely with comparable performance.
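The k-fold protocol used in the kfold validation approach can be sketched without any ML library: shuffle the sample indices once, cut them into k folds, and let each fold serve once as the validation set while the remaining folds train the model. The function name and toy sizes below are illustrative, not from the benchmark code.

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Yield (train, validation) index lists for k-fold cross-validation:
    each of the k folds is used once for validation, the rest for training."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# 20 labelled pixels split into 5 folds of 4 validation pixels each.
splits = list(kfold_indices(20, k=5))
print(len(splits))  # → 5
print(all(sorted(tr + va) == list(range(20)) for tr, va in splits))  # → True
```

Averaging an accuracy index (e.g. kappa) over the k validation folds gives a less optimistic estimate than validating on the training pixels themselves, which is why the study reports the two separately.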