Analysis of the floating car data of Turin public transportation system: first results
Global Navigation Satellite System (GNSS) sensors nowadays represent a mature, low-cost and efficient technology to collect large spatio-temporal datasets (Geo Big Data) of vehicle movements in urban environments. However, extracting mobility information from such Floating Car Data (FCD) requires specific analysis methodologies. In this work, the first attempts to analyse the FCD of the Turin public transportation system are presented. Specifically, a preliminary methodology was implemented, in view of an automatic and possibly real-time impedance map generation. The FCD acquired by all the vehicles of the Gruppo Torinese Trasporti (GTT) company during April 2017 were processed to compute vehicle velocities, and a visualization approach based on the OSMnx library was adopted. Furthermore, a preliminary temporal analysis was carried out, showing higher velocities on weekend days and during off-peak hours, as could be expected. Finally, a method to assign the velocities to the line network topology was developed and some tests were carried out.
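The velocity computation described in this abstract can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the GTT processing chain: the function names and the simple haversine-distance-over-time-difference approach are assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def segment_speeds(track):
    """track: time-ordered list of (timestamp_s, lat, lon) GNSS fixes.
    Returns one speed (m/s) per consecutive pair of fixes."""
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt > 0:  # skip duplicate or out-of-order timestamps
            speeds.append(haversine_m(la0, lo0, la1, lo1) / dt)
    return speeds
```

A real pipeline would additionally map-match each fix to the line network before assigning the speed to a network edge.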
3D modelling by low-cost range camera: software evaluation and comparison
The aim of this work is to present a comparison among three software applications currently available for the Occipital Structure Sensor™; all of them were developed to collect 3D models of objects easily and in real time with this structured-light range camera. The SKANECT, itSeez3D and Scanner applications were tested: a DUPLO™ brick construction was scanned with each application and the resulting models were compared to a model virtually generated with a standard CAD software, which served as reference.
The results demonstrate that all three applications achieve the same level of geometric accuracy, of the order of a few millimetres. However, itSeez3D, which requires a payment of $7 to export each model, is clearly the best solution, both in terms of geometric accuracy and, above all, of color restitution. Scanner, which is free, offers an accuracy comparable to that of itSeez3D; at the same time, though, its colors are often smoothed and not perfectly overlapped with the corresponding parts of the model. Lastly, SKANECT is the software that generates the highest number of points, but it also shows some issues with color rendering.
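The model-to-reference comparison behind such millimetre-level accuracy figures can be sketched as a cloud-to-cloud nearest-neighbour distance computation. The brute-force version below is only an illustration under that assumption; real point clouds require spatial indexing (e.g. KD-trees) and usually cloud-to-mesh distances.

```python
import math

def cloud_to_reference_distances(scan, reference):
    """For each scanned 3D point, the Euclidean distance to its nearest
    point in the reference model (brute force, O(n*m))."""
    dists = []
    for p in scan:
        dists.append(min(math.dist(p, q) for q in reference))
    return dists
```

Summary statistics (mean, standard deviation) of these distances then quantify the geometric accuracy of each scanned model.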
A tool for crowdsourced building information modeling through low-cost range camera: preliminary demonstration and potential
Within the construction sector, Building Information Models (BIMs) are increasingly used thanks to the several benefits that they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already built constructions; at the same time, range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process, extracting the geometrical information contained in the 3D models that are so easily collected through range cameras. In this work, a new algorithm to extract planimetries from the 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor™. The preliminary results are promising: the developed algorithm is able to model effectively the 2D shape of the investigated rooms, with an accuracy level within the range of 5-10 cm. It can potentially be used by non-expert users in the first step of the BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the frame of BIM Volunteered Geographic Information (VGI) generation.
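A minimal sketch of the first step of such a planimetry extraction might look as follows, assuming a horizontal slice of the point cloud followed by a convex hull. This is an illustration only: the actual algorithm is not described here, and a convex hull cannot recover concave room shapes, which the paper's rooms may well have.

```python
def wall_slice(points, z_min, z_max):
    """Keep only the 3D points whose height falls in a horizontal band
    (e.g. 1.0-1.5 m above the floor, away from furniture and ceiling),
    projected to 2D."""
    return [(x, y) for (x, y, z) in points if z_min <= z <= z_max]

def convex_hull(pts):
    """Andrew's monotone chain: 2D convex hull in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

The hull vertices give a first 2D outline; a production algorithm would instead fit wall planes or use concave outline extraction.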
FOSS4G DATE assessment on the ISPRS optical stereo satellite data. A benchmark for DSM generation
The ISPRS Working Group 4 of Commission I on "Geometric and Radiometric Modelling of Optical Spaceborne Sensors" provides a benchmark dataset with several stereo data sets from spaceborne stereo sensors. In this work, the WorldView-1 and Cartosat-1 datasets are used in order to test the Free and Open Source Software for Geospatial (FOSS4G) Digital Automatic Terrain Extractor (DATE), developed at the Geodesy and Geomatics Division, University of Rome "La Sapienza", which is able to generate Digital Surface Models starting from optical and SAR satellite images. The accuracy in terms of NMAD ranges from 1 to 3 m for WorldView-1, and from 4 to 6 m for Cartosat-1. The results obtained show a generally better 3D reconstruction for the WorldView-1 DSMs with respect to Cartosat-1, and a different completeness level for the three analysed tiles, characterized by different slopes and land cover.
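The NMAD index used for these accuracy figures has a compact standard definition; a minimal sketch of its computation over DSM height differences:

```python
import statistics

def nmad(errors):
    """Normalized Median Absolute Deviation: a robust spread estimate
    that equals the standard deviation for Gaussian errors, but is far
    less sensitive to outliers (e.g. DSM blunders)."""
    med = statistics.median(errors)
    return 1.4826 * statistics.median(abs(e - med) for e in errors)
```

Here `errors` would be the per-pixel height differences between the generated DSM and the reference.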
Open source tool for DSMs generation from high resolution optical satellite imagery. Development and testing of an OSSIM plug-in
The fully automatic generation of digital surface models (DSMs) is still an open research issue. In recent years, computer vision algorithms have been introduced into photogrammetry in order to exploit their capabilities and efficiency in three-dimensional modelling. In this article, a new tool for fully automatic DSM generation from high resolution satellite optical imagery is presented. In particular, a new iterative approach to obtain the quasi-epipolar images from the original stereopairs has been defined and deployed. This approach is implemented in a new Free and Open Source Software (FOSS) named Digital Automatic Terrain Extractor (DATE), developed at the Geodesy and Geomatics Division, University of Rome ‘La Sapienza’, and conceived as an Open Source Software Image Map (OSSIM) plug-in. DATE key features include: the achievement of epipolarity in the object space, thanks to the ground projection of the images (Ground quasi-Epipolar Imagery (GrEI)) and the coarse-to-fine pyramidal scheme adopted; the use of computer vision algorithms to improve processing efficiency and make the DSM generation process fully automatic; and the free and open source nature of the developed code. The implemented plug-in was validated on two optical datasets, GeoEye-1 and the newer Pléiades-high resolution (HR) imagery, over the Trento (Northern Italy) test site. The DSMs, generated on the basis of the metadata rational polynomial coefficients only, without any ground control point, were compared to a lidar reference in areas with different land use/land cover and morphology. The results obtained with the developed workflow are good in terms of statistical parameters (root mean square error around 5 m for GeoEye-1 and around 4 m for Pléiades-HR imagery) and comparable with the results obtained through different software by other authors on the same test site, whereas in terms of efficiency DATE outperforms most of the available commercial software.
These first achievements indicate good potential for the developed plug-in, which in the near future will also be upgraded for synthetic aperture radar and tri-stereo optical imagery processing.
Terrain classification by cluster analysis
Digital terrain modelling can be carried out by methods belonging to two principal categories: deterministic methods (e.g. polynomial and spline interpolation, Fourier spectra) and stochastic methods (e.g. least-squares collocation and fractals, i.e. the concept of self-similarity in probability).
To reach good results, both kinds of methods need some suitable initial information, which can be gained by a preprocessing of the data named terrain classification.
In fact, the deterministic methods require knowledge of the roughness of the terrain, related to the density of the data (elevations, deformations, etc.) used for the interpolation, while the stochastic methods require knowledge of the autocorrelation function of the data.
Moreover, it may be useful or even necessary to split the area under consideration into subareas that are homogeneous according to some parameters, for different kinds of reasons (an initial dataset too large to be processed as a whole; very important discontinuities or singularities; etc.).
Last but not least, it may be worthwhile to test the type of distribution (normal or non-normal) of the subsets obtained by the preceding selection, because the statistical properties of the normal distribution are very important (e.g., least-squares linear estimates coincide with maximum-likelihood and minimum-variance ones).
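The autocorrelation function required by the stochastic methods can be estimated empirically from the data. A minimal 1D sketch, assuming a detrended elevation profile with a constant sampling step (the 2D case follows the same idea per distance class):

```python
import statistics

def autocorrelation(heights, max_lag):
    """Empirical autocorrelation of a 1D elevation profile, normalized
    so that lag 0 equals 1, as needed e.g. by least-squares collocation."""
    n = len(heights)
    mean = statistics.fmean(heights)
    dev = [h - mean for h in heights]
    var = sum(d * d for d in dev) / n
    acf = []
    for k in range(max_lag + 1):
        # biased estimator: divide by n, which keeps the sequence
        # positive semi-definite
        c = sum(dev[i] * dev[i + k] for i in range(n - k)) / n
        acf.append(c / var)
    return acf
```

The decay of this function with the lag is one quantitative descriptor of terrain roughness that a terrain classification could cluster on.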
Upgrade of FOSS DATE plug-in: implementation of a new radargrammetric DSM generation capability
Synthetic Aperture Radar (SAR) satellite systems may give an important contribution in terms of Digital Surface Model (DSM) generation, considering their complete independence from logistic constraints on the ground and from weather conditions. In recent years, the availability of very high resolution SAR data (up to 20 cm Ground Sample Distance) gave a new impulse to radargrammetry and allowed new applications and developments. Besides, to date, only a few of the software packages aimed at radargrammetric applications are free and open source. It is in this context that it was decided to widen the capabilities of the DATE (Digital Automatic Terrain Extractor) plug-in and to include the possibility to use SAR imagery for DSM stereo reconstruction (i.e. radargrammetry), in addition to the optical workflow already developed. DATE is a Free and Open Source Software (FOSS) developed at the Geodesy and Geomatics Division, University of Rome "La Sapienza", and conceived as an OSSIM (Open Source Software Image Map) plug-in. Its development started in May 2014 in the framework of the 2014 Google Summer of Code, with the initial purpose of fully automatic DSM generation from high resolution optical satellite imagery acquired by the most common sensors. Here, the results achieved through this new capability, applied to two stacks (one ascending and one descending) of three TerraSAR-X images each, acquired over the Trento (Northern Italy) test field, are presented. The global accuracies achieved are around 6 metres. These first results are promising, and further analyses are expected for a more complete assessment of the application of DATE to SAR imagery.
A Procedure for High Resolution Satellite Imagery Quality Assessment
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify whether their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality at the final user level as well. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in dedicated software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites.
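One common way to estimate the noise level at the user level is to measure the local standard deviation over radiometrically homogeneous patches, where any variation can be attributed to noise. The sketch below illustrates that idea only; it is not the procedure of the paper, and the pairing of noise with signal level is an assumption (sensor noise is often signal-dependent).

```python
import statistics

def noise_by_level(patches):
    """For each homogeneous image patch (a flat list of digital numbers),
    pair the mean signal level with the local standard deviation, giving
    a rough noise-versus-signal curve for the sensor."""
    out = []
    for patch in patches:
        out.append((statistics.fmean(patch), statistics.stdev(patch)))
    return sorted(out)  # sorted by signal level
```

The MTF estimate, by contrast, requires imaging a sharp edge or pulse target and analysing its spread, which is beyond this sketch.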
High resolution satellite imagery orientation accuracy assessment by leave-one-out method: accuracy index selection and accuracy uncertainty
Leave-one-out cross-validation (LOOCV) was recently applied to the evaluation of High Resolution Satellite Imagery (HRSI) orientation accuracy, and it has proven to be an effective alternative to the more common hold-out validation (HOV), in which the ground points are split into two sets: Ground Control Points (GCPs), used to estimate the orientation model, and Check Points (CPs), used to assess the model accuracy.
The LOOCV applied to HRSI, on the contrary, implies the iterative application of the orientation model using all the known ground points as GCPs except one, different in each iteration, which is used as a CP. In every iteration the residual between the imagery-derived coordinates and the CP coordinates (the prediction error of the model on the CP coordinates) is calculated; the overall spatial accuracy achievable from the oriented image may then be estimated by computing the usual RMSE or, better, a robust accuracy index such as the mAD (median Absolute Deviation) of the prediction errors over all the iterations.
In this way it is possible to overcome some drawbacks of the HOV: LOOCV is a reliable and robust method, not dependent on a particular set of CPs or on possible outliers, and it allows each known ground point to be used both as a GCP and as a CP, capitalising on all the available ground information. This is a crucial point in current practice, when the number of GCPs to be collected must be reduced as much as possible for obvious budget reasons. The fundamental matter to address was to assess how well the LOOCV indexes (mAD and RMSE) represent the overall accuracy, that is, how stable and close they are to the corresponding HOV RMSE assumed as reference. In the first tests, however, the comparison of the indexes was performed qualitatively, neglecting their uncertainty. In this work the analysis has been refined on the basis of Monte Carlo simulations: starting from the actual accuracy of the ground point and image coordinates, the desired accuracy indexes (e.g. mAD and RMSE) were estimated in several trials, their uncertainty (standard deviation) was computed, and it was accounted for in the comparison.
Tests were performed on a QuickBird Basic image, implementing an ad hoc procedure within the SISAR software developed by the Geodesy and Geomatics Team at the Sapienza University of Rome. The LOOCV method with accuracy evaluated by mAD seemed promising and useful for practical cases.
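The LOOCV scheme with the mAD and RMSE indexes can be sketched generically. In this illustrative sketch (not the SISAR implementation), `fit` and `predict` stand for any orientation model estimator and its prediction function:

```python
import statistics

def loocv_accuracy(points, fit, predict):
    """Leave-one-out cross-validation: each ground point is held out
    once as a check point, the model is re-estimated on the remaining
    points, and the prediction error on the held-out point is stored.
    points: list of (input, observed) pairs.
    Returns (mAD, RMSE) of all the prediction errors."""
    errors = []
    for i in range(len(points)):
        train = points[:i] + points[i + 1:]
        model = fit(train)
        x, observed = points[i]
        errors.append(predict(model, x) - observed)
    med = statistics.median(errors)
    mad = statistics.median(abs(e - med) for e in errors)
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    return mad, rmse
```

With n ground points, this costs n model estimations, but every point contributes both to estimation and to checking.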
GEDI data within Google Earth Engine: preliminary analysis of a resource for inland surface water monitoring
Freshwater is one of the most important renewable water resources of the planet but, due to climate change, surface freshwater available in the form of lakes, rivers, reservoirs, snow and glaciers is becoming significantly threatened. As a result, surface water level monitoring is fundamental for understanding climatic changes and their impact on humans and biodiversity. This study evaluates the accuracy of the Global Ecosystem Dynamics Investigation (GEDI) LiDAR (Light Detection And Ranging) instrument for monitoring inland water levels. Four lakes in northern Italy were selected for comparison with gauge station measurements. To evaluate the accuracy of GEDI altimetric data, two steps of outlier removal are proposed. The first stage employs GEDI metadata to filter out footprints with very low accuracy. Then, a robust version of the standard 3σ test, based on a 3NMAD (Normalized Median Absolute Deviation) threshold, is iteratively applied. After the outlier removal, which led to the elimination of between 80% and 87% of the data, the remaining footprints show an average standard deviation of 0.36 m, a mean NMAD of 0.38 m, and a Root Mean Square Error (RMSE) of 0.44 m, proving the promising potential of GEDI L2A altimetric data for inland water monitoring. © 2023 International Society for Photogrammetry and Remote Sensing
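The iterative 3NMAD test described above can be sketched as follows; this is a generic reconstruction under the stated idea (median and NMAD in place of mean and sigma), not the exact filter used in the study:

```python
import statistics

def iterative_nmad_filter(values, k=3.0, max_iter=20):
    """Robust variant of the 3-sigma test: at each pass, discard the
    values whose distance from the median exceeds k * NMAD, and repeat
    until no value is removed (or max_iter passes)."""
    vals = list(values)
    for _ in range(max_iter):
        med = statistics.median(vals)
        nmad = 1.4826 * statistics.median(abs(v - med) for v in vals)
        if nmad == 0:  # degenerate case: no spread left
            break
        kept = [v for v in vals if abs(v - med) <= k * nmad]
        if len(kept) == len(vals):  # converged
            break
        vals = kept
    return vals
```

Applied to the altimetric footprints of a lake, the surviving values would then yield the reported standard deviation, NMAD and RMSE against the gauge measurements.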