
    Analysis of the floating car data of Turin public transportation system: first results

    Global Navigation Satellite System (GNSS) sensors represent nowadays a mature, low-cost and efficient technology to collect large spatio-temporal datasets (Geo Big Data) of vehicle movements in urban environments. However, to extract mobility information from such Floating Car Data (FCD), specific analysis methodologies are required. In this work, the first attempts to analyse the FCD of the Turin public transportation system are presented. Specifically, a preliminary methodology was implemented, in view of automatic and possibly real-time impedance map generation. The FCD acquired by all the vehicles of the Gruppo Torinese Trasporti (GTT) company in the month of April 2017 were processed to compute their velocities, and a visualization approach based on the OSMnx library was adopted. Furthermore, a preliminary temporal analysis was carried out, showing higher velocities on weekend days and during off-peak hours, as could be expected. Finally, a method to assign the velocities to the line network topology was developed and some tests were carried out.
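
    As an illustration of the kind of processing described above, the sketch below derives point speeds from consecutive GPS fixes and loads the Turin street network with OSMnx for plotting. The column names and data layout are assumptions for illustration, not the GTT feed format.

```python
# Minimal sketch: derive point speeds from consecutive GPS fixes and plot the
# Turin street network with OSMnx. Column names (vehicle_id, timestamp, lat, lon)
# are illustrative assumptions, not the GTT schema.
import numpy as np
import pandas as pd
import osmnx as ox

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def point_speeds(fixes: pd.DataFrame) -> pd.DataFrame:
    """Speed (km/h) between consecutive fixes of each vehicle."""
    fixes = fixes.sort_values(["vehicle_id", "timestamp"])
    g = fixes.groupby("vehicle_id")
    dist = haversine_m(fixes["lat"], fixes["lon"],
                       g["lat"].shift(), g["lon"].shift())
    dt = (fixes["timestamp"] - g["timestamp"].shift()).dt.total_seconds()
    fixes["speed_kmh"] = 3.6 * dist / dt
    return fixes.dropna(subset=["speed_kmh"])

# Street network of Turin for context plotting.
G = ox.graph_from_place("Torino, Italy", network_type="drive")
ox.plot_graph(G, node_size=0, edge_linewidth=0.3)
```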

    Historical collaborative geocoding

    The latest developments in digital technologies have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open-source, open-data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and include several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode addresses (one address or batch mode) and display them over current or historical topographical maps, so that they can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th-20th centuries; it shows a high return rate and is fast enough to be used interactively. (Working paper.)
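
    To make the matching idea concrete, the sketch below combines a fuzzy semantic (string-similarity) score with a fuzzy temporal score over a toy gazetteer of geohistorical objects. The scoring functions, the weights and the gazetteer structure are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of multi-criteria gazetteer matching: combine a fuzzy
# string score with a temporal-overlap score. The weighting and the gazetteer
# structure are assumptions, not the paper's actual scoring.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class GeoHistoricalObject:
    name: str
    year_start: int
    year_end: int
    x: float          # map coordinate
    y: float

def semantic_score(query: str, name: str) -> float:
    """Fuzzy string similarity between the queried address and a gazetteer name."""
    return SequenceMatcher(None, query.lower(), name.lower()).ratio()

def temporal_score(query_year: int, obj: GeoHistoricalObject, fuzz: int = 10) -> float:
    """1.0 inside the object's validity interval, decaying linearly over `fuzz` years."""
    if obj.year_start <= query_year <= obj.year_end:
        return 1.0
    gap = min(abs(query_year - obj.year_start), abs(query_year - obj.year_end))
    return max(0.0, 1.0 - gap / fuzz)

def best_match(query: str, query_year: int, gazetteer, w_sem=0.7, w_temp=0.3):
    """Return the gazetteer object with the highest combined score."""
    return max(gazetteer,
               key=lambda o: w_sem * semantic_score(query, o.name)
                             + w_temp * temporal_score(query_year, o))
```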

    Kernel Feature Cross-Correlation for Unsupervised Quantification of Damage from Windthrow in Forests

    In this study, estimation of tree damage from a windthrow event using feature detection on high-resolution RGB imagery is assessed. An accurate quantitative assessment of the damage in terms of volume is important and can be done by ground sampling, which is notably expensive and time-consuming, or by manual interpretation and analysis of aerial images. The latter method also requires an expert operator to invest time in manually detecting damaged trees and applying relation functions between measurements and volume, which is also error-prone. In the proposed method, RGB images with 0.2 m ground sample distance are analysed using an adaptive template matching method. Ten images corresponding to ten separate study areas are tested. A 13 × 13 pixel kernel with a simplified linear-feature representation of a cylinder is applied at different rotation angles (from 0° to 170° in 10° steps). The highest value of the normalized cross-correlation (NCC) over all angles is recorded for each pixel of each image. Several features are tested: percentiles (75th, 80th, 85th, 90th, 95th, 99th, maximum) and the sum and number of pixels with NCC above 0.55. Three regression methods are tested: multiple regression (MR), support vector machines (SVM) with a linear kernel, and random forests. The first two methods gave the best results. The ground truth was acquired by ground sampling, and total volumes of damaged trees were estimated for each of the 10 areas. Damaged volumes in the ten areas range from ∼1.8 × 10² m³ to ∼1.2 × 10⁴ m³. Regression results show that the SVM regression method over the sum gives an R-squared of 0.92, a mean absolute error (MAE) of 255 m³ and a relative absolute error (RAE) of 34% using leave-one-out cross-validation over the 10 observations. These initial results are encouraging and support further investigations on more finely tuned kernel template metrics to define an unsupervised image analysis process to automatically assess forest damage from windthrow.
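
    The rotated-kernel NCC step might look roughly like the sketch below, which keeps the per-pixel maximum normalized cross-correlation of a 13 × 13 linear template over 10° rotation steps using OpenCV's matchTemplate. The kernel shape, the input file name and the feature extraction are assumptions based only on the abstract.

```python
# Sketch of the rotated-kernel NCC step: a 13x13 linear template is matched at
# 10-degree rotation steps and the per-pixel maximum NCC is kept. Exact kernel
# design and thresholds are assumptions, not the paper's implementation.
import cv2
import numpy as np

def line_kernel(size: int = 13, width: int = 3) -> np.ndarray:
    """Simplified linear (cylinder-like) template: a bright horizontal bar."""
    k = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    k[c - width // 2: c + width // 2 + 1, :] = 1.0
    return k

def max_ncc_map(gray: np.ndarray, angles=range(0, 180, 10)) -> np.ndarray:
    """Per-pixel maximum normalized cross-correlation over all rotation angles."""
    kernel = line_kernel()
    centre = (kernel.shape[1] / 2, kernel.shape[0] / 2)
    best = None
    for a in angles:
        rot = cv2.warpAffine(kernel, cv2.getRotationMatrix2D(centre, a, 1.0),
                             kernel.shape[::-1])
        ncc = cv2.matchTemplate(gray.astype(np.float32), rot, cv2.TM_CCOEFF_NORMED)
        best = ncc if best is None else np.maximum(best, ncc)
    return best

# Example features mentioned in the abstract: sum of NCC values and count above 0.55.
# ncc = max_ncc_map(cv2.imread("area01.tif", cv2.IMREAD_GRAYSCALE))  # hypothetical file
# features = {"sum": float(ncc.sum()), "n_above": int((ncc > 0.55).sum())}
```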

    Documenting Bronze Age Akrotiri on Thera using laser scanning, image-based modelling and geophysical prospection

    The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini is endangered by gradual decay, damage due to accidents, and seismic shocks, as it is located on an active volcano in an earthquake-prone area. Therefore, in 2013 and 2014 a digital documentation project was conducted with the support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modelling. Additionally, non-invasive geophysical prospection was tested in order to investigate its potential to explore and map as yet buried archaeological remains. This article describes the project and the generated results.

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de-facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in larger parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. Our results indicate that satisfactory performance can be obtained with significantly less manual annotation effort, by exploiting noisy large-scale training data. (Published in IEEE Transactions on Geoscience and Remote Sensing.)
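
    A minimal sketch of this kind of training setup, assuming PyTorch and a toy fully convolutional network rather than the state-of-the-art architecture used in the paper: aerial image patches are paired with (possibly noisy) rasterized OpenStreetMap labels and optimized with a per-pixel cross-entropy loss.

```python
# Minimal PyTorch sketch of pixel-wise segmentation training with (possibly noisy)
# rasterized OpenStreetMap labels. The tiny FCN and the three classes
# (background, building, road) are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),      # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)

model = TinyFCN()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch: images (B,3,H,W), OSM-derived labels (B,H,W).
images = torch.rand(2, 3, 128, 128)
labels = torch.randint(0, 3, (2, 128, 128))
optim.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optim.step()
```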

    Benchmark of machine learning methods for classification of a Sentinel-2 image

    Thanks mainly to ESA and USGS, a large volume of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layer perceptron, multi-layer perceptron ensemble, ctree, boosting, and logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification into 11 land-cover classes of an area of about 60 km², obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study, five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validation subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, and (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full) and k-fold cross-validation (kfold) with ten folds. Results from validation of predictions over the whole dataset (full) show the random forests method with the highest values, with a kappa index ranging from 0.55 to 0.42 for the largest and smallest numbers of training pixels, respectively. The two neural networks (multi-layer perceptron and its ensemble) and the support vector machines, with the default radial basis function kernel, follow closely with comparable performance.
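
    A benchmarking loop of this kind can be sketched with scikit-learn as below, cross-validating a few of the listed classifiers on per-pixel band values. The feature matrix, the accuracy metric and the chosen estimators are placeholders, not the study's exact configuration.

```python
# Sketch of the benchmarking setup with scikit-learn: 10-fold cross-validation of
# a few of the listed classifiers on band values extracted from Sentinel-2 pixels.
# X (n_pixels, n_bands) and y are random placeholders standing in for real data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.rand(500, 10)          # placeholder reflectances, 10 bands
y = np.random.randint(0, 5, 500)     # placeholder labels for the 5 classes

models = {
    "random forest": RandomForestClassifier(n_estimators=200),
    "SVM (RBF)": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=10)   # 10 folds, as in the abstract
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```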

    Collection and integration of local knowledge and experience through a collective spatial analysis

    This article discusses the convenience of adopting an approach of Collective Spatial Analysis in P/PGIS processes, with the aim of improving the collection and integration of knowledge and local expertise in decision-making, mainly in the fields of planning and adopting territorial policies. Based on empirical evidence resulting from a review of scientific articles from the Web of Science database, which shows how the knowledge and experience of people involved in decision-making supported by P/PGIS are collected and used, a prototype of a WEB-GSDSS application has been developed. This prototype allows a group of people to participate anonymously, in an asynchronous and distributed way, in a decision-making process to locate goods, services, or events through the convergence of their views. Via this application, two case studies for planning services in districts of Ecuador and Italy were carried out. Early results suggest that in P/PGIS local and external actors contribute their knowledge and experience to generate information that is afterwards integrated and analysed in the decision-making process. In a Collective Spatial Analysis, by contrast, these actors analyse and generate information together with their knowledge and experience during the decision-making process itself. We conclude that, although the Collective Spatial Analysis approach presented is at a subjective and initial stage, it does drive improvements in the collection and integration of knowledge and local experience, foremost among them an interdisciplinary geo-consensus.

    Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services

    One of the most widely implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status or service quality of these online WMS resources. To fill this void, we investigated global WMS resources and performed distributed performance monitoring of these services. This paper explicates a crawling method to discover WMSs and a distributed monitoring framework that was used to monitor 46,296 WMSs continuously for over one year. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of the basic operations (i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users and developers of standards to grasp the status of global WMS resources, as well as to understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery, service performance evaluation and guide service performance improvements. (24 pages; 15 figures.)
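
    A single monitoring probe might look like the sketch below, which times a GetCapabilities and a GetMap request against one endpoint using standard WMS 1.3.0 parameters. The endpoint URL, layer name and bounding box are placeholders; the paper's distributed framework is of course far more elaborate.

```python
# Sketch of one monitoring probe: time a GetCapabilities and a GetMap request
# against a single WMS endpoint. Parameter names follow OGC WMS 1.3.0; the URL
# and layer are hypothetical placeholders.
import time
import requests

def probe_wms(base_url: str, layer: str, timeout: float = 30.0) -> dict:
    result = {}
    caps = {"service": "WMS", "version": "1.3.0", "request": "GetCapabilities"}
    getmap = {
        "service": "WMS", "version": "1.3.0", "request": "GetMap",
        "layers": layer, "styles": "", "crs": "EPSG:4326",
        "bbox": "-90,-180,90,180", "width": 256, "height": 256,
        "format": "image/png",
    }
    for name, params in (("GetCapabilities", caps), ("GetMap", getmap)):
        t0 = time.monotonic()
        try:
            r = requests.get(base_url, params=params, timeout=timeout)
            result[name] = {"status": r.status_code,
                            "seconds": time.monotonic() - t0}
        except requests.RequestException as exc:
            result[name] = {"error": type(exc).__name__}
    return result

# print(probe_wms("https://example.org/wms", "some_layer"))  # hypothetical endpoint
```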

    Measuring delays for bicycles at signalized intersections using smartphone GPS tracking data

    The article describes an application of global positioning system (GPS) tracking data (floating bike data) for measuring delays for cyclists at signalized intersections. For selected intersections, we used trip data collected by smartphone tracking to calculate the average delay for cyclists by interpolating between GPS locations before and after the intersection. The outcomes proved to be stable for different strategies for selecting the GPS locations used in the calculation, although GPS locations too close to the intersection tended to lead to an underestimation of the delay. Therefore, the sampling frequency of the GPS tracking data is an important parameter to ensure that suitable GPS locations are available before and after the intersection. The calculated delays are realistic values compared to the theoretically expected values, which are often applied because of the lack of observed data. For some of the analyzed intersections, however, the calculated delays lay outside the expected range, possibly because the statistics assumed a random arrival rate of cyclists. This condition may not be met when, for example, bicycles arrive in platoons because of an upstream intersection. This shows that GPS-based delays can form a valuable addition to the theoretically expected values.
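
    The delay idea can be illustrated with the minimal sketch below: one GPS fix upstream and one downstream of the intersection are compared against the travel time implied by an assumed free-flow cycling speed. The speed value and the along-route distance are illustrative assumptions, not the paper's calibration.

```python
# Minimal sketch: estimated intersection delay as observed travel time between one
# fix before and one fix after the intersection, minus the undelayed (free-flow)
# travel time. Free-flow speed and distance are illustrative assumptions.
from datetime import datetime

def intersection_delay(t_before: datetime, t_after: datetime,
                       dist_along_route_m: float,
                       free_flow_speed_ms: float = 4.5) -> float:
    """Delay in seconds: observed travel time minus free-flow travel time."""
    observed = (t_after - t_before).total_seconds()
    expected = dist_along_route_m / free_flow_speed_ms
    return max(0.0, observed - expected)

# Example: 210 m between the selected fixes, crossed in 95 s instead of ~47 s.
d = intersection_delay(datetime(2020, 5, 4, 8, 12, 10),
                       datetime(2020, 5, 4, 8, 13, 45),
                       dist_along_route_m=210.0)
print(f"estimated delay: {d:.0f} s")
```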