9 research outputs found

    An automated approach to evaluating the completeness quality of geospatial information

    The program to accelerate the provision of large-scale geospatial information must be matched by an equally fast quality-evaluation process. The completeness quality of large-scale geospatial information is still evaluated visually, by checking features against the source data, an orthophoto mosaic. Visual evaluation takes considerable time and cannot keep pace with the accelerated provision of large-scale geospatial information. This study therefore seeks a standardized, automated approach to evaluating the completeness of spatial data. The evaluation targets the building layer, stored as polygons, which is one of the large-scale base geospatial information products. Completeness is measured by the amount of excess data (commission) and missing data (omission). The quality-evaluation approach is implemented by building tools that detect omission and commission errors. The detection tools use feature-level matching, comparing every feature of the test data against the reference. Four scenarios were used to test the tools, varying the geometry of the comparison data and the matching option. The results show that a polygon comparator yields better true commission and true omission than a point comparator (2.5 m radius) under the "Intersect" and "Have Their Center In" matching options. The automated completeness evaluation still produces erroneous omission and commission detections. To support the accelerated provision of large-scale geospatial information, a semi-automatic quality-evaluation procedure is therefore proposed, combining automatic detection with visual validation to obtain better completeness evaluation results.
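
    The omission/commission check described above can be approximated with a feature-level spatial join. Below is a minimal sketch, assuming GeoPandas and two polygon layers whose file names are illustrative; the authors' actual tools are not published with the abstract.

```python
# Minimal completeness check: feature-level matching of a test building
# layer against a reference layer, mirroring the "Intersect" option from
# the abstract. File names are assumptions for illustration.
import geopandas as gpd

test = gpd.read_file("test_buildings.gpkg")       # layer under evaluation
ref = gpd.read_file("reference_buildings.gpkg")   # reference layer

# A test feature is matched if it intersects any reference feature.
matched_test = gpd.sjoin(test, ref, how="inner", predicate="intersects")
matched_ref = gpd.sjoin(ref, test, how="inner", predicate="intersects")

# Commission: excess test features with no reference counterpart.
commission = test.loc[~test.index.isin(matched_test.index)]
# Omission: reference features missing from the test data.
omission = ref.loc[~ref.index.isin(matched_ref.index)]

print(f"commission: {len(commission)}, omission: {len(omission)}")
# The "Have Their Center In" option can be emulated by joining centroids
# instead: gpd.sjoin(test.set_geometry(test.centroid), ref, predicate="within")
```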

    Evaluating the Reliability, Coverage, and Added Value of Crowdsourced Traffic Incident Reports from Waze

    Traffic managers strive to have the most accurate information on road conditions, normally by using sensors and cameras, to act effectively in response to incidents. The prevalence of crowdsourced traffic information that has become available to traffic managers brings hope and yet raises important questions about the proper strategy for allocating resources to monitoring methods. Although many researchers have indicated the potential value in crowdsourced data, it is crucial to quantitatively explore its validity and coverage as a new source of data. This research studied crowdsourced data from a smartphone navigation application called Waze to identify the characteristics of this social sensor and provide a comparison with some of the common sources of data in traffic management. Moreover, this work quantifies the potential additional coverage that Waze can provide to existing sources of the advanced traffic management system (ATMS). One year of Waze data was compared with the incidents recorded in Iowa's ATMS over the same timeframe. Overall, the findings indicated that the crowdsourced data stream from Waze is an invaluable source of information for traffic monitoring with broad coverage (covering 43.2% of ATMS crash and congestion reports), timely reporting (on average 9.8 minutes earlier than a probe-based alternative), and reasonable geographic accuracy. Waze reports currently make significant contributions to incident detection and were found to have potential for further complementing the ATMS coverage of traffic conditions. In addition to these findings, the crowdsourced data evaluation procedure in this work provides researchers with a flexible framework for data evaluation.
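
    The coverage and timeliness figures above rest on matching crowdsourced reports to ATMS records in space and time. A rough sketch of that kind of matching follows, assuming pandas DataFrames with lat, lon and time columns; the 2 km / 30 min windows and the haversine helper are illustrative assumptions, not the paper's parameters.

```python
# Space-time matching of ATMS incidents to crowdsourced reports; for each
# ATMS incident, find the earliest nearby Waze report and measure the lead.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def match_reports(atms: pd.DataFrame, waze: pd.DataFrame,
                  max_km=2.0, max_minutes=30.0):
    """Return (coverage fraction, mean minutes Waze preceded ATMS)."""
    lead_times, covered = [], 0
    for inc in atms.itertuples():
        dist = haversine_km(inc.lat, inc.lon, waze["lat"], waze["lon"])
        gap = (waze["time"] - inc.time).abs().dt.total_seconds() / 60.0
        nearby = waze[(dist <= max_km) & (gap <= max_minutes)]
        if not nearby.empty:
            covered += 1
            lead_times.append(
                (inc.time - nearby["time"].min()).total_seconds() / 60.0)
    coverage = covered / len(atms)
    return coverage, float(np.mean(lead_times)) if lead_times else None
```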

    Reconstructing historical 3D city models

    Historical maps are increasingly used for studying how cities have evolved over time, and their applications are multiple: understanding past outbreaks, urban morphology, economy, etc. However, these maps are usually scans of older paper maps, and they are therefore restricted to two dimensions. We investigate in this paper how historical maps can be ‘augmented’ with the third dimension so that buildings have heights, volumes, and roof shapes. The resulting 3D city models, also known as digital twins, have several benefits in practice since it is known that some spatial analyses are only possible in 3D: visibility studies, wind flow analyses, population estimation, etc. At this moment, reconstructing historical models is (mostly) a manual and very time-consuming operation, and it is plagued by inaccuracies in the 2D maps. In this paper, we present a new methodology to reconstruct 3D buildings from historical maps, developed with the aim of automating the process as much as possible, and we discuss the engineering decisions we made when implementing it. Our methodology uses extra datasets for height extraction, reuses the 3D models of buildings that still exist, and infers other buildings with procedural modelling. We have implemented and tested our methodology with real-world historical maps of European cities for different times between 1700 and 2000.
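
    The cascading source selection the abstract describes (reuse a surviving building's model, extrude from auxiliary height data, otherwise model procedurally) can be sketched as below. This is a hypothetical condensation: all record fields, function names, and the storey-based fallback are assumptions, not the paper's implementation.

```python
# Pick the best available source of 3D information for one historical
# building footprint, falling back from richest to poorest evidence.
from typing import Optional

def extrude_footprint(footprint, height: float) -> dict:
    """Block (LoD1) model: the footprint polygon extruded to a flat roof."""
    return {"kind": "extrusion", "footprint": footprint, "height": height}

def procedural_model(footprint, storeys: int = 2,
                     storey_height: float = 3.0) -> dict:
    """Fallback: infer a plausible height, e.g. from typical storey counts."""
    return extrude_footprint(footprint, storeys * storey_height)

def reconstruct(building: dict, modern_models: dict) -> dict:
    model_id: Optional[str] = building.get("existing_model_id")
    if model_id in modern_models:
        return modern_models[model_id]   # building still exists: reuse model
    if building.get("height") is not None:
        # Height known from an auxiliary dataset: simple extrusion.
        return extrude_footprint(building["footprint"], building["height"])
    return procedural_model(building["footprint"])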

    Conflating point of interest (POI) data: A systematic review of matching methods

    Point of interest (POI) data provide digital representations of places in the real world, and have been increasingly used to understand human-place interactions, support urban management, and build smart cities. Many POI datasets have been developed, which often have different geographic coverages, attribute focuses, and data quality. From time to time, researchers may need to conflate two or more POI datasets in order to build a better representation of the places in the study areas. While various POI conflation methods have been developed, a systematic review of them is lacking; consequently, it is difficult for researchers new to POI conflation to quickly grasp and use these existing methods. This paper fills such a gap. Following the protocol of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we conduct a systematic review by searching through three bibliographic databases using reproducible syntax to identify related studies. We then focus on a main step of POI conflation, i.e., POI matching, and systematically summarize and categorize the identified methods. Current limitations and future opportunities are discussed afterwards. We hope that this review can provide some guidance for researchers interested in conflating POI datasets for their research.
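
    Many of the matching methods such reviews cover combine a spatial gate with an attribute (typically name) similarity score. A minimal sketch of that family follows; the 100 m gate, the 0.8 threshold, and the crude string measure are illustrative assumptions.

```python
# Greedy one-to-one POI matching: spatial gate first, then name similarity.
# Each POI is assumed to be a dict with 'name', 'x', 'y' (metres).
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """String similarity in [0, 1]; real systems often use token-based
    or embedding-based measures instead."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_pois(pois_a, pois_b, max_metres=100.0, min_sim=0.8):
    matches, used_b = [], set()
    for a in pois_a:
        best, best_score = None, min_sim
        for j, b in enumerate(pois_b):
            if j in used_b:
                continue
            d = ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5
            if d > max_metres:
                continue                      # outside the spatial gate
            score = name_similarity(a["name"], b["name"])
            if score >= best_score:
                best, best_score = j, score
        if best is not None:
            used_b.add(best)
            matches.append((a, pois_b[best]))
    return matches
```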

    Adopting and incorporating crowdsourced traffic data in advanced transportation management systems

    The widespread availability of internet and mobile devices has made crowdsourced reports a considerable source of information in many domains. Traffic managers, among others, have started using crowdsourced traffic incident reports (CSTIRs) to complement their existing sources of traffic monitoring. One of the prominent providers of CSTIRs is Waze. In this dissertation, first a quantitative analysis was conducted to evaluate Waze data in comparison to the existing sources of the Iowa Department of Transportation. The potential added coverage that Waze can provide was also estimated. Redundant CSTIRs of the same incident were found to be one of the main challenges of Waze and CSTIRs in general. To leverage the value of the redundant reports and address this challenge, a state-of-the-art cluster analysis was implemented to reduce the redundancies while providing further information about the incident. The clustered CSTIRs indicate the area impacted by an incident and provide a basis for estimating the reliability of the cluster. Furthermore, the challenges with clustering CSTIRs were described and recommendations were made for parameter tuning and cluster validation. Finally, an open-source software package was offered to implement the clustering method in near real-time. This software downloads and parses the raw data, implements clustering, tracks clusters, assigns a reliability score to clusters, and provides a RESTful API for information dissemination portals and web pages to use the data for multiple applications within the DOT and for the general public. With emerging technologies such as connected vehicles and vehicle-to-infrastructure (V2I) communication, CSTIRs and similar types of data are expected to grow. The findings and recommendations in this work, although implemented on Waze data, will be beneficial to the analysis of these emerging sources of data.
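
    A sketch of the kind of spatio-temporal clustering this describes is given below, using scikit-learn's DBSCAN. Folding time into the distance metric by scaling minutes to "equivalent metres" is a common trick; the specific scales here are assumptions, and the dissertation's actual algorithm and parameters may differ.

```python
# Cluster redundant incident reports in space and time; each resulting
# cluster approximates one real-world incident, and larger clusters can
# be assigned a higher reliability score.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_reports(xy_metres: np.ndarray, minutes: np.ndarray,
                    eps_metres=200.0, metres_per_minute=20.0):
    """xy_metres: (n, 2) projected coordinates; minutes: (n,) timestamps."""
    t_scaled = (minutes * metres_per_minute).reshape(-1, 1)
    features = np.hstack([xy_metres, t_scaled])
    # min_samples=1: singleton clusters represent single, unconfirmed reports.
    labels = DBSCAN(eps=eps_metres, min_samples=1).fit_predict(features)
    return labels   # cluster id per report; more members -> more reliable
```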

    Abstraction and cartographic generalization of geographic user-generated content: use-case motivated investigations for mobile users

    On a daily basis, a conventional internet user queries different internet services (available on different platforms) to gather information and make decisions. In most cases, knowingly or not, this user consumes data that has been generated by other internet users about his/her topic of interest (e.g. an ideal holiday destination with a family traveling by a van for 10 days). Commercial service providers, such as search engines, travel booking websites, video-on-demand providers, food takeaway mobile apps and the like, have found it useful to rely on the data provided by other users who have commonalities with the querying user. Examples of commonalities are demography, location, interests, internet address, etc. This process has been in practice for more than a decade and helps the service providers to tailor their results based on the collective experience of the contributors. There has also been interest in different research communities (including GIScience) in analyzing and understanding the data generated by internet users. The research focus of this thesis is on finding answers for real-world problems in which a user interacts with geographic information. The interactions can be in the form of exploration, querying, zooming and panning, to name but a few. We have aimed our research at investigating the potential of using geographic user-generated content to provide new ways of preparing and visualizing these data. Based on different scenarios that fulfill user needs, we have investigated the potential of finding new visual methods relevant to each scenario. The methods proposed are mainly based on pre-processing and analyzing data that has been offered by data providers (both commercial and non-profit organizations). In all cases, however, the data were contributed actively by ordinary internet users (as opposed to passive data collection by sensors). The main contributions of this thesis are the proposals for new ways of abstracting geographic information based on user-generated content contributions. Addressing different use-case scenarios and based on different input parameters, data granularities and, evidently, geographic scales, we have provided proposals for contemporary users (with a focus on the users of location-based services, or LBS). The findings are based on different methods such as semantic analysis, density analysis and data enrichment. If the findings of this dissertation are realized in practice, LBS users will be able to explore large amounts of geographic information in more abstract and aggregated ways and get their results based on the contributions of other users. The research outcomes can be classified at the intersection of cartography, LBS and GIScience. Based on our first use case we have proposed the inclusion of an extended semantic measure directly in the classic map generalization process. In our second use case we have focused on simplifying geographic data depiction by reducing the amount of information using a density-triggered method. And finally, the third use case was focused on summarizing and visually representing relatively large amounts of information by depicting geographic objects matched to the salient topics that emerged from the data.
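
    As a concrete illustration of the density-triggered reduction mentioned for the second use case, a thinning pass might keep at most k top-ranked features per grid cell. This is only a sketch of the general idea; the cell size, the cap k, and the ranking attribute are assumptions, not the dissertation's method.

```python
# Density-triggered thinning: the densest cells lose the most features,
# so the depiction stays readable at a given scale.
from collections import defaultdict

def thin_by_density(points, cell_size=250.0, k=3):
    """points: list of dicts with 'x', 'y' (metres) and a 'rank' score."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p["x"] // cell_size), int(p["y"] // cell_size))].append(p)
    kept = []
    for members in cells.values():
        members.sort(key=lambda p: p["rank"], reverse=True)
        kept.extend(members[:k])   # keep only the k most salient per cell
    return kept
```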

    Automatic evaluation of geospatial data quality using web services

    The geomatics sector is going through a data-overload scenario in which new geospatial datasets are generated almost daily. However, there is little or no information about the quality of these datasets, and they should be evaluated in order to give users some indication of their quality. In this context, we propose a solution for the automatic quality evaluation of geospatial datasets using the web services platform. This approach is composed of automatic evaluation procedures for quality control of topological consistency, completeness, and positional accuracy as described in the Brazilian quality standard. Some procedures require an external dataset for comparison purposes. Hence, we provide a set of synthetic datasets and apply an experimental design over them in order to select suitable methods for finding the correspondences between datasets. The solution has an interoperability tier that links users and automatic procedures using the standardized interface of the Web Processing Service (WPS).
    Thesis, Univ. Jaén, Departamento Ingeniería Cartográfica, Geodésica y Fotogrametría. Defended 5 June 2017.
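
    Interaction with such an interoperability tier could look like the sketch below, using OWSLib. The endpoint URL, the process identifier, and the input names are hypothetical; only the OWSLib calls follow the library's actual API.

```python
# Invoke a (hypothetical) quality-evaluation process through the standard
# WPS interface and wait for the asynchronous execution to finish.
from owslib.wps import WebProcessingService, ComplexDataInput, monitorExecution

wps = WebProcessingService("https://example.org/wps")   # hypothetical endpoint
inputs = [
    ("test_dataset", ComplexDataInput("https://example.org/data/test.gml")),
    ("reference_dataset", ComplexDataInput("https://example.org/data/ref.gml")),
]
execution = wps.execute("quality:completeness", inputs)  # hypothetical process id
monitorExecution(execution)        # poll the server until the process completes
print(execution.status, execution.statusMessage)
```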