
    Map Conflation using Piecewise Linear Rubber-Sheeting Transformation between Layout and As-Built Plans in Kumasi Metropolis.

    Context and background: Accurately integrating different geospatial datasets remains a challenging task because diverse geospatial data may have different accuracy levels and formats. Surveyors typically create several arbitrary coordinate systems at local scales, which can lead to a variety of coordinate datasets that remain unconsolidated and inhomogeneous. Methodology: In this study, a piecewise rubber-sheeting conflation, or geometric correction, approach is used to transform between such a pair of datasets for accurate data integration. Rubber-sheeting, or piecewise linear homeomorphism, is necessary because the data from the two plans rarely match up correctly, for reasons such as the method of setting out from the design to the ground situation and/or the failure of the design to accommodate existing developments. Results: The conflation in ArcGIS using the rubber-sheet transformation reduced the mean displacement error from 71.46 feet (21.78 meters) to 1.58 feet (0.48 meters), an improvement of almost 98%. The rubber-sheet technique gave a near-exact point-matching transformation and is recommended for integrating zonal plans with as-built surveys to address the challenges of correcting zonal plans in land records. It is further recommended to investigate incorporating textual information recognition and address geocoding so that on-site road names and plot numbers can be used to detect points for matching.
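
    A minimal sketch of the piecewise linear idea, assuming matched control points between the layout plan and the as-built survey are already available; the Delaunay-triangulation approach and all names below are illustrative, not the authors' exact ArcGIS workflow:

        import numpy as np
        from scipy.spatial import Delaunay

        def rubber_sheet(src_ctrl, dst_ctrl, points):
            """Warp layout-plan points into the as-built frame via matched control points."""
            src_ctrl = np.asarray(src_ctrl, float)
            dst_ctrl = np.asarray(dst_ctrl, float)
            points = np.asarray(points, float)
            tri = Delaunay(src_ctrl)            # triangulate the source control points
            simplex = tri.find_simplex(points)  # triangle containing each point (-1 = outside)
            out = np.full_like(points, np.nan)
            for t, verts in enumerate(tri.simplices):
                mask = simplex == t
                if not mask.any():
                    continue
                # affine transform mapping this triangle's source vertices onto its
                # target vertices: [x y 1] @ A = [x' y']
                A = np.linalg.solve(np.column_stack([src_ctrl[verts], np.ones(3)]),
                                    dst_ctrl[verts])
                out[mask] = np.column_stack([points[mask], np.ones(mask.sum())]) @ A
            return out

        # mean displacement error at check points, before vs. after warping:
        # np.linalg.norm(rubber_sheet(src, dst, checks) - truth, axis=1).mean()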

    Data integration in a modular and parallel grid-computing workflow

    In the past decades, a wide range of complex processes has been developed to solve specific geospatial data integration problems. As a drawback, these complex processes are often not sufficiently transferable and interoperable. We propose modularization of the whole data integration process into reusable, exchangeable, and multi-purpose web services to overcome these drawbacks. We discuss both a high-level split of the process into subsequent modules, such as pre-processing and feature matching, and a finer-granular split within these modules. Complex integration problems can thereby be addressed by chaining selected services as part of a geo-processing workflow. Parallelization is needed for processing massive amounts of data or complex algorithms. In this paper, the two concepts of task and data parallelization are compared and examples of their usage are given. The presented work provides vector data integration within grid-computing workflows of the German Spatial Data Infrastructure Grid (SDI-Grid) project.
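
    A minimal sketch of the data-parallelization concept, assuming the workflow has already been split into modules; match_features is a hypothetical stand-in for one such module, not the SDI-Grid implementation:

        from multiprocessing import Pool

        def match_features(chunk):
            # placeholder for one self-contained module (e.g. feature matching)
            return [f for f in chunk if f is not None]

        def split(features, n_chunks):
            size = max(1, len(features) // n_chunks)
            return [features[i:i + size] for i in range(0, len(features), size)]

        def parallel_integrate(features, n_workers=4):
            chunks = split(features, n_workers)          # data parallelization: same task,
            with Pool(n_workers) as pool:                # different partitions of the data
                parts = pool.map(match_features, chunks)
            return [f for part in parts for f in part]   # merge the partial results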

    ROADS DATA CONFLATION USING UPDATE HIGH RESOLUTION SATELLITE IMAGES


    Automated conflation framework for integrating transportation big datasets

    The constant merging of data from various sources, commonly known as conflation, has been a vital part of every phase of development, be it planning, governing the existing system, or studying the effects of an intervention in the system. Conflation enriches existing data by integrating information from the numerous sources available. The process becomes unusually critical because of the complexities these diverse data bring along, such as the distinct accuracies with which the data have been collected, differing projections, diverse nomenclature, etc., and hence demands special attention. Although conflation has always been a topic of interest among researchers, the area has witnessed significant enthusiasm recently due to advancements in data collection methods. Even with this escalation in interest, the developed methods have not kept pace with the expansion the field of data collection has made. Contemporary conflation algorithms still lack an efficient automated technique; most existing systems demand some form of human involvement to achieve higher accuracy. Through this work, an effort has been made to establish a fully automated process to conflate road segments of the state of Missouri from two big data sources. Taking traditional conflation a step further, this study also enriches the road segments with traffic information such as delay, volume, and route safety by conflating them with available traffic and crash data. The conflation rate achieved through this algorithm was 80-95 percent for the different data sources. The final conflated layer gives detailed information about road networks coupled with traffic parameters such as delay, travel time, route safety, and travel time reliability. by Neetu Choubey. Includes bibliographical references.
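
    A minimal sketch of one way two road networks could be matched automatically, assuming both are lists of Shapely LineStrings in a common projection; the buffer-overlap criterion and the thresholds are illustrative, not the thesis algorithm:

        from shapely.geometry import LineString

        def match_segments(roads_a, roads_b, buffer_m=15.0, min_overlap=0.8):
            """Return index pairs (i, j) where segment i of A matches segment j of B."""
            matches = []
            for i, a in enumerate(roads_a):
                corridor = a.buffer(buffer_m)        # tolerance corridor around segment A
                best_j, best_score = None, 0.0
                for j, b in enumerate(roads_b):
                    # fraction of B's length lying inside A's corridor
                    score = corridor.intersection(b).length / b.length if b.length else 0.0
                    if score > best_score:
                        best_j, best_score = j, score
                if best_j is not None and best_score >= min_overlap:
                    matches.append((i, best_j))      # traffic/crash attributes can be joined here
            return matches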

    Management and Conflation of Multiple Representations within an Open Federation Platform

    Building up spatial data infrastructures involves dealing with heterogeneous data sources, which often bear inconsistencies and contradictions. One main reason for these inconsistencies is that one and the same real-world phenomenon is often stored in multiple representations within different databases. The particular goal of this paper is to describe how the problems arising from multiple representations can be dealt with in spatial data infrastructures, focusing especially on the concepts developed within the Nexus project of the University of Stuttgart, which is implementing an open, federated infrastructure for context-aware applications. A main part of this contribution explains the efforts undertaken to resolve the conflicts that occur between multiple representations during conflation or merging processes, in order to provide applications with consolidated views of the underlying data.
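
    A minimal sketch of one possible conflict-resolution rule for two representations of the same object (position averaged by stated accuracy, preferred source wins attribute conflicts); the field names are hypothetical and this is not the Nexus platform's actual strategy:

        def merge_representations(rep_a, rep_b):
            """Merge two representations of the same real-world object into one record."""
            wa = 1.0 / rep_a.get("accuracy_m", 1.0)          # higher accuracy -> larger weight
            wb = 1.0 / rep_b.get("accuracy_m", 1.0)
            x = (wa * rep_a["x"] + wb * rep_b["x"]) / (wa + wb)
            y = (wa * rep_a["y"] + wb * rep_b["y"]) / (wa + wb)
            preferred, other = (rep_a, rep_b) if wa >= wb else (rep_b, rep_a)
            attrs = {**other.get("attrs", {}), **preferred.get("attrs", {})}  # preferred wins conflicts
            return {"x": x, "y": y, "attrs": attrs}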

    USAGE OF VARIANCE IN DETERMINATION OF SINUOSITY INTERVALS FOR ROAD MATCHING

    Geo-object matching is a process that identifies, classifies and matches object pairs with regard to their maximum similarity across whole datasets. The matching process is used for updating, aligning, optimizing, integrating and/or measuring the quality of road networks. Several metrics are used in matching algorithms, such as Hausdorff distance, orientation, valence and sinuosity. Sinuosity is the ratio of the actual length of a road to the straight-line length between the start and end points of the same road; it describes how curved a road is. In a matching process, it is first necessary to determine the sinuosity thresholds or intervals. Sinuosity intervals can be determined by several data classification methods, such as equal interval, quantile, natural breaks and geometrical interval. Furthermore, the intervals defined by the Ireland Transportation Agency can be used for this purpose. In this study, the aim was to find out whether variance can also be used to determine sinuosity intervals. An experiment was conducted to compare all of the methods mentioned above. According to the results, the efficiency of the sinuosity intervals determined by these methods in road matching ranges from 37.4% to 49.4%, and the intervals determined by the variance appear to be the most efficient.
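
    A minimal sketch of the sinuosity measure and of one way variance could define class breaks (the mean plus/minus one standard deviation); the break rule is an assumption for illustration only:

        import math

        def sinuosity(coords):
            """coords: ordered (x, y) vertices of a road centreline."""
            actual = sum(math.dist(p, q) for p, q in zip(coords, coords[1:]))
            straight = math.dist(coords[0], coords[-1])
            return actual / straight if straight else float("inf")

        def variance_intervals(values):
            """Class breaks at the mean and at one standard deviation either side."""
            mean = sum(values) / len(values)
            std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
            return [min(values), mean - std, mean, mean + std, max(values)]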

    A Geospatial Cyberinfrastructure for Urban Economic Analysis and Spatial Decision-Making

    Urban economic modeling and effective spatial planning are critical tools for achieving urban sustainability. However, in practice, many technical obstacles, such as information islands, poor documentation of data and the lack of software platforms to facilitate virtual collaboration, are challenging the effectiveness of decision-making processes. In this paper, we report on our efforts to design and develop a geospatial cyberinfrastructure (GCI) for urban economic analysis and simulation. This GCI provides an operational graphic user interface, built upon a service-oriented architecture, to allow (1) widespread sharing and seamless integration of distributed geospatial data; (2) an effective way to address the uncertainty and positional errors encountered in fusing data from diverse sources; (3) the decomposition of complex planning questions into atomic spatial analysis tasks and the generation of a web service chain to tackle such complex problems; and (4) capturing and representing the provenance of geospatial data to trace its flow in the modeling task. The Greater Los Angeles Region serves as the test bed. We expect this work to contribute to effective spatial policy analysis and decision-making through the adoption of advanced GCI and to broaden the application coverage of GCI to include urban economic simulations.
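
    A minimal sketch of the service-chain idea, with atomic analysis steps composed into a workflow; plain functions stand in for real web services and the step names are hypothetical:

        from functools import reduce

        def chain(*steps):
            """Compose atomic analysis tasks into a single workflow (service chain)."""
            return lambda data: reduce(lambda d, step: step(d), steps, data)

        # hypothetical atomic tasks
        harmonise = lambda d: {**d, "crs": "EPSG:4326"}
        conflate  = lambda d: {**d, "fused": True}
        summarise = lambda d: {**d, "report": "per-tract estimates"}

        workflow = chain(harmonise, conflate, summarise)
        result = workflow({"sources": ["parcels", "census", "firms"]})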

    Trends and concerns in digital cartography

    CISRG discussion paper ;

    W3C PROV to describe provenance at the dataset, feature and attribute levels in a distributed environment

    Provenance, a metadata component referring to the origin of a specific geographic digital feature or product and the processes undertaken to obtain it, is crucial for evaluating the quality of spatial information and helps in reproducing and replicating geospatial processes. However, the heterogeneity and complexity of geospatial processes, which can potentially modify part or all of a dataset's content, make evident the necessity of describing geospatial provenance at the dataset, feature and attribute levels. This paper presents the application of W3C PROV, a generic specification for expressing provenance records, to representing geospatial data provenance at these different levels. In particular, W3C PROV is applied to feature models, where geospatial phenomena are represented as individual features described by spatial (points, lines, polygons, etc.) and non-spatial (names, measures, etc.) attributes. The paper first analyses the potential for representing geospatial provenance in a distributed environment at the three levels of granularity using the ISO 19115 and W3C PROV models. Next, an approach for applying the generic W3C PROV provenance model to the geospatial environment is presented. As a proof of concept, we provide an application of W3C PROV to describe geospatial provenance at the feature and attribute levels. The use case consists of a conflation of a U.S. Geological Survey dataset with a National Geospatial-Intelligence Agency dataset. Finally, an example of how to capture the provenance resulting from workflows and chain executions with PROV is also presented. The application uses a web processing service, which enables geospatial processing in a distributed system and allows the provenance information to be captured at the feature and attribute levels based on the W3C PROV ontology.
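
    A minimal sketch of recording conflation provenance at the feature level with the Python prov package; the namespace and identifiers are hypothetical, not those of the USGS/NGA use case:

        from prov.model import ProvDocument

        doc = ProvDocument()
        doc.add_namespace("ex", "http://example.org/conflation#")

        src_a = doc.entity("ex:featureA_123")        # feature from the first input dataset
        src_b = doc.entity("ex:featureB_456")        # matched feature from the second dataset
        result = doc.entity("ex:conflated_123")      # conflated output feature
        run = doc.activity("ex:conflation_run_1")

        doc.used(run, src_a)
        doc.used(run, src_b)
        doc.wasGeneratedBy(result, run)
        doc.wasDerivedFrom(result, src_a)            # feature-level lineage
        doc.wasDerivedFrom(result, src_b)

        print(doc.get_provn())                       # PROV-N serialisation of the record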