
    Modeling Boundaries of Influence among Positional Uncertainty Fields

    Within a GIS environment, the proper use of information requires the identification of the uncertainty associated with it. As such, a substantial amount of research has been dedicated to describing and quantifying spatial data uncertainty. Recent advances in sensor technology and image analysis techniques are making image-derived geospatial data increasingly popular. Along with developments in sensor and image analysis technologies have come departures from conventional point-by-point measurements: current techniques allow the extraction of complex objects as single entities (e.g., road outlines, buildings). As the methods of data extraction advance, so too must the methods of estimating the uncertainty associated with the data. Not only must object uncertainties be modeled; the connections between these uncertainties must also be estimated. Current methods for determining spatial accuracy for lines and areas typically define a zone of uncertainty around the measured line, within which the actual line exists with some probability. Yet within the research community, the proper shape of this 'uncertainty band' is a topic of much dissent. Less contemplated is the manner in which such areas of uncertainty interact and influence one another. The development of positional error models, from the epsilon band and error band to the rigorous G-band, has focused on statistical models for estimating independent line features. These models are not suited to modeling the interactions between the uncertainty fields of adjacent features. At some point, the distributed areas of uncertainty around the features intersect and overlap one another. In such instances, a feature's uncertainty zone is defined not only by its own measurement but also by the uncertainty associated with neighboring features. It is therefore useful to understand and model the interactions between adjacent uncertainty fields. This thesis presents an analysis of estimation and modeling techniques for spatial uncertainty, focusing on the interactions among fields of positional uncertainty for image-derived linear features. Such interactions are assumed to occur between linear features derived from varying methods and sources, allowing the application of an independent error model. A synthetic uncertainty map is derived for a set of linear and areal features, containing distributed fields of uncertainty for individual features. These uncertainty fields are shown to be advantageous for communication and user understanding, as well as conducive to a variety of image processing techniques, which can combine overlapping uncertainty fields to model the interaction between them. Deformable contour models are used to extract sets of continuous uncertainty boundaries for linear features, and are subsequently applied to extract a boundary of influence shared by two uncertainty fields. These methods are then applied to a complex scene of uncertainties, modeling the interactions of multiple objects within the scene. The resulting boundary representations differ from previous independent error models, which do not take neighboring influences into account. By modeling the boundary of interaction among the uncertainties of neighboring features, a more integrated approach to error modeling and analysis can be developed for complex spatial scenes and datasets.
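    The thesis's models are not reproduced here, but the following minimal sketch illustrates the general idea under stated assumptions: each line feature gets a Gaussian positional-uncertainty field rasterised onto a grid, the overlap of two fields marks their interaction zone, and the locus where neither field dominates approximates a boundary of influence. The geometry, sigmas, grid size, and thresholds are all invented for illustration.

```python
# Hypothetical sketch: rasterised Gaussian uncertainty fields around two
# line segments, their interaction zone, and an equal-influence boundary.
# All parameters are illustrative, not taken from the thesis.
import numpy as np

def uncertainty_field(xs, ys, p0, p1, sigma):
    """Gaussian field exp(-d^2 / 2 sigma^2), d = distance to segment p0-p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v = p1 - p0
    # Project each grid point onto the segment and clamp to its endpoints.
    t = ((xs - p0[0]) * v[0] + (ys - p0[1]) * v[1]) / np.dot(v, v)
    t = np.clip(t, 0.0, 1.0)
    d = np.hypot(xs - (p0[0] + t * v[0]), ys - (p0[1] + t * v[1]))
    return np.exp(-d**2 / (2.0 * sigma**2))

ys, xs = np.mgrid[0:100, 0:100]                   # synthetic 100 x 100 grid
f1 = uncertainty_field(xs, ys, (10, 30), (90, 35), sigma=6.0)
f2 = uncertainty_field(xs, ys, (10, 60), (90, 55), sigma=9.0)

interaction = np.minimum(f1, f2)                  # where both fields still act
boundary = (np.abs(f1 - f2) < 0.05) & (interaction > 0.1)  # equal-influence locus
print(f"interaction pixels: {(interaction > 0.1).sum()}, boundary pixels: {boundary.sum()}")
```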

    Horizontal accuracy assessment of very high resolution Google Earth images in the city of Rome, Italy

    Google Earth (GE) has recently become the focus of increasing interest and popularity among the online virtual globes used in scientific research projects, owing to the free and easily accessed satellite imagery it provides with global coverage. Nevertheless, the use of this service raises several research questions about the quality and uncertainty of spatial data (e.g. positional accuracy, precision, consistency), with implications for potential uses such as data collection and validation. This paper analyzes the horizontal accuracy of very high resolution (VHR) GE images of the city of Rome (Italy) for the years 2007, 2011, and 2013. The evaluation was conducted using both Global Positioning System ground truth data and cadastral photogrammetric vertices as independent check points. The validation process includes the comparison of histograms, graph plots, tests of normality, azimuthal direction errors, and the calculation of standard statistical parameters. The results show that the GE VHR imagery of Rome has an overall positional accuracy close to 1 m, sufficient for deriving ground truth samples, measurements, and large-scale planimetric maps.
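    As a rough illustration of the check-point comparison described above, the sketch below computes the RMSE and the NSSDA horizontal accuracy statistic at 95% confidence (1.7308 × RMSEr, which assumes RMSEx ≈ RMSEy), plus the azimuth of each error vector. The coordinates are fabricated placeholders, not the Rome data.

```python
# Hypothetical check-point comparison: image-derived vs. surveyed coordinates.
import numpy as np

measured = np.array([[500010.8, 4640020.3], [500130.2, 4640310.9]])  # from imagery
truth    = np.array([[500010.0, 4640021.0], [500131.0, 4640310.0]])  # check points

dx, dy = (measured - truth).T                  # easting / northing errors
rmse_x = np.sqrt(np.mean(dx**2))
rmse_y = np.sqrt(np.mean(dy**2))
rmse_r = np.sqrt(rmse_x**2 + rmse_y**2)

accuracy_95 = 1.7308 * rmse_r                  # NSSDA horizontal accuracy (95%)
azimuths = np.degrees(np.arctan2(dx, dy)) % 360.0  # direction of each error vector
print(f"RMSEr = {rmse_r:.2f} m, NSSDA 95% = {accuracy_95:.2f} m")
```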

    Developing tools and models for evaluating geospatial data integration of official and VGI data sources

    PhD Thesis
    In recent years, systems have been developed which enable users to produce, share and update information on the web effectively and freely as User Generated Content (UGC), including Volunteered Geographic Information (VGI). Data quality assessment is a major concern for supporting the accurate and efficient spatial data integration required if VGI is to be used alongside official, formal, usually governmental datasets. This thesis aims to develop tools and models for assessing such integration possibilities. Initially, the geometrical similarity of formal and informal data was examined. Geometrical analyses were performed by developing specific programme interfaces to assess the positional, linear and polygon shape similarity among reference field survey data (FS); official datasets, such as data from the Ordnance Survey (OS), UK, and the General Directorate for Survey (GDS), Iraq; and VGI, such as OpenStreetMap (OSM) datasets. A discussion of the design and implementation of these tools and interfaces is presented. A methodology was developed to assess positional and shape similarity by applying different metrics and standard indices: the National Standard for Spatial Data Accuracy (NSSDA) for positional quality, buffering overlays for linear similarity, and moment invariants for polygon shape similarity. The results suggested that difficulties exist for any geometrical integration of OSM data with both benchmark FS and formal datasets, but that formal data is very close to the reference datasets. An investigation was carried out into contributing factors, such as data sources, feature types and number of data collectors, that may affect the geometrical quality of OSM data and consequently the integration of OSM datasets with FS, OS and GDS data. Factorial designs were undertaken in order to develop and implement an experiment to discover the effect of these factors individually and the interactions between them. The analysis found that data source is the most significant factor affecting the geometrical quality of OSM datasets, and that there are interactions among all these factors at different levels. This work also investigated the possibility of integrating the feature classifications of official datasets, such as data from the OS and GDS geospatial data agencies, and informal datasets, such as OSM. In this context, two different models were developed. The first involved evaluating the semantic integration of corresponding feature classifications of the compared datasets. The second was concerned with assessing XML schema matching of the feature classifications of the tested datasets. This initially involved a tokenization process to split classifications composed of multiple words into single words; the feature classifications were then encoded as XML schema trees. Semantic similarity, data type similarity and structural similarity were measured between the nodes of the compared schema trees. Once these three similarities had been computed, a weighted combination technique was adopted to obtain the overall similarity, as sketched below.
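    A minimal sketch of that weighted-combination step, assuming three node-level scores in [0, 1]; the weights and example scores are illustrative, not the thesis's calibrated values.

```python
# Hypothetical weighted combination of node-level similarities.
def overall_similarity(semantic, datatype, structural,
                       w_sem=0.5, w_dt=0.2, w_str=0.3):
    """Weighted sum of three similarity scores, each in [0, 1]."""
    assert abs(w_sem + w_dt + w_str - 1.0) < 1e-9   # weights must sum to 1
    return w_sem * semantic + w_dt * datatype + w_str * structural

# e.g. comparing an OSM classification node against an OS feature-class node:
score = overall_similarity(semantic=0.8, datatype=1.0, structural=0.6)
print(f"overall similarity = {score:.2f}")          # 0.78
```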
    The findings of both sets of analysis were not encouraging as regards the possibility of effectively integrating the feature classifications of VGI datasets, such as OSM, with formal datasets, such as those of OS and GDS.

    Ministry of Higher Education and Scientific Research, Republic of Iraq

    Understanding the spatial database of the multipurpose GIS


    Cartographic Algorithms: Problems of Implementation and Evaluation and the Impact of Digitising Errors

    Cartographic generalisation remains one of the outstanding challenges in digital cartography and Geographical Information Systems (GIS). It is generally assumed that computerisation will remove the spurious variability introduced by the subjective decisions of individual cartographers. This paper demonstrates, through an in-depth study of a line simplification algorithm, that computerisation introduces its own sources of variability. The algorithm, referred to as the Douglas-Peucker algorithm in the cartographic literature, has been widely used in image processing, pattern recognition and GIS for some 20 years. An analysis of this algorithm and a study of some implementations in wide use identify variability resulting from the subjective decisions of software implementors. Spurious variability in software complicates the evaluation and comparison of alternative algorithms for cartographic tasks. No doubt variability in implementation could be removed by rigorous study and specification of algorithms; such future work must address the presence of digitising error in cartographic data. Our analysis suggests that it would be difficult to adapt the Douglas-Peucker algorithm to cope with digitising error without altering the method. Copyright © 1991, Wiley Blackwell. All rights reserved.
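    For reference, a compact textbook rendering of the Douglas-Peucker recursion; real implementations diverge from this idealised form in exactly the subjective details the paper examines (tolerance handling, tie-breaking, split bookkeeping). The input line and tolerance are illustrative.

```python
# Textbook Douglas-Peucker line simplification (illustrative parameters).
import numpy as np

def douglas_peucker(points, tolerance):
    """Keep only points deviating more than `tolerance` from the chord."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    rel = points[1:-1] - start
    # Perpendicular distance of each interior point to the start-end chord.
    dist = np.abs(dx * rel[:, 1] - dy * rel[:, 0]) / np.hypot(dx, dy)
    i = int(np.argmax(dist))
    if dist[i] <= tolerance:
        return np.array([start, end])                    # nothing sticks out
    left = douglas_peucker(points[: i + 2], tolerance)   # split at farthest point
    right = douglas_peucker(points[i + 1:], tolerance)
    return np.vstack([left[:-1], right])                 # drop duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=0.5))
```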

    Implementation of Harbor Management Plans: The Next Generation of Mooring Management Incorporates GPS and GIS

    This paper describes the development of a spatial decision-support and management system for improved coastal and harbor management. The design combines two separate systems: the Global Positioning System (GPS) and a Geographic Information System (GIS), ARCVIEW II. This pilot project specifically examined the use of GPS and GIS to identify, map, and monitor individual mooring buoys in one recreational harbor in Narragansett, Rhode Island. GPS was used to collect positional information and was compared with a traditional marine positioning device, Loran-C; the variations between the two methods are discussed. The advantages and disadvantages of each technology for incorporation into a harbor management tool are also examined. The combination of GPS and GIS is an efficient means of assessing, monitoring and regulating individual mooring buoys at the municipal level. The proposed geographical management system could also be useful for other coastal communities trying to develop more comprehensive and workable strategies for managing complex coastal environments. Similarly, this project may prove beneficial for both state and federal agencies with an interest in coastal resources.
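    The paper's measurements are not reproduced here, but comparing the two positioning methods reduces to measuring the offset between paired fixes for the same buoy, as in this sketch; the coordinates below are invented placeholders, not the Narragansett data.

```python
# Hypothetical GPS vs. Loran-C offset for one mooring buoy.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    r = 6371000.0                      # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

gps   = (41.4353, -71.4616)            # GPS fix of a buoy (hypothetical)
loran = (41.4349, -71.4609)            # Loran-C fix of the same buoy (hypothetical)
print(f"GPS vs Loran-C offset: {haversine_m(*gps, *loran):.1f} m")
```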