509 research outputs found

    Developing tools and models for evaluating geospatial data integration of official and VGI data sources

    PhD Thesis. In recent years, systems have been developed which enable users to produce, share and update information on the web effectively and freely as User Generated Content (UGC), including Volunteered Geographic Information (VGI). Data quality assessment is a major concern in supporting the accurate and efficient spatial data integration required if VGI is to be used alongside official, formal, usually governmental datasets. This thesis aims to develop tools and models for assessing such integration possibilities.

    Initially, the geometrical similarity of formal and informal data was examined. Geometrical analyses were performed by developing dedicated program interfaces to assess positional, linear and polygon shape similarity among reference field survey (FS) data; official datasets, such as those of the Ordnance Survey (OS) in the UK and the General Directorate for Survey (GDS) in Iraq; and VGI such as OpenStreetMap (OSM) datasets. The design and implementation of these tools and interfaces are discussed. A methodology was developed to assess positional and shape similarity by applying different metrics and standard indices: the National Standard for Spatial Data Accuracy (NSSDA) for positional quality, buffering overlays for linear similarity, and moment invariants for polygon shape similarity. The results suggested that difficulties exist for any geometrical integration of OSM data with both benchmark FS and formal datasets, but that formal data is very close to the reference datasets.

    An investigation was carried out into contributing factors, such as data sources, feature types and the number of data collectors, that may affect the geometrical quality of OSM data and consequently the integration of OSM datasets with FS, OS and GDS data. Factorial designs were used to develop and implement an experiment revealing the effect of each factor individually and the interactions between them. The analysis found that data source is the most significant factor affecting the geometrical quality of OSM datasets, and that all these factors interact at different levels.

    This work also investigated the possibility of integrating the feature classifications of official datasets, such as those of the OS and GDS geospatial data agencies, with informal datasets such as OSM. In this context, two models were developed. The first evaluated the semantic integration of corresponding feature classifications of the compared datasets. The second assessed XML schema matching of the feature classifications of the tested datasets. This initially involved a tokenisation process to split classifications composed of multiple words into single words. Feature classifications were then encoded as XML schema trees, and semantic similarity, data type similarity and structural similarity were measured between the nodes of the compared schema trees. Once these three similarities had been computed, a weighted combination technique was adopted to obtain the overall similarity.
    The findings of both sets of analysis were not encouraging as far as the possibility of effectively integrating the feature classifications of VGI datasets, such as OSM, with formal datasets, such as those of OS and GDS, is concerned.

    Ministry of Higher Education and Scientific Research, Republic of Iraq
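    To make the two quality measures concrete, the sketch below is illustrative rather than taken from the thesis: it shows how NSSDA horizontal accuracy is conventionally computed from checkpoint pairs (following FGDC-STD-007.3-1998) and how a weighted combination of the three schema-tree similarities could be formed. The weights are assumed for illustration; the thesis's actual weights are not given in the abstract.

    ```python
    import math

    def nssda_horizontal(tested_pts, reference_pts):
        """NSSDA horizontal accuracy at the 95% confidence level
        (FGDC-STD-007.3-1998), assuming RMSE_x ~= RMSE_y."""
        n = len(tested_pts)
        sq_sum = sum((xt - xr) ** 2 + (yt - yr) ** 2
                     for (xt, yt), (xr, yr) in zip(tested_pts, reference_pts))
        rmse_r = math.sqrt(sq_sum / n)
        return 1.7308 * rmse_r  # 95% factor from the NSSDA standard

    def overall_similarity(semantic, data_type, structural,
                           weights=(0.5, 0.2, 0.3)):  # assumed weights
        """Weighted combination of the three node similarities."""
        w_sem, w_dt, w_st = weights
        assert abs(w_sem + w_dt + w_st - 1.0) < 1e-9
        return w_sem * semantic + w_dt * data_type + w_st * structural

    # Example: OSM points tested against field-survey reference points.
    tested = [(101.2, 50.8), (204.9, 99.7)]
    reference = [(100.0, 50.0), (205.0, 100.0)]
    print(f"NSSDA horizontal accuracy: {nssda_horizontal(tested, reference):.2f} m")
    print(f"Overall similarity: {overall_similarity(0.8, 1.0, 0.6):.2f}")
    ```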

    An Analytics Platform for Integrating and Computing Spatio-Temporal Metrics

    In large-scale context-aware applications, a central design concern is capturing, managing and acting upon location and context data. The ability to understand the collected data and define meaningful contextual events, based on one or more incoming (contextual) data streams, for both single and multiple users, is therefore critical for applications to exhibit location- and context-aware behaviour. In this article, we describe a context-aware, data-intensive metrics platform, focusing primarily on its geospatial support, that allows exactly this: to define and execute metrics which capture meaningful spatio-temporal and contextual events relevant to the application realm. The platform (1) supports metrics definition and execution; (2) provides facilities for real-time, in-application actions upon metrics execution results; and (3) allows post-hoc analysis and visualisation of collected data and results. It thereby offers contextual and geospatial data management and analytics as a service, and allows context-aware application developers to focus on their core application logic. We explain the core platform and its ecosystem of supporting applications and tools, elaborate on the most important conceptual features, and discuss the implementation, realised through a distributed, microservice-based cloud architecture. Finally, we highlight possible application fields and present a real-world case study in the realm of psychological health.
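    The abstract does not reproduce the platform's metric-definition language, but a minimal Python sketch (with hypothetical names and coordinates) conveys the kind of spatio-temporal metric it describes: detecting, from an incoming location stream, the event of a user entering a circular geofence.

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        r = 6_371_000  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def geofence_entries(stream, centre, radius_m):
        """Yield (timestamp, lat, lon) samples at which the user
        transitions from outside to inside the circular geofence."""
        inside = False
        for ts, lat, lon in stream:
            now_inside = haversine_m(lat, lon, *centre) <= radius_m
            if now_inside and not inside:
                yield ts, lat, lon  # entry event: trigger an in-app action here
            inside = now_inside

    # Hypothetical stream of (timestamp, lat, lon) fixes.
    stream = [(0, 51.050, 3.720), (60, 51.054, 3.722), (120, 51.0545, 3.7215)]
    for event in geofence_entries(stream, centre=(51.0543, 3.7215), radius_m=150):
        print("entered geofence at", event)
    ```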

    Earth Observation Open Science and Innovation

    geospatial analytics; social observatory; big earth data; open data; citizen science; open innovation; earth system science; crowdsourced geospatial data; science in society; data science

    Spatial ontologies for architectural heritage

    Informatics and artificial intelligence have generated new requirements for digital archiving, information, and documentation. Semantic interoperability has become fundamental for the management and sharing of information. Constraints on data interpretation enable both database interoperability, for sharing and reusing data and schemas, and information retrieval in large datasets. Another challenging issue is the exploitation of automated reasoning. The solution is the use of domain ontologies as a reference for data modelling in information systems.

    This thesis considers the architectural heritage (AH) domain. Documentation in this field, particularly complex and multifaceted, is well known to be critical for the preservation, knowledge, and promotion of monuments. For these reasons, digital inventories, also exploiting standards and new semantic technologies, are being developed by international organisations (the Getty Institute, the UN, the European Union). Geometric and geographic information is an essential part of a monument's documentation. It comprises a number of aspects: spatial, topological, and mereological relations; accuracy; multi-scale representation; time; and so on. Currently, geomatics makes it possible to obtain very accurate and dense 3D models (possibly enriched with textures) and derived products, in both raster and vector formats.

    Many standards have been published for the geographic field and the cultural heritage domain. However, the former are limited in the representation scales they foresee (the maximum is achieved by OGC CityGML), and their semantic values do not capture the full semantic richness of AH. The latter (especially the core ontology CIDOC-CRM, the Conceptual Reference Model of the Documentation Committee of the International Council of Museums) have been employed to document museum objects. Although CIDOC-CRM was recently extended to standing buildings and a spatial extension was added, the integration of complex 3D models has not yet been achieved.

    In this thesis, the aspects (especially spatial issues) to consider in the documentation of monuments are analysed. In light of these, OGC CityGML is extended for the management of AH complexity. An approach 'from the landscape to the detail' is used to consider the monument within a wider system, which is essential for analysis and reasoning about such complex objects. An implementation test is conducted on a case study, preferring open-source applications.
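    As a flavour of the spatial and mereological statements such an ontology must support, the generic RDF sketch below uses a hypothetical namespace; it is not the CityGML extension developed in the thesis. The SPARQL property path at the end illustrates the kind of simple automated reasoning (transitive containment) the abstract alludes to.

    ```python
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    # Hypothetical namespace standing in for the thesis's actual model.
    AH = Namespace("http://example.org/ah#")

    g = Graph()
    g.bind("ah", AH)

    # 'From the landscape to the detail': nested part-of relations.
    g.add((AH.isPartOf, RDF.type, RDF.Property))
    g.add((AH.Portal, RDF.type, AH.ArchitecturalElement))
    g.add((AH.Portal, AH.isPartOf, AH.Facade))
    g.add((AH.Facade, AH.isPartOf, AH.Church))
    g.add((AH.Church, AH.isPartOf, AH.HistoricCentre))
    g.add((AH.Church, RDFS.label, Literal("Parish church")))

    # Transitive containment via a SPARQL property path (simple reasoning).
    q = "SELECT ?whole WHERE { ah:Portal ah:isPartOf+ ?whole }"
    for row in g.query(q, initNs={"ah": AH}):
        print("Portal is (transitively) part of:", row.whole)
    ```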

    Towards evidence-based, GIS-driven national spatial health information infrastructure and surveillance services in the United Kingdom

    The term "Geographic Information Systems" (GIS) has been added to MeSH in 2003, a step reflecting the importance and growing use of GIS in health and healthcare research and practices. GIS have much more to offer than the obvious digital cartography (map) functions. From a community health perspective, GIS could potentially act as powerful evidence-based practice tools for early problem detection and solving. When properly used, GIS can: inform and educate (professionals and the public); empower decision-making at all levels; help in planning and tweaking clinically and cost-effective actions, in predicting outcomes before making any financial commitments and ascribing priorities in a climate of finite resources; change practices; and continually monitor and analyse changes, as well as sentinel events. Yet despite all these potentials for GIS, they remain under-utilised in the UK National Health Service (NHS). This paper has the following objectives: (1) to illustrate with practical, real-world scenarios and examples from the literature the different GIS methods and uses to improve community health and healthcare practices, e.g., for improving hospital bed availability, in community health and bioterrorism surveillance services, and in the latest SARS outbreak; (2) to discuss challenges and problems currently hindering the wide-scale adoption of GIS across the NHS; and (3) to identify the most important requirements and ingredients for addressing these challenges, and realising GIS potential within the NHS, guided by related initiatives worldwide. The ultimate goal is to illuminate the road towards implementing a comprehensive national, multi-agency spatio-temporal health information infrastructure functioning proactively in real time. The concepts and principles presented in this paper can be also applied in other countries, and on regional (e.g., European Union) and global levels

    3D Cadastres Best Practices, Chapter 5: Visualization and New Opportunities

    This paper proposes a discussion of the opportunities offered by 3D visualization to improve the understanding and analysis of cadastral data. It first introduces the rationale for having 3D visualization functionalities in the context of cadastre applications. Second, it outlines some basic concepts in 3D visualization, in particular the visualization pipeline as a guiding classification schema for understanding the steps leading to 3D visualization; a brief review of current 3D standards and technologies is also presented. Next, a summary of the progress made in recent years in 3D cadastral visualization is proposed: user requirements, data and semiotics, and platforms are highlighted as the main strands of work in the development of 3D cadastre visualization. This review can be read as an attempt to structure and emphasise best practices in the domain of 3D cadastre visualization, and as an inventory of issues that still need to be tackled. Finally, by reviewing advances and trends in 3D visualization, the paper initiates a discussion and critical analysis of the benefit of applying these new developments to the cadastre domain. This final section discusses enhanced 3D techniques such as dynamic transparency and cutaways, 3D generalisation, 3D visibility models, 3D annotation, 3D data and web platforms, augmented reality, immersive virtual environments, 3D gaming, interaction techniques, and time.
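    The visualization pipeline used as the chapter's classification schema is commonly decomposed into filtering, mapping, and rendering stages (after Haber and McNabb). The sketch below is a generic illustration of that decomposition on made-up cadastral records, not code from the chapter; stage names, attributes and thresholds are assumptions.

    ```python
    # Generic filter -> map -> render pipeline on dummy cadastral volumes.

    def filter_stage(parcels, min_height=0.0):
        """Select the cadastral volumes relevant to the current view."""
        return [p for p in parcels if p["height"] > min_height]

    def map_stage(parcels):
        """Map data attributes to visual variables (here: colour by owner type)."""
        palette = {"private": "#1f77b4", "public": "#2ca02c"}
        return [{**p, "colour": palette.get(p["owner"], "#7f7f7f")} for p in parcels]

    def render_stage(styled):
        """Stand-in renderer: print what a 3D viewer would draw."""
        for p in styled:
            print(f"draw volume {p['id']} ({p['height']} m) in {p['colour']}")

    parcels = [
        {"id": "A-01", "height": 12.0, "owner": "private"},
        {"id": "A-02", "height": 0.0, "owner": "public"},
    ]
    render_stage(map_stage(filter_stage(parcels)))
    ```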

    Design and evaluation of a scalable Internet of Things backend for smart ports

    Internet of Things (IoT) technologies, when adequately integrated, cater for logistics optimisation and the monitoring of operations' environmental impact, both key aspects of today's EU port management. This article presents Obelisk, a scalable and multi-tenant cloud-based IoT integration platform used in the EU H2020 PortForward project. Because the landscape of IoT protocols is particularly fragmented, the first role of Obelisk is to provide uniform access to data originating from a myriad of devices and protocols. Interoperability is achieved through adapters that provide flexibility and evolvability in protocol and format mapping. Additionally, because ports operate in a hub model with various interacting actors, a second role of Obelisk is to secure access to data. This is achieved through encryption and isolation for data transport and processing, respectively, while user access control is ensured through standard authentication and authorisation mechanisms. Finally, as the IoTisation of ports continues to evolve, a third requirement for Obelisk is to scale with the data volumes it must ingest and process. Platform scalability is achieved by means of a reactive, microservice-based design. These three essential characteristics are detailed in this article, with a specific focus on how to achieve IoT data platform scalability. The scalability of the platform is evaluated by means of an air quality monitoring use case deployed in the city of Antwerp. The evaluation shows that the proposed reactive, microservice-based design allows for horizontal scaling of the platform, as well as logarithmic time complexity of its service time.
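    The adapter idea can be made concrete with a small sketch; the class names and payload formats below are hypothetical, as Obelisk's actual adapter interfaces are not shown in the article. Each adapter maps a protocol-specific payload onto one uniform internal event type, which is what gives the platform its uniform data access.

    ```python
    import json
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Event:
        """Uniform internal representation, regardless of source protocol."""
        device_id: str
        metric: str
        value: float
        timestamp_ms: int

    class Adapter(ABC):
        @abstractmethod
        def to_event(self, raw: bytes) -> Event: ...

    class JsonMqttAdapter(Adapter):
        """Hypothetical adapter for JSON payloads arriving over MQTT."""
        def to_event(self, raw: bytes) -> Event:
            msg = json.loads(raw)
            return Event(msg["dev"], msg["metric"], float(msg["val"]), msg["ts"])

    class CsvHttpAdapter(Adapter):
        """Hypothetical adapter for 'dev,metric,val,ts' lines over HTTP."""
        def to_event(self, raw: bytes) -> Event:
            dev, metric, val, ts = raw.decode().strip().split(",")
            return Event(dev, metric, float(val), int(ts))

    # Both payloads normalise to the same Event type.
    print(JsonMqttAdapter().to_event(b'{"dev":"no2-17","metric":"no2","val":41.2,"ts":1}'))
    print(CsvHttpAdapter().to_event(b"no2-17,no2,41.2,1"))
    ```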