
    GeoXSLT : GML processing with XSLT and spatial extensions

    This thesis claims that XSL Transformations (XSLT), combined with extensions, can be used to process geodata encoded as GML. The claim is backed up by the following deliverables: • A working proof of concept for an XSLT-based transformation of spatial data. • Tests providing measurements of functionality and performance. • Argumentation showing how and why this is a viable approach, through discussion and practical examples. The paper concludes by confirming the feasibility of the approach, in line with the research objectives and the findings provided by the deliverables.
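    The thesis's actual XSLT extension functions are not reproduced in the abstract. As a rough illustration of the kind of spatial operation such an extension would expose, the sketch below parses a toy GML fragment and computes a planar distance; Python's `xml.etree` stands in for the XSLT processor, and the fragment, element layout, and function names are invented for illustration:

    ```python
    import math
    import xml.etree.ElementTree as ET

    # A minimal GML fragment with two point features (toy data for illustration).
    GML = """<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
      <gml:featureMember><gml:Point gml:id="a"><gml:pos>0 0</gml:pos></gml:Point></gml:featureMember>
      <gml:featureMember><gml:Point gml:id="b"><gml:pos>3 4</gml:pos></gml:Point></gml:featureMember>
    </gml:FeatureCollection>"""

    NS = {"gml": "http://www.opengis.net/gml"}

    def positions(doc):
        """Extract (id, x, y) tuples from gml:Point/gml:pos elements."""
        root = ET.fromstring(doc)
        for pt in root.iterfind(".//gml:Point", NS):
            x, y = map(float, pt.find("gml:pos", NS).text.split())
            yield pt.get("{http://www.opengis.net/gml}id"), x, y

    def distance(p, q):
        """Planar distance -- the kind of function a spatial XSLT extension adds."""
        return math.hypot(p[1] - q[1], p[2] - q[2])

    pts = list(positions(GML))
    print(distance(pts[0], pts[1]))  # 5.0
    ```

    In an actual XSLT pipeline the `distance` computation would be registered as an extension function and invoked from within a template, rather than called from host-language code.
    
    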

    The GPlates Geological Information Model and Markup Language

    Understanding tectonic and geodynamic processes leading to the present-day configuration of the Earth involves studying data and models across a variety of disciplines, from geochemistry, geochronology and geophysics, to plate kinematics and mantle dynamics. All these data represent a 3-D spatial and 1-D temporal framework, a formalism which is not exploited by traditional spatial analysis tools. This arguably places a fundamental limit on both the rigour and the sophistication with which datasets can be combined for geological deep time analysis, and often confines the extent of data analyses to the present-day configurations of geological objects. The GPlates Geological Information Model (GPGIM) represents a formal specification of geological and geophysical data in a time-varying plate tectonics context, used by the GPlates virtual-globe software. It provides a framework in which relevant types of geological data are attached to a common plate tectonic reference frame, allowing the data to be reconstructed in a time-dependent spatio-temporal plate reference frame. The GPlates Markup Language (GPML), being an extension of the open standard Geography Markup Language (GML), is both the modelling language for the GPGIM and an XML-based data format for the interoperable storage and exchange of data modelled by it. The GPlates software implements the GPGIM, allowing researchers to query, visualise, reconstruct and analyse a rich set of geological data including numerical raster data. The GPGIM has recently been extended to support time-dependent geo-referenced numerical raster data by wrapping GML primitives into the time-dependent framework of the GPGIM. 
Coupled with GPlates' ability to reconstruct numerical raster data and import/export from/to a variety of raster file formats, as well as its handling of time-dependent plate boundary topologies, interoperability with geodynamic software is established, leading to a new generation of deep-time spatio-temporal data analysis and modelling, including a variety of new functionalities, such as 4-D data-mining.

    Big Data Analytics for Earth Sciences: the EarthServer approach

    Big Data Analytics is an emerging field, enabled by the massive storage and computing capabilities made available by advanced e-infrastructures. Earth and environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specificities in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the marine, geology, atmospheric, planetary and cryospheric science domains.

    LoD2BIM: A New Workflow for reconstructing and converting LoD2 Model to Information-rich IFC Model for Existing Building

    Numerous existing buildings in Germany lack corresponding BIM models. Advanced analysis and visualization tasks for these buildings, including indoor navigation, energy demand simulations, and building-materials recycling, present significant challenges, as only Level of Detail 2 (LoD2) models provided by local housing authorities, along with architectural drawings and textual descriptions of the internal structures, are available. This paper proposes a semi-manual workflow, LoD2BIM, to facilitate the efficient reconstruction and conversion of existing building models, in particular transforming Germany's official 3D building models in LoD2 into comprehensive, information-rich IFC models using open-source software. Through the integration of BaseX and CesiumJS, visualizing, querying, and extracting a single LoD2 model from a CityGML file becomes straightforward. Central to the LoD2BIM workflow is Blender, which acts as the primary platform for model editing, reconstruction, and conversion. Users can conveniently rebuild and refine LoD2 models using Blender's visual interface and the floorplan-to-blender3d tool, based on floorplans or detailed written descriptions. New Blender add-ons are developed to streamline the model import and conversion process. The BlenderBIM add-on is also modified to enable the automatic batch assignment of meshes to IFC classes. This workflow not only showcases the reconstruction and subsequent conversion of CityGML models into IFC models but also illustrates its potential as an alternative to commercial Building Information Modeling (BIM) editing software.

    A geo-database for potentially polluting marine sites and associated risk index

    The increasing availability of geospatial marine data provides an opportunity for hydrographic offices to contribute to the identification of Potentially Polluting Marine Sites (PPMS). To adequately manage these sites, a PPMS Geospatial Database (GeoDB) application was developed to collect and store relevant information suitable for site inventory and geo-spatial analysis. The benefits of structuring the data to conform to the Universal Hydrographic Data Model (IHO S-100) and of using the Geography Markup Language (GML) for encoding are presented. A storage solution is proposed using a GML-enabled spatial relational database management system (RDBMS). In addition, an example of a risk index methodology is provided based on the defined data structure. The implementation of this example was performed using scripts containing SQL statements. These procedures were implemented using a cross-platform C++ application based on open-source libraries, called PPMS GeoDB Manager.
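    The paper's actual SQL procedures and S-100-conformant schema are not given in the abstract. The sketch below only illustrates the general shape of a risk index computed with SQL statements over a site table: the schema, column names, and weights are hypothetical and heavily simplified, and Python's stdlib `sqlite3` stands in for the GML-enabled spatial RDBMS:

    ```python
    import sqlite3

    # Hypothetical simplified schema; the real PPMS GeoDB follows IHO S-100 and
    # stores GML-encoded geometries in a spatial RDBMS. Plain columns stand in.
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE ppms (
        site_id TEXT PRIMARY KEY,
        cargo_hazard REAL,    -- 0..1, hazard level of cargo/munitions (invented)
        hull_condition REAL,  -- 0..1, 1 = severely degraded (invented)
        env_sensitivity REAL  -- 0..1, sensitivity of surrounding waters (invented)
    )""")
    con.executemany("INSERT INTO ppms VALUES (?,?,?,?)", [
        ("wreck-001", 0.9, 0.8, 0.7),
        ("wreck-002", 0.3, 0.2, 0.9),
    ])

    # A weighted-sum index expressed as a single SQL statement, mirroring the
    # paper's approach of computing the index via SQL procedures (weights invented).
    rows = con.execute("""
        SELECT site_id,
               ROUND(0.5*cargo_hazard + 0.3*hull_condition + 0.2*env_sensitivity, 3)
        FROM ppms ORDER BY 2 DESC
    """).fetchall()
    print(rows)  # wreck-001 ranks highest
    ```

    Keeping the index inside the database as SQL, rather than in application code, is what lets a thin C++ front end such as the one described simply invoke the stored procedures.
    
    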

    Spatial ontologies for architectural heritage

    Informatics and artificial intelligence have generated new requirements for digital archiving, information, and documentation. Semantic interoperability has become fundamental for the management and sharing of information. Constraints on data interpretation enable both database interoperability, for the sharing and reuse of data and schemas, and information retrieval in large datasets. Another challenging issue is the exploitation of automated reasoning possibilities. The solution is the use of domain ontologies as a reference for data modelling in information systems. The architectural heritage (AH) domain is considered in this thesis. The documentation in this field, particularly complex and multifaceted, is well known to be critical for the preservation, knowledge, and promotion of monuments. For these reasons, digital inventories, also exploiting standards and new semantic technologies, are developed by international organisations (Getty Institute, the UN, the European Union). Geometric and geographic information is an essential part of a monument. It comprises a number of aspects (spatial, topological, and mereological relations; accuracy; multi-scale representation; time; etc.). Currently, geomatics makes it possible to obtain very accurate and dense 3D models (possibly enriched with textures) and derived products, in both raster and vector format. Many standards have been published for the geographic field or the cultural heritage domain. However, the former are limited in the representation scales they foresee (the maximum is achieved by OGC CityGML), and their semantic values do not capture the full semantic richness of AH. The latter (especially the core ontology CIDOC-CRM, the Conceptual Reference Model of the Documentation Committee of the International Council of Museums) were employed to document museum objects. 
Although CIDOC-CRM was recently extended to standing buildings and a spatial extension was included, the integration of complex 3D models has not yet been achieved. In this thesis, the aspects (especially spatial issues) to consider in the documentation of monuments are analysed. In light of these, OGC CityGML is extended for the management of AH complexity. An approach 'from the landscape to the detail' is used to consider the monument within a wider system, which is essential for analysis and reasoning about such complex objects. An implementation test is conducted on a case study, preferring open-source applications.

    XATA 2006: XML: aplicações e tecnologias associadas

    This is the fourth conference on XML and Associated Technologies. The event has become a meeting point for those interested in the field, and it has been gratifying to see that participants enjoy it and try to return in subsequent years. The core working group, the scientific committee, has also been enlarged, and everyone has collaborated willingly and with growing quality year after year. Writing this preface for the fourth time, I cannot avoid sketching the evolution of XATA over these four years. 2003: at this first "meeting" some twenty papers were submitted, mostly authored or supervised by members of the organising committee, which did not prevent strong participation and lively discussions. 2004: there was stronger participation from the Portuguese community, though the numbers were still modest. A strong industry presence was also pursued, which translated into a considerable set of presentations of real-world cases. A formal review process for submitted papers was introduced. 2005: there was strong national and international participation (Spain and Brazil, which is all the more significant for an event that aims to favour the Portuguese language). The geographical spread within Portugal also grew, with more participating institutions. Several tasks, such as paper submission and review, were automated. 2006: in the current edition, and contrary to the national trend, there was significant growth. In every edition it has been the organising committee's objective to favour scientific output and give a voice to the largest possible number of participants. Accordingly, this year there will be no invited speakers, and the programme is filled entirely with presentations of the selected papers. Even so, the rejection rate was significant, mainly due to the high number of submissions. 
A tutorial day was also introduced in this edition, with the goal of providing minimal skills to those who want to start working in the area and of allowing a more informed attendance at the conference. Looking at the topics addressed across the four conferences, we also see an evolution towards greater maturity. While at the first meeting the papers addressed emerging problems in the use of the technology, at the second the main focus was Web Services, a new XML-based technology; at the third the emphasis was on repositories, search engines, and query languages; and in this fourth edition there is an almost homogeneous distribution across all topic areas, with papers even appearing that address scientific and technological aspects of the foundations of XML technology. We can therefore conclude that the technology is mastered from the standpoint of use and application, and that the Portuguese community is beginning to contribute to the underlying science.

    Mining XML documents with association rule algorithms

    Thesis (Master) -- Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves 59-63). Text in English; abstract in Turkish and English. x, 63 leaves. Following the increasing use of XML technology for data storage and data exchange between applications, mining XML documents has become an increasingly important research topic. In this study, we consider the problem of mining association rules between items in XML documents. The principal purpose of this study is to apply association rule algorithms directly to XML documents using XQuery, a functional expression language that can be used to query or process XML data. We used three different algorithms: Apriori, AprioriTid, and High Efficient AprioriTid. We compare the mining times of these three Apriori-like algorithms on XML documents using different support levels, different datasets, and different dataset sizes.
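    As a reminder of what the compared algorithms have in common, here is a minimal sketch of the classic Apriori level-wise search in plain Python; the transactions are toy data, and the XQuery extraction step the thesis uses to obtain itemsets from XML is not reproduced:

    ```python
    from itertools import combinations

    def apriori(transactions, min_support):
        """Classic Apriori: level-wise generation of frequent itemsets.
        `transactions` is a list of sets; `min_support` is an absolute count."""
        items = {i for t in transactions for i in t}
        freq = {}
        level = [frozenset([i]) for i in sorted(items)]  # candidate 1-itemsets
        k = 1
        while level:
            # count support of each candidate by scanning the transactions
            counts = {c: sum(1 for t in transactions if c <= t) for c in level}
            current = {c: n for c, n in counts.items() if n >= min_support}
            freq.update(current)
            # join frequent k-itemsets into (k+1)-candidates, pruning any
            # candidate that has an infrequent k-subset
            level = []
            for a, b in combinations(list(current), 2):
                cand = a | b
                if len(cand) == k + 1 and all(
                    frozenset(s) in current for s in combinations(cand, k)
                ):
                    level.append(cand)
            level = list(set(level))
            k += 1
        return freq

    # Toy transactions, e.g. itemsets extracted from XML records via XQuery
    tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    result = apriori(tx, min_support=3)
    ```

    AprioriTid and its variants differ mainly in how the support-counting scan is organised (keeping per-transaction candidate lists instead of rescanning raw transactions), which is what the thesis's timing comparisons measure.
    
    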

    Developing Feature Types and Related Catalogues for the Marine Community - Lessons from the MOTIIVE project.

    MOTIIVE (Marine Overlays on Topography for annex II Valuation and Exploitation) is a project funded as a Specific Support Action (SSA) under the European Commission Framework Programme 6 (FP6) Aeronautics and Space Programme. The project started in September 2005 and finished in October 2007. The objective of MOTIIVE was to examine the methodology and cost benefit of using non-proprietary data standards. Specifically, it considered the harmonisation requirements between the INSPIRE data component 'elevation' (terrestrial, bathymetric and coastal) and INSPIRE marine thematic data for 'sea regions', 'oceanic spatial features' and 'coastal zone management areas'. This was examined in the context of the requirements for interoperable information systems needed to realise the objectives of GMES for 'global services'. The work draws particular conclusions on the realisation of Feature Types (ISO 19109) and Feature Type Catalogues (ISO 19110) in this respect. More information on MOTIIVE can be found at www.motiive.net.