
    Semantic Data Generation and Utilization Methods Based on an RDB-to-Semantic-Data Transformation Technique

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2017. Advisor: 김형주. RDB to RDF transformation is a semantic information extraction method that supports the Semantic Web. The direct mapping, one of the RDB to RDF transformation methods, is a representative mapping method recommended by the W3C. The direct mapping performs an automatic mapping from relational data to RDF data. Semantics preservation is an important property of the direct mapping, ensuring that relational data are transformed into semantic data without information loss. However, existing direct mapping methods violate semantics preservation in specific cases. To preserve semantics, a hierarchical direct mapping method is provided. Rules of the hierarchical direct mapping are defined based on lemmas that represent features of semantic data transformation. A hierarchical semantic vocabulary is also defined to generate sound and precise semantic data. Next, this thesis focuses on developing an effective direct mapping that generates lightweight and intuitive semantic output data. Thus, an optimized hierarchical direct mapping is provided, based on a relational meta-schema vocabulary. Rules for multi-column keys are defined to reduce repetitive generation of constraint data. Rules for multiple keys are also defined, because relational tables may contain multiple foreign keys or unique constraints that affect the output data size. The relational meta-schema vocabulary describes concepts of relational data and the relationships among those concepts. The optimized hierarchical mapping method uses the relational concepts defined in the vocabulary and generates compact and intuitive semantic output data. Finally, a semantic-metadata-based information retrieval method is provided as an application of the semantic data. Existing ranking methods have no direct means of evaluating the meaning of links. In this thesis, a semantic-metadata-based ranking approach is proposed that directly analyzes the meaning of links using a semantic Web data structure. The semantic Web data structure is built upon semantic metadata extracted from Web data by using the RDB to RDF transformation method described above. The proposed method weights links to stratify rank values according to their importance in the semantic Web data structure. The experimental results showed that the proposed mapping method performs semantics-preserving RDB to RDF transformation and outputs smaller, higher-quality semantic data, and that the weighted semantic-metadata-based ranking approach outperforms existing methods.
    Table of contents: 1 Introduction (Research Motivation, Research Contributions, Outline); 2 Preliminaries (RDF, RDFS, RDFa, OWL, RDB to RDF Transformation, Terminologies); 3 Semantics Preserving RDB to RDF Transformation (Motivation, Base Definitions of Predicates, Semantics Preservation, Problem Description, Mapping Rules, Evaluation); 4 Repetitive Data Reduction Methods for RDB to RDF Transformation (Motivation, Base Definitions of Predicates, Mapping Multi-column Key, Mapping Multiple Keys, Relational Meta-schema Vocabulary, Evaluation); 5 Utilization of RDB to RDF Transformation for Information Retrieval (Motivation, Previous Work, Semantic Metadata Annotation Using RDB to RDF Transformation, Information Retrieval Based on Weighted Semantic Resource Rank, Evaluation); 6 Conclusions and Future Work; Appendices (A Proofs of Lemmas 1–4 and Theorem 1; B Semi-automatic Semantic Data Publication; C Specifications: Hierarchical Semantic Vocabulary, Relational Meta-Schema Vocabulary); Bibliography; Abstract in Korean.
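
    As a rough illustration of the direct-mapping idea this thesis builds on, the sketch below (plain Python with rdflib; the table, columns, and base IRI are invented for the example, and the thesis's hierarchical rules are not reproduced) turns one relational row into RDF triples: a class per table, a row IRI derived from the primary key, a datatype property per column, and an object property per foreign key.

        from rdflib import Graph, Literal, Namespace, RDF

        BASE = Namespace("http://example.org/db/")        # hypothetical base IRI
        g = Graph()

        table = "Employee"
        row = {"id": 7, "name": "Alice", "dept_id": 3}    # one relational tuple

        subject = BASE[f"{table}/id={row['id']}"]         # row IRI from the primary key
        g.add((subject, RDF.type, BASE[table]))           # one class per table
        for column, value in row.items():
            if column == "dept_id":
                # foreign key column -> object property pointing at the referenced row
                g.add((subject, BASE[f"{table}#ref-{column}"],
                       BASE[f"Department/id={value}"]))
            else:
                # ordinary column -> datatype property with a literal value
                g.add((subject, BASE[f"{table}#{column}"], Literal(value)))

        print(g.serialize(format="turtle"))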

    Semantics-Preserving MapReduce Processing for RDB to RDF Transformation

    Master's thesis, Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2015. Advisor: 김형주. Today, most of the data on the web is stored in relational databases; this hidden portion is called the deep web. The Semantic Web is a movement toward the next generation of the web, where all data are augmented with well-defined semantics and linked together in a machine-readable format. RDB2RDF approaches, which publish relational data to the Semantic Web by converting them into RDF-formatted data, have been proposed and standardized by the W3C. We propose a system that automatically transforms relational data into RDF data and creates an OWL ontology based on the database schema. Some approaches have been proposed, but most of them did not fully make use of schema information to extract rich semantics, nor were they evaluated on large databases for performance. We utilize the Hadoop framework in the transformation process, which enables a distributed system for scalability. We present mapping rules that implement an augmented direct mapping to create a local ontology with rich semantics. The results show that our system successfully transforms relational data into RDF data with an OWL ontology, with satisfactory performance on large databases.
    Table of contents: Abstract; Introduction; Related Work (Semantic ETL Systems, Hadoop MapReduce, Mapping Approaches); Mapping Rules (General Rules 1–5, Constraint Rules 1–6, Discussion); Our Approach (Preprocessing: Schema Caching Method, Relational Data; Hadoop Algorithm); Experiment (Ontology Extraction, Performance, Scalability); Conclusion; References; Appendix.
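
    To make the Hadoop-based transformation a bit more concrete, here is a minimal Hadoop Streaming-style mapper sketch in Python. It assumes each input line is a CSV row of a hypothetical Employee(id, name, dept_id) table and emits one N-Triples statement per cell in a direct-mapping style; the column names, base IRI, and rule details are illustrative, not the thesis's actual augmented mapping rules. Since each row is mapped independently, an identity reducer suffices.

        #!/usr/bin/env python3
        # Hadoop Streaming mapper: reads CSV rows on stdin, writes N-Triples on stdout.
        import sys

        BASE = "http://example.org/db"
        COLUMNS = ["id", "name", "dept_id"]
        RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

        for line in sys.stdin:
            values = line.rstrip("\n").split(",")
            if len(values) != len(COLUMNS):
                continue                                  # skip malformed rows
            row = dict(zip(COLUMNS, values))
            subject = f"<{BASE}/Employee/id={row['id']}>"
            print(f"{subject} {RDF_TYPE} <{BASE}/Employee> .")
            for col in ("name", "dept_id"):
                print(f'{subject} <{BASE}/Employee#{col}> "{row[col]}" .')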

    Translation of Heterogeneous Databases into RDF, and Application to the Construction of a SKOS Taxonomical Reference

    While the data deluge accelerates, most of the data produced remains locked in deep Web databases. For linked open data to benefit from the potential represented by this huge amount of data, it is crucial to come up with solutions to expose heterogeneous databases as linked data. The xR2RML mapping language is an endeavor towards this goal: it is designed to map various types of databases to RDF, flexibly adapting to heterogeneous query languages and data models while remaining independent of any specific language. It extends R2RML, the W3C recommendation for the mapping of relational databases to RDF, and relies on RML for the handling of various data formats. In this paper we present xR2RML, analyse the data models of several modern databases as well as the format in which query results are returned, and show how xR2RML translates any result data element into RDF, relying on existing languages such as XPath and JSONPath when necessary. We illustrate some features of xR2RML, such as the generation of RDF collections and containers, and the ability to deal with mixed data formats. We also describe a real-world use case in which we applied xR2RML to build a SKOS thesaurus aimed at supporting studies in the History of Zoology, Archaeozoology, and Conservation Biology.
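
    As a loose illustration of how a non-relational query result can be turned into RDF in the spirit described here, the sketch below traverses a JSON document (as might be returned by a document store) with JSONPath and emits triples with rdflib. The JSONPath expressions, property IRIs, and document are invented for the example; this is Python glue code, not the xR2RML mapping language itself.

        from jsonpath_ng import parse                     # pip install jsonpath-ng
        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/taxon/")
        doc = {"id": "t42", "name": "Canis lupus", "ranks": ["species"]}

        g = Graph()
        subject = EX[str(parse("$.id").find(doc)[0].value)]
        for match in parse("$.name").find(doc):           # JSONPath picks the value(s)
            g.add((subject, EX["scientificName"], Literal(match.value)))
        for match in parse("$.ranks[*]").find(doc):
            g.add((subject, EX["rank"], Literal(match.value)))

        print(g.serialize(format="turtle"))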

    Translation of Relational and Non-Relational Databases into RDF with xR2RML

    With the growing amount of data being continuously produced, it is crucial to come up with solutions to expose data from ever more heterogeneous databases (e.g. NoSQL systems) as linked data. In this paper we present xR2RML, a language designed to describe the mapping of various types of databases to RDF. xR2RML flexibly adapts to heterogeneous query languages and data models while remaining free from any specific language or syntax. It extends R2RML, the W3C recommendation for the mapping of relational databases to RDF, and relies on RML for the handling of various data representation formats. We analyse the data models of several modern databases as well as the format in which query results are returned, and we show that xR2RML can translate any data element within such results into RDF, relying on existing languages such as XPath and JSONPath if needed. We illustrate some features of xR2RML, such as the generation of RDF collections and containers, and the ability to deal with mixed content.
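
    One of the features mentioned above, generating RDF collections from multi-valued results, can be pictured with the short rdflib sketch below; the subject, predicate, and member values are invented, and the sketch only shows the target RDF structure (an rdf:List), not how xR2RML's mapping syntax declares it.

        from rdflib import BNode, Graph, Literal, Namespace
        from rdflib.collection import Collection

        EX = Namespace("http://example.org/")
        g = Graph()

        head = BNode()                                    # head node of the rdf:List
        Collection(g, head, [Literal("alpha"), Literal("beta"), Literal("gamma")])
        g.add((EX["record1"], EX["values"], head))        # attach the list to a subject

        print(g.serialize(format="turtle"))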

    Strategies for Managing Linked Enterprise Data

    Data, information, and knowledge are becoming key assets of our 21st-century economy. As a result, data and knowledge management are becoming key tasks for sustainable development and business success. Often, knowledge is not explicitly represented; it resides in the minds of people or is scattered among a variety of data sources. Knowledge is inherently associated with semantics that convey its meaning to a human or machine agent. The Linked Data concept facilitates the semantic integration of heterogeneous data sources. However, we still lack an effective knowledge integration strategy applicable to enterprise scenarios, one that balances the large amounts of data stored in legacy information systems and data lakes against tailored, domain-specific ontologies that formally describe real-world concepts. In this thesis we investigate strategies for managing linked enterprise data, analyzing how actionable knowledge can be derived from enterprise data by leveraging knowledge graphs. Actionable knowledge provides valuable insights, supports decision makers with clear, interpretable arguments, and keeps its inference processes explainable. The benefits of employing actionable knowledge and a coherent strategy for managing it range from a holistic semantic representation layer of enterprise data, i.e., representing numerous data sources as one consistent and integrated knowledge source, to unified interaction mechanisms with other systems that can effectively and efficiently leverage such actionable knowledge. Several challenges have to be addressed on different conceptual levels in pursuit of this goal: means for representing knowledge, semantic data integration of raw data sources and subsequent knowledge extraction, communication interfaces, and implementation. To tackle these challenges, we present the concept of Enterprise Knowledge Graphs (EKGs) and describe their characteristics and advantages compared to existing approaches. We study each challenge with regard to using EKGs and demonstrate their efficiency. In particular, EKGs are able to reduce the semantic data integration effort when processing large-scale heterogeneous datasets. Then, having built a consistent logical integration layer that hides heterogeneity behind the scenes, EKGs unify query processing and enable effective communication interfaces for other enterprise systems. The achieved results allow us to conclude that strategies for managing linked enterprise data based on EKGs exhibit reasonable performance, comply with enterprise requirements, and ensure integrated data and knowledge management throughout the data life cycle.
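
    As a toy picture of the "many sources, one knowledge source" idea, the sketch below loads facts that would normally live in separate enterprise systems into a single rdflib graph and answers a question that spans them with one SPARQL query; the vocabulary and data are invented for the example and are not taken from the thesis.

        from rdflib import Graph

        g = Graph()
        g.parse(data="""
            @prefix ex: <http://example.org/ekg/> .
            ex:order42   ex:placedBy  ex:customer7 ; ex:amount 250 .   # e.g. from an ERP export
            ex:customer7 ex:inSegment ex:enterprise .                  # e.g. from a CRM export
        """, format="turtle")

        query = """
            PREFIX ex: <http://example.org/ekg/>
            SELECT ?order ?segment WHERE {
              ?order    ex:placedBy  ?customer .
              ?customer ex:inSegment ?segment .
            }
        """
        for order, segment in g.query(query):
            print(order, segment)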

    Distributed Conversion of RDF Data to the Relational Model

    The Resource Description Framework (RDF) stores a growing volume of valuable information. However, relational databases still provide advantages in terms of performance, familiarity, and the number of supported tools. We present RDF2X, a tool for the automatic distributed conversion of RDF datasets to the relational model. We provide a comparison of related approaches, report on the conversion of 8.4 billion RDF triples, and demonstrate the contribution of our tool on case studies from two different domains.
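
    A single-machine, heavily simplified sketch of the kind of conversion described here: RDF resources are grouped by rdf:type and their properties are flattened into rows of a per-type table. The sample data, column naming, and conflict handling are illustrative only; RDF2X itself performs this step in a distributed fashion with proper schema inference.

        from collections import defaultdict
        from rdflib import Graph, RDF

        g = Graph()
        g.parse(data="""
            @prefix ex: <http://example.org/> .
            ex:p1 a ex:Person ; ex:name "Ada"  ; ex:born 1815 .
            ex:p2 a ex:Person ; ex:name "Alan" ; ex:born 1912 .
        """, format="turtle")

        tables = defaultdict(list)                        # table name -> list of row dicts
        for subject, _, cls in g.triples((None, RDF.type, None)):
            row = {"id": str(subject)}
            for _, pred, obj in g.triples((subject, None, None)):
                if pred != RDF.type:
                    row[pred.split("/")[-1]] = obj.toPython()   # column named after the property
            tables[cls.split("/")[-1]].append(row)

        for name, rows in tables.items():
            print(name, rows)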

    Metadata-driven data integration

    Joint supervision (cotutelle): Universitat Politècnica de Catalunya and Université Libre de Bruxelles, IT4BI-DC programme for the joint Ph.D. degree in computer science. Data has an undeniable impact on society. Storing and processing large amounts of available data is currently one of the key success factors for an organization. Nonetheless, we have recently been witnessing a change, represented by huge and heterogeneous amounts of data. Indeed, 90% of the data in the world has been generated in the last two years. Thus, in order to carry out these data exploitation tasks, organizations must first perform data integration, combining data from multiple sources to yield a unified view over them. Yet, the integration of massive and heterogeneous amounts of data requires revisiting the traditional integration assumptions to cope with the new requirements posed by such data-intensive settings. This PhD thesis aims to provide a novel framework for data integration in the context of data-intensive ecosystems, which entails dealing with vast amounts of heterogeneous data, from multiple sources and in their original format. To this end, we advocate an integration process consisting of sequential activities governed by a semantic layer, implemented via a shared repository of metadata. From a stewardship perspective, these activities are the deployment of a data integration architecture, followed by the population of the shared metadata. From a data consumption perspective, the activities are virtual and materialized data integration, the former an exploratory task and the latter a consolidation one. Following the proposed framework, we focus on providing contributions to each of the four activities. We begin by proposing a software reference architecture for semantic-aware data-intensive systems. This architecture serves as a blueprint for deploying a stack of systems, its core being the metadata repository. Next, we propose a graph-based metadata model as a formalism for metadata management, focusing on support for schema and data source evolution, a predominant factor in the heterogeneous sources at hand. For virtual integration, we propose query rewriting algorithms that rely on the previously proposed metadata model; we additionally consider semantic heterogeneities in the data sources, which the proposed algorithms are capable of resolving automatically. Finally, the thesis focuses on the materialized integration activity and, to this end, proposes a method to select intermediate results to materialize in data-intensive flows. Overall, the results of this thesis serve as a contribution to the field of data integration in contemporary data-intensive ecosystems.
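
    To make the role of the shared metadata repository a little more tangible, here is a deliberately small sketch (in Python, with invented names and mappings, not the thesis's formal model) in which a metadata graph links one global attribute to attributes in two sources, and a query over the global schema is rewritten into one access plan per source.

        # Toy metadata "graph": a global attribute and its edges to source attributes.
        metadata = {
            "global.customer_name": [
                {"source": "crm_db",   "table": "clients", "column": "full_name"},
                {"source": "erp_json", "path": "$.customer.name"},
            ],
        }

        def rewrite(global_attr):
            """Rewrite a request for a global attribute into one plan per source."""
            plans = []
            for m in metadata[global_attr]:
                if "table" in m:                          # relational source -> SQL fragment
                    plans.append(f"{m['source']}: SELECT {m['column']} FROM {m['table']}")
                else:                                     # document source -> path evaluation
                    plans.append(f"{m['source']}: evaluate JSONPath {m['path']}")
            return plans

        print("\n".join(rewrite("global.customer_name")))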

    Querying and managing OPM-compliant scientific workflow provenance

    Provenance, the metadata that records the derivation history of scientific results, is important in scientific workflows for interpreting, validating, and analyzing the results of scientific computing. Recently, to promote and facilitate interoperability among heterogeneous provenance systems, the Open Provenance Model (OPM) has been proposed and has played an important role in the community. In this dissertation, to efficiently query and manage OPM-compliant provenance, we first propose a provenance collection framework that collects both prospective provenance, which captures an abstract workflow specification as a recipe for future data derivation, and retrospective provenance, which captures past workflow executions and data derivation information. We then propose a relational-database-based provenance system, called OPMPROV, that stores, reasons about, and queries OPM-compliant prospective and retrospective provenance. We finally propose OPQL, an OPM-level provenance query language that is defined directly over the OPM model. An OPQL query takes an OPM graph as input and produces an OPM graph as output; therefore, OPQL queries are not tightly coupled to the underlying provenance storage strategies. Our provenance store, provenance collection framework, and provenance query language feature native support of the OPM model.
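
    As an informal illustration of querying OPM-style provenance as a graph, the sketch below encodes one process and its input and output artifacts in RDF and asks a lineage question in SPARQL via rdflib. The namespace and resource names are invented (this is not OPQL or an official OPM serialization); only the edge names used and wasGeneratedBy follow the OPM causal dependencies mentioned above.

        from rdflib import Graph

        g = Graph()
        g.parse(data="""
            @prefix opm: <http://example.org/opm#> .
            @prefix ex:  <http://example.org/run1/> .
            ex:align   a opm:Process ;  opm:used ex:rawReads .
            ex:bamFile a opm:Artifact ; opm:wasGeneratedBy ex:align .
        """, format="turtle")

        lineage = """
            PREFIX opm: <http://example.org/opm#>
            SELECT ?artifact ?input WHERE {
              ?artifact opm:wasGeneratedBy ?process .
              ?process  opm:used           ?input .
            }
        """
        for artifact, source in g.query(lineage):
            print(f"{artifact} was derived, via its generating process, from {source}")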