30 research outputs found

    Extensible metadata management framework for personal data lake

    Common Internet users today are inundated with a deluge of diverse data being generated and siloed in a variety of digital services, applications, and a growing body of personal computing devices as we enter the era of the Internet of Things. Alongside potential privacy compromises, users are facing increasing difficulties in managing their data and are losing control over it. There appears to be a de facto agreement in business and scientific fields that critical new value and interesting insight can be attained by users from analysing their own data, if only it can be freed from its silos and combined with other data in meaningful ways. This thesis takes the point of view that users should have an easy-to-use, modern personal data management solution that enables them to centralise and efficiently manage their data by themselves, under their full control, in their best interests, and with minimal time and effort. In that direction, we describe the basic architecture of a management solution that is designed on the basis of solid theoretical foundations and state-of-the-art big data technologies. This solution (called Personal Data Lake, or PDL) collects the data of a user from a plurality of heterogeneous personal data sources and stores it in a highly scalable, schema-less storage repository. To simplify the user experience of PDL, we propose a novel extensible metadata management framework (MMF) that: (i) annotates heterogeneous data with rich lineage and semantic metadata, (ii) exploits the garnered metadata for automating data management workflows in PDL, with an extensive focus on data integration, and (iii) facilitates the use and reuse of the stored data for various purposes by querying it on the metadata level, either directly by the user or through third-party personal analytics services. We first show how the proposed MMF is positioned in the PDL architecture, and then describe its principal components. Specifically, we introduce a simple yet effective lineage manager for tracking the provenance of personal data in PDL. We then introduce an ontology-based data integration component called SemLinker, which comprises two new algorithms: the first generates graph-based representations to express the native schemas of (semi-)structured personal data, and the second metamodels the extracted representations onto a common extensible ontology. SemLinker's outputs are utilised by MMF to generate user-tailored unified views that are optimised for querying heterogeneous personal data through low-level SPARQL or high-level SQL-like queries. Next, we introduce an unsupervised automatic keyphrase extraction algorithm called SemCluster that specialises in extracting thematically important keyphrases from unstructured data and associating each keyphrase with ontological information drawn from an extensible WordNet-based ontology. SemCluster's outputs serve as semantic metadata and are utilised by MMF to annotate unstructured content in PDL, thus enabling various management functionalities such as relationship discovery and semantic search. Finally, we describe how MMF can be utilised to perform holistic integration of personal data and to jointly query it in its native representations.
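
    To make the idea of metadata-level querying concrete, the following is a minimal sketch in Python with rdflib, not the thesis's actual MMF: two items from heterogeneous sources are annotated with invented lineage properties and then retrieved with a SPARQL query over the metadata alone. The pdl namespace and all property names are illustrative assumptions.

    # Minimal illustration (not the thesis's MMF): annotate two heterogeneous
    # personal data items with hypothetical lineage metadata in RDF and query
    # them on the metadata level with SPARQL via rdflib.
    from rdflib import Graph, Literal, Namespace, RDF

    PDL = Namespace("http://example.org/pdl#")   # illustrative namespace, not from the thesis
    g = Graph()

    # Item ingested from a fitness app
    g.add((PDL.item1, RDF.type, PDL.PersonalDataItem))
    g.add((PDL.item1, PDL.source, Literal("fitness-app")))
    g.add((PDL.item1, PDL.ingestedAt, Literal("2023-05-01")))

    # Item ingested from an e-mail account
    g.add((PDL.item2, RDF.type, PDL.PersonalDataItem))
    g.add((PDL.item2, PDL.source, Literal("email")))
    g.add((PDL.item2, PDL.ingestedAt, Literal("2023-05-02")))

    # Query the lake on the metadata level only
    q = """
    SELECT ?item ?source WHERE {
        ?item a pdl:PersonalDataItem ;
              pdl:source ?source .
    }"""
    for row in g.query(q, initNs={"pdl": PDL}):
        print(row.item, row.source)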

    Improving Schema Mapping by Exploiting Domain Knowledge

    This dissertation addresses the problem of semi-automatically creating schema mappings. The need to develop schema mappings is a pervasive problem in many integration scenarios. Although the problem is well known and a large body of work exists in the area, the development of schema mappings is today largely performed manually in industrial integration scenarios. In this thesis, an approach for the semi-automatic creation of high-quality schema mappings is developed.

    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise's way of conducting business and in this form are the basis for shaping the organization and engineering the appropriate supporting, or even enabling, IT. A major task in working with models is therefore their analysis and comparison for the purpose of aligning them. As models can differ semantically not only in the modeling languages used, but even more so in the way the natural language for labeling the model elements has been applied, correctly identifying the intended meaning of a legacy model is a non-trivial task that thus far has only been solved by humans. In particular during reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate the automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models, relating them based on the modeling language used and determining similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research conducted is based on a design-science-oriented approach, and the method developed has been created together with all its enabling artifacts. These results have been published as the research progressed and are presented in this thesis based on a selection of peer-reviewed publications that comprehensively describe the various aspects.
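
    As a rough illustration of determining similarities from the natural language in model element labels, the Python sketch below pairs activity labels of two toy process models using a plain token-overlap score; the labels are invented and this is not the alignment method developed in the thesis.

    # Toy label-based alignment: propose, for each activity of model A, the most
    # similar activity of model B using Jaccard overlap of the label tokens.
    def tokens(label: str) -> set[str]:
        return set(label.lower().replace("-", " ").split())

    def jaccard(a: set[str], b: set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    model_a = ["Check customer order", "Ship goods", "Send invoice"]        # invented labels
    model_b = ["Verify order of customer", "Dispatch goods", "Issue invoice"]

    for la in model_a:
        best = max(model_b, key=lambda lb: jaccard(tokens(la), tokens(lb)))
        print(f"{la!r} ~ {best!r} (score={jaccard(tokens(la), tokens(best)):.2f})")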

    Schema matching in a peer-to-peer database system

    Peer-to-peer (P2P) systems are applications that allow a network of peers to share resources in a scalable and efficient manner. My research is concerned with the use of P2P systems for sharing databases. To allow data mediation between peers' databases, schema mappings need to exist: mappings between semantically equivalent attributes in different peers' schemas. Mappings can either be defined manually or found semi-automatically using a technique called schema matching. However, schema matching has not been used much in dynamic environments such as P2P networks. This thesis therefore investigates how to enable effective semi-automated schema matching within a P2P network.
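
    As a toy sketch of the attribute-level matching such a peer could run when mediating with another peer's database, the Python snippet below pairs attributes of two invented schemas by plain string similarity; it stands in for, and is not, the matcher developed in the thesis.

    # Pair semantically equivalent attributes of two peers' schemas by string
    # similarity; the schemas and the threshold are invented for the example.
    from difflib import SequenceMatcher

    peer1 = ["customer_name", "customer_address", "order_date"]
    peer2 = ["cust_name", "cust_address", "date_of_order"]

    def sim(a: str, b: str) -> float:
        return SequenceMatcher(None, a, b).ratio()

    for attr in peer1:
        cand, score = max(((b, sim(attr, b)) for b in peer2), key=lambda t: t[1])
        if score > 0.4:            # acceptance threshold would be tuned in practice
            print(f"{attr} <-> {cand}  ({score:.2f})")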

    XML-based approaches for the integration of heterogeneous bio-molecular data

    Background: Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, biomedical, and bioinformatics research, but also raising new problems for its integration and computational processing. Results: In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented through XML. Conclusion: XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats has been proposed, making the effective integration of bioinformatics data schemas difficult. The adoption of a few semantically rich standard formats is urgently needed to achieve a seamless integration of the current biological resources.
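
    As a small illustration of XML acting as syntactic glue, the Python sketch below normalises two invented source-specific XML fragments describing the same protein into one common record; it is not drawn from any of the surveyed approaches.

    # Normalise two hypothetical XML formats for the same protein entry
    # into a single Python dict, mimicking a tiny XML mediation step.
    import xml.etree.ElementTree as ET

    src_a = "<protein><id>P12345</id><name>Hemoglobin subunit alpha</name></protein>"
    src_b = '<entry accession="P12345"><proteinName>Hemoglobin subunit alpha</proteinName></entry>'

    def normalise(xml_text: str) -> dict:
        root = ET.fromstring(xml_text)
        if root.tag == "protein":
            return {"accession": root.findtext("id"), "name": root.findtext("name")}
        if root.tag == "entry":
            return {"accession": root.get("accession"), "name": root.findtext("proteinName")}
        raise ValueError(f"unknown format: {root.tag}")

    print(normalise(src_a) == normalise(src_b))   # True: both map to the same record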

    Exploiting general-purpose background knowledge for automated schema matching

    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming and is therefore, for the largest part, carried out by humans. One reason for the low degree of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical basis for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluations of matching systems. Among the largest structured sources of general-purpose background knowledge are knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared, and multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems that exploit general-purpose background knowledge are presented, and exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
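
    As a minimal sketch of how embedding-based matching works in principle, the Python snippet below ranks candidate correspondences between two tiny ontologies by cosine similarity of concept vectors; the vectors and concept names are fabricated, and this is not one of the dissertation's systems.

    # Match concepts of two ontologies by cosine similarity of (fabricated)
    # knowledge-graph embedding vectors.
    import math

    embeddings = {
        "onto1:Author":  [0.9, 0.1, 0.3],
        "onto1:Paper":   [0.2, 0.8, 0.1],
        "onto2:Writer":  [0.85, 0.15, 0.25],
        "onto2:Article": [0.25, 0.75, 0.2],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    for c1 in ("onto1:Author", "onto1:Paper"):
        best = max(("onto2:Writer", "onto2:Article"),
                   key=lambda c2: cosine(embeddings[c1], embeddings[c2]))
        print(c1, "->", best)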

    Verification of knowledge shared across design and manufacture using a foundation ontology

    Seamless computer-based knowledge sharing between the departments of a manufacturing enterprise is useful in preventing unnecessary design revisions. A lack of interoperability between independently developed knowledge bases, however, is a major impediment to the development of a seamless knowledge-sharing system. Interoperability, the ability to overcome semantic and syntactic differences during computer-based knowledge sharing, can be enhanced through the use of ontologies. Ontologies, in computer science terms, are hierarchical structures of knowledge stored in a computer-based knowledge base. They are widely accepted as an interoperable medium that provides a non-subjective way of storing and sharing knowledge across diverse domains. Some semantic and syntactic differences, however, still crop up when these ontological knowledge bases are developed independently. A case study at an aerospace components manufacturing company suggests that the shape features of a component are perceived differently by the design and manufacturing departments. These differences cause further misunderstanding and misinterpretation when computer-based knowledge-sharing systems are used across the two domains. Foundation or core ontologies can be used to overcome these differences and to ensure a seamless sharing of knowledge, because they provide a common grounding for the domain ontologies used by individual departments. This common grounding can be used by mediation and knowledge verification systems to authenticate the meaning of knowledge understood across different domains. For this reason, this research proposes a knowledge verification framework for developing a system capable of verifying knowledge between domain ontologies that are developed from a common core or foundation ontology. The framework makes use of ontology logic to standardize the way concepts from a foundation and core-concepts ontology are used in domain ontologies, and then verifies the knowledge being shared using the same principles. The Knowledge Frame Language, which is based on Common Logic, is used for formalizing example ontologies. The ontology editor used for browsing and querying ontologies is the Integrated Ontology Development Environment (IODE) by Highfleet Inc. An ontological product modelling technique is also developed in this research to test the proposed framework in the scenario of manufacturability analysis. The proposed framework is then validated through a Java API specially developed for this purpose. Real industrial examples from the case study are used for validation.
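
    The toy Python check below illustrates the idea of verifying that two departments' concepts are grounded in a shared foundation concept before a cross-domain exchange is accepted; the concept hierarchy is invented and the snippet is not the thesis's KFL/IODE-based framework.

    # Walk each concept up to its foundation-ontology ancestors and accept a
    # cross-domain correspondence only if the concepts share a non-trivial ancestor.
    foundation_parent = {                      # invented subsumption links
        "design:Hole":         "core:ShapeFeature",
        "manufacture:Bore":    "core:ShapeFeature",
        "design:CostEstimate": "core:Information",
        "core:ShapeFeature":   "core:Feature",
        "core:Information":    "core:Thing",
        "core:Feature":        "core:Thing",
    }

    def ancestors(concept: str) -> set[str]:
        seen = set()
        while concept in foundation_parent:
            concept = foundation_parent[concept]
            seen.add(concept)
        return seen

    def verified(c1: str, c2: str) -> bool:
        common = ancestors(c1) & ancestors(c2)
        return bool(common - {"core:Thing"})   # sharing only the root proves nothing

    print(verified("design:Hole", "manufacture:Bore"))       # True
    print(verified("design:Hole", "design:CostEstimate"))    # False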

    Interoperability of Enterprise Software and Applications


    Automating Conceptual Schema Reconciliation through a DL-based Schema Matcher

    Data integration is an extremely complex process, especially one involving big data sources. For this reason, several high-level approaches have been proposed to master the inherent complexity of this process. These exploit natural language processing, ontologies to perform the data integration process at the requirement level, or visual languages with icon operators capable of specifying reconciliation operations on data-source conceptual schemas, such as the visual language CoDIL (Conceptual Data Integration Language) that we have recently proposed. In order to provide automated support for suggesting the CoDIL icon operators that can be applied, in this paper we propose a Description Logic-based schema matcher, which extends current attribute matching approaches to more complex constructs of conceptual schemas. We also sketch a possible evolution of the system architecture of the tool CoDIT (Conceptual Data Integration Tool), which supports the data integration process based on CoDIL.
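
    As a crude stand-in for a matcher that goes beyond attribute-level comparison, the Python sketch below pairs conceptual-schema entity types by the overlap of their attribute sets; the schemas are invented and this is not the Description Logic matcher proposed in the paper.

    # Match entity types of two conceptual schemas by comparing their attribute
    # sets rather than single attributes in isolation.
    schema_a = {"Customer": {"name", "address", "birth_date"},
                "Order":    {"order_id", "date", "total"}}
    schema_b = {"Client":   {"name", "address", "date_of_birth"},
                "Purchase": {"order_id", "date", "amount"}}

    def overlap(a1: set[str], a2: set[str]) -> float:
        return len(a1 & a2) / max(len(a1 | a2), 1)

    for ea, attrs_a in schema_a.items():
        eb, score = max(((eb, overlap(attrs_a, ab)) for eb, ab in schema_b.items()),
                        key=lambda t: t[1])
        print(f"{ea} <-> {eb}  (attribute overlap {score:.2f})")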