
    UK utility data integration: overcoming schematic heterogeneity

    In this paper we discuss the syntactic, semantic and schematic issues that inhibit the integration of utility data in the UK. We then focus on the techniques employed within the VISTA project to overcome schematic heterogeneity. A Global Schema based architecture is employed. Although automated approaches to Global Schema definition were attempted, the heterogeneities of the sector proved too great, so a manual approach was adopted. The techniques used to define this schema, and subsequently map source utility data models to it, are discussed in detail. To ensure a coherent integrated model, sub-domain and cross-domain validation issues are then highlighted. Finally, the proposed framework and data flow for schematic integration are introduced.
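
    A minimal sketch of the general technique, with invented schema and field names rather than VISTA's actual models: each source declares a mapping from its native attributes to the manually defined Global Schema, and records are rewritten through that mapping.

    ```python
    # Sketch of Global Schema mapping for heterogeneous utility records.
    # All schema and field names here are hypothetical, not VISTA's.

    # Manually defined Global Schema: one canonical set of attribute names.
    GLOBAL_FIELDS = ["asset_id", "asset_type", "material", "depth_m"]

    # Per-source mappings from native field names to the Global Schema.
    SOURCE_MAPPINGS = {
        "water_co": {"pipe_ref": "asset_id", "kind": "asset_type",
                     "pipe_material": "material", "cover_depth": "depth_m"},
        "gas_co": {"main_id": "asset_id", "category": "asset_type",
                   "mat": "material", "depth_metres": "depth_m"},
    }

    def to_global(source: str, record: dict) -> dict:
        """Rewrite one native record into the Global Schema."""
        mapping = SOURCE_MAPPINGS[source]
        return {mapping[k]: v for k, v in record.items() if k in mapping}

    print(to_global("gas_co", {"main_id": "G-17", "category": "main",
                               "mat": "PE", "depth_metres": 0.9}))
    ```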

    AGENT-BASED NEGOTIATION PLATFORM IN COLLABORATIVE NETWORKED ENVIRONMENT

    This paper proposes an agent-based platform to model and support parallel and concurrent negotiations among organizations acting in the same industrial market. The underlying complexity lies in modelling a dynamic environment where multi-attribute, multi-participant negotiations race over a set of heterogeneous resources. The Interaction Abstract Machines (IAMs) metaphor is used to model the parallelism and the non-deterministic aspects of the negotiation processes that occur in a Collaborative Networked Environment.

    SEEING THE UNSEEN: DELIVERING INTEGRATED UNDERGROUND UTILITY DATA IN THE UK

    In earlier work we proposed a framework to integrate heterogeneous geospatial utility data in the UK. This paper provides an update on the techniques used to resolve semantic and schematic heterogeneities in the UK utility domain. Approaches to data delivery are discussed, including descriptions of three pilot projects, and domain-specific visualization issues are considered. A number of practical considerations that will affect how any implementation architecture is derived from the integration framework are discussed. Considerations of stability, security, currency, operational impact and response time can reveal a number of conflicting constraints. The impacts of these constraints are discussed with respect to either a virtual or a materialised delivery system.

    Framework for the Analysis of the Adaptability, Extensibility, and Scalability of Semantic Information Integration and the Context Mediation Approach

    Technological advances such as Service-Oriented Architecture (SOA) have increased the feasibility and importance of effectively integrating information from an ever-widening number of systems within and across enterprises. A key difficulty in achieving this goal is the pervasive heterogeneity at all levels of information systems. A robust solution to this problem needs to be adaptable, extensible, and scalable. In this paper, we identify the deficiencies of traditional semantic integration approaches. The COntext INterchange (COIN) approach overcomes these deficiencies by declaratively representing data semantics and using a mediator to create the necessary conversion programs from a small number of conversion rules. The capabilities of COIN are demonstrated using an example with 150 data sources, where COIN can automatically generate the more than 22,000 conversion programs needed to enable semantic interoperability using only six parameterizable conversion rules. This paper presents a framework for evaluating the adaptability, extensibility, and scalability of semantic integration approaches. The application of the framework is demonstrated with a systematic evaluation of COIN and other commonly practiced approaches. This work has been supported, in part, by the MITRE Corp., the MIT-MUST project, the Singapore-MIT Alliance, and Suruga Bank.
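
    The "over 22,000" figure for 150 data sources is consistent with one direction-sensitive conversion program per ordered pair of sources; this reading is an assumption, since the abstract does not give the formula:

    \[
    n(n-1) \;=\; 150 \times 149 \;=\; 22{,}350 \quad \text{conversion programs for } n = 150,
    \]

    all of which COIN composes automatically from the six declared parameterizable rules.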

    Semantic recovery of traceability links between system artifacts

    This paper introduces a mechanism to recover traceability links between requirements and logical models in the context of critical systems development. Currently, lifecycle processes are covered by a good number of tools that are used to generate different types of artifacts. A cornerstone capability in the development of critical systems lies in the possibility of automatically recovering traceability links between system artifacts generated at different lifecycle stages. To do so, it is necessary to establish to what extent two or more of these work products are similar, are dependent, or should be explicitly linked together. However, the different types of artifacts and their internal representations pose a major challenge to unifying how system artifacts are represented and, then, linked together. That is why, in this work, a concept-based representation is introduced to provide a semantic and unified description of any system artifact. Furthermore, a traceability function is defined and implemented to exploit this semantic representation and to support the recovery of traceability links between different types of system artifacts. To evaluate the traceability function, a case study in the railway domain is conducted to measure the precision and recall of recovering traceability links between text-based requirements and logical model elements. As the main outcome of this work, the use of a concept-based paradigm to represent system artifacts is demonstrated to be a building block for automatically recovering traceability links within the development lifecycle of critical systems. The research leading to these results has received funding from the H2020 ECSEL Joint Undertaking (JU) under Grant Agreement No. 826452, "Arrowhead Tools for Engineering of Digitalisation Solutions", and from specific national programs and/or funding authorities.
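
    A rough sketch of the general approach, with a naive token-overlap stand-in for the paper's concept extraction and Jaccard similarity as a stand-in for its traceability function: artifacts become concept sets, and pairs whose similarity clears a threshold are proposed as traceability links.

    ```python
    # Sketch: concept-set representation plus a threshold-based traceability
    # function. Illustrative only; the paper's concept extraction and
    # similarity measure are richer than token overlap.

    def concepts(text: str) -> set[str]:
        """Naive stand-in for concept extraction: lowercase content words."""
        stop = {"the", "a", "an", "of", "to", "and", "is", "shall"}
        return {w for w in text.lower().split() if w not in stop}

    def similarity(a: set[str], b: set[str]) -> float:
        """Jaccard similarity between two concept sets."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def recover_links(requirements: dict, model_elements: dict,
                      threshold: float = 0.25):
        """Propose links between artifacts whose similarity clears the threshold."""
        return [(rid, mid, round(similarity(concepts(rt), concepts(mt)), 2))
                for rid, rt in requirements.items()
                for mid, mt in model_elements.items()
                if similarity(concepts(rt), concepts(mt)) >= threshold]

    reqs = {"REQ-1": "The braking controller shall monitor wheel speed"}
    model = {"BLK-7": "Wheel speed monitor block of the braking controller"}
    print(recover_links(reqs, model))  # [('REQ-1', 'BLK-7', 0.83)]
    ```

    Precision and recall can then be computed against a gold-standard link set, as in the paper's railway case study.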

    Provenance: from long-term preservation to query federation and grid reasoning


    Context Interchange as a Scalable Solution to Interoperating Amongst Heterogeneous Dynamic Services

    Many online services access a large number of autonomous data sources and at the same time need to meet different user requirements. It is essential for these services to achieve semantic interoperability among the entities exchanging information. In the presence of an increasing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that the services are implemented using adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by the similar goals of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services where semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. With a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect with the most distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in the traditional hard-wired middleware approach, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require 20,000 to 100 million. COIN achieves this scalability by automatically composing all the comprehensive conversions from a small number of declaratively defined sub-conversions. Singapore-MIT Alliance (SMA).
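
    The claimed growth rates can be checked with a small counting sketch; the source and receiver counts below are assumptions chosen only to reproduce the abstract's 20,000-to-100-million range.

    ```python
    # Counting sketch for the scalability comparison (illustrative only).

    def hardwired_converters(n_sources: int, n_receivers: int) -> int:
        """Hard-wired middleware: one hand-coded program per (source, receiver) pair."""
        return n_sources * n_receivers

    def declared_subconversions(modifiers: list[int]) -> int:
        """Mediator approach: roughly one declared sub-conversion per modifier
        (each parameterized over its distinctions); the mediator composes the
        full conversion programs automatically."""
        return len(modifiers)

    print(hardwired_converters(200, 100))            # 20,000
    print(hardwired_converters(10_000, 10_000))      # 100,000,000
    print(declared_subconversions([3, 2, 4, 2, 5]))  # 5 modifiers -> 5 rules
    ```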

    Semantic Information Integration in the Large: Adaptability, Extensibility, and Scalability of the Context Mediation Approach

    There is a pressing need to effectively integrate information from an ever-increasing number of sources, both on the web and in other existing systems. A key difficulty in achieving this goal is the pervasive heterogeneity at all levels of information systems. Existing and emerging technologies, such as the Web, ODBC, XML, and Web Services, provide essential capabilities for resolving heterogeneities in hardware and software platforms, but they do not address the semantic heterogeneity of the data itself. A robust solution to this problem needs to be adaptable, extensible, and scalable. In this paper, we identify the deficiencies of traditional approaches that address this problem using hand-coded programs or that require complete data standardization. The COntext INterchange (COIN) approach overcomes these deficiencies by declaratively representing data semantics and using a mediator to create the necessary conversion programs from a small number of conversion rules. The capabilities of COIN are demonstrated using an intelligence information integration example consisting of 150 data sources, where COIN can automatically generate the more than 22,000 conversion programs needed to enable semantic integration using only six parameterizable conversion rules. This paper makes a unique contribution by providing a systematic evaluation of COIN and other commonly practiced approaches.

    SemLinker: automating big data integration for casual users

    A data integration approach combines data from different sources and builds a unified view for users. Big data integration is inherently a complex task, and existing approaches are either limited in scope or rely invariably on manual input and intervention from experts or skilled users. SemLinker, an ontology-based data integration system, is part of a metadata management framework for the personal data lake (PDL), a personal store-everything architecture. PDL targets casual and unskilled users; SemLinker therefore adopts an automated data integration workflow to minimize the need for manual input. To support the flat architecture of a lake, SemLinker builds and maintains a schema metadata level without any physical transformation of the data during integration, preserving the data in their native formats while still allowing them to be queried and analyzed. Scalability, heterogeneity, and schema evolution are big data integration challenges addressed by SemLinker. Large real-world datasets of substantial heterogeneity are used to evaluate SemLinker. The results demonstrate the integration efficiency and robustness of SemLinker, especially its capability to automatically handle data heterogeneities and schema evolution.
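
    The schema-metadata idea, leaving data in native formats and reconciling structure only at query time, can be illustrated as follows; the sources, attribute names, and mapping mechanism are hypothetical, not SemLinker's actual API.

    ```python
    # Sketch: query lake data in place through a schema metadata level.
    # All names are hypothetical illustrations.
    import csv, io, json

    RAW_CSV = "sensor,temp_c\nkitchen,21.5\n"
    RAW_JSON = '[{"device": "garage", "temperature": 19.0}]'

    # Metadata level: map each source's native attributes to global ones.
    METADATA = {
        "csv_feed": {"format": "csv",
                     "map": {"sensor": "location", "temp_c": "temperature"}},
        "json_feed": {"format": "json",
                      "map": {"device": "location", "temperature": "temperature"}},
    }

    def read_native(name: str):
        """Parse a source in its native format; the raw data is never rewritten."""
        if METADATA[name]["format"] == "csv":
            return list(csv.DictReader(io.StringIO(RAW_CSV)))
        return json.loads(RAW_JSON)

    def query(name: str, global_attr: str):
        """Answer a global-schema query by translating attribute names on the fly."""
        inverse = {g: n for n, g in METADATA[name]["map"].items()}
        return [row[inverse[global_attr]] for row in read_native(name)]

    print(query("csv_feed", "temperature"))  # ['21.5']
    print(query("json_feed", "location"))    # ['garage']
    ```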

    An incremental method for meaning elicitation of a domain ontology

    The Internet has opened access to an overwhelming amount of data, requiring the development of new applications that automatically recognize, process and manage information available in web sites or web-based applications. The standard Semantic Web architecture exploits ontologies to give a shared (and known) meaning to the elements of each web source. In this context, we developed MELIS (Meaning Elicitation and Lexical Integration System). MELIS couples the lexical annotation module of the MOMIS system with components from CTXMATCH2.0, a tool for eliciting meaning from several types of schemas and matching them. MELIS uses the MOMIS WNEditor and CTXMATCH2.0 to support two main tasks in the MOMIS ontology generation methodology: the source annotation process, i.e. the operation of associating an element of a lexical database with each source element, and the extraction of lexical relationships among elements of different data sources.
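
    The source annotation task, associating a lexical-database sense with each schema element, can be approximated with WordNet through NLTK; this is a naive first-synset illustration, not the MOMIS WNEditor or CTXMATCH2.0.

    ```python
    # Sketch: annotate schema element names with WordNet senses and derive
    # simple lexical relationships (illustrative; not the MELIS implementation).
    from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

    def annotate(element_name: str):
        """Naive annotation: pick the first WordNet synset, if any."""
        synsets = wn.synsets(element_name)
        return synsets[0] if synsets else None

    def lexical_relationship(a: str, b: str) -> str:
        """Report a simple WordNet relation between two annotated elements."""
        sa, sb = annotate(a), annotate(b)
        if sa is None or sb is None:
            return "unannotated"
        if sb in sa.hypernyms() or sa in sb.hypernyms():
            return "broader/narrower term"
        sim = sa.wup_similarity(sb)
        if sim is not None and sim > 0.8:
            return "closely related"
        return "unrelated"

    print(lexical_relationship("car", "vehicle"))
    ```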