
    The Form of Organization for Small Business

    Matching and integrating ontologies has been a desirable technique in areas such as data fusion, knowledge integration, the Semantic Web and the development of advanced services in distributed systems. Unfortunately, the heterogeneity of ontologies poses major obstacles to the development of this technique. This licentiate thesis describes an approach that tackles the problem of ontology integration using description logics and production rules, on both a syntactic and a semantic level. Concepts in ontologies are matched and integrated to generate ontology intersections. Context is extracted, and rules for reasoning over heterogeneous ontologies with contexts are developed. Ontologies are integrated by two processes. The first generates an ontology intersection from two OWL ontologies: an independent ontology containing non-contradictory assertions based on the original ontologies. The second is carried out by rules that extract context, such as ontology content and ontology description data, e.g. time and ontology creator. The integration is designed for conceptual ontology integration; instance information is not considered, either in the integration process or in its results. An ontology reasoner is used in the integration process to check that the two OWL ontologies are not violated, and a rule engine handles conflicts according to production rules. The ontology reasoner checks the satisfiability of concepts with the help of anchors, i.e. synonyms and string-identical entities; production rules are applied to integrate the ontologies, under the constraint that the original ontologies must not be violated. The second integration process applies production rules to the context data of the ontologies. Ontology reasoning in a repository is normally conducted within the boundary of each ontology; with context rules, however, reasoning is carried out across ontologies. The contents of an ontology provide context for its defined entities and are extracted with the help of an ontology reasoner. Ontology metadata provide further criteria for describing ontologies. Rules using context, also called context rules, are developed and built into the repository; new rules can also be added. The scientific contribution of the thesis is the proposed approach, which applies semantics-based techniques as a complementary method for matching and integrating ontologies semantically. With the illustration of the ontology integration process, the context rules and a few manually integrated ontology results, the approach shows the potential to support the development of advanced knowledge-based services.
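    The anchor-based matching step described above can be sketched in a few lines. This is an illustrative toy, not the thesis implementation: concepts from two ontologies are paired via anchors (string-identical names and listed synonyms), and pairs that would violate a known disjointness axiom are discarded before entering the intersection. All names and the `disjoint_pairs` parameter are invented for the example.

    ```python
    # Toy sketch of anchor-based concept matching between two ontologies.
    # Anchors are string-identical names (case-insensitive) or declared synonyms.

    def build_anchors(concepts_a, concepts_b, synonyms):
        """Pair up concepts that share a name or are listed as synonyms."""
        anchors = set()
        for a in concepts_a:
            for b in concepts_b:
                if a.lower() == b.lower() or (a, b) in synonyms or (b, a) in synonyms:
                    anchors.add((a, b))
        return anchors

    def intersect(concepts_a, concepts_b, synonyms, disjoint_pairs=frozenset()):
        """Keep only anchor pairs whose merge would not violate a disjointness axiom."""
        return {pair for pair in build_anchors(concepts_a, concepts_b, synonyms)
                if pair not in disjoint_pairs}

    onto_a = {"Person", "Vehicle", "Automobile"}
    onto_b = {"person", "Car", "Bicycle"}
    syns = {("Automobile", "Car")}
    print(sorted(intersect(onto_a, onto_b, syns)))
    # [('Automobile', 'Car'), ('Person', 'person')]
    ```

    A real system would of course delegate the satisfiability check to a description-logic reasoner rather than a lookup table of disjoint pairs.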

    Ontology-based patterns for the integration of business processes and enterprise application architectures

    Increasingly, enterprises are using Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI). SOA has the potential to bridge the gap between business and technology, to improve the reuse of existing applications, and to improve interoperability with new ones. In addition to service architecture descriptions, architecture abstractions like patterns and styles capture design knowledge and allow the reuse of successfully applied designs, thus improving the quality of software. Knowledge gained from integration projects can be captured to build a repository of semantically enriched, experience-based solutions. Business patterns identify the interaction and structure between users, business processes, and data. Specific integration and composition patterns at a more technical level address enterprise application integration and capture reliable architecture solutions. We use an ontology-based approach to capture architecture and process patterns. Ontology techniques for pattern definition, extension and composition are developed, and their applicability in business process-driven application integration is demonstrated.

    Past, present and future of information and knowledge sharing in the construction industry: Towards semantic service-based e-construction

    The paper reviews product data technology initiatives in the construction sector and provides a synthesis of related ICT industry needs. A comparison between (a) the data-centric characteristics of Product Data Technology (PDT) and (b) ontology, with a focus on semantics, is given, highlighting the pros and cons of each approach. The paper advocates the migration from data-centric application integration to ontology-based business process support, and proposes inter-enterprise collaboration architectures and frameworks based on semantic services, underpinned by ontology-based knowledge structures. The paper discusses the main reasons behind the low industry take-up of product data technology, and proposes a preliminary roadmap for wide industry diffusion of the proposed approach. In this respect, the paper stresses the value of adopting alliance-based modes of operation.

    An Ontology-Based Data Integration System for Data and Multimedia Sources

    Data integration is the problem of combining data residing at distributed heterogeneous sources, including multimedia sources, and providing the user with a unified view of these data. Ontology-based data integration involves the use of ontologies to effectively combine data and information from multiple heterogeneous sources [16]. Ontologies, with respect to the integration of data sources, can be used for the identification and association of semantically corresponding information concepts, i.e. for the definition of semantic mappings among concepts of the information sources. MOMIS is a data integration system which performs information extraction and integration from both structured and semi-structured data sources [6]. In [5] MOMIS was extended to manage "traditional" and "multimedia" data sources at the same time. STASIS is a comprehensive application suite which allows enterprises to simplify the mapping process between data schemas based on semantics [1]. Moreover, in STASIS, a general framework to perform ontology-driven semantic mapping has been proposed [7]. This paper describes the early effort to combine the MOMIS and STASIS frameworks in order to obtain an effective approach to ontology-based data integration for data and multimedia sources.
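    The semantic mappings the abstract describes can be pictured as a table from local source attributes to global concepts, through which each source record is rewritten into the mediated view. This is a hypothetical sketch in the spirit of MOMIS/STASIS; the schema, source names and attributes are invented, not taken from either system.

    ```python
    # Toy mediated-schema mapping: (source, local attribute) -> global concept.
    # Unmapped attributes are simply dropped from the unified view.

    MAPPINGS = {
        ("db1", "name"): "title",
        ("db1", "author"): "creator",
        ("media1", "caption"): "title",
        ("media1", "taken_in"): "year",
    }

    def to_global(source, record):
        """Rewrite one source record into the global (mediated) schema."""
        return {MAPPINGS[(source, k)]: v
                for k, v in record.items() if (source, k) in MAPPINGS}

    print(to_global("media1", {"caption": "Sunset", "taken_in": 2009, "format": "jpeg"}))
    # {'title': 'Sunset', 'year': 2009}
    ```

    In a real mediator the mapping table is derived from ontology-level correspondences rather than written by hand, which is exactly the step STASIS aims to simplify.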

    Contextual Semantic Integration For Ontologies

    Information integration in organisations has been hindered by differences in the software applications used and by the structural and semantic differences of the different data sources (de Bruijn, 2003). This is a common problem in the area of Enterprise Application Integration (EAI), where numerous ad-hoc programs have typically been created to perform the integration process. More recently, ontologies have been introduced into this area as a possible solution to these problems, but most current approaches to ontology integration only address platform, syntactic and structural differences and do not address the semantic differences between the data sources (de Bruijn, 2003). For ontology semantic integration, the underlying meaning of each element is needed. An approach based on introducing the contextualisation of the terms used in an ontology is proposed. This approach is called Contextual Semantic Integration for Ontologies.

    RDF-Based Data Integration for Workflow Systems

    To meet the requirements of interoperability, the enactment of workflow systems for processes should tackle the problem of data integration for effective data sharing and exchange. This paper aims to flexibly describe the workflow entities and relationships emerging in process-centred environments through innovative ontology engineering, supported by Resource Description Framework (RDF) based languages and tools. Our framework considers how to position the ontology level within the data integration dimension. Taking a more realistic approach towards interoperability, we present basic constructs of a workflow-specific ontology, with a suite of selectively created classes and properties. In particular, we demonstrate an example description of Event Condition Action (ECA) rules by extensions of RDF. As an inter-lingua, the proposed vocabulary and semantics can be mapped onto other process description languages as well as the simple XML-based data representation of our earlier workflow prototype.
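    To make the ECA idea concrete, the sketch below describes a rule as RDF-style (subject, predicate, object) triples, here as plain tuples rather than a full RDF toolkit. The namespace and every vocabulary term are invented for illustration; they are not the paper's actual ontology.

    ```python
    # A workflow ECA rule as a tiny RDF-style triple graph.
    # Each triple is (subject, predicate, object); URIs are plain strings.

    WF = "http://example.org/workflow#"   # hypothetical namespace

    rule_graph = {
        (WF + "rule1", WF + "onEvent",   WF + "taskCompleted"),
        (WF + "rule1", WF + "condition", WF + "approvalPending"),
        (WF + "rule1", WF + "action",    WF + "notifyManager"),
    }

    def actions_for(graph, event):
        """Actions of rules triggered by an event (condition checking omitted)."""
        triggered = {s for s, p, o in graph if p == WF + "onEvent" and o == event}
        return {o for s, p, o in graph if p == WF + "action" and s in triggered}

    print(actions_for(rule_graph, WF + "taskCompleted"))
    # {'http://example.org/workflow#notifyManager'}
    ```

    Because the rule itself is just data in the graph, it can be serialized, exchanged, and mapped onto other process description languages in the way the abstract suggests.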

    KA-SB: from data integration to large scale reasoning

    Background: The analysis of information in the biological domain is usually focused on data from single on-line data sources. Unfortunately, studying a biological process requires access to disperse, heterogeneous, autonomous data sources; in this context, an analysis of the information is not possible without integrating such data. Methods: KA-SB is a querying and analysis system for end users based on combining a data integration solution with a reasoner. The tool follows a two-step process: 1) KOMF, the Khaos Ontology-based Mediator Framework, retrieves information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a persistent, high-performance reasoner (DBOWL), where it can be further analyzed by means of querying and reasoning. Results: In this paper we present a novel system that combines a mediation system with the reasoning capabilities of a large-scale reasoner to provide a way of finding new knowledge and analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. The tool uses a graphical query interface to build user queries easily, which shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion: These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main-memory reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool (http://khaos.uma.es/KA-SB), which uses the BioPAX Level 3 ontology as the integration schema and integrates the UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases.
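    The kind of inference such a persistent reasoner precomputes can be illustrated with a toy forward-chaining step that materializes type assertions upward through a subclass hierarchy. This is purely illustrative of the technique, not DBOWL's implementation; the class hierarchy and instance identifier are made up.

    ```python
    # Toy materialization of inferred types through an rdfs:subClassOf chain.
    # Persistent reasoners precompute closures like this so queries over huge
    # instance sets need no in-memory reasoning at query time.

    SUBCLASS_OF = {"Enzyme": "Protein", "Protein": "Molecule"}  # child -> parent

    def materialize_types(instance_types):
        """Close each instance's type set upward through the hierarchy."""
        closed = {}
        for inst, types in instance_types.items():
            seen = set(types)
            frontier = list(types)
            while frontier:
                parent = SUBCLASS_OF.get(frontier.pop())
                if parent and parent not in seen:
                    seen.add(parent)
                    frontier.append(parent)
            closed[inst] = seen
        return closed

    print(sorted(materialize_types({"P12345": {"Enzyme"}})["P12345"]))
    # ['Enzyme', 'Molecule', 'Protein']
    ```

    Over millions of integrated instances this closure is exactly the kind of result that outgrows main-memory reasoners, motivating a persistent store.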

    Uncertainty in Automated Ontology Matching: Lessons Learned from an Empirical Experimentation

    Data integration is considered a classic research field and a pressing need within the information science community. Ontologies play a critical role in such a process by providing well-consolidated support to link and semantically integrate datasets via interoperability. This paper approaches data integration from an application perspective, looking at techniques based on ontology matching. An ontology-based process may only be considered adequate if manual matching of the different sources of information is assumed. However, since that approach becomes unrealistic once the system scales up, automation of the matching process becomes a compelling need. Therefore, we have conducted experiments on actual data with the support of existing tools for automatic ontology matching from the scientific community. Even in a relatively simple case study (the spatio-temporal alignment of global indicators), the outcomes clearly show significant uncertainty resulting from errors and inaccuracies along the automated matching process. More concretely, this paper aims to test a bottom-up knowledge-building approach on real-world data, discuss the lessons learned from the experimental results of the case study, and draw conclusions about uncertainty and uncertainty management in an automated ontology matching process. While the most common evaluation metrics clearly demonstrate the unreliability of fully automated matching solutions, properly designed semi-supervised approaches seem mature enough for more generalized application.
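    The "most common evaluation metrics" for an alignment are precision, recall, and F1 against a manually built reference alignment. A minimal sketch, with invented indicator-to-concept correspondences standing in for the paper's actual data:

    ```python
    # Evaluating an automatic alignment against a gold-standard reference.
    # Each correspondence is a (source entity, target entity) pair.

    def evaluate(found, reference):
        """Return (precision, recall, F1) of found matches vs. the gold set."""
        tp = len(found & reference)                      # true positives
        precision = tp / len(found) if found else 0.0
        recall = tp / len(reference) if reference else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    gold = {("gdp", "GDP_total"), ("pop", "Population"), ("co2", "CO2_emissions")}
    auto = {("gdp", "GDP_total"), ("pop", "PopDensity")}   # one hit, one miss-match

    p, r, f = evaluate(auto, gold)
    print(round(p, 2), round(r, 2), round(f, 2))
    # 0.5 0.33 0.4
    ```

    Numbers like these, well below 1.0 even on a small example, are the quantitative face of the uncertainty the paper discusses, and why a human-in-the-loop, semi-supervised step remains necessary.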

    Integrated management of hierarchical levels: towards a CAPE tool

    The integration of decision-making procedures usually assigned to different hierarchical production systems requires the use of complex mathematical models and high computational effort, in addition to the need for extensive management of data and knowledge within the production systems. This work addresses this integration problem and proposes a comprehensive solution approach, as well as guidelines for Computer Aided Process Engineering (CAPE) tools managing the corresponding cyberinfrastructure. This study presents a methodology based on a domain ontology, which is used as the connector between the introduced data, the different available formulations developed to solve the decision-making problem, and the information necessary to build the finally required problem instance. The methodology has demonstrated its capability to help exploit the different available decision-making problem formulations in complex cases, leading to new applications and/or extensions of these formulations in a robust and flexible way.