8 research outputs found

    NOMSA: Automated modularisation for abstraction modules

    Large and complex ontologies lead to difficulty in use by humans and cause processing problems for software agents. Modularity has been proposed to address this problem. Current methods and tools can be used to create only some of the existing types of required modules. To augment the options for modularisation, we present novel methods to create five types of abstraction modules: axiom abstraction, vocabulary abstraction, high-level abstraction, weighted abstraction, and feature expressiveness. They have been implemented in the novel tool NOMSA for automated modularisation, which also offers a GUI.
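
    As a rough, self-contained sketch of the vocabulary-abstraction idea (not NOMSA's own algorithm), the code below keeps only those axioms whose signature overlaps a chosen seed vocabulary. The toy axiom representation and all names are illustrative placeholders.

```python
# Toy vocabulary-style abstraction: retain axioms that mention a seed term.
# This is an illustration of the general idea, not NOMSA's implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Axiom:
    text: str                  # human-readable rendering of the axiom
    signature: frozenset       # class/property names the axiom mentions

def vocabulary_module(axioms, seed):
    """Return the axioms whose signature shares at least one term with `seed`."""
    return [ax for ax in axioms if ax.signature & seed]

ontology = [
    Axiom("Lion SubClassOf Animal", frozenset({"Lion", "Animal"})),
    Axiom("Animal SubClassOf hasPart some Organ", frozenset({"Animal", "hasPart", "Organ"})),
    Axiom("Car SubClassOf Vehicle", frozenset({"Car", "Vehicle"})),
]

module = vocabulary_module(ontology, seed={"Animal"})
for ax in module:
    print(ax.text)   # prints only the two Animal-related axioms
```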

    ROMULUS: a Repository of Ontologies for MULtiple USes populated with foundational ontologies

    A foundational ontology contributes to ontology-driven conceptual data modelling and is used to solve interoperability issues among domain ontologies. Multiple foundational ontologies have been developed in recent years, and most of them are available in several versions. This has re-introduced the interoperability problem, increased the need for a coordinated and structured comparison and elucidation of modelling decisions, and raised the requirement for software infrastructure to address this. We present here a basic step in that direction with the Repository of Ontologies for MULtiple USes, ROMULUS, which is the first online library of machine-processable, modularised, aligned, and logic-based merged foundational ontologies. In addition to the typical features of a model repository, it has a foundational ontology recommender covering features of six foundational ontologies, tailor-made modules for easier reuse, and a catalogue of mappable and non-mappable elements among the BFO, GFO and DOLCE foundational ontologies.
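
    The ROMULUS recommender itself is an online tool; purely as a hypothetical illustration of feature-based selection, the sketch below scores foundational ontologies by how many of a user's required features they cover. The feature sets shown are made-up placeholders, not ROMULUS's actual catalogue.

```python
# Hypothetical feature-based recommender: rank ontologies by feature coverage.
# The feature sets are illustrative placeholders only.
FEATURES = {
    "DOLCE": {"descriptive stance", "perdurants", "qualities"},
    "BFO":   {"realist stance", "OWL version", "small size"},
    "GFO":   {"perdurants", "OWL version", "levels of reality"},
}

def recommend(required):
    """Rank ontologies by how many of the required features they cover."""
    scores = [(name, len(feats & required)) for name, feats in FEATURES.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(recommend({"perdurants", "OWL version"}))
```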

    An empirically-based framework for ontology modularization

    Modularity is increasingly used as an approach to the information overload problem in ontologies. It eases cognitive complexity for humans and computational complexity for machines. The current literature on modularity focuses mainly on techniques, tools, and evaluation metrics. However, ontology developers still face difficulty in selecting the correct technique for a specific application, and the current tools for modularity are not sufficient. These issues stem from a lack of theory about the modularisation process. To solve this problem, several researchers have proposed a framework for modularity, but this has not been realised until now. In this article, we survey the existing literature to identify and populate dimensions of modules, experimentally evaluate and characterise 189 existing modules, and create a framework for modularity based on these results. The framework guides the ontology developer throughout the modularisation process. We evaluate the framework with a use case for the Symptom ontology.
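
    As an illustration of what characterising a module along size-related dimensions could look like (not the paper's own tooling), the sketch below computes a few simple counts with the rdflib library; the module file name is a placeholder.

```python
# Rough module characterisation: count triples, classes, properties, and
# subclass axioms in an OWL file. Illustrative only, not the paper's metrics.
from rdflib import Graph
from rdflib.namespace import RDF, RDFS, OWL

def module_metrics(path):
    g = Graph()
    g.parse(path, format="xml")   # assuming an RDF/XML serialisation
    classes = set(g.subjects(RDF.type, OWL.Class))
    object_props = set(g.subjects(RDF.type, OWL.ObjectProperty))
    subclass_axioms = list(g.subject_objects(RDFS.subClassOf))
    return {
        "triples": len(g),
        "classes": len(classes),
        "object_properties": len(object_props),
        "subclass_axioms": len(subclass_axioms),
    }

# print(module_metrics("symptom_module.owl"))   # hypothetical module file
```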

    Toward a framework for ontology modularity

    Dividing up data or information into smaller components (modules) is a well-known approach to a range of problems, such as scalability and model comprehension. The use of modules in ontologies at the knowledge layer is receiving increased attention, and a plethora of approaches, algorithms, and tools exist, which, however, yield only very limited success. This is mainly because the wrong combinations of techniques are being used. To solve this issue, we examine the modules' use cases, types, techniques, and properties from the literature. This is used to create a framework for ontology modularity, such that a user with a certain use case will know the type of module needed, and therewith also the appropriate technique to realise it and the properties the resultant module will have. This framework is then evaluated with three case studies, being the QUDT, FMA, and OpenGalen ontologies.
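
    The core idea of such a framework can be pictured as a lookup from use case to module type and technique. The sketch below is a hypothetical encoding of that idea; the concrete pairings are illustrative placeholders, not the paper's actual mapping.

```python
# Hypothetical use-case -> module type -> technique lookup. The pairings are
# placeholders for illustration, not the framework's published tables.
FRAMEWORK = {
    "ontology reuse":        {"module_type": "locality-based module", "technique": "syntactic locality extraction"},
    "model comprehension":   {"module_type": "abstraction module",    "technique": "high-level abstraction"},
    "collaborative editing": {"module_type": "subject-domain module", "technique": "graph partitioning"},
}

def recommend(use_case):
    """Return the module type and technique suggested for a given use case."""
    try:
        return FRAMEWORK[use_case]
    except KeyError:
        raise ValueError(f"No guidance recorded for use case: {use_case!r}")

print(recommend("ontology reuse"))
```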

    Experimentally motivated transformations for intermodel links between conceptual models

    Complex system development and information integration at the conceptual layer raise the requirement to be able to declare intermodel assertions between entities in models that may, or may not, be represented in the same modelling language. This is compounded by the fact that semantically equivalent notions may have been represented with different elements, such as an attribute versus a class. We first investigate such occurrences in six ICOM projects and 40 models with 33 schema matchings. While equivalence and subsumption are in the overwhelming majority, they extend mainly to different types of attributes, thereby requiring non-1:1 mappings. We present a solution that bridges these semantic gaps. To facilitate implementation, the mappings and transformations are declared in ATL. This avails of a common, logic-based metamodel to aid verification of the links. This is currently being implemented as a proof of concept in the ICOM tool.
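
    The transformations in the paper are declared in ATL; the snippet below is only a Python analogue of the attribute-to-class case, to show why such a mapping is not 1:1: one source attribute becomes a new class plus a linking relationship in the target model. All names are illustrative.

```python
# Illustrative attribute-to-class reification (a Python analogue, not the
# paper's ATL rules): one attribute yields a class plus a relationship.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    datatype: str

@dataclass
class Class_:
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Relationship:
    source: str
    target: str

def reify_attribute(owner, attr):
    """Turn an attribute of `owner` into a class of its own plus a linking relationship."""
    new_class = Class_(name=attr.name.capitalize(),
                       attributes=[Attribute("value", attr.datatype)])
    link = Relationship(source=owner.name, target=new_class.name)
    return new_class, link

person = Class_("Person", [Attribute("address", "String")])
address_class, has_address = reify_attribute(person, person.attributes[0])
print(address_class, has_address)
```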

    Exploring Reasoning with the DMOP Ontology

    We describe the Data Mining OPtimization Ontology (DMOP), which was developed to support informed decision-making at various choice points of the knowledge discovery (KD) process. DMOP contains in-depth descriptions of DM tasks, data, algorithms, hypotheses, and workflows. Its development raised a number of non-trivial modeling problems, the solution of which demanded maximal exploitation of OWL 2's representational potential. The choices made led to v5.4 of the DMOP ontology. We report some evaluations of processing DMOP with a standard reasoner, considering different DMOP features.
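
    A minimal sketch of the kind of measurement involved (not the paper's evaluation harness): load an OWL 2 ontology with the owlready2 library and time classification with its bundled HermiT reasoner. The file path is a placeholder, and a Java runtime is assumed for HermiT.

```python
# Time classification of an OWL 2 ontology with owlready2's bundled HermiT.
# Placeholder path; requires a local copy of the ontology and a Java runtime.
import time
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file:///path/to/dmop.owl").load()   # placeholder path

start = time.perf_counter()
with onto:
    sync_reasoner()          # classify with the bundled HermiT reasoner
elapsed = time.perf_counter() - start

print(f"Classified {len(list(onto.classes()))} classes in {elapsed:.1f} s")
```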