
    Exploring Reasoning with the DMOP Ontology

    We describe the Data Mining OPtimization Ontology (DMOP), which was developed to support informed decision-making at various choice points of the knowledge discovery (KD) process. DMOP contains in-depth descriptions of DM tasks, data, algorithms, hypotheses, and workflows. Its development raised a number of non-trivial modeling problems, the solution to which demanded maximal exploitation of OWL 2's representational potential. The choices made led to v5.4 of the DMOP ontology. We report evaluations of processing DMOP with a standard reasoner, considering different DMOP features.
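    To give a rough sense of what such a reasoning evaluation involves, here is a minimal sketch that classifies DMOP with a standard reasoner. It assumes a local copy of the ontology (the file name is a placeholder) and the owlready2 package, which bundles HermiT; the paper prescribes neither.

        import os
        import time
        from owlready2 import get_ontology, sync_reasoner

        # Load a local copy of DMOP; the path is an assumption for this sketch.
        onto = get_ontology("file://" + os.path.abspath("DMOP.owl")).load()

        start = time.perf_counter()
        with onto:
            sync_reasoner()          # classify with the bundled HermiT reasoner
        elapsed = time.perf_counter() - start

        print(f"{len(list(onto.classes()))} classes classified in {elapsed:.1f}s")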

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown to be large and complex to the point where they cause cognitive overload for humans, in understanding and maintaining them, and for machines, in processing and reasoning over them. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider using existing ontologies that are of good quality. However, an entire large ontology is not always required for a particular application; only a subset of the knowledge may be relevant. Modularity deals with simplifying an ontology, for a particular context or by structure, into smaller ontologies, thereby preserving the contextual knowledge. There are a number of benefits to modularising an ontology, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and assist with complexity.

    However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context. Partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from a number of issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. In order to solve these problems, a number of theoretical aspects have to be investigated. It is important to determine which ontology module types are the most widely used and to characterise each such type by its distinguishing properties. One must also identify the properties that a 'good' or 'usable' module meets.

    In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity to define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically-based framework for modularity by classifying a set of ontologies with them, which results in dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on the current tools and techniques and experimentally evaluated them. The algorithms, implemented in the tool NOMSA, perform as well as other tools on most performance criteria. Two of NOMSA's algorithms generate modules of good quality when compared to the expected dependencies of the framework; the modules of the remaining three algorithms correspond to some of the expected values for the metrics for the ontology set in question.

    The success in solving these problems with modularity resulted in a formal foundation for modularity, which comprises: an exhaustive set of modularity dimensions with dependencies between them; a framework for guiding the modularisation process and annotating modules; a way to measure the quality of modules using the novel TOMM tool, which has new and existing evaluation metrics; the SUGOI tool for module management, which has been investigated for module interchangeability; and an implementation of new algorithms to fill the gaps left by insufficient tools and techniques.
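    To make the notion of a module concrete, the following is a deliberately naive, hypothetical sketch of signature-based module extraction with owlready2: it collects the seed classes and their named ancestors only, ignoring anonymous restrictions, and is not the algorithm of NOMSA, TOMM, or any other tool from the thesis; the input file and seed class names are placeholders.

        import os
        from owlready2 import get_ontology, ThingClass

        # The input ontology and the seed signature are placeholders.
        onto = get_ontology("file://" + os.path.abspath("large-ontology.owl")).load()

        def naive_module(seed_names, ontology):
            """Return the seed classes together with all of their named ancestors."""
            module = set()
            frontier = [c for c in ontology.classes() if c.name in seed_names]
            while frontier:
                cls = frontier.pop()
                if cls in module:
                    continue
                module.add(cls)
                # Follow named superclasses only; anonymous restrictions are skipped,
                # which is why real extraction techniques are far more sophisticated.
                frontier.extend(p for p in cls.is_a if isinstance(p, ThingClass))
            return module

        module = naive_module({"SeedClassA", "SeedClassB"}, onto)
        print(f"module size: {len(module)} classes")   # size is one basic quality indicator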

    Automatically changing modules in modular ontology development and management

    Modularity has been proposed as a solution to deal with large ontologies. This requires various module management tasks, such as swapping an outdated module for a new one, or a computationally costly one for a leaner fragment. No mechanism exists to exchange an arbitrary module automatically. To automate this otherwise manual task, we modified the SUGOI algorithm into SUGOI-Gen, with which one can swap any module within a modular system, implemented it, and wrapped a GUI around it. We carried out an experimental evaluation with six ontologies covering three different use-cases to determine whether arbitrary interchangeability is practically doable, and to what extent such changes affect the quality of the module and automated reasoning over it. The results are positive, with the success rate varying between 22% and 100% depending on the number of mappings between the source and target module. The evaluation also revealed that interchangeability does indeed have an impact on a module's metrics. Regarding reasoning, when comparing an original ontology to one where a module has been swapped, the processing time is greatly improved for all but one of the swapped modules in the set.
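    For intuition only, here is a minimal sketch of swapping one imported module for another with owlready2, assuming the modular system is composed via owl:imports; the file names and the IRI fragment used to locate the old module are placeholders, and SUGOI-Gen's mapping-based interchange is considerably more involved than this.

        import os
        from owlready2 import get_ontology, sync_reasoner

        base = get_ontology("file://" + os.path.abspath("modular-system.owl")).load()
        old = next(o for o in base.imported_ontologies
                   if "heavy-module" in o.base_iri)          # placeholder IRI fragment
        new = get_ontology("file://" + os.path.abspath("lean-module.owl")).load()

        base.imported_ontologies.remove(old)    # drop the outdated or costly module
        base.imported_ontologies.append(new)    # pull in the leaner replacement

        with base:
            sync_reasoner()    # re-classify to gauge the effect on reasoning time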

    A Semantic Data Grid for Satellite Mission Quality Analysis

    The combination of Semantic Web and Grid technologies and architectures eases the development of applications that share heterogeneous resources (data and computing elements) belonging to several organisations. The Aerospace domain has an extensive and heterogeneous network of facilities and institutions, with a strong need to share both data and computational resources for complex processing tasks. One such task is monitoring and data analysis for Satellite Missions. This paper presents a Semantic Data Grid for satellite missions, where flexibility, scalability, interoperability, extensibility, and efficient development have been considered the key issues to be addressed.

    An empirically-based framework for ontology modularization

    Modularity is increasingly used as an approach to solve the information overload problem in ontologies. It eases cognitive complexity for humans, and computational complexity for machines. The current literature on modularity focuses mainly on techniques, tools, and evaluation metrics. However, ontology developers still face difficulty in selecting the correct technique for specific applications, and the current tools for modularity are not sufficient. These issues stem from a lack of theory about the modularisation process. To solve this problem, several researchers have proposed a framework for modularity, but this has not been realised until now. In this article, we survey the existing literature to identify and populate dimensions of modules, experimentally evaluate and characterise 189 existing modules, and create a framework for modularity based on these results. The framework guides the ontology developer throughout the modularisation process. We evaluate the framework with a use-case for the Symptom ontology.

    Toward a framework for ontology modularity

    Dividing up data or information into smaller components---modules---is a well-known approach to a range of problems, such as scalability and model comprehension. The use of modules in ontologies at the knowledge layer is receiving increased attention, and a plethora of approaches, algorithms, and tools exist, which, however, yield only very limited success. This is mainly because the wrong combinations of techniques are being used. To solve this issue, we examine modules' use-cases, types, techniques, and properties from the literature. This is used to create a framework for ontology modularity, such that a user with a certain use-case will know the type of module needed, and therewith also the appropriate technique to realise it and the properties the resultant modules will have. This framework is then evaluated with three case studies, being the QUDT, FMA, and OpenGalen ontologies.

    Describing and Organizing Semantic Web and Machine Learning Systems in the SWeMLS-KG

    In line with the general trend in artificial intelligence research to create intelligent systems that combine learning and symbolic components, a new sub-area has emerged that focuses on combining machine learning (ML) components with techniques developed by the Semantic Web (SW) community - Semantic Web Machine Learning (SWeML for short). Due to its rapid growth and impact on several communities in the last two decades, there is a need to better understand the space of these SWeML Systems, their characteristics, and trends. Yet, surveys that adopt principled and unbiased approaches are missing. To fill this gap, we performed a systematic study and analyzed nearly 500 papers published in the last decade in this area, focusing on evaluating architectural and application-specific features. Our analysis identified a rapidly growing interest in SWeML Systems, with a high impact on several application domains and tasks. Catalysts for this rapid growth are the increased application of deep learning and knowledge graph technologies. By leveraging the in-depth understanding of this area acquired through this study, a further key contribution of this paper is a classification system for SWeML Systems, which we publish as an ontology. (Preprint of a paper in the resource track of the 20th Extended Semantic Web Conference, ESWC'23.)
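    Since the classification system is published as an ontology (the SWeMLS-KG), one would typically explore it with SPARQL. The sketch below uses rdflib over a local dump; the file name, namespace, class name, and property name are assumptions for illustration and may well differ from the published schema.

        from rdflib import Graph

        g = Graph()
        g.parse("swemls-kg.ttl", format="turtle")        # hypothetical local dump

        query = """
        PREFIX swemls: <https://example.org/swemls#>     # placeholder namespace
        SELECT ?system ?pattern WHERE {
          ?system a swemls:System ;                          # assumed class name
                  swemls:hasArchitecturePattern ?pattern .   # assumed property name
        }
        LIMIT 10
        """
        for row in g.query(query):
            print(row.system, row.pattern)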

    Evidence-based Languages for Conceptual Data Modelling Profiles

    To improve database system quality as well as runtime use of conceptual models, many logic-based reconstructions of conceptual data modelling languages have been proposed in a myriad of logics. They each cover their features to a greater or lesser extent and are typically motivated from a logic viewpoint. This raises questions such as what would be an evidence-based common core and what the optimal language profile would be for a conceptual modelling language family. Based on a common metamodel of the static elements of UML Class Diagrams (v2.4.1), ER/EER, and ORM/2, a set of 101 conceptual models, and availing of computational complexity insights from Description Logics, we specify these profiles. There is no known DL language that matches exactly the features of those profiles, and the common core is small (in the tractable $\mathcal{ALNI}$). Although hardly any inconsistencies can be derived with the profiles, it is promising for scalable runtime use of conceptual data models.
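    For illustration (not taken from the paper), a UML-style association worksFor between Employee and Company, with multiplicity 1 on the Company end and 1..* on the Employee end, can be approximated with $\mathcal{ALNI}$ features alone (universal restrictions, unqualified number restrictions, and inverse roles):

        \begin{align*}
        \mathsf{Employee} &\sqsubseteq \forall \mathsf{worksFor}.\mathsf{Company}
            \;\sqcap\; {\geq}1\,\mathsf{worksFor} \;\sqcap\; {\leq}1\,\mathsf{worksFor}\\
        \mathsf{Company}  &\sqsubseteq \forall \mathsf{worksFor}^{-}.\mathsf{Employee}
            \;\sqcap\; {\geq}1\,\mathsf{worksFor}^{-}
        \end{align*}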