
    A Semantic Importing Approach to Reusing Knowledge from Multiple Autonomous Ontology Modules

    We present the syntax and semantics of a modular ontology language SHOIQP to accomplish knowledge integration from multiple ontologies and knowledge reuse from context-specific points of view. Specifically, a SHOIQP ontology consists of multiple ontology modules (each of which can be viewed as a SHOIQ ontology), and concept, role and nominal names can be shared by "importing" relations among modules. The proposed language supports contextualized interpretation, i.e., interpretation from the point of view of a specific package. We establish the necessary and sufficient constraints on domain relations (i.e., the relations between individuals in different local domains) to preserve the satisfiability of concept formulae, monotonicity of inference, and transitive reuse of knowledge.
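    As a rough illustration of the contextualized, package-based semantics the abstract refers to (a sketch only; the notation for local domains, the domain relation, and the image operator are assumptions and not the paper's exact definitions), the interpretation of an imported concept name can be pictured as follows:

```latex
% Hedged sketch of a contextualized, package-based semantics.
% \Delta^{i}, r_{ij} and the image notation are illustrative assumptions.
\[
  \mathcal{I}_i = (\Delta^{i}, \cdot^{\mathcal{I}_i}), \qquad
  r_{ij} \subseteq \Delta^{i} \times \Delta^{j}
  \quad (\text{domain relation from module } P_i \text{ to } P_j)
\]
\[
  r_{ij}(S) = \{\, y \in \Delta^{j} \mid \exists x \in S .\ (x,y) \in r_{ij} \,\}, \qquad
  C^{\mathcal{I}_j} = r_{ij}\!\left(C^{\mathcal{I}_i}\right)
  \ \text{for a concept name } C \text{ imported from } P_i \text{ into } P_j .
\]
```

    The constraints the paper establishes concern which properties such domain relations must satisfy so that satisfiability, monotonicity, and transitive reuse are preserved across chains of imports.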

    An Ontological Framework for Knowledge Management in Systems Engineering Processes

    ISBN: 978-953-7619-94-7, pp. 149-168. Systems Engineering (SE) processes comprise highly creative and knowledge-intensive tasks that involve extensive problem-solving and decision-making activities among interdisciplinary teams (Meinadier, 2002). SE projects involve the definition of multiple artifacts that present different degrees of formalization, such as requirements specifications, system architecture, and hardware/software components. Transitions between the project phases stem from decision-making processes supported both by generally available domain and design knowledge. We argue that knowledge about engineering processes constitutes one of the most valuable assets for SE organizations. Most often, this knowledge is only known implicitly, relying heavily on the personal experience and background of system engineers. To fully exploit this intellectual capital, it must be made explicit and shared among project teams. Consistent and comprehensive knowledge management methods need to be applied to capture and integrate the individual knowledge items emerging in the course of a systems engineering project.

    A method to generate a modular ifcOWL ontology

    Building Information Modeling (BIM) and Semantic Web technologies are becoming increasingly popular in the Architecture, Engineering and Construction (AEC) and Facilities Management (FM) industry to support information management, information exchange and data interoperability. One of the key integration gateways between BIM and the Semantic Web is the ifcOWL ontology, i.e. the Web Ontology Language (OWL) version of the IFC standard, one of the reference technical standards for AEC/FM. Previous studies have shown how a recommended ifcOWL ontology can be automatically generated by converting the IFC standard from the official EXPRESS schema. However, the resulting ifcOWL is a large monolithic ontology that presents serious limitations for real industrial applications in terms of usability and performance (i.e. querying and reasoning). Possible enhancements to reduce the complexity and the data size consist of (1) modularization of ifcOWL, making it easier to use subsets of the entire ontology, and (2) rethinking the contents and structure of an ontology for AEC/FM to better fit the Semantic Web scope and make its usage more efficient. The second approach can be enabled by the first one, since modularization would make it easier to replace some of the ifcOWL modules with new optimized ontologies for the AEC/FM industry. This paper focuses on the first approach, presenting a method to automatically generate a modular ifcOWL ontology. The method aims at minimizing the dependencies between modules to better exploit the modularization. The results are compared with simpler and more straightforward solutions.
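    To make the general idea of extracting a self-contained subset from a large ontology such as ifcOWL concrete, here is a minimal Python sketch using rdflib. It is not the dependency-minimising method of the paper: the file names, the seed entities, and the naive "follow every referenced URI" rule are assumptions for illustration only.

```python
# Hedged sketch: a naive, dependency-following module extractor for a large
# OWL ontology such as ifcOWL. File names, the namespace version and the
# seed signature are illustrative assumptions, not the paper's algorithm.
from rdflib import Graph, URIRef

def extract_module(full_graph: Graph, seed_terms: set[URIRef]) -> Graph:
    """Collect the seed terms plus everything they transitively reference,
    so the extracted module stays self-contained."""
    module = Graph()
    visited: set[URIRef] = set()
    frontier = set(seed_terms)
    while frontier:
        term = frontier.pop()
        if term in visited:
            continue
        visited.add(term)
        # Copy every axiom whose subject is the current term and queue the
        # URIs it refers to as objects.
        for s, p, o in full_graph.triples((term, None, None)):
            module.add((s, p, o))
            if isinstance(o, URIRef) and o not in visited:
                frontier.add(o)
    return module

if __name__ == "__main__":
    g = Graph()
    g.parse("ifcOWL.ttl")  # hypothetical local copy of ifcOWL
    # Namespace assumed for IFC4 ADD2; adjust to the ifcOWL version at hand.
    ifc = "https://standards.buildingsmart.org/IFC/DEV/IFC4/ADD2/OWL#"
    seeds = {URIRef(ifc + "IfcWall"), URIRef(ifc + "IfcDoor")}  # assumed seeds
    extract_module(g, seeds).serialize("ifc_wall_module.ttl", format="turtle")
```

    A real extractor would, among other things, bound the traversal (for example at owl:Thing) and follow blank-node restriction axioms, which this sketch deliberately skips; the paper's contribution lies precisely in choosing module boundaries that minimise such cross-module dependencies.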

    Open biomedical pluralism: formalising knowledge about breast cancer phenotypes

    We demonstrate a heterogeneity of representation types for breast cancer phenotypes and stress that the characterisation of a tumour phenotype often includes parameters that go beyond the representation of a corresponding empirically observed tumour, thus reflecting significant functional features of the phenotypes as well as the epistemic interests that drive the modes of representation. Accordingly, the represented features of cancer phenotypes function as epistemic vehicles aiding various classifications, explanations, and predictions. In order to clarify how the plurality of epistemic motivations can be integrated on a formal level, we distinguish six categories of human agents, as individuals and groups, centred around particular epistemic interests. We analyse the corresponding impact of these groups and individuals on representation types, mapping, and reasoning scenarios. Respecting the plurality of representations, related formalisms, expressivities and aims, as they are found across diverse scientific communities, we argue for a pluralistic ontology integration. Moreover, we discuss and illustrate to what extent such a pluralistic integration is supported by the Distributed Ontology Language (DOL), a meta-language for heterogeneous ontology representation that is currently under standardisation as ISO WD 17347 within the OntoIOp (Ontology Integration and Interoperability) activity of ISO/TC 37/SC 3. In particular, we illustrate how DOL supports representations of parthood at various levels of logical expressivity, mapping of terms, merging of ontologies, as well as non-monotonic extensions based on circumscription, allowing a transparent formal modelling of the normal/abnormal distinction in phenotypes.
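    For readers unfamiliar with the non-monotonic machinery mentioned above, the normal/abnormal distinction can be illustrated with a standard circumscription pattern. This is a generic sketch, not DOL syntax, and the predicate names are hypothetical rather than taken from the breast cancer ontologies discussed.

```latex
% Illustrative circumscription pattern for "normal" phenotypes; the predicate
% names are hypothetical and do not come from the ontologies discussed above.
\[
  T \;=\; \forall x .\ \big( \mathit{TumourPhenotype}(x) \wedge \neg \mathit{Ab}(x)
          \rightarrow \mathit{Normal}(x) \big)
\]
\[
  \mathrm{CIRC}[T;\, \mathit{Ab}]\ \text{selects the models of } T
  \text{ in which the extension of } \mathit{Ab} \text{ is minimal,}
\]
```

    so that phenotypes are assumed normal unless abnormality is explicitly asserted, which is the behaviour the circumscription-based DOL extension is meant to capture.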

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown so large and complex that they cause cognitive overload for humans, in understanding and maintaining them, and for machines, in processing and reasoning over them. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider using existing ontologies that are of good quality. However, an entire large ontology is not always required for a particular application; often only a subset of the knowledge is relevant. Modularity deals with simplifying an ontology, either for a particular context or by structure, into smaller ontologies, thereby preserving the contextual knowledge. There are a number of benefits in modularising an ontology, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and assist with complexity. However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context. Partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from a number of issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. In order to solve these problems, a number of theoretical aspects have to be investigated. It is important to determine which ontology module types are the most widely used and to characterise each such type by its distinguishing properties. One must also identify the properties that a 'good' or 'usable' module satisfies. In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity to define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically-based framework for modularity by classifying a set of ontologies with them, which results in dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on current tools and techniques and evaluated them experimentally. The algorithms of the resulting tool, NOMSA, perform as well as other tools on most performance criteria. Two of NOMSA's algorithms generate modules of good quality when compared to the expected dependencies of the framework; the modules generated by the remaining three algorithms correspond to some of the expected values for the metrics for the ontology set in question.
The success in solving these problems with modularity resulted in a formal foundation for modularity which comprises: an exhaustive set of modularity dimensions with dependencies between them, a framework for guiding the modularisation process and annotating modules, a way to measure the quality of modules using the novel TOMM tool with its new and existing evaluation metrics, the SUGOI tool for module management, which has been investigated for module interchangeability, and an implementation of new algorithms to fill the gaps left by insufficient tools and techniques.
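    As an informal illustration of what automated module evaluation can look like, the following Python sketch computes two generic indicators, relative size and a crude cohesion measure. These are illustrative measures chosen for the sketch, not the specific metrics implemented in the TOMM tool, and the file names are assumptions.

```python
# Hedged sketch: two generic module-quality indicators (relative size and
# internal coupling). Illustrative only; not the TOMM metrics described above.
from rdflib import Graph, URIRef

def relative_size(module: Graph, source: Graph) -> float:
    """Fraction of the source ontology's triples kept in the module."""
    return len(module) / len(source) if len(source) else 0.0

def internal_coupling(module: Graph) -> float:
    """Share of URI references in the module that point to terms the module
    itself describes as subjects (a crude cohesion indicator)."""
    declared = {s for s, _, _ in module if isinstance(s, URIRef)}
    refs = [o for _, _, o in module if isinstance(o, URIRef)]
    if not refs:
        return 0.0
    return sum(1 for o in refs if o in declared) / len(refs)

if __name__ == "__main__":
    full, mod = Graph(), Graph()
    full.parse("ontology.owl")   # hypothetical source ontology
    mod.parse("module.owl")      # hypothetical extracted module
    print(f"relative size:     {relative_size(mod, full):.2f}")
    print(f"internal coupling: {internal_coupling(mod):.2f}")
```

    In practice, such raw numbers only become meaningful once they are related to a module type, which is exactly the kind of dependency between metrics and module types that the thesis establishes empirically.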