
    The modular structure of an ontology: Atomic decomposition

    Extracting a subset of a given ontology that captures all of the ontology’s knowledge about a specified set of terms is a well-understood task. This task can be based, for instance, on locality-based modules. However, a single module allows us to understand neither the topicality, connectedness, structure, or superfluous parts of an ontology, nor the agreement between actual and intended modeling. The strong logical properties of locality-based modules suggest that the family of all such modules of an ontology can support comprehension of the ontology as a whole. However, extracting that family is not feasible, since the number of locality-based modules of an ontology can be exponential w.r.t. its size. In this paper we report on a new approach that enables us to efficiently extract a polynomial representation of the family of all locality-based modules of an ontology. We also describe the fundamental algorithm for pursuing this task, and report on the experiments carried out and the results obtained.
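
    As an illustration of the underlying module-extraction step, the following minimal sketch uses the OWL API's SyntacticLocalityModuleExtractor to pull a locality-based (STAR) module for a seed signature. The ontology IRI and class name are placeholders, and this shows single-module extraction only, not the paper's atomic decomposition.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import uk.ac.manchester.cs.owlapi.modularity.ModuleType;
    import uk.ac.manchester.cs.owlapi.modularity.SyntacticLocalityModuleExtractor;

    import java.util.HashSet;
    import java.util.Set;

    public class ModuleExtractionSketch {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            // Placeholder IRI: substitute the ontology you want to modularise.
            OWLOntology ontology = manager.loadOntology(IRI.create("http://example.org/ontology.owl"));

            OWLDataFactory factory = manager.getOWLDataFactory();
            // Seed signature: the terms whose knowledge the module must preserve (placeholder class).
            Set<OWLEntity> signature = new HashSet<>();
            signature.add(factory.getOWLClass(IRI.create("http://example.org/ontology.owl#Heart")));

            // STAR modules are the nested top-bottom locality-based modules.
            SyntacticLocalityModuleExtractor extractor =
                    new SyntacticLocalityModuleExtractor(manager, ontology, ModuleType.STAR);
            Set<OWLAxiom> module = extractor.extract(signature);

            System.out.println("Module size: " + module.size() + " axioms");
        }
    }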

    A Lightweight Framework for Universal Fragment Composition

    Domain-specific languages (DSLs) are useful tools for coping with complexity in software development. DSLs provide developers with appropriate constructs for specifying and solving the problems they are faced with. While the exact definition of DSLs can vary, they can roughly be divided into two categories: embedded and non-embedded. Embedded DSLs (E-DSLs) are integrated into general-purpose host languages (e.g. Java), while non-embedded DSLs (NE-DSLs) are standalone languages with their own tooling (e.g. compilers or interpreters). NE-DSLs can, for example, be found on the Semantic Web, where they are used for querying or describing shared domain models (ontologies). A common theme with DSLs is naturally their focused expressive power. However, in many cases they do not support non–domain-specific component-oriented constructs that can be useful for developers. Such constructs are standard in general-purpose languages (procedures, methods, packages, libraries, etc.). While E-DSLs have access to such constructs via their host languages, NE-DSLs do not have this opportunity. Instead, to support such notions, each of these languages has to be extended and its tooling updated accordingly. Such modifications can be costly and must be done individually for each language. A solution method for one language cannot easily be reused for another. There currently exists no appropriate technology for tackling this problem in a general manner. Apart from identifying the need for a general approach to address this issue, we extend existing composition technology to provide a language-inclusive solution. We build upon fragment-based composition techniques and make them applicable to arbitrary (context-free) languages. We call this process the universalization of the composition techniques. The techniques are called fragment-based since they view components (reusable software units with interfaces) as pieces of source code that conform to an underlying (context-free) language grammar. The universalization process is grammar-driven: given a base language grammar and a description of the compositional needs w.r.t. the composition techniques, an adapted grammar is created that corresponds to the specified needs. The result is thus an adapted grammar that forms the foundation for defining and composing the desired fragments. We further build upon this grammar-driven universalization approach to allow developers to define the non–domain-specific component-oriented constructs that are needed for NE-DSLs. Developers are able to define both what those constructs should be and how they are to be interpreted (via composition). Thus, developers can effectively define language extensions and their semantics. This solution is presented in a framework that can be reused for different languages, even if their notions of ‘components’ differ. To demonstrate the approach and show its applicability, we apply it to two Semantic Web related NE-DSLs that are in need of component-oriented constructs. We introduce modules to the rule-based Web query language Xcerpt and role models to the Web Ontology Language OWL.
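
    As a purely hypothetical toy sketch of the general idea of fragment-based composition (invented for illustration; it is not the framework described above), fragments can be seen as grammar-conforming pieces of text with named interface slots that composition fills in:

    // Toy, hypothetical illustration only: a fragment is a piece of source text
    // conforming to some grammar, with named slots as its interface; composition
    // fills a slot with another fragment's body.
    public class FragmentCompositionSketch {

        static final class Fragment {
            final String body;
            Fragment(String body) { this.body = body; }

            // Replace the slot <<name>> with another fragment's body.
            Fragment compose(String slot, Fragment part) {
                return new Fragment(body.replace("<<" + slot + ">>", part.body));
            }
        }

        public static void main(String[] args) {
            Fragment module = new Fragment("MODULE m { <<rules>> }");
            Fragment rule = new Fragment("CONSTRUCT answer FROM input END");
            System.out.println(module.compose("rules", rule).body);
            // Prints: MODULE m { CONSTRUCT answer FROM input END }
        }
    }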

    Characterizing Modular Ontologies

    Since large monolithic ontologies are difficult to handle and reuse, ontology modularization has attracted increasing attention. Several approaches and tools have been developed to support ontology modularization. Despite these efforts, a lack of knowledge about the characteristics of modularly organized ontologies prevents further development. This work aims at characterizing modular ontologies. Therefore, we analyze existing modular ontologies by applying selected metrics from software engineering in order to identify recurring structures, i.e. patterns, in modularly organized ontologies. The contribution is a set of four patterns which characterize modularly organized ontologies.
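
    For illustration, a minimal sketch of the kind of software-engineering-style metrics one might compute over ontology modules (size, and coupling via shared signature entities) with the OWL API; the module IRIs and the metric choices are assumptions, not necessarily those used in the paper.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    import java.util.HashSet;
    import java.util.Set;

    public class ModuleMetricsSketch {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            // Placeholder IRIs: two modules of a modularly organized ontology.
            OWLOntology moduleA = manager.loadOntology(IRI.create("http://example.org/moduleA.owl"));
            OWLOntology moduleB = manager.loadOntology(IRI.create("http://example.org/moduleB.owl"));

            // Size metric: number of logical axioms per module.
            System.out.println("A: " + moduleA.getLogicalAxiomCount() + " logical axioms");
            System.out.println("B: " + moduleB.getLogicalAxiomCount() + " logical axioms");

            // Coupling metric (illustrative): entities shared by both module signatures.
            Set<OWLEntity> shared = new HashSet<>(moduleA.getSignature());
            shared.retainAll(moduleB.getSignature());
            System.out.println("Shared entities: " + shared.size());
        }
    }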

    Towards Common Ground in SME: An Ontology of Method Descriptors

    The Method Engineering (ME) community is a prolific research domain where competing Situational Method Engineering (SME) approaches have been defined and used for composing, adapting and/or configuring a method into modular constructs, each according to its own modularization vision. This diversity shows the richness of the ME domain but implies some drawbacks, such as unnecessary confusion for non-ME experts, a lack of standards and interoperability, and a lack of implementation tools. However, researchers agree that a common ground in SME is a hot matter of discussion. Assuming that the differences between SME approaches are purposeful, we propose to reach a semantic common ground on what types of core concepts constitute a method descriptor. To achieve this, an ontology-based approach is applied in SME to design an ontology of method descriptors as a domain ontology. The semantics of the modular constructs of the six most popular SME approaches are defined according to this ontology in order to show its usage and its relevance. Finally, usage scenarios have been sketched to show that the ontology can be the starting point for reducing the ME drawbacks mentioned above.

    Tracing the Biological Roots of Knowledge

    The essay is a critical review of three possible approaches in the theory of knowledge while tracing the biological roots of knowledge: the empiricist, rationalist and developmentalist approaches. Piaget's genetic epistemology, a developmentalist approach, is one of the first comprehensive treatments of the question of tracing the biological roots of knowledge. This developmental approach is currently opposed, without questioning the biological roots of knowledge, by the more popular rationalist approach, championed by Chomsky. Developmental approaches are generally coherent with cybernetic models, among which the theory of autopoiesis proposed by Maturana and Varela made a significant theoretical move by proposing an intimate connection between metabolism and knowledge. Modular architecture is currently considered a more or less indisputable model for both biology and cognitive science. By suggesting that modulation of modules is possible through motor coordination, a proposal is made to account for higher forms of conscious cognition within the four distinguishable layers of the human mind. Towards the end, the problem of life and cognition is discussed in the context of the evolution of complex cognitive systems, suggesting the unique access to phylogeny during the ontogeny of human beings as a very special case, and arguing that the problem cannot be dealt with independently of the evolution of coding systems in nature.

    Go with the Flow - Design of Cloud Logistics Service Blueprints

    By adapting principles of cloud computing to the logistics domain, the paradigm of Cloud Logistics is derived. It appears to be a promising paradigm for evolving logistics into being more flexible and collaborative. Yet, appropriate concepts that enable the cloud logistics paradigm are missing. In this paper, the existing body of literature is reviewed and a definition and a framework of cloud logistics are given. Further, service blueprinting is combined with domain engineering and general morphological analysis in order to create a suitable method for designing cloud-oriented service blueprints. These focus on domain-specific flows and transformations enabling cloud-oriented business collaboration. The method is applied to the logistics domain and a cloud logistics service blueprint is designed. Finally, the concept is evaluated with real use cases from logistics service providers.

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown to be large and complex, to the point where they cause cognitive overload for humans, in understanding and maintaining them, and for machines, in processing and reasoning over them. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider using existing ontologies that are of good quality. However, an entire large ontology is not always required for a particular application; only a subset of the knowledge may be relevant. Modularity deals with simplifying an ontology, for a particular context or by structure, into smaller ontologies, thereby preserving the contextual knowledge. There are a number of benefits in modularising an ontology, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and assist with complexity. However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context. Partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from a number of issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. In order to successfully solve the problem, a number of theoretical aspects have to be investigated. It is important to determine which ontology module types are the most widely used and to characterise each such type by its distinguishing properties. One must also identify the properties that a 'good' or 'usable' module satisfies. In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity to define its foundation: use-case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically based framework for modularity by classifying a set of ontologies with them, which results in dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To solve the problem with module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. For the issue with existing modularity techniques, we created five new algorithms to improve the current tools and techniques and experimentally evaluated them. The algorithms of the tool, NOMSA, perform as well as other tools for most performance criteria. Of NOMSA's generated modules, those produced by two of its algorithms are of good quality when compared to the expected dependencies of the framework. The remaining three algorithms' modules correspond to some of the expected values of the metrics for the ontology set in question.
    The success of solving the problems with modularity resulted in a formal foundation for modularity which comprises: an exhaustive set of modularity dimensions with dependencies between them, a framework for guiding the modularisation process and annotating modules, a way to measure the quality of modules using the novel TOMM tool with its new and existing evaluation metrics, the SUGOI tool for module management, which has been investigated for module interchangeability, and an implementation of new algorithms to fill in the gaps left by insufficient tools and techniques.
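
    As a hedged illustration of module-quality measurement, the sketch below computes one plausible metric, the relative size of a module w.r.t. its source ontology, using the OWL API; the metric name and file paths are assumptions and do not reproduce TOMM's actual metrics.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;

    import java.io.File;

    public class RelativeSizeSketch {
        // Relative size: |logical axioms in module| / |logical axioms in source ontology|.
        static double relativeSize(OWLOntology module, OWLOntology source) {
            return (double) module.getLogicalAxiomCount() / source.getLogicalAxiomCount();
        }

        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            // Placeholder file paths for the source ontology and one of its modules.
            OWLOntology source = manager.loadOntologyFromOntologyDocument(new File("ontology.owl"));
            OWLOntology module = manager.loadOntologyFromOntologyDocument(new File("module.owl"));
            System.out.printf("Relative module size: %.2f%n", relativeSize(module, source));
        }
    }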

    Pattern-based design applied to cultural heritage knowledge graphs

    Ontology Design Patterns (ODPs) have become an established and recognised practice for guaranteeing good-quality ontology engineering. There are several ODP repositories where ODPs are shared, as well as ontology design methodologies recommending their reuse. Performing rigorous testing is recommended as well, for supporting ontology maintenance and validating the resulting resource against its motivating requirements. Nevertheless, it is less than straightforward to find guidelines on how to apply such methodologies for developing domain-specific knowledge graphs. ArCo is the knowledge graph of Italian Cultural Heritage and has been developed by using eXtreme Design (XD), an ODP- and test-driven methodology. During its development, XD has been adapted to the needs of the CH domain, e.g. gathering requirements from an open, diverse community of consumers; a new ODP has been defined, and many have been specialised to address specific CH requirements. This paper presents ArCo and describes how to apply XD to the development and validation of a CH knowledge graph, also detailing the (intellectual) process implemented for matching the encountered modelling problems to ODPs. Relevant contributions also include a novel web tool for supporting unit-testing of knowledge graphs, a rigorous evaluation of ArCo, and a discussion of methodological lessons learned during ArCo's development.
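
    As an illustration of what unit-testing a knowledge graph can look like, the sketch below runs a competency-question-style SPARQL ASK check with Apache Jena; the prefixes, class and property IRIs, and data file are placeholders and are not taken from ArCo or its actual test tool.

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class KnowledgeGraphUnitTestSketch {
        public static void main(String[] args) {
            // Load a (placeholder) RDF dump of the knowledge graph under test.
            Model model = ModelFactory.createDefaultModel();
            model.read("arco-sample.ttl");

            // Illustrative competency-question check: does any cultural property lack a title?
            String ask =
                "PREFIX ex: <http://example.org/cultural#> " +
                "ASK { ?p a ex:CulturalProperty . FILTER NOT EXISTS { ?p ex:title ?t } }";

            try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(ask), model)) {
                boolean violationFound = qe.execAsk();
                System.out.println(violationFound
                        ? "FAIL: untitled cultural property found"
                        : "PASS: all cultural properties have titles");
            }
        }
    }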