64 research outputs found

    Foundational Ontologies meet Ontology Matching: A Survey

    Ontology matching is a research area aimed at finding ways to make different ontologies interoperable. Solutions to the problem have been proposed from different disciplines, including databases, natural language processing, and machine learning. Foundational ontologies play an important role in ontology matching: the role is multifaceted and leaves room for development. This paper presents an overview of the ontology-matching tasks that involve foundational ontologies. We discuss the strengths and weaknesses of existing proposals and highlight the challenges to be addressed in the future.
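A common baseline in ontology matching is comparing the natural-language labels of entities from the two ontologies. The following is a minimal sketch of such a label-based matcher, using only string similarity; the class labels and the 0.8 threshold are invented for illustration, and real matchers combine many more signals (structure, instances, background knowledge).

```python
# Minimal label-based ontology matching sketch (labels and threshold invented).
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalised string similarity between two entity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(onto_a, onto_b, threshold=0.8):
    """Return candidate correspondences between two lists of class labels."""
    return [(a, b, round(label_similarity(a, b), 2))
            for a in onto_a for b in onto_b
            if label_similarity(a, b) >= threshold]

alignments = match(["Person", "Organisation", "Event"],
                   ["person", "organization", "happening"])
print(alignments)
```

A foundational ontology can then be used on top of such lexical candidates, e.g. to reject correspondences whose entities fall under incompatible top-level categories.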

    An empirically-based framework for ontology modularization

    Modularity is increasingly used to address the information-overload problem in ontologies: it eases cognitive complexity for humans and computational complexity for machines. The current literature on modularity focuses mainly on techniques, tools, and evaluation metrics. However, ontology developers still face difficulty in selecting the right technique for a specific application, and the current tools for modularity are not sufficient. These issues stem from a lack of theory about the modularisation process. Several researchers have proposed that a framework for modularity would solve this problem, but until now none had been realised. In this article, we survey the existing literature to identify and populate dimensions of modules, experimentally evaluate and characterise 189 existing modules, and create a framework for modularity based on these results. The framework guides the ontology developer throughout the modularisation process. We evaluate the framework with a use case for the Symptom ontology.
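The core idea of such a framework, guiding the developer from a use case to a module type and technique via the dependencies among dimensions, can be sketched as a simple lookup. The mappings below are invented for illustration and are not the dependencies actually derived in the article.

```python
# Hypothetical sketch of a modularity-framework lookup: use case -> module
# type and technique. The values here are illustrative, not the article's
# empirically derived dependencies.
FRAMEWORK = {
    "comprehension": {"module_type": "subject domain",
                      "technique": "graph partitioning"},
    "reasoning":     {"module_type": "locality-based",
                      "technique": "syntactic locality extraction"},
    "reuse":         {"module_type": "ontology subset",
                      "technique": "query-based extraction"},
}

def recommend(use_case: str) -> dict:
    """Guide the developer from a use case to a module type and technique."""
    return FRAMEWORK.get(use_case, {"module_type": "unknown",
                                    "technique": "manual inspection"})

print(recommend("reuse"))
```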

    Toward a framework for ontology modularity

    Dividing up data or information into smaller components---modules---is a well-known approach to a range of problems, such as scalability and model comprehension. The use of modules in ontologies at the knowledge layer is receiving increased attention, and a plethora of approaches, algorithms, and tools exist, which, however, yield only very limited success, mainly because the wrong combinations of techniques are being used. To solve this issue, we examine modules' use cases, types, techniques, and properties from the literature. This is used to create a framework for ontology modularity, such that a user with a certain use case will know the type of module needed, and therewith the appropriate technique to realise it and the properties the resultant modules will have. The framework is then evaluated with three case studies, being the QUDT, FMA, and OpenGalen ontologies.

    Functional Ontologies and Their Application to Hydrologic Modeling: Development of an Integrated Semantic and Procedural Knowledge Model and Reasoning Engine

    This dissertation presents the research and development of new concepts and techniques for modeling the knowledge about the many concepts we as hydrologists must understand, so that we can execute models that operate in terms of conceptual abstractions and have those abstractions translate to the data, tools, and models we use every day. This hydrologic knowledge includes conceptual (i.e. semantic) knowledge, such as the hydrologic cycle's concepts and relationships, as well as functional (i.e. procedural) knowledge, such as how to compute the area of a watershed polygon, the average basin slope, or the topographic wetness index. The dissertation is presented as three papers and a reference manual for the software created. Because hydrologic knowledge includes both semantic and procedural aspects, in the first paper we develop a new form of reasoning engine and knowledge base that extends the general-purpose analysis and problem-solving capability of reasoning engines by incorporating procedural knowledge, represented as computer source code, into the knowledge base. The reasoning engine is able to compile the code and then, if need be, execute the procedural code as part of a query. The potential advantage of this approach is that it expresses procedural knowledge in a form the reasoning engine can readily use to answer a query. Further, since the procedural knowledge is represented as source code, it has the full capabilities of the underlying language. We use the term functional ontology to refer to the new semantic and procedural knowledge models. The first paper applies the new knowledge model to describing and analyzing polygons. The second and third papers apply the new functional-ontology reasoning engine and knowledge model to hydrologic applications. The second paper models concepts and procedures, including running external software, related to watershed delineation. The third paper models a project scenario that integrates several models. A key advance demonstrated in this paper is the use of functional ontologies to apply metamodeling concepts in a manner that both abstracts and fully utilizes computational models and data sets as part of the project modeling process.
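The central idea, a knowledge base holding both semantic facts and procedural knowledge as source code that is compiled and executed during a query, can be sketched as follows. All class and procedure names here are illustrative, not the dissertation's actual software.

```python
# Sketch of a "functional ontology": a knowledge base storing semantic facts
# alongside procedural knowledge as source code, compiled and run on query.
# Names are hypothetical; the dissertation's software differs.
class FunctionalKB:
    def __init__(self):
        self.facts = {}        # semantic knowledge: concept -> attributes
        self.procedures = {}   # procedural knowledge: name -> source code

    def add_fact(self, concept, **attrs):
        self.facts.setdefault(concept, {}).update(attrs)

    def add_procedure(self, name, source):
        self.procedures[name] = source  # stored as plain source code

    def query(self, name, **kwargs):
        """Compile the stored source on demand and run it as part of a query."""
        namespace = {}
        exec(compile(self.procedures[name], name, "exec"), namespace)
        return namespace[name](**kwargs)

kb = FunctionalKB()
kb.add_fact("watershed", outlet="gauge_01")
kb.add_procedure(
    "polygon_area",
    "def polygon_area(xs, ys):\n"
    "    # shoelace formula over the polygon's vertex coordinates\n"
    "    n = len(xs)\n"
    "    s = sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i]\n"
    "            for i in range(n))\n"
    "    return abs(s) / 2\n")
print(kb.query("polygon_area", xs=[0, 4, 4, 0], ys=[0, 0, 3, 3]))  # 4x3 rectangle -> 12.0
```

Because the procedure is ordinary source code, it can do anything the host language can, including invoking external software, which is exactly the property the second paper exploits for watershed delineation.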

    Proceedings of the International Workshop on Vocabularies, Ontologies and Rules for The Enterprise (VORTE 2005)


    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise's way of conducting business and thus form the basis for shaping the organization and engineering the appropriate supporting, or even enabling, IT. A major task in working with models is their analysis and comparison for the purpose of aligning them. As models can differ semantically not only in the modeling languages used, but even more so in how natural language has been applied to label the model elements, correctly identifying the intended meaning of a legacy model is a non-trivial task that thus far has only been solved by humans. Particularly at the time of reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate automating this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models, relating them based on the modeling language used and determining similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research follows a design-science oriented approach, and the method was developed together with all of its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications comprehensively describing the various aspects.

    Frame-semantics models for the representation of legal knowledge in comparative law: the case of State responsibility

    This article offers an in-depth analysis, and proposes a representation, of the legal knowledge underlying the concept of State responsibility from a multilingual and comparative-law perspective. To this end, it proposes augmenting the information of semantic frames (hereinafter, frames) through the semantic types of the FrameNet system, with the double purpose of serving as an interlingual representation of legal knowledge and of formalising the causes of lexical and conceptual mismatch between legal systems. The article studies the principle of State responsibility in the Spanish, English, French, and Italian models and shows how a more detailed description of legal knowledge, through linking the frame elements (hereinafter, FEs) of the frames to the semantic type [±sentient], makes it feasible not only to use them as an interlingual representation, but also to explain the divergences and convergences among the various approaches to the concept of State responsibility, rooted as they are in sociocultural contexts of different traditions. The proposal demonstrates the advantages of this formalisation as a model for explaining the dynamic process of divergence/convergence in the case law of the Court of Justice of the European Union (hereinafter, CJEU)
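The proposed enrichment, typing each frame element with [±sentient], can be pictured as a small data structure. The frame name, frame elements, and type assignments below are invented for illustration and do not reproduce the article's actual analysis.

```python
# Hypothetical sketch of a FrameNet-style frame whose frame elements (FEs)
# are enriched with the semantic type [+/-sentient]. Frame contents invented.
FRAME = {
    "name": "State_responsibility",
    "frame_elements": {
        "Agent":         {"semantic_type": "+sentient"},  # the State organ acting
        "Injured_party": {"semantic_type": "+sentient"},
        "Damage":        {"semantic_type": "-sentient"},
    },
}

def sentient_elements(frame):
    """Return the frame elements typed as sentient."""
    return [fe for fe, info in frame["frame_elements"].items()
            if info["semantic_type"] == "+sentient"]

print(sentient_elements(FRAME))
```

Comparing such typed FEs across the national models is what lets the mismatches between legal systems be stated explicitly rather than left implicit in the lexicon.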

    Provenance-aware knowledge representation: A survey of data models and contextualized knowledge graphs

    Expressing machine-interpretable statements as subject-predicate-object triples is a well-established practice for capturing the semantics of structured data. However, RDF, the standard for representing these triples, inherently lacks a mechanism for attaching provenance data, which would be crucial for making automatically generated and/or processed data authoritative. This paper is a critical review of the data models, annotation frameworks, knowledge organization systems, serialization syntaxes, and algebras that enable provenance-aware RDF statements. The various approaches are assessed in terms of standards compliance, formal semantics, tuple type, vocabulary term usage, blank nodes, provenance granularity, and scalability. This assessment can be used to advance existing solutions and to help implementers select the most suitable approach (or combination of approaches) for their applications. Moreover, the analysis of the mechanisms and their limitations highlighted in this paper can serve as the basis for novel approaches in RDF-powered applications with increasing provenance needs.
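One of the surveyed families of approaches extends triples to quads: each triple is placed in a named graph, and provenance is attached to the graph IRI. A minimal pure-Python sketch of that idea follows; the class, terms, and provenance keys are invented, and a real system would use an RDF quad store instead.

```python
# Sketch of provenance-aware triples via named graphs (quads): provenance is
# attached to the graph IRI, not to each triple. All names are illustrative.
from collections import defaultdict

class QuadStore:
    def __init__(self):
        self.quads = []                      # (subject, predicate, object, graph)
        self.provenance = defaultdict(dict)  # graph IRI -> provenance data

    def add(self, s, p, o, graph):
        self.quads.append((s, p, o, graph))

    def annotate(self, graph, **prov):
        """Attach provenance (source, timestamp, agent, ...) to a named graph."""
        self.provenance[graph].update(prov)

    def triples_with_provenance(self):
        """Yield each triple together with its graph's provenance record."""
        for s, p, o, g in self.quads:
            yield (s, p, o), dict(self.provenance.get(g, {}))

store = QuadStore()
store.add(":Alice", ":knows", ":Bob", ":g1")
store.annotate(":g1", source="http://example.org/crawl", generatedAt="2024-01-01")
for triple, prov in store.triples_with_provenance():
    print(triple, prov)
```

Note the trade-off the paper's granularity criterion captures: graph-level annotation is compact, but every triple in the same graph necessarily shares one provenance record.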

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications, and they have grown so large and complex that they cause cognitive overload for humans, in understanding and maintenance, and for machines, in processing and reasoning. Furthermore, building ontologies from scratch is time-consuming and not always necessary: prospective ontology developers can instead reuse existing ontologies of good quality. However, an entire large ontology is not always required for a particular application; often only a subset of the knowledge is relevant. Modularity deals with simplifying an ontology, by context or by structure, into smaller ontologies while preserving the contextual knowledge. Modularising an ontology has a number of benefits, including simplified maintenance and machine processing, as well as collaborative development whereby work can be shared among experts. Modularity has been successfully applied to a number of ontologies to improve usability and manage complexity. However, problems remain that have not been satisfactorily addressed. Current modularity tools generate large modules that do not exclusively represent the context, and partitioning tools, which ought to generate disjoint modules, sometimes create overlapping ones. These problems arise from several issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to which module types. Solving the problem therefore requires investigating a number of theoretical aspects: determining which ontology module types are the most widely used, characterising each such type by its distinguishing properties, and identifying the properties that a 'good' or 'usable' module meets.
    In this thesis, we investigate these problems systematically. We begin by identifying dimensions of modularity that define its foundation: use case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically based framework for modularity by classifying a set of ontologies with them, which yields dependencies among the dimensions. The formal framework can guide the user in modularising an ontology and serve as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules revealed dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on current tools and techniques and evaluated them experimentally. The resulting tool, NOMSA, performs as well as other tools for most performance criteria. Of NOMSA's five algorithms, two generate modules of good quality when compared to the expected dependencies of the framework, and the remaining three generate modules that correspond to some of the expected metric values for the ontology set in question.
    Solving these problems resulted in a formal foundation for modularity comprising: an exhaustive set of modularity dimensions with dependencies between them; a framework for guiding the modularisation process and annotating modules; a way to measure module quality using the novel TOMM tool with its new and existing evaluation metrics; the SUGOI tool for module management, investigated for module interchangeability; and an implementation of new algorithms that fill the gaps left by insufficient tools and techniques