11 research outputs found

    Reuse and enrichment for building an ontology for Obsessive-Compulsive Disorder

    Building ontologies for mental diseases and disorders facilitates effective communication and knowledge sharing between healthcare providers, researchers, and patients. General medical and specialized ontologies, such as the Mental Disease Ontology, are large repositories of concepts that require much effort to create and maintain. This paper proposes ontology reuse and automatic enrichment as means for designing and building an Obsessive-Compulsive Disorder (OCD) ontology, and demonstrates both methods on that ontology. Ontology reuse is achieved through ontology alignment design patterns that allow for full, partial, or nominal reuse. Enrichment is achieved through deep learning with a language representation model pre-trained on large-scale corpora of clinical notes and discharge summaries, as well as a text corpus from an OCD discussion forum. An ontology design pattern is proposed to encode the discovered related terms and their degree of similarity to the ontological concepts. The proposed approach allows for the seamless extension of the ontology by linking to other ontological resources or other learned vocabularies in the future. The OCD ontology is available online on BioPortal.
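    The enrichment step can be sketched as follows: candidate terms mined from the corpora are linked to ontology concepts by embedding similarity, and the degree of similarity is kept alongside the link, as the design pattern prescribes. A minimal Python sketch, with toy 3-dimensional vectors standing in for the pre-trained clinical language model's embeddings (the concept identifiers, terms, and the `hasRelatedTerm` property are illustrative assumptions, not taken from the paper):

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def enrich(concepts, candidate_terms, threshold=0.7):
        """Link each candidate term to its most similar ontology concept,
        recording the similarity degree, in the spirit of the proposed
        ontology design pattern."""
        links = []
        for term, t_vec in candidate_terms.items():
            best = max(concepts, key=lambda c: cosine(t_vec, concepts[c]))
            score = cosine(t_vec, concepts[best])
            if score >= threshold:
                links.append((best, "hasRelatedTerm", term, round(score, 3)))
        return links

    # Toy 3-d embeddings standing in for a clinical language model (hypothetical).
    concepts = {"ocd:CompulsiveChecking": [0.9, 0.1, 0.0],
                "ocd:IntrusiveThought":  [0.1, 0.9, 0.2]}
    terms = {"checking rituals":  [0.85, 0.15, 0.05],
             "unwanted thoughts": [0.05, 0.92, 0.18]}
    print(enrich(concepts, terms))
    ```

    In a real pipeline the vectors would come from the pre-trained model, and the threshold would be tuned against a held-out set of known concept-term pairs.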

    Towards Semi-automatic Generation of R2R Mappings

    Translating data from linked data sources into the vocabulary expected by a linked data application requires a large number of mappings and may involve many structural transformations as well as complex property value transformations. The R2R mapping language is a SPARQL-based language for publishing expressive mappings on the web. However, specifying R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. We first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping patterns.
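    The idea of instantiating a mapping pattern can be illustrated with the simplest case, renaming a property between vocabularies. The sketch below emits a plain SPARQL 1.1 CONSTRUCT query rather than actual R2R syntax (R2R splits the source and target patterns into separate mapping properties); the IRIs are made up for illustration:

    ```python
    def rename_property_mapping(source_prop, target_prop):
        """Instantiate the 'rename property' mapping pattern as a plain
        SPARQL 1.1 CONSTRUCT query -- a simplified stand-in for the
        corresponding R2R source/target pattern pair."""
        return ("CONSTRUCT { ?s <%s> ?o . }\n"
                "WHERE { ?s <%s> ?o . }" % (target_prop, source_prop))

    q = rename_property_mapping("http://src/vocab#name",
                                "http://tgt/vocab#label")
    print(q)
    ```

    A semi-automatic generator would pick the pattern from detected vocabulary correspondences and fill in the IRIs, leaving only ambiguous cases to the user.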

    Representing and aligning similar relations: parts and wholes in isiZulu vs English

    Ontology-enabled medical information systems are used in Sub-Saharan Africa, which requires localisation of Semantic Web technologies, such as ontology verbalisation, while keeping a link with English-language-based systems. To realise this, we zoom in on the part-whole relations that are ubiquitous in medical ontologies, and on the isiZulu language. The analysis of part-whole relations in isiZulu revealed both `underspecification'---thereby also challenging the transitivity claim---and three refinements with respect to the list of common part-whole relations. This was first implemented for the monolingual scenario, generating structured natural language from an ontology in isiZulu. Two new natural-language-independent correspondence patterns are proposed to solve non-1:1 object property alignments, which are subsequently used to align the part-whole taxonomies informed by the two languages.
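    The point about refined part-whole relations and the challenged transitivity claim can be illustrated by a transitive closure that composes pairs only within the same refined relation, so that, for example, membership and structural parthood do not chain into a spurious part-whole fact. A toy sketch with hypothetical relation and entity names:

    ```python
    def transitive_closure(pairs):
        """pairs: set of (part, whole, relation) facts. Compose only within
        the same relation -- mixing refined part-whole relations (as an
        underspecified 'part of' would do) is what breaks transitivity."""
        closure = set(pairs)
        changed = True
        while changed:
            changed = False
            for (a, b, r1) in list(closure):
                for (c, d, r2) in list(closure):
                    if b == c and r1 == r2 and (a, d, r1) not in closure:
                        closure.add((a, d, r1))
                        changed = True
        return closure

    # Hypothetical facts: structural parthood chains, membership does not mix in.
    facts = {("valve", "heart", "structuralPartOf"),
             ("heart", "cardiovascular system", "structuralPartOf"),
             ("doctor", "medical team", "memberOf")}
    print(transitive_closure(facts))
    ```

    With an underspecified relation the same closure would happily derive, say, that a doctor is a structural part of anything the team belongs to, which is exactly the inference the refinements block.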

    Patterns for Heterogeneous TBox Mappings to Bridge Different Modelling Decisions

    Correspondence patterns have been proposed as templates of commonly used alignments between heterogeneous elements in ontologies, although design tools are currently not equipped to handle such definition alignments or pattern alignments. We aim to address this by, first, formalising the notion of a design pattern; second, defining typical modelling-choice patterns and their alignments; and finally, proposing algorithms for integrating automatic pattern detection into existing ontology design tools. This gave rise to six formalised pattern alignments and two efficient local-search and pattern-matching algorithms that propose possible pattern alignments to the modeller.

    Semantic Model Alignment for Business Process Integration

    Business process models describe an enterprise's way of conducting business and in this form constitute the basis for shaping the organization and engineering the appropriate supporting, or even enabling, IT. A major task in working with models is their analysis and comparison for the purpose of aligning them. Models can differ semantically not only in the modeling languages used, but even more so in how natural language has been applied in labeling the model elements, so correctly identifying the intended meaning of a legacy model is a non-trivial task that thus far has only been solved by humans. Particularly during reorganizations, the set-up of B2B collaborations, or mergers and acquisitions, the semantic analysis of models of different origin that need to be consolidated is a manual effort that is not only tedious and error-prone but also time-consuming, costly, and often repetitive. To facilitate the automation of this task by means of IT, this thesis presents the new method of Semantic Model Alignment. Its application makes it possible to extract and formalize the semantics of models, relating them based on the modeling language used and determining similarities based on the natural language used in model element labels. The resulting alignment supports model-based semantic business process integration. The research follows a design-science-oriented approach, and the method was developed together with all its enabling artifacts. These results were published as the research progressed and are presented in this thesis through a selection of peer-reviewed publications comprehensively describing the various aspects.

    A Knowledge Graph Based Integration Approach for Industry 4.0

    The fourth industrial revolution, Industry 4.0 (I40), aims at creating smart factories employing, among others, Cyber-Physical Systems (CPS), the Internet of Things (IoT), and Artificial Intelligence (AI). Realizing smart factories according to the I40 vision requires intelligent human-to-machine and machine-to-machine communication. To achieve this communication, CPS along with their data need to be described, and interoperability conflicts arising from various representations need to be resolved. For establishing interoperability, industry communities have created standards and standardization frameworks. Standards describe the main properties of entities, systems, and processes, as well as interactions among them. Standardization frameworks classify, align, and integrate industrial standards according to their purposes and features. Despite being published by official international organizations, different standards may contain divergent definitions for similar entities. Further, when utilizing the same standard for the design of a CPS, different views can generate interoperability conflicts. Albeit expressive, standardization frameworks may to some extent represent divergent categorizations of the same standard. These interoperability conflicts need to be resolved to support effective and efficient communication in smart factories. To achieve interoperability, data need to be semantically integrated and existing conflicts conciliated. This problem has been extensively studied in the literature, and the obtained results can be applied to general integration problems. However, current approaches fail to consider the specific interoperability conflicts that occur between entities in I40 scenarios. In this thesis, we tackle the problem of semantic data integration in I40 scenarios. A knowledge-graph-based approach is presented that allows for the integration of entities in I40 while considering their semantics.
To achieve this integration, challenges must be addressed on different conceptual levels: first, defining mappings between standards and standardization frameworks; second, representing knowledge of entities in I40 scenarios described by standards; third, integrating perspectives of CPS design while solving semantic heterogeneity issues; and finally, determining real industry applications for the presented approach. We first devise a knowledge-driven approach allowing for the integration of standards and standardization frameworks into an Industry 4.0 knowledge graph (I40KG). The standards ontology is used for representing the main properties of standards and standardization frameworks, as well as the relationships among them. The I40KG permits the integration of standards and standardization frameworks while solving specific semantic heterogeneity conflicts in the domain. Further, we semantically describe standards in knowledge graphs. To this end, standards of core importance for I40 scenarios are considered, i.e., the Reference Architectural Model for I40 (RAMI4.0), AutomationML, and the Supply Chain Operation Reference Model (SCOR). In addition, different perspectives of the entities describing CPS are integrated into the knowledge graphs. To evaluate the proposed methods, we rely on empirical evaluations as well as on the development of concrete use cases. The attained results provide evidence that a knowledge graph approach enables the effective data integration of entities in I40 scenarios while solving semantic interoperability conflicts, thus empowering communication in smart factories.
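The conciliation step can be pictured as merging triples from several standard descriptions while rewriting aligned entity identifiers to a canonical one, so that divergent definitions of the same entity collapse into a single node. A toy sketch (the identifiers and the alignment map are invented for illustration; the actual I40KG derives alignments from the standards ontology):

```python
def integrate(graphs, same_as):
    """Merge triples from several standard descriptions into one
    knowledge graph, rewriting aligned entity identifiers to a
    canonical one -- a toy stand-in for the I40KG conciliation step."""
    canon = lambda e: same_as.get(e, e)
    merged = set()
    for g in graphs:
        for s, p, o in g:
            merged.add((canon(s), p, canon(o)))
    return merged

# Hypothetical fragments of two standard descriptions.
rami = {("rami:Asset", "rdf:type", "std:Concept")}
aml  = {("aml:Object", "rdf:type", "std:Concept"),
        ("aml:Object", "std:definedBy", "std:AutomationML")}
alignment = {"aml:Object": "rami:Asset"}   # hypothetical correspondence
kg = integrate([rami, aml], alignment)
print(sorted(kg))
```

After the merge, the duplicated `rdf:type` statement collapses and both standards' facts attach to the single canonical entity, which is the effect the thesis's semantic integration aims for at scale.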

    Semi-automatic generation of vocabulary mappings between Web of Data datasets using SPARQL

    The web has been evolving from a global space of interlinked documents (the Web of Documents) into a global space of linked data (the Web of Data), so that both humans and computational agents can understand and extract useful information from that data. However, before a Web of Data can exist, the data must first be given semantics. To this end a new approach, the Semantic Web, has emerged, whose main goal is to facilitate the interpretation and integration of data on the web. On the Semantic Web, ontologies are commonly used to formally describe the semantics of data. However, as the number of ontologies grows, heterogeneity among them becomes very common, since each ontology may use different vocabularies to represent data about the same domain of knowledge. This heterogeneity prevents computational agents from retrieving information without human intervention. To deal with heterogeneity problems, mappings between ontologies are commonly created. Several languages exist for translating and mapping ontologies, among which we highlight the SPARQL Protocol and RDF Query Language (SPARQL 1.1) and R2R. In this work we chose SPARQL 1.1 as the ontology mapping language, since it is a standard recommended by the World Wide Web Consortium (W3C) and widely used by the community. Because this language is complex and requires users to have experience in defining and creating mappings, we propose a tool, called SPARQL Mapping with Assertions (SMA), that aims to assist users in generating SPARQL 1.1 mappings between ontologies.
The SMA tool consists of four parts: (1) initial configuration of the ontologies: the user indicates which ontologies to map, as well as the serialization language of the ontology files; (2) creation of the Mapping Assertions (MAs): through the graphical interface, the user specifies which mappings to define, including any transformations or filters that need to be applied to the data; (3) configuration for mapping generation: the user provides the file with the source ontology's dataset, identifies the serialization language in which it is written, and chooses the serialization language for the generated triples; (4) generation of triples through the SPARQL 1.1 mappings: from the previous steps, the tool returns a file with all the results in the chosen serialization language. The tool also allows all created mappings to be exported, either in the formal languages (mapping assertions or mapping rules) or as SPARQL 1.1 mappings.
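Step (4) above, generating triples from mapping assertions, can be sketched as translating one assertion into a SPARQL 1.1 CONSTRUCT query, with value transformations compiled into BIND clauses. The assertion format and function below are hypothetical illustrations, not the actual SMA syntax:

```python
def assertion_to_sparql(src_props, tgt_prop, transform=None):
    """Translate one mapping assertion (source properties -> target
    property, plus an optional value transformation) into a SPARQL 1.1
    CONSTRUCT query. Only a 'concat' transformation is sketched here."""
    where = "\n  ".join(
        "?s <%s> ?v%d ." % (p, i) for i, p in enumerate(src_props))
    if transform == "concat":
        args = ', " ", '.join("?v%d" % i for i in range(len(src_props)))
        where += "\n  BIND(CONCAT(%s) AS ?out)" % args
        value = "?out"
    else:
        value = "?v0"  # single-property assertions copy the value as-is
    return ("CONSTRUCT { ?s <%s> %s . }\nWHERE {\n  %s\n}"
            % (tgt_prop, value, where))

# Hypothetical assertion: firstName + lastName -> foaf:name.
query = assertion_to_sparql(
    ["http://example.org/firstName", "http://example.org/lastName"],
    "http://xmlns.com/foaf/0.1/name", "concat")
print(query)
```

Running the generated query against the source dataset with any SPARQL 1.1 engine yields the target-vocabulary triples, which the tool would then serialize in the format chosen in step (3).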