22 research outputs found

    An ontological analysis of the electrocardiogram - DOI: 10.3395/reciis.v3i1.242en

    Bioinformatics has been a fertile field for the application of the discipline of formal ontology. The principled representation of biomedical entities has increasingly supported biological research, with direct benefits ranging from the reformulation of medical terminologies to the introduction of new perspectives for enhanced models of Electronic Health Records (EHR). This paper introduces an application-independent ontological analysis of the electrocardiogram (ECG) grounded in the Unified Foundational Ontology. With the objective of investigating the phenomena underlying this cardiological exam, we deal with the sub-domains of human heart electrophysiology and anatomy. We then outline an ECG Ontology built upon the OBO Relation Ontology. In addition, the domain ontology sketched here takes inspiration from both the Foundational Model of Anatomy and the Ontology of Functions proposed under the auspices of the General Formal Ontology (GFO) research program.
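    As a rough illustration of how such relation-based domain ontologies are often expressed, the minimal Python sketch below encodes a few ECG-domain entities with OBO-style part_of and has_function relations and computes a part_of closure; the entity and relation names are illustrative placeholders and are not taken from the ECG Ontology described in the paper.

```python
# Illustrative sketch only: a handful of ECG-domain entities and OBO-style
# relations expressed as plain triples. The names are placeholders and do
# not reproduce the actual ECG Ontology vocabulary.
from collections import defaultdict

triples = [
    ("sinoatrial_node", "part_of", "right_atrium"),
    ("right_atrium", "part_of", "heart"),
    ("sinoatrial_node", "has_function", "pacemaker_function"),
    ("cardiac_electrophysiological_process", "occurs_in", "heart"),
    ("ecg_recording", "records", "cardiac_electrophysiological_process"),
]

index = defaultdict(set)
for s, p, o in triples:
    index[(s, p)].add(o)

def part_of_closure(entity: str) -> set[str]:
    """Transitive closure of the part_of relation for one entity."""
    seen, frontier = set(), {entity}
    while frontier:
        nxt = set()
        for e in frontier:
            for whole in index[(e, "part_of")]:
                if whole not in seen:
                    seen.add(whole)
                    nxt.add(whole)
        frontier = nxt
    return seen

print(part_of_closure("sinoatrial_node"))  # {'right_atrium', 'heart'}
```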

    Uma análise ontológica do eletrocardiograma

    Bioinformatics has been a fertile field for the application of the discipline of formal ontology. The principled representation of biomedical entities has increasingly supported biological research, with direct benefits ranging from the reformulation of medical terminologies to the introduction of new perspectives for enhanced models of Electronic Health Records (EHR). This paper introduces an application-independent ontological analysis of the electrocardiogram (ECG) grounded in the Unified Foundational Ontology. With the objective of investigating the phenomena underlying this cardiological exam, we deal with the sub-domains of human heart electrophysiology and anatomy. We then outline an ECG Ontology built upon the OBO Relation Ontology. In addition, the domain ontology sketched here takes inspiration from both the Foundational Model of Anatomy and the Ontology of Functions proposed under the auspices of the General Formal Ontology (GFO) research program.

    Filtering clinical guideline interactions with pre-conditions: A case study on diabetes guideline

    Clinical guidelines are meant to support healthcare providers in offering a better service via evidence-based recommendations that apply under certain circumstances for a given disease or condition. However, the high number of recommendations in a single guideline makes it humanly impossible to check all possible interactions. The goal of this work is twofold: (i) to analyse the pros and cons of formalising a real guideline using the TMR model, and (ii) to infer interactions among (some of) the recommendations from the Scottish Guideline on Diabetes. To this end we extend the TMR model to formalise the pre-conditions that define in which circumstances a recommendation may apply, and we implement the reasoning in SWI-Prolog. The results show that (i) properly formalising the diabetes guideline is a cross-disciplinary task that requires both formalisation know-how and medical background; and (ii) the diabetes guideline indeed contains conflicting recommendations, which can be automatically detected given suitable modelling and background knowledge. It is reasonable to conclude that these findings hold for other guidelines too.
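    As a rough illustration of the kind of reasoning involved (the paper itself formalises the TMR model in SWI-Prolog), the Python sketch below represents recommendations with pre-conditions and intended effects and flags pairs that apply to the same patient with contradictory effects on the same action; the recommendation identifiers, thresholds, and action names are invented for illustration and are not taken from the guideline.

```python
# Minimal sketch (not the TMR model itself): recommendations carry a
# pre-condition and an intended effect on some care action; two recommendations
# interact for a given patient when both pre-conditions hold and their effects
# on the same action are contradictory. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    rid: str
    precondition: Callable[[dict], bool]   # patient record -> applies?
    action: str                            # care action the effect targets
    effect: str                            # "recommend" or "avoid"

recommendations = [
    Recommendation("R1", lambda p: p["hba1c"] > 7.0, "drug_x", "recommend"),
    Recommendation("R2", lambda p: p["renal_impairment"], "drug_x", "avoid"),
    Recommendation("R3", lambda p: p["bmi"] > 30, "lifestyle_program", "recommend"),
]

def conflicting_pairs(patient: dict) -> list[tuple[str, str]]:
    """Return pairs of applicable recommendations with opposite effects
    on the same action for this patient."""
    applicable = [r for r in recommendations if r.precondition(patient)]
    pairs = []
    for i, a in enumerate(applicable):
        for b in applicable[i + 1:]:
            if a.action == b.action and a.effect != b.effect:
                pairs.append((a.rid, b.rid))
    return pairs

patient = {"hba1c": 8.2, "renal_impairment": True, "bmi": 27}
print(conflicting_pairs(patient))  # [('R1', 'R2')]
```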

    Contextual entity disambiguation in domains with weak identity criteria: Disambiguating Golden Age Amsterdamers

    Entity disambiguation is a widely investigated topic, and many matching algorithms have been proposed. However, the task has not yet been satisfactorily addressed when the domain of interest provides poor or incomplete data with little discriminating power. In these cases, the use of content fields such as name and date is not enough, and relations with other entities are of little help when the related entities themselves need disambiguation before they can be used. We therefore propose an approach for the disambiguation of clustered resources that uses context (related entities that are also clustered) as evidence for reconciling matched entities. We test the proposed method on datasets of historical records from 17th-century Amsterdam for which context is available, and we compare the results to a gold standard generated by three experts, which we make available online. The results show that the proposed approach meaningfully uses context to isolate identity sub-clusters of higher quality by eliminating potentially false positive matches.
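    A toy version of the core idea, assuming a pre-computed set of name-based matches and cluster-to-cluster relations (none of which come from the paper's actual datasets or algorithm), might look like this in Python:

```python
# Simplified illustration of using context for disambiguation: two candidate
# clusters that match on name are only accepted as co-referent when at least
# one pair of their related clusters (e.g. spouses, co-signers) is also
# matched. A toy version of the idea, not the approach evaluated in the paper.
name_match = {("cluster_A", "cluster_B"), ("spouse_A", "spouse_B")}

related = {
    "cluster_A": {"spouse_A"},
    "cluster_B": {"spouse_B"},
}

def context_supported(c1: str, c2: str) -> bool:
    """Accept a name-based match only if some pair of related clusters
    is also name-matched (context agreement)."""
    if (c1, c2) not in name_match and (c2, c1) not in name_match:
        return False
    for r1 in related.get(c1, set()):
        for r2 in related.get(c2, set()):
            if (r1, r2) in name_match or (r2, r1) in name_match:
                return True
    return False

print(context_supported("cluster_A", "cluster_B"))  # True
```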

    Toward a Core Conceptual Model for (Im)material Cultural Heritage in the Golden Agents project: Storyfying Data. SEMANTiCS 2017 workshop proceedings: EVENTS, September 11-14, 2017, Amsterdam

    This paper reports on the initial idea of a core conceptual model for the Golden Agents project, which aims to integrate several heterogeneous datasets about the cultural heritage of the Dutch Golden Age. We hypothesize that combining event and storytelling modeling would provide us with a common infrastructure to represent and retrieve core information regardless of its specific nature: a painting, a book, a notary act, or a theatre performance. The proposed model was developed based on (i) several discussions conducted with humanities experts and (ii) a foundational ontology for grounding the modeling decisions. It is assessed through a case study about Vermeer, the painting 'Girl with a Pearl Earring', and a novel written about the production of the painting. We conclude that the model satisfactorily addresses the case study, and we discuss next steps to further assess and extend the model, as well as to implement and test it in practice.
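    As a sketch of what such event-centred modelling could look like in RDF, the snippet below uses rdflib with a placeholder example.org namespace; the class and property names (ProductionEvent, carriedOutBy, tellsStoryAbout, and so on) are hypothetical and do not reproduce the project's actual model.

```python
# Hypothetical event-centred sketch: a production event links the agent,
# the produced object, and a later narrative (the novel) about that event.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/goldenagents/")  # placeholder namespace
g = Graph()
g.bind("ex", EX)

g.add((EX.paintingProduction, RDF.type, EX.ProductionEvent))
g.add((EX.paintingProduction, EX.carriedOutBy, EX.Vermeer))
g.add((EX.paintingProduction, EX.produced, EX.GirlWithAPearlEarring))
g.add((EX.novel, RDF.type, EX.CreativeWork))
g.add((EX.novel, EX.tellsStoryAbout, EX.paintingProduction))

print(g.serialize(format="turtle"))
```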

    Unlocking the Archives. A pipeline for scanning, transcribing and modelling entities of archival documents into Linked Open Data (short paper)

    In the project Golden Agents: Creative Industries and the Making of the Dutch Golden Age, heterogeneous resources on the production of the creative industries in the Dutch Golden Age from heritage institutions (e.g. Rijksmuseum, KB, RKD) are brought together as linked data. In this project, the collection of notarial deeds in the Amsterdam City Archives provides data on the consumption of cultural goods by inhabitants of all layers of society in Amsterdam during the Dutch Golden Age. In the project Alle Amsterdamse Akten [All Amsterdam Deeds], handwritten notarial deeds are indexed at the level of inventories, documents, person names, and geolocations outside Amsterdam. At the same time, the full text of these documents is being made searchable using advanced Handwritten Text Recognition (HTR) in the project Crowd Leert Computer Lezen [Crowd Teaches the Computer how to Read]. These data are brought together and analyzed for the occurrence of goods in probate and estate inventories. Once extracted and identified, almost all types of these goods can be linked to thesauri such as the Getty AAT and reconciled with textual/linguistic references to an item in an external (authored) dataset, such as the STCN, ICONCLASS, and those of the RKD. The model used to express this is under development and will be compliant with major and widely used data models in the Galleries, Libraries, Archives, and Museums (GLAM) world, such as CIDOC-CRM. In this paper, the full pipeline from archives to annotations is presented, comprising the successive stages of scanning, indexing, transcribing, correcting, aggregating, and modelling the entities of archival documents into RDF as Linked Open Data. This enables the creation of transparent datasets that can be replicated, evaluated, and used for quantitative analyses in digital humanities research.
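    A schematic sketch of the pipeline's later stages, with toy stand-ins for HTR, goods extraction, and thesaurus reconciliation (the URIs and function names are placeholders, not the project's actual tooling), could look like this:

```python
# Schematic sketch of the stages described above. The stage bodies are toy
# stand-ins; only the overall flow from transcription to Linked Open Data
# triples is illustrated. URIs below are placeholders, not real AAT identifiers.

def transcribe(scan_id: str) -> str:
    """Stand-in for HTR: returns the transcription of one scanned deed."""
    return "Inventory of the estate: one painting, two books"

def extract_goods(transcription: str) -> list[str]:
    """Toy extraction of goods mentioned in a probate inventory."""
    known_goods = ["painting", "book"]
    return [g for g in known_goods if g in transcription]

def reconcile(good: str) -> str:
    """Toy reconciliation against a thesaurus lookup table."""
    lookup = {"painting": "https://example.org/aat/placeholder-painting",
              "book": "https://example.org/aat/placeholder-book"}
    return lookup.get(good, "")

def to_triples(deed_uri: str, goods: list[str]) -> list[tuple[str, str, str]]:
    """Emit simple (subject, predicate, object) triples for reconciled goods."""
    return [(deed_uri, "mentionsGood", reconcile(g)) for g in goods if reconcile(g)]

text = transcribe("scan-0001")
print(to_triples("https://example.org/deed/0001", extract_goods(text)))
```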