
    An Extended Semantic Interoperability Model for Distributed Electronic Health Record Based on Fuzzy Ontology Semantics

    Semantic interoperability of distributed electronic health record (EHR) systems is a crucial problem for EHR querying and machine learning projects. The main contribution of this paper is to propose and implement a fuzzy ontology-based semantic interoperability framework for distributed EHR systems. First, a separate standard ontology is created for each input source. Second, a unified ontology is created that merges the previously created ontologies. However, this crisp ontology is not able to answer vague or uncertain queries. Third, to handle this limitation, the integrated crisp ontology is extended into a fuzzy ontology using a standard methodology and fuzzy logic. The dataset used includes identified data from 100 patients. The resulting fuzzy ontology includes 27 classes, 58 properties, 43 fuzzy data types, 451 instances, 8376 axioms, 5232 logical axioms, 1216 declaration axioms, 113 annotation axioms, and 3204 data property assertions. The resulting ontology is tested using real data from the MIMIC-III intensive care unit dataset and real archetypes from openEHR. This fuzzy ontology-based system helps physicians accurately query any required data about patients from distributed locations using near-natural-language queries. Domain specialists validated the accuracy and correctness of the obtained results. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2021R1A2B5B02002599).
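    The third step is what lets vague queries be answered. The following minimal sketch, assuming a simple right-shoulder membership function and an invented "high blood glucose" fuzzy datatype (neither taken from the paper), shows how a fuzzy layer on top of a crisp ontology could rank patients against a near-natural-language query.

    # Minimal sketch, not the paper's implementation: a fuzzy datatype for
    # "high blood glucose" defined by a right-shoulder membership function.
    # Property names, thresholds, and patient values are invented.

    def right_shoulder(x, a, b):
        """Membership rises linearly from 0 at a to 1 at b and stays 1 above b."""
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)

    def high_glucose(mg_dl):
        # Hypothetical fuzzy datatype attached to a hasGlucoseLevel data property.
        return right_shoulder(mg_dl, 110.0, 180.0)

    patients = {"p001": 95, "p002": 132, "p003": 188}   # toy instance data

    # "Which patients have high blood glucose?" -> rank by membership degree.
    for pid, value in sorted(patients.items(), key=lambda kv: -high_glucose(kv[1])):
        print(pid, round(high_glucose(value), 2))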

    Report on the EHCR (Deliverable 26.1)

    Richly interpreting electronic health information, whether to populate EHR instances with suitable terms, to provide decision support in the care of individuals, to identify suitable patients for teaching or clinical trial recruitment, or to mine populations of records for public health and the discovery of new medical knowledge, requires that the heterogeneous clinical entry instances within EHR repositories can be systematically analysed and interpreted. Achieving this requires the combination and co-operation of many different health informatics tools and technologies, underpinned by shared representations of clinical concepts and inferencing formalisms. Much of this work is at the level of R&D and is well represented across the Semantic Mining consortium. The challenge of WP26 is to build up a vision of the ways in which these historically independent threads of health informatics research can collaborate, and to uncover the research challenges that must be addressed in order to deliver good demonstrations of semantically indexed and richly analysable EHRs. The partners have begun WP26 by acquiring a better knowledge of each other's areas of endeavour, and are beginning to steer their research interests towards future areas of collaboration.

    IRS II: a framework and infrastructure for semantic web services

    In this paper we describe IRS-II (Internet Reasoning Service), a framework and implemented infrastructure whose main goal is to support the publication, location, composition, and execution of heterogeneous web services, augmented with semantic descriptions of their functionalities. IRS-II has three main classes of features which distinguish it from other work on semantic web services. Firstly, it supports one-click publishing of standalone software: IRS-II automatically creates the appropriate wrappers, given pointers to the standalone code. Secondly, it explicitly distinguishes between tasks (what to do) and methods (how to achieve tasks) and, as a result, supports capability-driven service invocation; flexible mappings between services and problem specifications; and dynamic, knowledge-based service selection. Finally, IRS-II services are web service compatible: standard web services can be trivially published through IRS-II, and any IRS-II service automatically appears as a standard web service to other web service infrastructures. In the paper we illustrate the main functionalities of IRS-II through a scenario involving a distributed application in the healthcare domain.
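    As a rough illustration of the task/method distinction and capability-driven invocation described above (a sketch only, not the actual IRS-II API; all names below are invented), a broker can select among published methods at call time:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Method:
        name: str
        solves_task: str                       # the task this method achieves
        applicable: Callable[[dict], bool]     # capability / applicability check
        run: Callable[[dict], object]          # wrapper around standalone code

    class Broker:
        def __init__(self):
            self.methods: List[Method] = []

        def publish(self, method: Method):
            # Stand-in for "one-click publishing" of standalone software.
            self.methods.append(method)

        def achieve_task(self, task: str, inputs: dict):
            # Capability-driven selection: any method that solves the task and
            # whose applicability condition holds for these inputs may be used.
            for m in self.methods:
                if m.solves_task == task and m.applicable(inputs):
                    return m.run(inputs)
            raise LookupError(f"no applicable method for task {task!r}")

    broker = Broker()
    broker.publish(Method(
        name="uk_hospital_locator",                       # invented example
        solves_task="find_nearest_hospital",
        applicable=lambda inp: inp.get("country") == "UK",
        run=lambda inp: f"nearest hospital to {inp['location']}",
    ))
    print(broker.achieve_task("find_nearest_hospital",
                              {"country": "UK", "location": "Milton Keynes"}))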

    Semantic Information on Electronic Medical Records (EMRs) through Ontologies

    This work presents the development of an ontology in the domain of Electronic Medical Records (EMRs). The ontology provides vocabulary and semantic information about patients. The implemented ontology begins with the exploration of semantic web applications, ontology design and analysis, and the use of ontological engineering for information indexing and retrieval from and to electronic medical records. This ontology is one of several services to be incorporated into current telemedicine systems. Sociedad Argentina de Informática e Investigación Operativa
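    By way of illustration only (the namespace, class names, and records below are invented and not taken from this work), indexing EMR entries with ontology terms and retrieving them through the class hierarchy might look like this:

    # Illustrative sketch: index EMR entries against a tiny ontology and
    # retrieve them with a SPARQL query over the class hierarchy.
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    EMR = Namespace("http://example.org/emr#")   # hypothetical vocabulary
    g = Graph()
    g.bind("emr", EMR)

    # Tiny vocabulary: Diagnosis is a kind of ClinicalEntry.
    g.add((EMR.Diagnosis, RDFS.subClassOf, EMR.ClinicalEntry))

    # Index two records against the vocabulary.
    g.add((EMR.record1, RDF.type, EMR.Diagnosis))
    g.add((EMR.record1, RDFS.label, Literal("type 2 diabetes mellitus")))
    g.add((EMR.record2, RDF.type, EMR.ClinicalEntry))
    g.add((EMR.record2, RDFS.label, Literal("blood pressure measurement")))

    # Retrieval: all clinical entries, including those indexed via subclasses.
    query = """
    SELECT ?record ?label WHERE {
      ?cls rdfs:subClassOf* emr:ClinicalEntry .
      ?record a ?cls ; rdfs:label ?label .
    }"""
    for row in g.query(query, initNs={"emr": EMR, "rdfs": RDFS}):
        print(row.record, row.label)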

    The Environmental Conditions, Treatments, and Exposures Ontology (ECTO): connecting toxicology and exposure to human health and beyond.

    BACKGROUND: Evaluating the impact of environmental exposures on organism health is a key goal of modern biomedicine and is critically important in an age of greater pollution and chemicals in our environment. Environmental health utilizes many different research methods and generates a variety of data types. However, to date, no comprehensive database represents the full spectrum of environmental health data. Due to a lack of interoperability between databases, tools for integrating these resources are needed. In this manuscript we present the Environmental Conditions, Treatments, and Exposures Ontology (ECTO), a species-agnostic ontology focused on exposure events that occur as a result of natural and experimental processes, such as diet, work, or research activities. ECTO is intended for use in harmonizing environmental health data resources to support cross-study integration and inference for mechanism discovery. METHODS AND FINDINGS: ECTO is an ontology designed for describing organismal exposures such as toxicological research, environmental variables, dietary features, and patient-reported data from surveys. ECTO utilizes the base model established within the Exposure Ontology (ExO). ECTO is developed using a combination of manual curation and Dead Simple OWL Design Patterns (DOSDP), contains over 2,700 environmental exposure terms, and incorporates chemical and environmental ontologies. ECTO is an Open Biological and Biomedical Ontology (OBO) Foundry ontology that is designed for interoperability, reuse, and axiomatization with other ontologies. ECTO terms have been utilized in axioms within the Mondo Disease Ontology to represent diseases caused or influenced by environmental factors, as well as for survey encoding for the Personalized Environment and Genes Study (PEGS). CONCLUSIONS: We constructed ECTO to meet OBO Foundry principles to increase translation opportunities between environmental health and other areas of biology. ECTO has a growing community of contributors consisting of toxicologists, public health epidemiologists, and health care providers, who supply the necessary expertise for areas previously identified as gaps.
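    The DOSDP approach mentioned above amounts to filling a shared template with terms from chemical or environmental ontologies to generate many exposure classes uniformly. The sketch below is a schematic Python rendering of that idea; the pattern fields and fillers are invented and do not reproduce an actual ECTO or DOSDP pattern file.

    # Schematic illustration of the design-pattern idea: one template, many
    # generated terms. Field names and fillers are illustrative assumptions.

    pattern = {
        "pattern_name": "exposure_to_chemical",
        "vars": {"chemical": "'chemical entity'"},
        "name": "exposure to {chemical}",
        "def": "An exposure event involving the interaction of an organism with {chemical}.",
        "equivalentTo": "'exposure event' and 'has exposure stimulus' some {chemical}",
    }

    fillers = [
        {"chemical": "lead atom"},      # ontology terms (e.g. CHEBI) in practice
        {"chemical": "benzene"},
    ]

    def expand(pattern, filler):
        """Fill each templated string field of the pattern with a concrete term."""
        return {k: v.format(**filler) if isinstance(v, str) and "{" in v else v
                for k, v in pattern.items()}

    for f in fillers:
        term = expand(pattern, f)
        print(term["name"], "->", term["equivalentTo"])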

    Interoperability and FAIRness through a novel combination of Web technologies

    Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task that does not scale. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
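    The claim of cell-level interoperability can be pictured as giving every spreadsheet cell its own resolvable identifier whose representation is a small RDF graph. The sketch below uses hypothetical URIs and properties with rdflib; it shows the shape of such a representation, not the paper's actual deployment.

    # Sketch only: one addressable cell exposed as a small RDF graph, so that
    # another system can reuse a single value without parsing the whole file.
    from rdflib import Graph, Namespace, Literal, RDF

    EX = Namespace("https://example.org/dataset42/")   # hypothetical identifiers
    g = Graph()
    g.bind("ex", EX)

    cell = EX["sheet1/row3/colB"]                      # one addressable cell
    g.add((cell, RDF.type, EX.Cell))
    g.add((cell, EX.value, Literal(36.6)))
    g.add((cell, EX.partOfRow, EX["sheet1/row3"]))
    g.add((cell, EX.partOfColumn, EX["sheet1/colB"]))

    # Dereferencing the cell's URI could return exactly this small graph.
    print(g.serialize(format="turtle"))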

    Implementation of ontology for intelligent hospital ward

    We have developed and implemented an ontology for an intelligent hospital ward. Our aim is to address the pervasiveness of computing applications in healthcare environments, which require sharing of data across the hospital, including data generated by sensors embedded in such environments, and dealing with the semantic heterogeneity that exists across the hospital's data repositories. Our conceptual ontological model supporting such an environment has been implemented using semantic web tools and tested through an application developed with J2EE technology.
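    As a hedged sketch of what such an ontology-backed ward model might look like in code (class and property names are invented, not the authors'; owlready2 stands in here for whichever semantic web tools they used):

    # Illustrative only: a tiny ward ontology with a sensor attached to a patient.
    from owlready2 import get_ontology, Thing, DataProperty, ObjectProperty, FunctionalProperty

    onto = get_ontology("http://example.org/ward.owl")   # hypothetical IRI

    with onto:
        class Patient(Thing): pass
        class Sensor(Thing): pass
        class TemperatureSensor(Sensor): pass
        class monitors(ObjectProperty):
            domain = [Sensor]
            range = [Patient]
        class has_reading(DataProperty, FunctionalProperty):
            domain = [Sensor]
            range = [float]

    # Instantiate: a temperature sensor reporting a reading for a patient.
    p = Patient("patient_007")
    s = TemperatureSensor("temp_sensor_3")
    s.monitors = [p]
    s.has_reading = 37.9
    print(s.monitors, s.has_reading)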

    XML Matchers: approaches and challenges

    Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in the Database and Artificial Intelligence research areas for many years. In the past it was investigated mainly for classical database models (e.g., E/R schemas and relational databases). In recent years, however, the widespread adoption of XML in the most disparate application fields has pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aimed at finding semantic matches between concepts defined in DTDs and XSDs. XML Matchers do not simply take well-known techniques originally designed for other data models and apply them to DTDs/XSDs; they exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs impact the Schema Matching task. Then we introduce a template, called the XML Matcher Template, that describes the main components of an XML Matcher and their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template a baseline for objectively comparing approaches that, at first glance, might appear unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
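    To make the "exploit the hierarchical structure" point concrete, here is a toy matcher (not one of the surveyed systems; the two schema fragments are simplified element trees standing in for real XSDs) that mixes name similarity with a bonus for matching parent elements:

    # Toy matcher sketch: linguistic similarity of element names plus a
    # structural bonus when the parents also resemble each other.
    from difflib import SequenceMatcher
    import xml.etree.ElementTree as ET

    XSD_A = "<schema><patient><name/><birthDate/></patient></schema>"
    XSD_B = "<schema><person><fullName/><dateOfBirth/></person></schema>"

    def elements_with_parents(xml_text):
        """Return (tag, parent_tag) pairs for every element in the tree."""
        root = ET.fromstring(xml_text)
        pairs = []
        def walk(node, parent_tag):
            pairs.append((node.tag, parent_tag))
            for child in node:
                walk(child, node.tag)
        walk(root, None)
        return pairs

    def name_sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match(xml_a, xml_b, structural_weight=0.3):
        a_elems, b_elems = elements_with_parents(xml_a), elements_with_parents(xml_b)
        for tag_a, parent_a in a_elems:
            best = max(
                ((tag_b,
                  (1 - structural_weight) * name_sim(tag_a, tag_b)
                  + structural_weight * name_sim(parent_a or "", parent_b or ""))
                 for tag_b, parent_b in b_elems),
                key=lambda t: t[1],
            )
            print(f"{tag_a:>10} -> {best[0]:<12} score={best[1]:.2f}")

    match(XSD_A, XSD_B)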