
    A Core Reference Hierarchical Primitive Ontology for Electronic Medical Records Semantics Interoperability

    Currently, electronic medical records (EMR) cannot be exchanged among hospitals, clinics, laboratories, pharmacies, and insurance providers, or made available to patients outside of local networks. Hospital, laboratory, pharmacy, and insurance provider legacy databases can share medical data within their respective networks and limited data with patients. The lack of interoperability has its roots in the historical development of electronic medical records. Two issues contribute to interoperability failure. The first is that legacy medical record databases and expert systems were designed with semantics that support only internal information exchange. The second is ontological commitment to the semantics of a particular knowledge representation language formalism. This research seeks to address these interoperability failures by demonstrating that a core reference, hierarchical primitive ontological architecture with concept primitive attribute definitions can integrate and resolve non-interoperable semantics among, and extend coverage across, existing clinical, drug, and hospital ontologies and terminologies.
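    The abstract above describes a core reference ontology built from hierarchically organized primitive concepts carrying attribute definitions. As a loose illustration only (the class and property names below are hypothetical, not taken from the research), a minimal owlready2 sketch of that kind of structure might look like this:

```python
# Minimal sketch of a hierarchical primitive ontology with concept attribute
# definitions, using owlready2. All names here are hypothetical illustrations.
from owlready2 import get_ontology, Thing, ObjectProperty, DataProperty

onto = get_ontology("http://example.org/core-reference.owl")

with onto:
    class ClinicalEntity(Thing):        # core reference root primitive
        pass
    class Finding(ClinicalEntity):      # hierarchical specialization
        pass
    class Medication(ClinicalEntity):
        pass
    class has_site(ObjectProperty):     # primitive attribute linking concepts
        domain = [Finding]
        range = [ClinicalEntity]
    class has_code(DataProperty):       # attribute carrying a source-terminology code
        domain = [ClinicalEntity]
        range = [str]

onto.save(file="core_reference_sketch.owl")
```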

    SNOMED CT standard ontology based on the ontology for general medical science

    Background: Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT. Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS). Results: The SCTO is implemented in OWL 2 to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2400 axioms, and 1555 annotations. It is publicly available through BioPortal at http://bioportal.bioontology.org/ontologies/SCTO/. Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
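    Because the SCTO is distributed as OWL 2, a reasoner can perform the kind of consistency checking the abstract mentions. Below is a minimal sketch using owlready2, assuming the ontology file has been downloaded locally from BioPortal and that a Java runtime is available for the bundled HermiT reasoner:

```python
# Sketch: consistency-checking an OWL 2 ontology such as the SCTO with
# owlready2. Assumes a local copy (scto.owl) downloaded from BioPortal;
# sync_reasoner() needs a Java runtime for the HermiT reasoner.
import os
from owlready2 import get_ontology, sync_reasoner, default_world

onto = get_ontology("file://" + os.path.abspath("scto.owl")).load()

with onto:
    sync_reasoner()  # classify the ontology and detect unsatisfiable classes

# Classes equivalent to owl:Nothing are logically inconsistent definitions.
inconsistent = list(default_world.inconsistent_classes())
print(f"{len(list(onto.classes()))} classes loaded, "
      f"{len(inconsistent)} unsatisfiable")
```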

    Master of Science

    Data quality has become a significant issue in healthcare as large preexisting databases are integrated to provide greater depth for research and process improvement. Large-scale data integration exposes and compounds data quality issues latent in source systems. Although the problems related to data quality in transactional databases have been identified and well addressed, the application of data quality constraints to large-scale data repositories has not, and requires novel applications of traditional concepts and methodologies. Despite an abundance of data quality theory, tools, and software, there is no consensual technique available to guide developers in the identification of data integrity issues and the application of data quality rules in warehouse-type applications. Data quality measures are frequently developed on an ad hoc basis, or methods designed to assure data quality in transactional systems are loosely applied to analytic data stores. These measures are inadequate to address the complex data quality issues in large, integrated data repositories, particularly in the healthcare domain with its heterogeneous source systems. This study derives a taxonomy of data quality rules from relational database theory. It describes the development and implementation of data quality rules in the Analytic Health Repository at Intermountain Healthcare and situates the data quality rules in the taxonomy. Further, it identifies areas in which more rigorous data quality should be explored. This comparison demonstrates the superiority of a structured approach to data quality rule identification.
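    As a rough illustration of data quality rules grounded in relational database theory (domain constraints, entity integrity, and referential integrity), the following pandas sketch checks a toy warehouse-style table; the table and column names are invented, not Intermountain's:

```python
# Sketch of warehouse-style data quality rules drawn from relational theory:
# domain constraints, entity integrity (key uniqueness), and referential
# integrity. All table and column names are hypothetical.
import pandas as pd

encounters = pd.DataFrame({
    "encounter_id": [1, 2, 2, 4],
    "patient_id":   [10, 11, 11, 99],
    "los_days":     [3, -1, 5, 2],          # length of stay
})
patients = pd.DataFrame({"patient_id": [10, 11, 12]})

violations = {
    # Domain constraint: length of stay must be non-negative.
    "domain_los_nonnegative": encounters[encounters["los_days"] < 0],
    # Entity integrity: encounter_id must be unique.
    "unique_encounter_id": encounters[encounters["encounter_id"].duplicated(keep=False)],
    # Referential integrity: every patient_id must exist in the patients table.
    "fk_patient_exists": encounters[~encounters["patient_id"].isin(patients["patient_id"])],
}
for rule, rows in violations.items():
    print(rule, "violations:", len(rows))
```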

    Doctor of Philosophy

    Public health reporting is an important source of information for public health investigation and surveillance, which are necessary for the prevention and control of disease. There are two important problems with the current public health reporting process in the United States: (a) the reporting specifications are unstructured and are communicated with reporting facilities using nonstandard public health department Web sites, and (b) most reporting facilities transmit reports to public health entities using manual and paper-based processes. Our research focuses on the development and evaluation of new strategies to improve the public health reporting process by addressing these problems. To improve the communication of public health reporting specifications by public health authorities, we: (a) examined the business process of a laboratory complying with the reporting requirements, (b) evaluated public health department Web sites to understand the problems faced by reporting facilities while accessing the reporting specifications, (c) identified the content requirements of a knowledge management system for public health reporting specifications, (d) designed the representation of the public health reporting specifications, and (e) evaluated the content and design using a prototype web-based query system for public health reporting specifications. To improve the transmission of case reports from healthcare facilities to public health entities, we: (a) described the public health workflow associated with the management of case reports, (b) identified the content of a case report to meet the needs of public health authorities, (c) modeled the case report using Health Level Seven (HL7) v2.5.1, and (d) evaluated the electronic case reports by comparing the timeliness, completeness of information content, and completeness of the electronic reporting process with the paper-based reporting processes. We demonstrated a model for public health reporting specifications using a prototype web-based query system. The evaluation conducted with users from laboratories, healthcare facilities, and public health entities showed that the proposed model met most of the users' needs and requirements. We also identified variation in the reporting specifications, some of which could be standardized to improve reporting compliance. We implemented HL7 v2.5.1 case reports from Intermountain Healthcare hospitals to the Utah Department of Health. The electronic reports transmitted from the Intermountain hospitals were more timely (median delay: 2 days) than the paper reports sent from other clinical facilities (median delay: 3.5 days) but less timely than the paper reports from Intermountain laboratories (median delay: 1 day). However, the evaluation of the completeness of data elements needed for public health triage prior to investigation showed that electronic case reports from Intermountain hospitals included more complete information than paper reports from Intermountain laboratories. Even though the paper reports from Intermountain laboratories were more timely, the incomplete reports may delay investigation. There are informatics opportunities and public health needs to improve both electronic laboratory reporting and electronic case reporting.
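    For readers unfamiliar with HL7 v2.5.1 messaging, the sketch below parses a toy ORU^R01-style message with the python-hl7 package and computes a reporting delay; the message content, codes, and dates are illustrative only and are not taken from the dissertation:

```python
# Sketch: parsing an HL7 v2.5.1 case-report style message with python-hl7
# and computing a reporting delay. All message content is invented.
from datetime import datetime
import hl7

message = "\r".join([
    "MSH|^~\\&|HOSP|FAC|UDOH|UT|20240105||ORU^R01|MSG001|P|2.5.1",
    "PID|1||12345^^^HOSP||DOE^JANE",
    "OBR|1|||625-4^Bacteria identified in Stool by Culture^LN|||20240103",
    "OBX|1|CE|625-4^Bacteria identified^LN||Salmonella species|||A",
])

msg = hl7.parse(message)
organism = str(msg.segment("OBX")[5])                         # reported finding
specimen_date = datetime.strptime(str(msg.segment("OBR")[7]), "%Y%m%d")
received_date = datetime(2024, 1, 5)                          # assumed receipt at public health
print(organism, "- reporting delay:", (received_date - specimen_date).days, "days")
```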

    Doctor of Philosophy

    Controlled clinical terminologies are essential to realizing the benefits of electronic health record systems. However, implementing consistent and sustainable use of terminology has proven to be both intellectually and practically challenging. First, this project derives a conceptual understanding of the scope and intricacies of the challenge by applying informatics principles, practical experience, and real-world requirements. Equipped with this understanding, various approaches are explored, and from this analysis a unique solution is defined. Finally, a working environment that meets the requirements for creating, maintaining, and distributing terminologies was created and evaluated.

    Extensions of SNOMED taxonomy abstraction networks supporting auditing and complexity analysis

    The Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) has been widely used as a standard terminology in various biomedical domains. Enhancing the quality of SNOMED contributes to the improvement of the medical systems that it supports. In previous work, the Structural Analysis of Biomedical Ontologies Center (SABOC) team defined the partial-area taxonomy, a hierarchical abstraction network consisting of units called partial-areas. Each partial-area comprises a set of SNOMED concepts exhibiting a particular relationship structure and distinguished by a unique root concept. In this dissertation, several extensions and applications of the taxonomy framework are considered. Some concepts appearing in multiple partial-areas have been designated as complex because they constitute a tangled portion of a hierarchy and can be obstacles to users trying to gain an understanding of the hierarchy's content. A methodology for partitioning the entire collection of these so-called overlapping complex concepts into singly rooted groups is presented, and a novel auditing methodology based on the resulting enhanced abstraction network is described. In addition, the existing abstraction network relies heavily on the structure of the outgoing relationships of the concepts, but some SNOMED hierarchies (or subhierarchies) serve only as targets of relationships, with few or no outgoing relationships of their own. This situation impedes the applicability of the abstraction network. To deal with this problem, a variation of the above abstraction network, called the converse abstraction network (CAN), is defined and derived automatically from a given SNOMED hierarchy, and an auditing methodology based on the CAN is formulated. Furthermore, a preliminary study of the complementary use of the abstraction network in description logic (DL) for quality assurance purposes pertaining to SNOMED is presented. Two complexity measures based on the abstraction network, a structural complexity measure and a hierarchical complexity measure, are introduced to quantify the complexity of a SNOMED hierarchy. An extension of the two measures is also used to track the complexity of SNOMED hierarchies before and after a sequence of auditing processes.
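    A heavily simplified illustration of the partial-area idea (grouping concepts by their outgoing relationship structure) is sketched below; it is not SABOC's actual algorithm, and the concepts and relationships are invented:

```python
# Simplified sketch of the partial-area idea: group concepts that share the
# same set of outgoing relationship types, then count the groups as a crude
# structural summary. Illustration only; the tiny sample is invented.
from collections import defaultdict

# concept -> {relationship type: target concept}
relationships = {
    "Appendicitis":      {"finding_site": "Appendix", "associated_morphology": "Inflammation"},
    "Gastritis":         {"finding_site": "Stomach", "associated_morphology": "Inflammation"},
    "Fracture of femur": {"finding_site": "Femur"},
}

areas = defaultdict(list)            # relationship-type signature -> concepts
for concept, rels in relationships.items():
    signature = frozenset(rels)      # the concept's relationship structure
    areas[signature].append(concept)

for signature, concepts in areas.items():
    print(sorted(signature), "->", concepts)
print("number of areas:", len(areas))
```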

    A Model for a Data Dictionary Supporting Multiple Definitions, Views and Contexts

    In the field of clinical trials, precise definitions of terms are extremely important to ensure objective data collection and analysis. They also enable external experts to interpret and apply research results correctly. However, many clinical trials have shortcomings in this regard: definitions are often imprecise or used only implicitly. In addition, terms are often defined inconsistently, even though standardized definitions are desirable with a view to a broader exchange of results. Against this background, the idea of the Data Dictionary emerged; its initial goal is to collect the alternative definitions of terms and make them available to clinical trials. It is also intended to support the analysis of definitions with respect to their commonalities and differences, as well as their harmonization. Standardized definitions are not enforced, however, because differences between definitions may be justified in substance, e.g., due to use in different specialties, trial-specific conditions, or differing expert views. In this thesis, a model for the Data Dictionary is developed. The model follows the concept-based approach known from terminology work and extends it with the ability to represent alternative definitions. In particular, it aims to make the differences between definitions as explicit as possible, in order to distinguish between substantively different definition alternatives (e.g., contradictory expert opinions) and consistent variants of one definition (e.g., different views, translations into different languages). Several model elements are also devoted to making contextual information explicit (e.g., validity within organizations or the domain to which a concept belongs) in order to support the selection and reuse of definitions. This information allows different views of the contents of the Data Dictionary. Views are regarded as coherent subsets of the Data Dictionary that include only those contents specified as relevant in the selected context.
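    An illustrative sketch of such a concept-based data dictionary (alternative definitions, variants, contextual information, and context-filtered views) could look like the following; the class names and example content are assumptions, not the thesis's actual model:

```python
# Illustrative sketch (not the thesis's model) of a concept-based data
# dictionary holding alternative definitions, their variants, and contexts,
# and deriving views as context-filtered subsets.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Definition:
    text: str
    variant_of: Optional[str] = None                     # e.g. a translation of another definition
    contexts: set = field(default_factory=set)           # organizations, domains, trials

@dataclass
class Concept:
    name: str
    definitions: list = field(default_factory=list)

def view(concepts, context):
    """A view: only definitions declared relevant for the given context."""
    return [
        Concept(c.name, [d for d in c.definitions if context in d.contexts])
        for c in concepts
        if any(context in d.contexts for d in c.definitions)
    ]

bmi = Concept("Body mass index", [
    Definition("Weight in kg divided by height in m squared", contexts={"cardiology-trial"}),
    Definition("Körpergewicht in kg geteilt durch Körpergröße in m zum Quadrat",
               variant_of="English definition", contexts={"cardiology-trial"}),
])
print(len(view([bmi], "cardiology-trial")[0].definitions))   # -> 2
```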

    E-health data persistence in a clinical trial management system

    The present thesis details the development of a platform belonging to a project focused on the development and commercialization of Smart Health solutions in Portugal. The platform in question aims to facilitate the integration testing and clinical trialling of medical devices, using a standard to establish a uniform set of semantics and nomenclatures and to provide proper protection and storage of the data. For the development of the project, research was conducted into clinical trials, regulatory norms, and standards capable of satisfying them, with HL7 FHIR being the clear standout. Furthermore, a value analysis of the platform concept was conducted, highlighting the interest in it and the benefits it might bring. This was followed by the analysis and design of the platform, which entailed documenting not only the context under which it would operate (its architecture, through multiple views), but also the various concepts and functionalities to be considered (its use cases and domain). The implementation of these ideas is also presented, with the support of code snippets, and the subsequent experimentation and evaluation was carried out to confirm its data storage capability and its performance when faced with demanding data sets.
This thesis focuses on the development of a platform for the persistence and management of clinical data. The project falls under the initiative named SMART-HEALTH4-ALL, which aims to promote the conceptualization, development, and commercialization of Smart Health technologies in Portugal. The platform will serve as an integral support tool in the context of clinical trials. These trials involve the exchange of information between various Smart Health devices and other entities that perform statistical reviews in order to validate the correct functioning of the devices and allow them to be approved for mass production. One of the biggest obstacles faced during clinical trials is the high disparity in how clinical data are structured and named across the devices and systems involved (both those under trial and those running the trial's tests), which leads to a large amount of time and resources being spent on harmonizing these structures so that the trials can be carried out correctly. The proposed platform aims to facilitate and accelerate this stage of medical technology development by supporting the sending and receiving of clinical data and serving as the single location where these data exchanges take place. To prevent the aforementioned inconsistency in the definition and structuring of clinical data, the platform is also responsible for providing a uniform definition of semantics and nomenclatures for structuring clinical data, promoting their protection and correct management. For its development, literature reviews were carried out to establish the state of the art on clinical trials (what they consist of and their importance and impact on clinical technology development), on regulatory norms (legal and ethical, which must be respected and supported by clinical trials and by the technologies used to run them), on clinical data management, and on data structuring standards capable of respecting those norms and being usable in the platform's implementation.
The state of the art includes, in particular, a substantial analysis of a series of standards, considering their historical context, their advantages and disadvantages, and their compatibility with the platform to be developed; the standard named HL7 FHIR stood out the most and proved to be the most suitable for this particular situation. A value analysis of the platform concept was then carried out. This step involved a more concrete definition of the idea and of the opportunities that led to it, through bibliographic research on relevant testimonies and statistics. On this basis, the value of the idea was determined from the interest behind it and the various benefits it could bring to the entities that would use it. Next, the engineering analysis and design phases of the platform were elaborated. This began by documenting the various concepts and functionalities that would make up the platform itself, through domain diagrams and the definition of its use cases, respectively. The design involved planning and anticipating the contexts into which the platform would fit, the other components with which it would communicate, and how information would flow across its layers, i.e., the potential architecture of the project, captured through sequence, deployment, and component diagrams illustrating the various views to be considered. With this documentation produced and the analysis and design concluded, the implementation of the platform itself was presented. It began with an exploration of which technologies to use (the Java language, the Spring framework, and the HL7 FHIR specifications), followed by a detailed series of steps for configuring the two key components of the platform: the HAPI FHIR server and the data management API. For both, the steps needed to set them up and run them were presented with the support of code extracts and accompanying explanations, so that anyone, even with limited coding knowledge, could follow the process. Then, with the platform fully implemented, the experimentation and evaluation phase of the solution was carried out. To this end, the indicators (the aspects of the platform to be evaluated) were first determined. The first indicator was the ability to correctly persist and manage clinical data. The second and third indicators concerned the platform's performance when exposed to a consecutive series of clinical data and to large bundles of such data, respectively. For each indicator, two hypotheses were established, one with a positive outlook on the results and one with a negative outlook. The experiments were then executed through simulated HTTP requests to the platform, and data were collected so that statistical tests could be run; based on the results, the previously stated hypotheses were rejected or accepted, and the condition of the platform after implementation was thereby evaluated. The evaluation resulted in a positive assessment for all indicators, leading to the conclusion that the platform met the expectations previously set for it.
Finally, a conclusion was drawn reflecting on the work carried out (finding that all of the platform's objectives were achieved), on future work (brief improvements and suggestions), and offering a personal appraisal of the project.
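    The platform's data exchange rests on the HL7 FHIR REST interface exposed by a HAPI FHIR server. A minimal sketch of that kind of interaction is shown below; the base URL and resource content are hypothetical, and the thesis itself implements the platform in Java with Spring:

```python
# Sketch of exchanging clinical data with a HAPI FHIR server over its REST
# API. The base URL and observation content are hypothetical; this only
# illustrates the kind of FHIR interaction involved.
import requests

FHIR_BASE = "http://localhost:8080/fhir"          # assumed local HAPI FHIR server

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "8867-4", "display": "Heart rate"}]},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}

resp = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                     headers={"Content-Type": "application/fhir+json"})
resp.raise_for_status()
print("created:", resp.json()["id"])              # server-assigned resource id
```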

    Aligning an interface terminology to the Logical Observation Identifiers Names and Codes (LOINC®)

    OBJECTIVE: Our study consists of aligning the interface terminology of the Bordeaux university hospital (TLAB) to the Logical Observation Identifiers Names and Codes (LOINC). The objective was to facilitate the shared and integrated use of biological results with other health information systems. MATERIALS AND METHODS: We used an innovative approach based on a decomposition and re-composition of LOINC concepts according to the transversal relations that may be described between LOINC concepts and their definitional attributes. TLAB entities were first anchored to LOINC attributes and then aligned to LOINC concepts through the appropriate combination of definitional attributes. Finally, using laboratory results from the Bordeaux data warehouse, an instance-based filtering process was applied. RESULTS: We found a small overlap between the tokens constituting the labels of TLAB and LOINC. However, the TLAB entities were easily aligned to LOINC attributes: 99.8% of TLAB entities were related to a LOINC analyte and 61.0% to a LOINC system. A total of 55.4% of the TLAB entities used in the hospital data warehouse were mapped to LOINC concepts. We performed a manual evaluation of all 1-1 mappings between TLAB entities and LOINC concepts and obtained a precision of 0.59. CONCLUSION: We aligned TLAB and LOINC with reasonable performance, given the poor quality of TLAB labels. In terms of interoperability, the alignment of interface terminologies with LOINC could be improved through a more formal LOINC structure. This would allow queries on LOINC attributes rather than on LOINC concepts only.
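    A heavily simplified sketch of the attribute-based alignment idea (anchoring local labels to LOINC definitional attributes, then recombining the attributes to find candidate concepts) follows; the attribute tables and codes are tiny invented samples, not actual LOINC or TLAB content:

```python
# Simplified sketch of attribute-based alignment: anchor a local interface
# terminology label to LOINC definitional attributes, then keep LOINC
# concepts combining those attributes. All data here are invented samples.
loinc_concepts = {
    "2951-2": {"component": "Sodium", "system": "Ser/Plas"},
    "2955-3": {"component": "Sodium", "system": "Urine"},
    "2823-3": {"component": "Potassium", "system": "Ser/Plas"},
}
attribute_synonyms = {
    "sodium": ("component", "Sodium"),
    "na":     ("component", "Sodium"),
    "serum":  ("system", "Ser/Plas"),
    "urine":  ("system", "Urine"),
}

def align(local_label):
    # Step 1: anchor label tokens to LOINC attributes.
    anchored = dict(attribute_synonyms[t] for t in local_label.lower().split()
                    if t in attribute_synonyms)
    # Step 2: recompose - keep LOINC concepts matching every anchored attribute.
    return [code for code, attrs in loinc_concepts.items()
            if all(attrs.get(k) == v for k, v in anchored.items())]

print(align("Sodium serum"))   # -> ['2951-2']
```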