29 research outputs found

    CLINICAL DATA WAREHOUSE: A REVIEW

    Clinical decisions are crucial because they are related to human lives. Thus, managers and decision makers in the clinical environment seek new solutions that can support their decisions. A clinical data warehouse (CDW) is an important solution that is used to achieve clinical stakeholders' goals by merging heterogeneous data sources in a central repository and using this repository to find answers related to the strategic clinical domain, thereby supporting clinical decisions. CDW implementation faces numerous obstacles, starting with the data sources and ending with the tools that view the clinical information. This paper presents a systematic overview of the purpose of CDWs as well as the characteristics; requirements; data sources; extract, transform and load (ETL) process; security and privacy concerns; design approach; architecture; and challenges and difficulties related to implementing a successful CDW. PubMed and Google Scholar are used to find papers related to CDW. Of a total of 784 papers, only 42 are included in the literature review. These papers are classified based on five perspectives, namely methodology, data, system, ETL tool and purpose, to find insights related to aspects of CDW. This review can contribute answers to questions related to CDW and provide recommendations for implementing a successful CDW.

    Design and implementation of serverless architecture for i2b2 on AWS cloud and Snowflake data warehouse

    Informatics for Integrating Biology and the Bedside (i2b2) is an open-source medical tool for cohort discovery that allows researchers to explore and query clinical data. The i2b2 platform is designed to adopt any patient-centric data model and is used at over 400 healthcare institutions worldwide for querying patient data. The platform consists of a web client, core servers and a database. Despite the available installation guidelines, the complex architecture of the system, with numerous dependencies and configuration parameters, makes it difficult to install a functional i2b2 platform. Maintaining the scalability, security and availability of the application is also challenging and requires a lot of resources. Our aim was to deploy i2b2 for the University of Missouri (UM) System in the cloud and to reduce the complexity and effort of the installation and maintenance process. Our solution encapsulated the complete installation process of each component using Docker and deployed the containers in the AWS Virtual Private Cloud (VPC) using several AWS PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) services. We deployed the application as a service on AWS Fargate, an on-demand, serverless, auto-scaling compute engine. We also enhanced the functionality of the i2b2 services and developed Snowflake JDBC driver support for the i2b2 backend services, enabling them to query the Snowflake analytical database directly. In addition, we created an i2b2-data-installer package to load PCORnet CDM and ACT ontology data into the i2b2 database. The i2b2 platform at the University of Missouri holds 1.26B facts on 2.2M patients from UM Cerner Millennium data.
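    The cohort-discovery pattern described above can be sketched against an i2b2-style star schema, where clinical facts are rows keyed by patient and concept code. The following is a minimal illustration, assuming a toy observation_fact table in SQLite rather than the Snowflake database used in the paper; the column names follow the i2b2 common data model, but the data and codes are invented.

```python
import sqlite3

# Stand-in for an i2b2-style star schema: one row per clinical fact,
# keyed by patient_num and concept_cd (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observation_fact (patient_num INTEGER, concept_cd TEXT)")
conn.executemany(
    "INSERT INTO observation_fact VALUES (?, ?)",
    [(1, "ICD10:E11.9"), (1, "LOINC:4548-4"), (2, "ICD10:I50.9"), (3, "ICD10:E11.9")],
)

def cohort_count(conn, concept_prefix):
    """Cohort discovery reduces to counting distinct patients whose facts
    match the concept codes selected in the ontology tree."""
    row = conn.execute(
        "SELECT COUNT(DISTINCT patient_num) FROM observation_fact "
        "WHERE concept_cd LIKE ?",
        (concept_prefix + "%",),
    ).fetchone()
    return row[0]

print(cohort_count(conn, "ICD10:E11"))  # distinct patients with a diabetes code
```

    The same shape of query runs unchanged against a cloud warehouse backend; only the driver and connection differ.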

    Impact of Terminology Mapping on Population Health Cohorts IMPaCt

    Background and Objectives: The population health care delivery model uses phenotype algorithms in the electronic health record (EHR) system to identify patient cohorts targeted for clinical interventions such as laboratory tests and procedures. The standard terminology used to identify disease cohorts may contribute to significant variation in error rates for patient inclusion or exclusion. The United States requires EHR systems to support two diagnosis terminologies, the International Classification of Diseases (ICD) and the Systematized Nomenclature of Medicine (SNOMED). Terminology mapping enables the retrieval of diagnosis data using either terminology. There are no standards of practice by which to evaluate and report the operational characteristics of ICD and SNOMED value sets used to select patient groups for population health interventions. Establishing a best practice for terminology selection is a step forward in ensuring that the right patients receive the right intervention at the right time. The research question is, "How do the diagnosis retrieval terminology (ICD vs SNOMED) and terminology map maintenance impact population health cohorts?" Aims 1 and 2 explore this question, and Aim 3 informs practice and policy for population health programs.
    Methods. Aim 1: Quantify the impact of terminology choice (ICD vs SNOMED). ICD and SNOMED phenotype algorithms for diabetes, chronic kidney disease (CKD) and heart failure were developed using matched sets of codes from the Value Set Authority Center. The performance of the diagnosis-only phenotypes was compared to a published reference standard that included diagnosis codes, laboratory results, procedures and medications. Aim 2: Measure the impact of terminology maintenance on SNOMED cohorts. For each disease state, the performance of a single SNOMED algorithm before and after terminology updates was evaluated against a reference standard to identify and quantify cohort changes introduced by terminology maintenance. Aim 3: Recommend methods for improving population health interventions. The socio-technical model for studying health information technology was used to inform best practice for the use of population health interventions.
    Results. Aim 1: ICD-10 value sets had better sensitivity than SNOMED for diabetes (.829 vs .662) and CKD (.242 vs .225) (N=201,713, p ). Aim 2: Following terminology maintenance, the sensitivity of the SNOMED algorithm for diabetes increased from .662 to .683 (p ). Aim 3: Based on observed social and technical challenges to population health programs, including and in addition to the development and measurement of phenotypes, a practical method was proposed for population health intervention development and reporting.
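    The Aim 1 comparison of a diagnosis-only phenotype against a fuller reference standard reduces to standard sensitivity and specificity calculations over patient sets. A minimal sketch, with invented patient IDs and value sets standing in for the real cohorts:

```python
def sensitivity_specificity(phenotype, reference, population):
    """Compare the patient set selected by a value-set phenotype against a
    reference-standard patient set drawn from the same population."""
    tp = len(phenotype & reference)          # correctly included
    fn = len(reference - phenotype)          # missed by the phenotype
    fp = len(phenotype - reference)          # wrongly included
    tn = len(population - phenotype - reference)
    return tp / (tp + fn), tn / (tn + fp)

population = set(range(1, 11))
reference = {1, 2, 3, 4}    # patients the full reference standard identifies
icd_cohort = {1, 2, 3, 9}   # patients a diagnosis-only ICD value set selects

sens, spec = sensitivity_specificity(icd_cohort, reference, population)
print(round(sens, 2), round(spec, 2))  # 0.75 0.83
```

    Re-running the same calculation for a cohort built before and after a terminology update quantifies the cohort shift that Aim 2 measures.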

    Data Infrastructure for Medical Research

    While we are witnessing rapid growth in data across the sciences and in many applications, this growth is particularly remarkable in the medical domain, be it because of higher-resolution instruments and diagnostic tools (e.g. MRI), new sources of structured data like activity trackers, the widespread use of electronic health records, and many others. The sheer volume of the data is not, however, the only challenge to be faced when using medical data for research. Other crucial challenges include data heterogeneity, data quality and data privacy. In this article, we review solutions addressing these challenges by discussing the current state of the art in the areas of data integration, data cleaning, data privacy, and scalable data access and processing in the context of medical data. The techniques and tools we present will give practitioners, computer scientists and medical researchers alike, a starting point to understand the challenges and solutions and ultimately to analyse medical data and gain better and quicker insights.

    The Translational Medicine Ontology and Knowledge Base: driving personalized medicine by bridging the gap between bench and bedside

    Background: Translational medicine requires the integration of knowledge using heterogeneous data from health care to the life sciences. Here, we describe a collaborative effort to produce a prototype Translational Medicine Knowledge Base (TMKB) capable of answering questions relating to clinical practice and pharmaceutical drug discovery. Results: We developed the Translational Medicine Ontology (TMO) as a unifying ontology to integrate chemical, genomic and proteomic data with disease, treatment, and electronic health records. We demonstrate the use of Semantic Web technologies in the integration of patient and biomedical data, and reveal how such a knowledge base can aid physicians in providing tailored patient care and facilitate the recruitment of patients into active clinical trials. Thus, patients, physicians and researchers may explore the knowledge base to better understand therapeutic options, efficacy, and mechanisms of action. Conclusions: This work takes an important step in using Semantic Web technologies to facilitate integration of relevant, distributed, external sources and progress towards a computational platform to support personalized medicine. Availability: TMO can be downloaded from http://code.google.com/p/translationalmedicineontology and TMKB can be accessed at http://tm.semanticscience.org/sparql

    Information Discovery on Electronic Health Records Using Authority Flow Techniques

    Background: As the use of electronic health records (EHRs) becomes more widespread, so does the need to search and provide effective information discovery within them. Querying by keyword has emerged as one of the most effective paradigms for searching. Most work in this area is based on traditional Information Retrieval (IR) techniques, where each document is compared individually against the query. We compare the effectiveness of two fundamentally different techniques for keyword search of EHRs.
    Methods: We built two ranking systems. The traditional BM25 system exploits the EHRs' content without regard to associations among the entities within. The Clinical ObjectRank (CO) system exploits the entities' associations in EHRs, using an authority-flow algorithm to discover the most relevant entities. BM25 and CO were deployed on an EHR dataset from the cardiovascular division of Miami Children's Hospital. Using sequences of keywords as queries, sensitivity and specificity were measured by two physicians for a set of 11 queries related to congenital cardiac disease.
    Results: Our pilot evaluation showed that CO outperforms BM25 in terms of sensitivity (65% vs. 38%), by 71% on average, while maintaining specificity (64% vs. 61%).
    Conclusions: Authority-flow techniques can greatly improve the detection of relevant information in EHRs and hence deserve further study.
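    The authority-flow idea behind Clinical ObjectRank can be illustrated with a toy PageRank-style propagation: keyword matches seed scores on entities, and the scores then flow along the associations between entities (visit to diagnosis, diagnosis to medication, and so on). The graph, seeds and damping factor below are invented for illustration and are not the paper's actual configuration:

```python
def authority_flow(edges, seeds, damping=0.85, iters=50):
    """Propagate authority from keyword-matched seed entities along
    directed association edges until the scores stabilize."""
    nodes = {n for e in edges for n in e} | set(seeds)
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    score = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        # Each node keeps a share of its seed score and passes the rest
        # of its current authority evenly to its out-neighbors.
        nxt = {n: (1 - damping) * seeds.get(n, 0.0) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * score[n] / len(out[n])
                for m in out[n]:
                    nxt[m] += share
        score = nxt
    return score

edges = [("visit1", "dx:ASD"), ("visit1", "med:digoxin"), ("dx:ASD", "med:digoxin")]
seeds = {"visit1": 1.0}  # the keyword query matched this visit note
for entity, s in sorted(authority_flow(edges, seeds).items(), key=lambda kv: -kv[1]):
    print(entity, round(s, 3))
```

    In this toy graph the medication, reachable from both the matched visit and the linked diagnosis, accumulates more authority than the diagnosis alone, which is the behavior that lets associated entities surface even when they do not contain the query keywords themselves.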

    Integrating resources for translational research: a unifying approach in support of "learning" health systems

    Learning health systems (LHS) are gradually emerging and offer a complementary approach to translational research challenges by closely coupling health care delivery, research and knowledge translation. To support coherent knowledge sharing, the system needs to rely on an integrated and efficient data integration platform. The framework and its theoretical foundations presented here aim at addressing this challenge. Data integration approaches are analysed in light of the requirements derived from LHS activities, and data mediation emerges as the one most adapted for an LHS. The semantics of clinical data found in biomedical sources can only be fully derived by taking into account not only information from the structural models (field X of table Y) but also the terminological information (e.g. International Classification of Diseases, 10th revision) used to encode facts. The unified framework proposed here takes this interdependence into account. The platform has been implemented and tested in the context of TRANSFoRm, a European project funded by the European Commission that aims at developing an LHS covering clinical activities in primary care. The mediation model developed for the TRANSFoRm project, the Clinical Data Integration Model (CDIM), is presented and discussed, along with results from TRANSFoRm use cases. These illustrate how a unified data sharing platform can support and enhance prospective research activities in the context of an LHS. In the end, the unified mediation framework presented here provides sufficient expressiveness for the TRANSFoRm needs. It is flexible and modular, and the CDIM mediation model supports the requirements of a primary care LHS.

    Front-Line Physicians' Satisfaction with Information Systems in Hospitals

    Day-to-day operations management in hospital units is difficult due to continuously varying situations, the several actors involved and the vast number of information systems in use. The aim of this study was to describe front-line physicians' satisfaction with the existing information systems needed to support day-to-day operations management in hospitals. A cross-sectional survey was used, and data selected by stratified random sampling were collected in nine hospitals. Data were analyzed with descriptive and inferential statistical methods. The response rate was 65% (n = 111). The physicians reported that information systems support their decision making to some extent, but that the systems do not improve access to information, nor are they tailored for physicians. The respondents also reported that they need to use several information systems to support decision making and that they would prefer a single information system for accessing important information. Improved information access would better support physicians' decision making and has the potential to improve the quality of decisions and speed up the decision-making process.