
    Information architecture for a federated health record server

    This paper describes the information models that have been used to implement a federated health record server and to deploy it in a live clinical setting. The authors, working at the Centre for Health Informatics and Multiprofessional Education (University College London), have built up over a decade of experience within Europe on the requirements and information models needed to underpin comprehensive multi-professional electronic health records. This work has involved collaboration with a wide range of health care and informatics organisations and partners in the healthcare computing industry across Europe through the EU Health Telematics projects GEHR, Synapses, EHCR-SupA, SynEx and Medicate. The resulting architecture models have fed into recent European standardisation work in this area, such as CEN TC/251 ENV 13606. UCL has implemented a federated health record server based on these models, which is now running in the Department of Cardiovascular Medicine at the Whittington Hospital in North London. The information models described in this paper reflect a refinement based on this implementation experience.

    Personalised approaches to antithrombotic therapies: insights from linked electronic health records

    Antithrombotic drugs are increasingly used for the prevention of atherothrombotic events in cardiovascular diseases and represent a paradigm for the study of personalised medicine because of the need to balance potential benefits with the substantial risks of bleeding harms. To be effective, personalised medicine needs validated prognostic risk models, rich phenotypes, and patient monitoring over time. The opportunity to use linked electronic health records has potential advantages: rich longitudinal data spanning patients’ entire journey through the healthcare system, including primary care visits, clinical biomarkers, hospital admissions, hospital procedures and prescribed medication. Challenges include structuring the data into a research-ready format, accurately defining clinical endpoints, and handling missing data. The data used in this thesis were from the CALIBER platform: linked routinely collected electronic health records from general practices, hospital admissions, a myocardial infarction registry and a death registry for 2 million patients in England from 1997 to 2010. In this thesis I (1) developed comprehensive bleeding phenotypes in linked electronic health records, (2) assessed the incidence and prognosis of bleeding in atrial fibrillation and coronary disease patients in England, (3) developed and validated prognostic models for atherothrombotic and bleeding events in stable myocardial infarction survivors pertaining to the benefits and harms of prolonged dual antiplatelet therapy, (4) assessed the predictors and outcomes associated with time in therapeutic range for patients treated with oral anticoagulants, and (5) assessed the predictive value of novel measures of international normalised ratio control in patients treated with oral anticoagulants for atherothrombotic and bleeding outcomes. Taken together, these findings offer researchers scalable methodological approaches that may be applied to other diseases and treatments with crucial benefit and harm considerations, and demonstrate how records used in clinical practice may be harnessed to improve treatment decisions, monitoring and the overall care of a cardiovascular disease population treated with a class of drugs.
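The time-in-therapeutic-range measure discussed above is conventionally computed with the Rosendaal linear-interpolation method, which assigns an interpolated INR value to each day between successive tests and counts the fraction of days inside the target range. A minimal sketch in Python (the readings and target range are hypothetical examples, not CALIBER data):

```python
from datetime import date

def ttr_rosendaal(readings, low=2.0, high=3.0):
    """Time in therapeutic range by Rosendaal linear interpolation.

    readings: list of (date, INR) pairs in chronological order.
    Each day between consecutive tests gets a linearly interpolated
    INR; TTR is the fraction of days whose value lies in [low, high].
    """
    days_total = 0
    days_in_range = 0
    for (d0, inr0), (d1, inr1) in zip(readings, readings[1:]):
        span = (d1 - d0).days
        if span <= 0:
            continue  # skip same-day or out-of-order readings
        for step in range(span):
            inr = inr0 + (inr1 - inr0) * step / span  # interpolated INR for this day
            days_total += 1
            if low <= inr <= high:
                days_in_range += 1
    return days_in_range / days_total if days_total else 0.0

# Hypothetical warfarin monitoring record: three INR tests over 30 days.
readings = [(date(2009, 1, 1), 1.8), (date(2009, 1, 11), 2.6),
            (date(2009, 1, 31), 3.5)]
print(round(ttr_rosendaal(readings), 3))  # → 0.533
```

The interpolation means a patient is penalised for time spent drifting out of range between tests, not just for out-of-range test results themselves.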

    Electronic health records to facilitate clinical research

    Electronic health records (EHRs) provide opportunities to enhance patient care, embed performance measures in clinical practice, and facilitate clinical research. Concerns have been raised about increasing recruitment challenges in trials, burdensome and obtrusive data collection, and uncertain generalizability of the results. Leveraging electronic health records to counterbalance these trends is an area of intense interest. The initial applications of electronic health records as the primary data source are envisioned for observational studies, embedded pragmatic or post-marketing registry-based randomized studies, and comparative effectiveness studies. Advancing this approach to randomized clinical trials, electronic health records may potentially be used to assess study feasibility, facilitate patient recruitment, and streamline data collection at baseline and follow-up. Ensuring data security and privacy, linking diverse systems, and maintaining infrastructure for the repeated use of high-quality data are among the challenges associated with using electronic health records in clinical research. Collaboration between academia, industry, regulatory bodies, policy makers, patients, and electronic health record vendors is critical for the greater use of electronic health records in clinical research. This manuscript identifies the key steps required to advance the role of electronic health records in cardiovascular clinical research.

    Computer-Assisted versus Oral-and-Written History Taking for the Prevention and Management of Cardiovascular Disease: a Systematic Review of the Literature

    Background and objectives: CVD is an important global healthcare issue; it is the leading cause of global mortality, with an increasing incidence identified in both developed and developing countries. It is also an extremely costly disease for healthcare systems unless managed effectively. In this review we aimed to: (1) assess the effect of computer-assisted versus oral-and-written history taking on the quality of collected information for the prevention and management of CVD; and (2) assess the effect of computer-assisted versus oral-and-written history taking on the prevention and management of CVD. Methods: We included randomised controlled trials with participants aged 16 years or older at the beginning of the study who were at risk of CVD (prevention) or had previously been diagnosed with CVD (management). We searched all major databases. We assessed risk of bias using the Cochrane Collaboration tool. Results: We identified two studies. One compared the two methods of history-taking for the prevention of cardiovascular disease (n = 75). It showed that patients in the experimental group generally underwent more laboratory procedures, had more biomarker readings recorded, and/or were given (or had reviewed) more dietary changes than the control group. The other study compared the two methods of history-taking for the management of cardiovascular disease (n = 479). It showed that the computerized decision aid appeared to increase the proportion of patients who responded to invitations to discuss CVD prevention with their doctor. The Computer-Assisted History Taking Systems (CAHTS) increased the proportion of patients who discussed CHD risk reduction with their doctor from 24% to 40% and increased the proportion who had a specific plan to reduce their risk from 24% to 37%. Discussion: With only one study meeting the inclusion criteria for the prevention of CVD and one for the management of CVD, we did not gather sufficient evidence to address all of the objectives of the review, and we were unable to report on most of the secondary patient outcomes in our protocol. Conclusions: We tentatively conclude that CAHTS can provide individually tailored information about CVD prevention. However, further primary studies are needed to confirm these findings. We cannot draw any conclusions in relation to any other clinical outcomes at this stage. There is a need to develop an evidence base to support the effective development and use of CAHTS in this area of practice. In the absence of evidence on effectiveness, the implementation of computer-assisted history taking may rely only on clinicians’ tacit knowledge, published monographs and viewpoint articles.

    Evaluating openEHR for storing computable representations of electronic health record phenotyping algorithms

    Electronic Health Records (EHR) are data generated during routine clinical care. EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the pace of precision medicine at scale. A main EHR use-case is creating phenotyping algorithms to define disease status, onset and severity. Currently, no common machine-readable standard exists for defining phenotyping algorithms, which are often stored in human-readable formats. As a result, the translation of algorithms to implementation code is challenging, and sharing across the scientific community is problematic. In this paper, we evaluate openEHR, a formal EHR data specification, for computable representations of EHR phenotyping algorithms. Comment: 30th IEEE International Symposium on Computer-Based Medical Systems (IEEE CBMS 2017).
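To illustrate why a machine-readable phenotype definition is easier to execute and share than a free-text one, the sketch below encodes a simple rule-based phenotype as plain data plus a small evaluation function. This is an illustrative stand-in only, not the openEHR representation evaluated in the paper; the code list and structure are assumptions made for the example:

```python
# A hypothetical machine-readable phenotype: a diagnosis-code list plus
# minimal logic. Because the definition is data, it can be executed
# directly and shared without re-implementation. (Not the openEHR model.)
PHENOTYPE = {
    "name": "myocardial_infarction",
    "any_of": {"I21", "I22"},   # illustrative ICD-10 codes for acute MI
    "min_occurrences": 1,
}

def matches(patient_codes, phenotype):
    """Return True if the patient's diagnosis codes satisfy the phenotype."""
    hits = sum(1 for code in patient_codes if code in phenotype["any_of"])
    return hits >= phenotype["min_occurrences"]

print(matches(["I10", "I21"], PHENOTYPE))  # → True
print(matches(["I10"], PHENOTYPE))         # → False
```

A human-readable description of the same rule ("at least one MI diagnosis code") would need to be re-translated into code at every site, which is exactly the translation problem the abstract describes.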

    Development and validation of the DIabetes Severity SCOre (DISSCO) in 139 626 individuals with type 2 diabetes: a retrospective cohort study

    OBJECTIVE: Clinically applicable diabetes severity measures are lacking, with no previous studies comparing their predictive value with glycated hemoglobin (HbA1c). We developed and validated a type 2 diabetes severity score (the DIabetes Severity SCOre, DISSCO) and evaluated its association with risks of hospitalization and mortality, assessing the risk information it adds to sociodemographic factors and HbA1c. RESEARCH DESIGN AND METHODS: We used UK primary and secondary care data for 139 626 individuals with type 2 diabetes between 2007 and 2017, aged ≥35 years and registered in general practices in England. The study cohort was randomly divided into a training cohort (n=111 748, 80%) to develop the severity tool and a validation cohort (n=27 878). We developed baseline and longitudinal severity scores using 34 diabetes-related domains. Cox regression models (adjusted for age, gender, ethnicity, deprivation, and HbA1c) were used for the primary (all-cause mortality) and secondary (hospitalization due to any cause, diabetes, hypoglycemia, or cardiovascular disease or procedures) outcomes. Likelihood ratio (LR) tests were used to assess the significance of adding DISSCO to the sociodemographics and HbA1c models. RESULTS: A total of 139 626 patients registered in 400 general practices, aged 63±12 years, were included; 45% were women, 83% were White, and 18% were from deprived areas. The mean baseline severity score was 1.3±2.0. Overall, 27 362 (20%) people died and 99 951 (72%) had ≥1 hospitalization. In the training cohort, a one-unit increase in baseline DISSCO was associated with a higher hazard of mortality (HR: 1.14, 95% CI 1.13 to 1.15, area under the receiver operating characteristic curve (AUROC)=0.76) and of cardiovascular hospitalization (HR: 1.45, 95% CI 1.43 to 1.46, AUROC=0.73). The LR tests showed that adding DISSCO to sociodemographic variables significantly improved the predictive value of the survival models, outperforming the added value of HbA1c for all outcomes. Findings were consistent in the validation cohort. CONCLUSIONS: Higher levels of DISSCO are associated with higher risks of hospital admission and mortality. The new severity score had higher predictive value than the proxy used in clinical practice, HbA1c. This reproducible algorithm can help practitioners stratify the clinical care of patients with type 2 diabetes.
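Under the proportional-hazards assumption of the Cox models above, a per-unit hazard ratio compounds multiplicatively across score differences, so the relative hazard between two patients is the hazard ratio raised to the power of their score difference. A small sketch using the all-cause mortality estimate reported in the abstract (HR 1.14 per unit of DISSCO):

```python
# Per-unit hazard ratio for all-cause mortality, as reported above.
HR_PER_UNIT = 1.14

def relative_hazard(score_diff, hr_per_unit=HR_PER_UNIT):
    """Relative hazard between two patients whose DISSCO scores differ
    by score_diff, under the Cox proportional-hazards assumption:
    the per-unit HR raised to the power of the score difference."""
    return hr_per_unit ** score_diff

# A patient scoring 5 units above another has about 1.93x the hazard.
print(round(relative_hazard(5), 2))  # → 1.93
```

This multiplicative behaviour is why even a modest per-unit hazard ratio translates into a large risk gradient across the range of a severity score.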

    Characterization of patients with idiopathic normal pressure hydrocephalus using natural language processing within an electronic healthcare record system

    OBJECTIVE: Idiopathic normal pressure hydrocephalus (iNPH) is an underdiagnosed, progressive, and disabling condition. Early treatment is associated with better outcomes and improved quality of life. In this paper, the authors aimed to identify features associated with patients with iNPH using natural language processing (NLP) to characterize this cohort, with the intention of later targeting the development of artificial intelligence–driven tools for early detection. / METHODS: The electronic health records of patients with shunt-responsive iNPH were retrospectively reviewed using an NLP algorithm. Participants were selected from a prospectively maintained single-center database of patients undergoing CSF diversion for probable iNPH (March 2008–July 2020). Analysis was conducted on preoperative health records, including clinic letters, referrals, and radiology reports accessed through CogStack. Clinical features were extracted from these records as SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) concepts using a named entity recognition machine learning model. In the first phase, a base model was generated using unsupervised training on 1 million electronic health records and supervised training with 500 double-annotated documents. The model was then fine-tuned to improve accuracy using 300 records from patients with iNPH, double-annotated by two blinded assessors. Thematic analysis of the concepts identified by the machine learning algorithm was performed, and the frequency and timing of terms were analyzed to describe this patient group. / RESULTS: In total, 293 eligible patients responsive to CSF diversion were identified. The median age at CSF diversion was 75 years, with a male predominance (69% male). The algorithm performed with a high degree of precision and recall (F1 score 0.92). Thematic analysis revealed that the most frequently documented symptoms related to mobility, cognitive impairment, and falls or balance. The most frequent comorbidities were related to cardiovascular and hematological problems. / CONCLUSIONS: This model demonstrates accurate, automated recognition of iNPH features from medical records. Opportunities for translation include detecting patients with undiagnosed iNPH from primary care records, with the aim of ultimately improving outcomes for these patients through artificial intelligence–driven early detection of iNPH and prompt treatment.
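The F1 score reported for the named entity recognition model is the harmonic mean of precision and recall over the extracted concepts. A minimal sketch with hypothetical true-positive, false-positive, and false-negative counts chosen to reproduce roughly the reported value:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp / (tp + fp))
    and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical annotation-vs-model counts yielding roughly F1 = 0.92.
print(round(f1_score(tp=92, fp=8, fn=8), 2))  # → 0.92
```

Because it is a harmonic mean, F1 is pulled toward the weaker of the two components, so a score of 0.92 implies both precision and recall were high, not just one of them.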