
    Establishment of Requirements and Methodology for the Development and Implementation of GreyMatters, a Memory Clinic Information System

    INTRODUCTION: The aim of this paper is to establish the requirements and methodology for the development of GreyMatters, a memory clinic information system, outlining the conceptual, practical, technical and ethical challenges, the experience of capturing clinical and research oriented data, and the implementation of the system. METHODS: The development methodology involved phases of requirements gathering, modelling and prototype creation, and 'bench testing' the prototype with experts. The standard Institute of Electrical and Electronics Engineers (IEEE) recommended approach for software requirements specifications was adopted. The EN13606 electronic health record (EHR) standard was used, clinical modelling was done through archetypes, and the project complied with data protection and privacy legislation. RESULTS: The requirements for GreyMatters were established. Though the initial development was complex, the requirements, methodology and standards adopted made the construction, deployment, adoption and population of a memory clinic and research database feasible. The electronic patient data, including the assessment scales, provide a rich source of objective data for audits and research, to establish study feasibility and to identify potential participants for clinical trials. CONCLUSION: The establishment of requirements and methodology, addressing issues of data security and confidentiality, future data compatibility and interoperability, and medico-legal aspects such as access controls and audit trails, led to a robust and useful system. The evaluation supports that the system is an acceptable tool for clinical, administrative and research use and forms a useful part of the wider information architecture.

    Model-driven approach to data collection and reporting for quality improvement

    Continuous data collection and analysis have been shown to be essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems, or are not routinely collected. Furthermore, improvement teams are often restricted in time and funding, and thus require inexpensive and rapid tools to support their work. Hence, the informatics challenge in local healthcare improvement initiatives is to provide a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement data model designed by a team is used to automatically generate the required electronic data collection instruments and reporting tools. To that end, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to a project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports based on Statistical Process Control (SPC) for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, a Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed, and the benefits and limitations of the approach are discussed.
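    The live reports described above are based on Statistical Process Control. As a rough illustration of the underlying arithmetic (not the WISH implementation), an individuals (XmR) chart places control limits at the mean plus or minus 2.66 times the mean moving range; points outside those limits signal special-cause variation. The data below are invented for the example.

```python
# Minimal sketch of an SPC individuals (XmR) chart calculation, the kind of
# live report an improvement team would review. Illustration only; the 2.66
# constant is the standard XmR chart factor.

def xmr_limits(values):
    """Return centre line, upper and lower control limits for an XmR chart."""
    mean = sum(values) / len(values)
    # Moving range: absolute difference between consecutive observations.
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean, mean + 2.66 * mr_bar, mean - 2.66 * mr_bar

# Hypothetical weekly counts of a process measure collected by a team.
weekly = [12, 15, 11, 14, 13, 16, 12, 35, 14, 13]
centre, ucl, lcl = xmr_limits(weekly)
out_of_control = [v for v in weekly if v > ucl or v < lcl]  # special-cause signals
```

    Plotting each point against the fixed limits, rather than eyeballing averages, is what lets non-statisticians distinguish routine variation from a genuine change in the process.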

    Using interpretable machine learning to predict bloodstream infection and antimicrobial resistance in patients admitted to ICU: Early alert predictors based on EHR data to guide antimicrobial stewardship

    Nosocomial infections and Antimicrobial Resistance (AMR) stand as formidable healthcare challenges on a global scale. To address these issues, various infection control protocols and personalized treatment strategies, guided by laboratory tests, aim to detect bloodstream infections (BSI) and assess the potential for AMR. In this study, we introduce a machine learning (ML) approach based on Multi-Objective Symbolic Regression (MOSR), an evolutionary method that creates ML models in the form of readable mathematical equations and optimizes several objectives at once, overcoming the limitations of standard single-objective approaches. The method leverages clinical data readily available upon admission to intensive care units, with the goal of predicting the presence of BSI and AMR. We further assess its performance by comparing it to established ML algorithms, using both naturally imbalanced real-world data and data balanced through oversampling techniques. Our findings reveal that traditional ML models exhibit subpar performance across all training scenarios. In contrast, MOSR, specifically configured to minimize false negatives by also optimizing for the F1-Score, outperforms the other ML algorithms and consistently delivers reliable results irrespective of the training set balance, with F1-Scores 0.22 and 0.28 higher than any alternative. This research signifies a promising path forward in enhancing Antimicrobial Stewardship (AMS) strategies. Notably, the MOSR approach can be readily implemented on a large scale, offering a new ML tool for critical healthcare issues affected by limited data availability.
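    The choice of F1-Score as an objective matters for the imbalanced cohorts described above. A brief sketch of the metric itself (not the paper's MOSR implementation, and with invented counts) shows why: F1 is the harmonic mean of precision and recall, so it falls sharply as false negatives (missed infections) grow, even when overall accuracy looks similar.

```python
# Illustrative F1-score arithmetic on confusion-matrix counts. On an
# imbalanced cohort, accuracy can hide missed positives, while F1 cannot.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical classifiers on a cohort with 10 positives:
good = f1_score(tp=8, fp=4, fn=2)   # few missed positives
bad = f1_score(tp=3, fp=1, fn=7)    # many missed positives
```

    A model configured to maximize F1 is therefore pushed towards recovering the rare positive class, which is the clinically costly one to miss in a stewardship setting.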

    Tonsillectomy among children with low baseline acute throat infection consultation rates in UK general practices: a cohort study.

    OBJECTIVE: To investigate the effectiveness of tonsillectomy in reducing acute throat infection (ATI) consultation rates over 6 years' follow-up among children with low baseline ATI consultation rates. DESIGN: Retrospective cohort study. SETTING: UK general practices from the Clinical Practice Research Datalink. PARTICIPANTS: Children aged 4-15 years with ≤3 ATI consultations during the 3 years prior to 2001 (baseline): 450 children who underwent tonsillectomy (tonsillectomy group) and 13 442 other children with an ATI consultation (comparison group) in 2001. MAIN OUTCOME MEASURES: Mean differences in ATI consultation rates over the first 3 years' and subsequent 3 years' follow-up compared with the 3 years prior to 2001 (baseline); odds of ≥3 ATI consultations at the same time points. RESULTS: Among children in the tonsillectomy group, the 3-year mean ATI consultation rate decreased from 1.31 at baseline to 0.66 over the first 3 years' follow-up and declined further to 0.60 over the subsequent 3 years. Compared with children who had no operation, those who underwent tonsillectomy experienced a reduction from baseline in 3-year mean ATI consultations per child of 2.5 (95% CI 2.3 to 2.6, p<0.001) over the first 3 years' follow-up, but only 1.2 (95% CI 1.0 to 1.4, p<0.001) over the subsequent 3 years. This equates to a mean reduction of 3.7 ATI consultations over the 6-year period, approximately 0.6 ATI consultations per child per year. Children who underwent tonsillectomy were also much less likely to experience ≥3 ATI consultations during the first 3 years' follow-up (adjusted OR=0.12, 95% CI 0.08 to 0.17) and the subsequent 3 years' follow-up (adjusted OR=0.24, 95% CI 0.14 to 0.41). CONCLUSIONS: Among children with low baseline ATI rates, tonsillectomy was associated with a statistically significant reduction in ATI consultation rates over 6 years' follow-up. However, this relatively modest clinical benefit needs to be weighed against the potential risks and complications associated with surgery.
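    The odds ratios above are adjusted estimates from the study's regression models. As a sketch of the basic quantity involved (with an entirely hypothetical 2×2 table, not the study's data), an unadjusted odds ratio and its Woolf (log-scale) 95% confidence interval can be computed directly from cell counts:

```python
import math

# Hypothetical 2x2 table (NOT the study's data): rows = tonsillectomy yes/no,
# columns = >=3 ATI consultations yes/no. Shows only the unadjusted OR
# arithmetic; the paper's ORs are adjusted for confounders in a regression.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for the table [[a, b], [c, d]] via the Woolf method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=20, b=430, c=3700, d=9742)
```

    The interval is built on the log scale because log(OR) is approximately normally distributed; exponentiating returns the bounds to the odds-ratio scale.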

    TRANSFoRm eHealth solution for quality of life monitoring.

    Patient Reported Outcome Measures (PROMs) are an essential part of quality of life monitoring, clinical trials, improvement studies and other medical tasks. Recently, web and mobile technologies have been explored as means of improving response rates and the quality of data collected. Despite the potential benefit of this approach, there are currently no widely accepted standards for developing or implementing PROMs in Comparative Effectiveness Research (CER). Within the European Union project TRANSFoRm (Translational Research and Patient Safety in Europe), an eHealth solution for quality of life monitoring has been developed and validated. This paper presents the overall architecture of the system as well as a detailed description of the mobile and web applications.

    Real-world effectiveness of steroids in severe COVID-19: a retrospective cohort study

    Introduction: Randomised controlled trials have shown that steroids reduce the risk of dying in patients with severe Coronavirus disease 2019 (COVID-19), whilst many real-world studies have failed to replicate this result. We aimed to investigate the real-world effectiveness of steroids in severe COVID-19. Methods: Clinical, demographic and viral genome data extracted from electronic patient records (EPR) were analysed for all SARS-CoV-2 RNA positive patients admitted with severe COVID-19, defined by hypoxia at presentation, between March 13th 2020 and May 27th 2021. Steroid treatment was measured by the number of prescription-days of dexamethasone, hydrocortisone, prednisolone or methylprednisolone. The association between more than 3 days of steroid treatment and disease outcome was explored using multivariable Cox proportional hazards models with adjustment for confounders (including age, gender, ethnicity, co-morbidities and SARS-CoV-2 variant). The outcome was in-hospital mortality. Results: 1100 severe COVID-19 cases were identified, with a crude hospital mortality of 15.3%. 793/1100 (72.1%) individuals were treated with steroids and 513/1100 (46.6%) received steroids for ≤3 days. In the multivariable model, steroid treatment for more than 3 days was associated with a decreased hazard of in-hospital mortality (HR 0.47, 95% CI 0.31–0.72). Conclusion: The protective effect of steroid treatment for severe COVID-19 reported in randomised clinical trials was replicated in this retrospective study of a large real-world cohort.
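    For readers less familiar with Cox models, the reported hazard ratio is simply the exponential of a fitted regression coefficient, and its confidence interval is symmetric on the log scale. The sketch below illustrates that arithmetic using only the summary figures quoted in the abstract (HR 0.47, 95% CI 0.31–0.72); it is not a re-analysis of the study data.

```python
import math

# How a Cox proportional hazards coefficient maps to a hazard ratio and its
# 95% CI. Illustrative arithmetic only; the study fit a multivariable model
# to patient-level data.

def hr_with_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox coefficient and its standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Back out the implied coefficient and SE from the reported interval, using
# the fact that the CI is beta +/- 1.96 * SE on the log scale:
beta = math.log(0.47)                                  # fitted log hazard ratio
se = (math.log(0.72) - math.log(0.31)) / (2 * 1.96)    # implied standard error
hr, lo, hi = hr_with_ci(beta, se)
```

    A hazard ratio below 1 (here, with the whole interval below 1) indicates a lower instantaneous risk of in-hospital death for the treated group, after the listed adjustments.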

    Requirements and validation of a prototype learning health system for clinical diagnosis

    Introduction: Diagnostic error is a major threat to patient safety in the context of family practice, with severe implications for both patient and clinician. Traditional approaches to diagnostic decision support have lacked broad acceptance for a number of well-documented reasons: poor integration with electronic health records and clinician workflow, static evidence that lacks transparency and trust, and use of proprietary technical standards that hinder wider interoperability. The learning health system (LHS) provides a suitable infrastructure for the development of a new breed of learning decision support tools that exploit the growing volumes of aggregated electronic health record data. Methods: We describe the experiences of the TRANSFoRm project in developing a diagnostic decision support infrastructure consistent with the wider goals of the LHS. We describe an architecture that is model driven, service oriented, constructed using open standards, and supports evidence derived from electronic sources of patient data. We describe the architecture and implementation of two critical aspects of a successful LHS: the representation and translation of clinical evidence into effective practice, and the generation of curated clinical evidence that can be used to populate those models, thus closing the LHS loop. Results/Conclusions: Six core design requirements for implementing a diagnostic LHS are identified and successfully implemented as part of this research. A number of significant technical and policy challenges are identified for the LHS community to consider and are discussed in the context of evaluating this work: medico-legal responsibility for generated diagnostic evidence, developing trust in the LHS (particularly important from the perspective of decision support), and constraints imposed by clinical terminologies on evidence generation.

    Desiderata for the development of next-generation electronic health record phenotype libraries

    Background: High-quality phenotype definitions are desirable to enable the extraction of patient cohorts from large electronic health record repositories, and are characterized by properties such as portability, reproducibility, and validity. Phenotype libraries, where definitions are stored, have the potential to contribute significantly to the quality of the definitions they host. In this work, we present a set of desiderata for the design of a next-generation phenotype library that is able to ensure the quality of hosted definitions by combining the functionality currently offered by disparate tooling. Methods: A group of researchers examined work to date on phenotype models, implementation, and validation, as well as contemporary phenotype libraries developed as a part of their own phenomics communities. Existing phenotype frameworks were also examined. This work was translated and refined by all the authors into a set of best practices. Results: We present 14 library desiderata that promote high-quality phenotype definitions, in the areas of modelling, logging, validation, and sharing and warehousing. Conclusions: There are a number of choices to be made when constructing phenotype libraries. Our considerations distil the best practices in the field and include pointers towards their further development to support portable, reproducible, and clinically valid phenotype design. The provision of high-quality phenotype definitions enables electronic health record data to be more effectively used in medical domains.


    A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    Objective: Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Although structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate whether the LexEVS terminology server is a viable implementation method. Materials and methods: We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with the relevant mapping sets. This allows uniform access to these resources within LexEVS, facilitating interoperability for the various components and data sources of implementing architectures. Results: Our unified framework has been tested in the context of the EU Framework Programme 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and the necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions: We present a novel, unifying approach to addressing interoperability challenges in heterogeneous data sources by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM, which abstracts over both structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data source personnel to implement and manage the integration.
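    The core idea of abstracting over both structure and coding can be illustrated with a toy mapping lookup. Everything below (source names, codes, concept identifiers) is invented for the example; in the real system the models and mapping sets live in LexEVS behind an HL7 CTS2-style service interface.

```python
# Toy sketch of the abstraction the framework provides: callers query one
# interface, and mapping sets resolve source-specific local codes to shared
# concepts. All codes and mappings here are hypothetical.

MAPPINGS = {
    ("source_a", "DM2"): "concept:diabetes_mellitus_type_2",
    ("source_b", "E11"): "concept:diabetes_mellitus_type_2",
}

def resolve(source, local_code):
    """Map a source-specific code to the shared concept, if a mapping exists."""
    return MAPPINGS.get((source, local_code))

# Two heterogeneous sources resolve to the same concept, so one cohort query
# can span both without knowing either source's local coding scheme.
same = resolve("source_a", "DM2") == resolve("source_b", "E11")
```

    Centralizing these mappings alongside the information models is what lets consuming tools depend only on the unified model rather than on each source's idiosyncrasies.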