11,105 research outputs found

    Design and implementation of a standardized framework to generate and evaluate patient-level prediction models using observational healthcare data

    Objective: To develop a conceptual prediction model framework containing standardized steps, and to describe the corresponding open-source software developed to consistently implement the framework across computational environments and observational healthcare databases, enabling model sharing and reproducibility. Methods: Based on existing best practices, we propose a five-step standardized framework for: (1) transparently defining the problem; (2) selecting suitable datasets; (3) constructing variables from the observational data; (4) learning the predictive model; and (5) validating the model performance. We implemented this framework as open-source software utilizing the Observational Medical Outcomes Partnership Common Data Model to enable convenient sharing of models and reproduction of model evaluation across multiple observational datasets. The software implementation contains default covariates and classifiers, but the framework enables customization and extension. Results: As a proof of concept demonstrating the transparency and ease of model dissemination using the software, we developed prediction models for 21 different outcomes within a target population of people suffering from depression across 4 observational databases. All 84 models are available in an accessible online repository to be implemented by anyone with access to an observational database in the Common Data Model format. Conclusions: The proof-of-concept study illustrates the framework's ability to develop reproducible models that can be readily shared, offering the potential to perform extensive external validation of models and improve their likelihood of clinical uptake. In future work the framework will be applied to perform an "all-by-all" prediction analysis to assess the observational data prediction domain across numerous target populations, outcomes, and time-at-risk settings.
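    The five steps above can be sketched as a minimal pipeline. This is an illustrative sketch only: the function, class, and field names below are invented for the example and are not the actual OHDSI package API, and a trivial majority-class baseline stands in for a real learner.

```python
from dataclasses import dataclass

@dataclass
class PredictionProblem:
    """Step 1: transparently define the problem."""
    target: str             # e.g. a target population description
    outcome: str            # e.g. an outcome of interest
    time_at_risk_days: int  # prediction window

def select_datasets(candidates):
    """Step 2: keep only datasets in the Common Data Model format."""
    return [db for db in candidates if db["cdm"]]

def construct_covariates(records):
    """Step 3: build predictor variables from raw observational records."""
    return [{"age": r["age"], "n_visits": len(r["visits"])} for r in records]

def learn_model(covariates, labels):
    """Step 4: fit a classifier (a majority-class baseline stands in here)."""
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def validate(model, covariates, labels):
    """Step 5: estimate predictive performance (here, plain accuracy)."""
    hits = sum(model(x) == y for x, y in zip(covariates, labels))
    return hits / len(labels)
```

    In the real framework each step is customizable (covariate definitions, classifiers, evaluation metrics); the sketch only shows how the five stages chain together.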

    A model not a prophet: Operationalising patient-level prediction using observational data networks

    Improving prediction model development and evaluation processes using observational health data

    Predicting the Risk of Falling with Artificial Intelligence

    Background: Fall prevention is a major patient safety concern for all healthcare organizations. The high prevalence of patient falls has grave consequences, including higher cost of care, longer hospital stays, unintentional injuries, and decreased patient and staff satisfaction. Preventing a patient from falling is critical to maintaining the patient's quality of life and averting high healthcare expenses. Local Problem: A two-hospital healthcare system saw a significant increase in inpatient falls. The fall rate is one of the nursing quality indicators, and fall reduction is a key performance indicator of high-quality patient care. Methods: This evidence-based quality improvement observational project compared the rate of falls (ROF) between the experimental and control units. Pearson's chi-square and Fisher's exact tests were used to analyze and compare results. Qualtrics surveys evaluated the nurses' perception of AI, and results were analyzed using the Mann-Whitney rank sum test. Intervention: Implementing an artificial intelligence-assisted fall predictive analytics model that can timely and accurately predict fall risk can mitigate the increase in inpatient falls. Results: The pilot unit showed a significant reduction (Pearson's chi-square, p < 0.001). Conclusions: AI-assisted automatic fall predictive risk assessment produced a significant reduction in the number of falls, the ROF, and the use of fall countermeasures. Further, nurses' perception of AI improved after the introduction and presentation of FPAT.
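    The unit comparison described in the methods can be illustrated with a hand-rolled Pearson chi-square for a 2x2 contingency table. The counts below are made up for the sketch and are not the project's data; the critical value 3.841 is the chi-square threshold for df = 1 at alpha = 0.05.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. falls vs. no-falls on two units."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# illustrative (made-up) counts: 5 falls in 500 observations on the pilot
# unit vs. 20 falls in 500 on the control unit
stat = chi_square_2x2(5, 495, 20, 480)
significant = stat > 3.841  # chi-square critical value, df=1, alpha=0.05
```

    With small expected cell counts (below roughly 5), Fisher's exact test is the appropriate substitute, which is why the project reports both.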

    A standardized analytics pipeline for reliable and rapid development and validation of prediction models using observational health data

    Background and objective: As a response to the ongoing COVID-19 pandemic, several prediction models in the existing literature were rapidly developed, with the aim of providing evidence-based guidance. However, none of these COVID-19 prediction models have been found to be reliable. Models are commonly assessed to have a risk of bias, often due to insufficient reporting, use of non-representative data, and lack of large-scale external validation. In this paper, we present the Observational Health Data Sciences and Informatics (OHDSI) analytics pipeline for patient-level prediction modeling as a standardized approach for rapid yet reliable development and validation of prediction models. We demonstrate how our analytics pipeline and open-source software tools can be used to answer important prediction questions while limiting potential causes of bias (e.g., by validating phenotypes, specifying the target population, performing large-scale external validation, and publicly providing all analytical source code). Methods: We show step-by-step how to implement the analytics pipeline for the question: ‘In patients hospitalized with COVID-19, what is the risk of death 0 to 30 days after hospitalization?’. We develop models using six different machine learning methods in a USA claims database containing over 20,000 COVID-19 hospitalizations and externally validate the models using data containing over 45,000 COVID-19 hospitalizations from South Korea, Spain, and the USA. Results: Our open-source software tools enabled us to efficiently go end-to-end from problem design to reliable model development and evaluation. When predicting death in patients hospitalized with COVID-19, AdaBoost, random forest, gradient boosting machine, and decision tree yielded similar or lower internal and external validation discrimination performance compared to L1-regularized logistic regression, whereas the MLP neural network consistently resulted in lower discrimination. L1-regularized logistic regression models were well calibrated. Conclusion: Our results show that following the OHDSI analytics pipeline for patient-level prediction modeling can enable the rapid development of reliable prediction models. The OHDSI software tools and pipeline are open source and available to researchers from around the world.
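    Discrimination comparisons like the one above reduce to computing AUC on internal and external validation data. A minimal pure-Python sketch of AUC as concordance, with made-up risk scores (not the study's results), shows how external performance can degrade relative to internal performance:

```python
def auc(scores, labels):
    """AUC as concordance: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# made-up risk scores: internal validation vs. an external database
internal = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])  # perfect separation
external = auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0])  # discrimination degrades
```

    Reporting both numbers, as the pipeline mandates, is what exposes models that only look good on the data they were developed on.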

    Modeling the workflow of one primary care physician-nurse team.

    Primary care has been identified as a vital part of the healthcare system in the U.S., and one that operates in a challenging, unique environment. Primary care sees a wide variety of patients and is undergoing a series of major transformations simultaneously. As a result, primary care would greatly benefit from a systematic approach to the analysis of its workflows. Discrete-event simulation (DES) has been identified as a good tool to evaluate complex healthcare systems. The existing primary care DES models focus on the physician. Those models are also limited in (a) their usefulness for producing generic models that can easily and quickly be customized and (b) the analysis of the specific tasks performed to treat a patient. Hence, a research idea was developed to address these limitations, which led to a progressive multi-part study developing the necessary components to model a primary care clinic, with each part building on the previous one. The first part of the study developed a new approach to address those limitations: modeling a primary care clinic from the viewpoint that the physician is the entity that moves through the system. This approach was implemented based on observational data and a standardized primary care physician task list using ARENA© simulation software. The completed model is evidence-based, with the simulation producing predictions and analysis for a given patient visit that has not yet happened by mimicking reality. The benefit of this type of flexible model is that it allows for analysis of any type of "cost" that can be quantified, and it can then be utilized for predicting, and potentially subsequently reducing, procedural errors and variation in order to increase operational efficiency. The second part of the study developed a standardized primary care nurse task list, which is needed given the current transformation of primary care from a doctor-based model to a team-based model. A comprehensive, validated list of tasks occurring during clinic visits was compiled from a secondary data analysis. For this, primary care clinics in Wisconsin were selected from a pre-existing study based on 100% participation of the physician-nurse teams. The final task list had 18 major tasks and 174 second-level subtasks, with 103 additional third-level tasks. This task list, combined with the primary care physician task list, provides a tool set that facilitates clinics' analysis of the workflow associated with a complete patient encounter. Finally, the third part of the study used observational data, the standardized primary care nurse task list, and a modeling methodology similar to the first part's to develop a simulation model of the primary care nurse. The model was implemented using ARENA© simulation software. This model is flexible, resulting in an easily customizable model, and robust in that it allows the analysis of any type of "cost" that can be quantified, such as time, physical or mental resources, money, et cetera. This can potentially be used to predict, and reduce, procedural errors and variation in response to changes to the workflows or environment; hence, operational efficiency and medical accuracy can be more accurately evaluated.
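    The modeling viewpoint described above, in which the clinician rather than the patient is the entity moving through the system, can be sketched as a tiny event-queue simulation. The task names and durations below are illustrative placeholders, not the study's validated task lists, and the study itself used ARENA© rather than hand-written code.

```python
import heapq

# Illustrative tasks and durations in minutes; the validated task lists
# described above are far more detailed.
TASKS = [("review chart", 2.0), ("examine patient", 8.0)]

def simulate(n_visits, tasks):
    """Discrete-event sketch where the physician is the moving entity:
    completing one task schedules the next, and a single physician
    serializes work across concurrent visits."""
    queue = [(0.0, v, 0) for v in range(n_visits)]  # (ready time, visit, task idx)
    heapq.heapify(queue)
    busy_until, log = 0.0, []
    while queue:
        ready, visit, i = heapq.heappop(queue)
        start = max(ready, busy_until)      # wait until the physician is free
        name, minutes = tasks[i]
        busy_until = start + minutes
        log.append((visit, name, start))    # any quantifiable "cost" can be logged here
        if i + 1 < len(tasks):
            heapq.heappush(queue, (busy_until, visit, i + 1))
    return busy_until, log
```

    Because every task execution is logged, the same structure can accumulate any quantifiable "cost" (time, money, cognitive load) per task, which is the flexibility the study emphasizes.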

    Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk

    Guidelines for the management of atherosclerotic cardiovascular disease (ASCVD) recommend the use of risk stratification models to identify patients most likely to benefit from cholesterol-lowering and other therapies. These models have differential performance across race and gender groups with inconsistent behavior across studies, potentially resulting in an inequitable distribution of beneficial therapy. In this work, we leverage adversarial learning and a large observational cohort extracted from electronic health records (EHRs) to develop a "fair" ASCVD risk prediction model with reduced variability in error rates across groups. We empirically demonstrate that our approach is capable of aligning the distribution of risk predictions conditioned on the outcome across several groups simultaneously for models built from high-dimensional EHR data. We also discuss the relevance of these results in the context of the empirical trade-off between fairness and model performance.
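    The "reduced variability in error rates across groups" target can be made concrete by measuring per-group false-positive and false-negative rates, the quantities an equalized-odds style objective tries to align. The sketch below only shows the measurement, not the adversarial training itself, and uses toy data invented for the example.

```python
def error_rates_by_group(preds, labels, groups):
    """Per-group (FPR, FNR); a fairness objective in the equalized-odds
    family aims to shrink the gap in these rates between groups."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        fp = sum(1 for i in idx if preds[i] == 1 and labels[i] == 0)
        fn = sum(1 for i in idx if preds[i] == 0 and labels[i] == 1)
        neg = sum(1 for i in idx if labels[i] == 0)
        pos = sum(1 for i in idx if labels[i] == 1)
        rates[g] = (fp / neg if neg else 0.0, fn / pos if pos else 0.0)
    return rates
```

    In an adversarial setup, a second network tries to predict group membership from the model's risk scores conditioned on the outcome; penalizing its success pushes these per-group rates together.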
