Evaluating Primary Care Physician Performance in Diabetes Glucose Control
This study demonstrates that it is possible to identify primary care physicians (PCPs) who perform better or worse than expected in managing diabetes. Study subjects were 14,033 adult diabetic patients and their 133 PCPs. Logistic regression was used to predict the odds that a patient would have uncontrolled diabetes (defined as HbA1c ≥8%) based on patient-level characteristics alone. A second model predicted diabetes control from physician-level identity and characteristics alone. A third model combined the patient- and physician-level models using hierarchical logistic regression. Physician performance is calculated from the difference between the expected and observed proportions of patients with uncontrolled diabetes. After adjusting for important patient characteristics, PCPs were identified who performed better or worse than expected in managing diabetes. This strategy can be used to characterize physician performance in other chronic conditions. This approach may lead to new insights regarding effective and ineffective treatment strategies.
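The abstract describes comparing each physician's observed proportion of uncontrolled patients against the proportion expected from patient-level risk. As a hypothetical sketch (the paper's actual model is a hierarchical logistic regression, not shown here), the observed-minus-expected step can be illustrated by aggregating patient-level predicted probabilities per physician; the record layout and function name are assumptions:

```python
# Hypothetical sketch of the observed-minus-expected (O-E) step.
# Assumes each patient already has a predicted probability of
# uncontrolled diabetes (HbA1c >= 8%) from a patient-level model.

def physician_performance(records):
    """records: iterable of (pcp_id, predicted_prob, uncontrolled_flag).

    Returns {pcp_id: (observed - expected) / n_patients}.
    Negative values mean fewer uncontrolled patients than expected,
    i.e., better-than-expected performance.
    """
    by_pcp = {}
    for pcp, prob, flag in records:
        exp_sum, obs_sum, n = by_pcp.get(pcp, (0.0, 0, 0))
        by_pcp[pcp] = (exp_sum + prob, obs_sum + flag, n + 1)
    return {pcp: (obs - exp) / n for pcp, (exp, obs, n) in by_pcp.items()}
```

In the study itself, shrinkage from the hierarchical model would stabilize these estimates for physicians with few patients; this plain average is only the intuition.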
An operationally implementable model for predicting the effects of an infectious disease on a comprehensive regional healthcare system
An operationally implementable predictive model has been developed to forecast the number of COVID-19 infections in the patient population, hospital floor and ICU censuses, and ventilator and related supply chain demand. The model is intended for clinical, operational, financial and supply chain leaders and executives of a comprehensive healthcare system responsible for making decisions that depend on epidemiological contingencies. This paper describes the model that was implemented at NorthShore University HealthSystem and is applicable to any communicable disease whose risk of reinfection for the duration of the pandemic is negligible.
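The abstract does not give the model equations, but the stated assumption of negligible reinfection is characteristic of SIR-type compartmental models. A minimal discrete-time SIR sketch, purely illustrative and not the paper's actual implementation (parameter names and step size are assumptions):

```python
# Minimal discrete-time SIR sketch under the no-reinfection assumption.
# NOT the NorthShore model; an illustration of the model class only.

def sir_forecast(s0, i0, r0, beta, gamma, days, dt=1.0):
    """Forward-Euler SIR integration.

    s0, i0, r0 -- initial susceptible, infected, recovered counts
    beta       -- transmission rate; gamma -- recovery rate (per day)
    Returns the daily infected-count series (length days + 1).
    """
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r  # closed population: no reinfection, no demography
    series = [i]
    for _ in range(days):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        series.append(i)
    return series
```

An operational model would layer hospitalization fractions and lengths of stay on top of such an infection curve to obtain floor/ICU census and ventilator demand.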
Clinical Analytics Prediction Engine (CAPE): Development, electronic health record integration and prospective validation of hospital mortality, 180-day mortality and 30-day readmission risk prediction models
Background: Numerous predictive models in the literature stratify patients by risk of mortality and readmission. Few prediction models have been developed to optimize impact while sustaining sufficient performance. Objective: We aimed to derive models for hospital mortality, 180-day mortality and 30-day readmission, implement these models within our electronic health record and prospectively validate these models for use across an entire health system. Materials & methods: We developed, integrated into our electronic health record and prospectively validated three predictive models using logistic regression from data collected from patients 18 to 99 years old who had an inpatient or observation admission at NorthShore University HealthSystem, a four-hospital integrated system in the United States, from January 2012 to September 2018. We analyzed the area under the receiver operating characteristic curve (AUC) for model performance. Results: Models were derived and validated at three time points: retrospective, prospective at discharge, and prospective at 4 hours after presentation. AUCs of hospital mortality were 0.91, 0.89 and 0.77, respectively. AUCs for 30-day readmission were 0.71, 0.71 and 0.69, respectively. 180-day mortality models were only retrospectively validated, with an AUC of 0.85. Discussion: We were able to retain good model performance while optimizing potential model impact by also valuing model derivation efficiency, usability, sensitivity, generalizability and ability to prescribe timely interventions to reduce underlying risk. Measuring model impact by tying prediction models to interventions that are then rapidly tested will establish a path for meaningful clinical improvement and implementation.
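The CAPE evaluation hinges on the AUC: the probability that a randomly chosen positive case (e.g., a death or readmission) receives a higher risk score than a randomly chosen negative case. As a self-contained sketch of that metric (not the paper's code; function and variable names are assumptions), using the rank-statistic definition with tie correction:

```python
# Illustrative AUC via the Mann-Whitney formulation: the fraction of
# (positive, negative) pairs in which the positive case scores higher,
# counting ties as half. Equivalent to the ROC-curve area.

def auc(scores, labels):
    """scores: model risk scores; labels: 1 = event occurred, 0 = not."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.91, as reported for the retrospective hospital-mortality model, means a patient who died was ranked above a survivor 91% of the time; 0.5 would be chance.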