
    Predicting in-hospital mortality for stroke patients: results differ across severity-measurement methods

    OBJECTIVE: To see whether severity-adjusted predictions of likelihoods of in-hospital death for stroke patients differed among severity measures. METHODS: The study sample was 9,407 stroke patients from 94 hospitals, with 916 (9.7%) in-hospital deaths. Probability of death was calculated for each patient using logistic regression with age-sex and each of five severity measures as the independent variables: admission MedisGroups probability-of-death scores; scores based on 17 physiologic variables on admission; Disease Staging's probability-of-mortality model; the Severity Score of Patient Management Categories (PMCs); and the All Patient Refined Diagnosis Related Groups (APR-DRGs). For each patient, the odds of death predicted by the severity measures were compared. The frequencies of seven clinical indicators of poor prognosis in stroke were examined for patients with very different odds of death predicted by different severity measures. Predicted odds were considered very different when the odds of death predicted by one severity measure were less than 0.5 times or more than 2.0 times those predicted by a second measure. RESULTS: MedisGroups and the physiology score predicted similar odds of death for 82.2% of the patients. MedisGroups and PMCs disagreed the most, with very different odds predicted for 61.6% of patients. Patients viewed as more severely ill by MedisGroups and the physiology score were more likely to have the clinical stroke findings than were patients seen as sicker by the other severity measures. This suggests that MedisGroups and the physiology score are more clinically credible. CONCLUSIONS: Some pairs of severity measures ranked over 60% of patients very differently by predicted probability of death. Studies of severity-adjusted stroke outcomes may produce different results depending on which severity measure is used for risk adjustment
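
    The odds comparison described in this abstract can be sketched in a few lines. The sketch below is illustrative only: it uses synthetic data and hypothetical column names (age, sex, severity_a, severity_b, died), and statsmodels is assumed for the regression; it is not the study's code.

```python
# Illustrative sketch: fit one logistic model per severity measure (age-sex plus
# the severity score), convert each patient's predicted death probability to
# odds, and flag patients whose odds under one measure are below 0.5x or above
# 2.0x the odds under the other. All data and column names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(40, 95, n),
    "sex": rng.integers(0, 2, n),          # 1 = female
    "severity_a": rng.normal(0, 1, n),     # e.g. a clinical-data-based score
    "severity_b": rng.normal(0, 1, n),     # e.g. a code-based score
})
true_logit = -3 + 0.03 * (df.age - 65) + 0.8 * df.severity_a
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

def predicted_odds(severity_col):
    """Age-sex plus one severity measure, as in the abstract."""
    fit = smf.logit(f"died ~ age + sex + {severity_col}", data=df).fit(disp=0)
    p = fit.predict(df)
    return p / (1 - p)

ratio = predicted_odds("severity_a") / predicted_odds("severity_b")
very_different = (ratio < 0.5) | (ratio > 2.0)
print(f"Patients with very different predicted odds: {very_different.mean():.1%}")
```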

    Does severity explain differences in hospital length of stay for pneumonia patients?

    OBJECTIVES: In the USA, the role of patient severity in determining hospital resource use has been questioned since Medicare adopted prospective hospital payment based on diagnosis-related groups (DRGs). Exactly how to measure severity, however, remains unclear. We examined whether assessments of severity-adjusted hospital lengths of stay (LOS) varied when different measures were used for severity adjustment. METHODS: The complete study sample included 18,016 patients receiving medical treatment for pneumonia at 105 acute care hospitals. We studied 11 severity measures, nine based on patient demographic and diagnosis and procedure code information and two derived from clinical findings from the medical record. For each severity measure, LOS was regressed on patient age, sex, DRG, and severity score. Analyses were performed on trimmed and untrimmed data. Trimming eliminated cases with LOS more than three standard deviations from the mean on a log scale. RESULTS: The trimmed data set contained 17,976 admissions with a mean (S.D.) LOS of 8.9 (6.1) days. Average LOS ranged from 5.0-11.8 days among the 105 hospitals. Using trimmed data, the 11 severity measures produced R-squared values ranging from 0.098-0.169 for explaining LOS for individual patients. Across all severity measures, predicted average hospital LOS varied much less than the observed LOS, with predicted mean hospital LOS ranging from about 8.4-9.8 days. DISCUSSION: No severity measure explained the two-fold differences among hospitals in average LOS. Other patient characteristics, practice patterns, or institutional factors may cause the wide differences across hospitals in LOS
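
    A minimal sketch of the trimming and regression step described above, assuming synthetic data and hypothetical column names; statsmodels OLS stands in for whatever software the study actually used.

```python
# Sketch: drop admissions whose log(LOS) lies more than three standard
# deviations from the mean, then regress LOS on age, sex, DRG, and a severity
# score and report R-squared. All values below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "sex": rng.integers(0, 2, n),
    "drg": rng.choice(["089", "090", "091"], n),   # illustrative DRG labels
    "severity": rng.normal(0, 1, n),
})
df["los"] = np.exp(1.8 + 0.3 * df.severity + rng.normal(0, 0.5, n))

log_los = np.log(df.los)
keep = (log_los - log_los.mean()).abs() <= 3 * log_los.std()
trimmed = df[keep]

fit = smf.ols("los ~ age + sex + C(drg) + severity", data=trimmed).fit()
print(f"Kept {keep.sum()} of {n} admissions; R-squared = {fit.rsquared:.3f}")
```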

    Do severity measures explain differences in length of hospital stay? The case of hip fracture

    DATA SOURCES/STUDY SETTING: Data on admissions to 80 hospitals nationwide in the 1992 MedisGroups Comparative Database. STUDY DESIGN: For each of 14 severity measures, LOS was regressed on patient age/sex, DRG, and severity score. Regressions were performed on trimmed and untrimmed data. R-squared was used to evaluate model performance. For each severity measure for each hospital, we calculated the expected LOS and the z-score, a measure of the deviation of observed from expected LOS. We ranked hospitals by z-scores. DATA EXTRACTION: All patients admitted for initial surgical repair of a hip fracture, defined by DRG, diagnosis, and procedure codes. PRINCIPAL FINDINGS: The 5,664 patients had a mean (s.d.) LOS of 11.9 (8.9) days. Cross-validated R-squared values from the multivariable regressions (trimmed data) ranged from 0.041 (Comorbidity Index) to 0.165 (APR-DRGs). Using untrimmed data, observed average LOS for hospitals ranged from 7.6 to 23.9 days. The 14 severity measures showed excellent agreement in ranking hospitals based on z-scores. No severity measure explained the differences between hospitals with the shortest and longest LOS. CONCLUSIONS: Hospitals differed widely in their mean LOS for hip fracture patients, and severity adjustment did little to explain these differences
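
    One common formulation of such a hospital-level z-score is sketched below on synthetic data: the mean difference between observed and expected LOS for a hospital, divided by its standard error. The exact definition used in the study may differ.

```python
# Sketch (assumed formulation, not necessarily the paper's): per-hospital
# z-score of observed minus model-expected LOS, then rank hospitals by it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1500
df = pd.DataFrame({
    "hospital": rng.integers(1, 81, n),            # 80 hospitals, illustrative
    "expected_los": rng.normal(12, 2, n),          # from a severity-adjusted model
})
df["los"] = df.expected_los + rng.normal(0, 8, n)  # observed LOS

def z_score(group):
    resid = group.los - group.expected_los
    return resid.mean() / (resid.std(ddof=1) / np.sqrt(len(group)))

z = df.groupby("hospital").apply(z_score).sort_values()
print(z.head())   # most negative: shorter stays than expected
print(z.tail())   # most positive: longer stays than expected
```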

    Differences in procedure use, in-hospital mortality, and illness severity by gender for acute myocardial infarction patients: are answers affected by data source and severity measure?

    OBJECTIVES: According to some studies, women with heart disease receive fewer procedures and have higher in-hospital death rates than men. These studies vary by data source (hospital discharge abstract versus detailed clinical information) and severity measurement methods. The authors examined whether evaluations of gender differences for acute myocardial infarction patients vary by data source and severity measure. METHODS: The authors considered 10 severity measures: four using clinical medical record data and six using discharge abstracts (diagnosis and procedure codes). The authors studied all 14,083 patients admitted in 1991 for acute myocardial infarction to 100 hospitals nationwide, examining in-hospital death and use of coronary angiography, coronary artery bypass graft surgery (CABG), and percutaneous transluminal coronary angioplasty (PTCA). Logistic regression was used to calculate odds ratios for death and procedure use for women compared with men, controlling for age and each of the severity scores. RESULTS: After adjusting only for age, women were significantly more likely than men to die and less likely to receive CABG and coronary angiography. Severity measures provided different assessments of whether women were sicker than men; for all cases, the clinical data-based MedisGroups rated women's severity as comparable to men's, whereas four code-based severity measures viewed women as sicker. After adjusting for severity and age, women were significantly more likely than men to die in-hospital and less likely to receive coronary angiography and CABG; women and men had relatively equal adjusted odds ratios of receiving PTCA. Odds ratios reflecting gender differences in procedure use and death rates were similar across severity measures. CONCLUSIONS: Comparisons of severity-adjusted in-hospital death rates and invasive procedure use between men and women yielded similar findings regardless of data source and severity measure
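
    The adjusted odds-ratio calculation described above can be sketched as follows on synthetic data with hypothetical column names: the exponentiated coefficient on a female indicator in a logistic model with age and a severity score gives the severity-adjusted odds ratio for women relative to men.

```python
# Sketch: severity-adjusted odds ratio of in-hospital death (the same pattern
# applies to procedure use) for women versus men. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.integers(35, 95, n),
    "severity": rng.normal(0, 1, n),
})
true_logit = -3 + 0.04 * (df.age - 65) + 0.7 * df.severity + 0.3 * df.female
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

fit = smf.logit("died ~ female + age + severity", data=df).fit(disp=0)
odds_ratio = np.exp(fit.params["female"])
ci_low, ci_high = np.exp(fit.conf_int().loc["female"])
print(f"Adjusted OR, women vs men: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```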

    Using severity measures to predict the likelihood of death for pneumonia inpatients

    OBJECTIVE: To see whether predictions of patients' likelihood of dying in-hospital differed among severity methods. DESIGN: Retrospective cohort. PATIENTS: 18,016 persons 18 years of age and older managed medically for pneumonia; 1,732 (9.6%) in-hospital deaths. METHODS: Probability of death was calculated for each patient using logistic regression with age, age squared, sex, and each of five severity measures as the independent variables: 1) admission MedisGroups probability-of-death scores; 2) scores based on 17 admission physiologic variables; 3) Disease Staging's probability-of-mortality model; 4) the Severity Score of Patient Management Categories (PMCs); and 5) the All Patient Refined Diagnosis Related Groups (APR-DRGs). Patients were ranked by calculated probability of death; rankings were compared across severity methods. Frequencies of 14 clinical findings considered poor prognostic indicators in pneumonia were examined for patients ranked differently by different methods. RESULTS: MedisGroups and the physiology score predicted a similar likelihood of death for 89.2% of patients. In contrast, the three code-based severity methods rated over 25% of patients differently by predicted likelihood of death when compared with the rankings of the two clinical data-based methods (MedisGroups and the physiology score). MedisGroups and the physiology score demonstrated better clinical credibility than the three severity methods based on discharge abstract data. CONCLUSIONS: Some pairs of severity measures ranked over 25% of patients very differently by predicted probability of death. Results of outcomes studies may vary depending on which severity method is used for risk adjustment
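
    A sketch of the ranking comparison, on synthetic predicted probabilities: patients are binned into risk deciles under two methods and the share with roughly agreeing deciles is counted. The "within one decile" agreement rule is an illustrative assumption, not the paper's exact criterion.

```python
# Sketch: compare patient rankings (deciles of predicted death probability)
# produced by two severity methods. Probabilities below are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 18016
p_method_1 = rng.beta(1, 9, n)                       # e.g. a clinical measure
logit_1 = np.log(p_method_1 / (1 - p_method_1))
p_method_2 = 1 / (1 + np.exp(-(logit_1 + rng.normal(0, 0.5, n))))  # e.g. code-based

decile_1 = pd.qcut(p_method_1, 10, labels=False)
decile_2 = pd.qcut(p_method_2, 10, labels=False)
similar = np.abs(decile_1 - decile_2) <= 1
print(f"Patients ranked similarly by both methods: {similar.mean():.1%}")
```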

    Predicting in-hospital deaths from coronary artery bypass graft surgery. Do different severity measures give different predictions?

    OBJECTIVES: Severity-adjusted death rates for coronary artery bypass graft (CABG) surgery by provider are published throughout the country. Whether five severity measures rated severity differently for identical patients was examined in this study. METHODS: Two severity measures rate patients using clinical data taken from the first two hospital days (MedisGroups, physiology scores); three use diagnoses and other information coded on standard, computerized hospital discharge abstracts (Disease Staging, Patient Management Categories, All Patient Refined Diagnosis Related Groups). The database contained 7,764 coronary artery bypass graft patients from 38 hospitals with 3.2% in-hospital deaths. Logistic regression was performed to predict deaths from age, age squared, sex, and severity scores, and c statistics from these regressions were used to indicate model discrimination. Odds ratios of death predicted by different severity measures were compared. RESULTS: Code-based measures had better c statistics than clinical measures: All Patient Refined Diagnosis Related Groups, c = 0.83 (95% C.I. 0.81, 0.86) versus MedisGroups, c = 0.73 (95% C.I. 0.70, 0.76). Code-based measures predicted very different odds of dying than clinical measures for more than 30% of patients. Diagnosis codes indicating postoperative, life-threatening conditions may contribute to the superior predictive power of code-based measures. CONCLUSIONS: Clinical and code-based severity measures predicted different odds of dying for many coronary artery bypass graft patients. Although code-based measures had better statistical performance, this may reflect their reliance on diagnosis codes for life-threatening conditions occurring late in the hospitalization, possibly as complications of care. This compromises their utility for drawing inferences about quality of care based on severity-adjusted coronary artery bypass graft death rates
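
    The discrimination comparison rests on the c statistic, which for a binary outcome equals the area under the ROC curve. The sketch below, on synthetic data with hypothetical column names, fits an age/age-squared/sex/severity logistic model and reports c; it is not the study's code.

```python
# Sketch: fit the logistic model and compute the c statistic (ROC AUC) on
# synthetic data. Column names and coefficients are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 4000
df = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "sex": rng.integers(0, 2, n),
    "severity": rng.normal(0, 1, n),
})
df["age_c"] = df.age - 65                 # center age before squaring
df["age_c_sq"] = df.age_c ** 2
true_logit = -4 + 0.02 * df.age_c + 0.9 * df.severity
df["died"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

fit = smf.logit("died ~ age_c + age_c_sq + sex + severity", data=df).fit(disp=0)
c_stat = roc_auc_score(df.died, fit.predict(df))
print(f"c statistic = {c_stat:.3f}")
```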

    Risk adjustment methods can affect perceptions of outcomes

    When comparing outcomes of medical care, it is essential to adjust for patient risk, including severity of illness. A variety of severity measures exist, but perceptions of outcomes may differ depending on how severity is defined. We used two severity-adjustment approaches to demonstrate that comparisons of outcomes across subgroups of patients can vary dramatically depending on how severity is assessed. We studied two approaches: model 1 was the admission MedisGroups score; model 2 was computed from age and 12 chronic conditions defined by diagnosis codes. Although common summary measures of model performance (R-squared and the c statistic) both suggested that model 1 is a better predictor of in-hospital death than model 2, the weaker model consistently produced more accurate expectations by payer class and age group. Using model 1 for severity adjustment suggested that Medicare patients did substantially worse than expected and Medicaid patients substantially better. In contrast, use of model 2 found Medicare patients doing as expected, but Medicaid patients faring poorly
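
    A compact way to express "doing worse or better than expected" for a subgroup is the observed/expected (O/E) ratio, sketched below on synthetic data; the payer labels and expected probabilities are placeholders, and the study may have summarized expectations differently.

```python
# Sketch: observed/expected death ratios by subgroup. O/E > 1 means the
# subgroup did worse than the severity model predicted; < 1 means better.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 6000
df = pd.DataFrame({
    "payer": rng.choice(["Medicare", "Medicaid", "Private"], n),
    "p_expected": rng.beta(1, 9, n),     # model-predicted death probability
})
df["died"] = rng.binomial(1, np.clip(df.p_expected * 1.1, 0, 1))

oe_ratio = df.groupby("payer").apply(lambda g: g.died.sum() / g.p_expected.sum())
print(oe_ratio.round(2))
```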

    Using severity-adjusted stroke mortality rates to judge hospitals

    Mortality rates are commonly used to judge hospital performance. In comparing death rates across hospitals, it is important to control for differences in patient severity. Various severity tools are now actively marketed in the United States. This study asked whether one would identify different hospitals as having higher- or lower-than-expected death rates using different severity measures. We applied 11 widely used severity measures to the same database containing 9407 medically treated stroke patients from 94 hospitals, with 916 (9.7%) in-hospital deaths. Unadjusted hospital mortality rates ranged from 0 to 24.4%. For 27 hospitals, observed mortality rates differed significantly from expected rates when judged by one or more, but not all 11, severity methods. The agreement between pairs of severity methods for identifying the worst 10% or best 50% of hospitals was fair to good. Efforts to evaluate hospital performance based on severity-adjusted, in-hospital death rates for stroke patients are likely to be sensitive to how severity is measured
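
    Flagging hospitals whose observed mortality differs significantly from expected can be sketched as below, using a normal approximation on synthetic data; the significance test chosen here is an assumption, not necessarily the procedure used in the study.

```python
# Sketch: per hospital, compare observed deaths with the sum of model-predicted
# probabilities and flag hospitals whose difference is significant at p < 0.05.
import numpy as np
import pandas as pd
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 9407
df = pd.DataFrame({
    "hospital": rng.integers(1, 95, n),   # 94 hospitals, illustrative
    "p_expected": rng.beta(1, 9, n),      # severity-model death probability
})
df["died"] = rng.binomial(1, df.p_expected)

def outlier_test(group):
    expected = group.p_expected.sum()
    variance = (group.p_expected * (1 - group.p_expected)).sum()
    z = (group.died.sum() - expected) / np.sqrt(variance)
    return pd.Series({"z": z, "p_value": 2 * norm.sf(abs(z))})

results = df.groupby("hospital").apply(outlier_test)
print((results.p_value < 0.05).sum(), "hospitals differ significantly from expected")
```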

    Predicting who dies depends on how severity is measured: implications for evaluating patient outcomes

    OBJECTIVE: To determine whether assessments of illness severity, defined as risk for in-hospital death, varied across four severity measures. DESIGN: Retrospective cohort study. SETTING: 100 hospitals using the MedisGroups severity measure. PATIENTS: 11 880 adults managed medically for acute myocardial infarction; 1574 in-hospital deaths (13.2%). MEASUREMENTS: For each patient, probability of death was predicted four times, each time by using patient age and sex and one of four common severity measures: 1) admission MedisGroups probability-of-death scores; 2) scores based on values for 17 physiologic variables at time of admission; 3) Disease Staging's probability-of-mortality model; and 4) All Patient Refined Diagnosis Related Groups (APR-DRGs). Patients were ranked according to probability of death as predicted by each severity measure, and rankings were compared across measures. The presence or absence of each of six clinical findings considered to indicate poor prognosis in patients with myocardial infarction (congestive heart failure, pulmonary edema, coma, low systolic blood pressure, low left ventricular ejection fraction, and high blood urea nitrogen level) was determined for patients ranked differently by different severity measures. RESULTS: MedisGroups and the physiology score gave 94.7% of patients similar rankings. Disease Staging, MedisGroups, and the physiology score gave only 78% of patients similar rankings. MedisGroups and APR-DRGs gave 80% of patients similar rankings. Patients whose illnesses were more severe according to MedisGroups and the physiology score were more likely to have the six clinical findings than were patients whose illnesses were more severe according to Disease Staging and APR-DRGs. CONCLUSIONS: Some pairs of severity measures assigned very different severity levels to more than 20% of patients. Evaluations of patient outcomes need to be sensitive to the severity measures used for risk adjustment
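
    The clinical-credibility check can be sketched as follows, entirely on synthetic data: among patients ranked much sicker by one measure than by another, compare the frequency of poor-prognosis findings such as congestive heart failure and coma. The "three or more deciles apart" rule is an illustrative assumption.

```python
# Sketch: frequency of poor-prognosis clinical findings among patients ranked
# very differently by two severity measures. All values are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
n = 11880
df = pd.DataFrame({
    "decile_clinical": rng.integers(0, 10, n),  # risk decile, clinical measure
    "decile_code": rng.integers(0, 10, n),      # risk decile, code-based measure
    "chf": rng.binomial(1, 0.25, n),            # congestive heart failure present
    "coma": rng.binomial(1, 0.05, n),
})

sicker_by_clinical = df[df.decile_clinical - df.decile_code >= 3]
sicker_by_code = df[df.decile_code - df.decile_clinical >= 3]
for label, grp in [("clinical", sicker_by_clinical), ("code-based", sicker_by_code)]:
    print(f"Ranked sicker by {label} measure: "
          f"CHF {grp.chf.mean():.1%}, coma {grp.coma.mean():.1%}")
```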