
    A comparison between the APACHE II and Charlson Index Score for predicting hospital mortality in critically ill patients

    Background: Risk adjustment and mortality prediction in studies of critical care are usually performed using acuity of illness scores, such as the Acute Physiology and Chronic Health Evaluation II (APACHE II), which emphasize physiological derangement. Common risk adjustment systems used in administrative datasets, such as the Charlson index, are based entirely on the presence of co-morbid illnesses. The purpose of this study was to compare the discriminative ability of the Charlson index with that of the APACHE II in predicting hospital mortality in adult multisystem ICU patients. Methods: This was a population-based cohort design. The study sample consisted of adult (>17 years of age) residents of the Calgary Health Region admitted to a multisystem ICU between April 2002 and March 2004. Clinical data were collected prospectively and linked to hospital outcome data. Multiple regression analyses were used to compare the performance of the APACHE II and the Charlson index. Results: The Charlson index was a poor predictor of mortality (C = 0.626). There was minimal difference between a baseline model containing age, sex and acute physiology score (C = 0.74) and models containing either chronic health points (C = 0.76) or Charlson index variations (C = 0.75, 0.76, 0.77). No important improvement in prediction occurred when the Charlson index was added to the full APACHE II model (C = 0.808 to C = 0.813). Conclusion: The Charlson index does not perform as well as the APACHE II in predicting hospital mortality in ICU patients. However, when acuity of illness scores are unavailable or are not recorded in a standard way, the Charlson index might be considered as an alternative method of risk adjustment, thereby facilitating comparisons between intensive care units.
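    As a rough illustration of the comparison described above, the sketch below fits two logistic regression models and compares their C-statistics (equivalent to the ROC AUC). It is a minimal sketch, not the study's code: the data file, the column names, and the assumption that all predictors are numerically coded are hypothetical.

```python
# Minimal sketch: comparing the discrimination (C-statistic / ROC AUC) of a
# Charlson-based model against an APACHE II-based model for hospital mortality.
# The file "icu_cohort.csv" and all column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("icu_cohort.csv")     # hypothetical linked ICU cohort extract
y = df["hospital_death"]               # 1 = died in hospital, 0 = survived

models = {
    "baseline + Charlson": ["age", "sex", "charlson_index"],  # numeric coding assumed
    "full APACHE II":      ["apache_ii_score"],
}
for label, cols in models.items():
    fit = LogisticRegression(max_iter=1000).fit(df[cols], y)
    c_stat = roc_auc_score(y, fit.predict_proba(df[cols])[:, 1])
    print(f"{label}: C = {c_stat:.3f}")  # compare with the C-statistics above
```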

    Estimating Long-Term Survival of Critically Ill Patients: The PREDICT Model

    BACKGROUND: Long-term survival outcome of critically ill patients is important in assessing the effectiveness of new treatments and making treatment decisions. We developed a prognostic model for estimation of the long-term survival of critically ill patients. METHODOLOGY AND PRINCIPAL FINDINGS: This was a retrospective linked-data cohort study involving 11,930 critically ill patients who survived more than 5 days in a university teaching hospital in Western Australia. Older age, male gender, co-morbidity, severe acute illness (as measured by the Acute Physiology and Chronic Health Evaluation II predicted mortality), and more days of vasopressor or inotropic support, mechanical ventilation, and hemofiltration within the first 5 days of intensive care unit admission were associated with worse long-term survival up to 15 years after the onset of critical illness. Among these seven pre-selected predictors, age (explaining 50% of the variability of the model; hazard ratio [HR] between 80 and 60 years old = 1.95) and co-morbidity (explaining 27% of the variability; HR between Charlson co-morbidity index 5 and 0 = 2.15) were the most important determinants. A nomogram based on the pre-selected predictors is provided to allow estimation of the median survival time as well as the 1-year, 3-year, 5-year, 10-year, and 15-year survival probabilities for a patient. The discrimination (adjusted c-index = 0.757, 95% confidence interval 0.745-0.769) and calibration of this prognostic model were acceptable. SIGNIFICANCE: Age, gender, co-morbidities, severity of acute illness, and the intensity and duration of intensive care therapy can be used to estimate the long-term survival of critically ill patients. Age and co-morbidity are the most important determinants of the long-term prognosis of critically ill patients.
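    The PREDICT model is a survival model built from seven pre-selected predictors. Below is a minimal sketch of that general approach, assuming a Cox proportional hazards model fitted with the lifelines library; the data file and column names are illustrative assumptions, not the study's actual dataset.

```python
# Minimal sketch: a Cox proportional hazards model for long-term survival of
# ICU patients surviving >5 days, with Harrell's c-index for discrimination.
# "icu_survivors.csv" and all column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("icu_survivors.csv")
predictors = ["age", "male", "charlson_index", "apache_ii_predicted_mortality",
              "vasopressor_days", "ventilation_days", "hemofiltration_days"]

cph = CoxPHFitter()
cph.fit(df[predictors + ["years_followed", "died"]],
        duration_col="years_followed", event_col="died")
cph.print_summary()                        # hazard ratios for the seven predictors
print("c-index:", cph.concordance_index_)  # cf. adjusted c-index 0.757 above
```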

    Characteristics of Nondisabled Older Patients Developing New Disability Associated with Medical Illnesses and Hospitalization

    OBJECTIVE: To identify demographic, clinical, and biological characteristics of older nondisabled patients who develop new disability in basic activities of daily living (BADL) during medical illnesses requiring hospitalization. DESIGN: Longitudinal observational study. SETTING: Geriatric and internal medicine acute care units. PARTICIPANTS: Data are from 1,686 patients aged 65 and older who were independent in BADL 2 weeks before hospital admission, enrolled in the 1998 survey of the Italian Group of Pharmacoepidemiology in the Elderly Study. MEASUREMENTS: The study outcome was new BADL disability at the time of hospital discharge. Sociodemographic, functional status, and clinical characteristics were collected at hospital admission; acute and chronic conditions were classified according to the International Classification of Diseases, ninth revision; fasting blood samples were obtained and processed with standard methods. RESULTS: At the time of hospital discharge, 113 patients (6.7%) presented with new BADL disability. Functional decline was strongly related to patients' age and preadmission instrumental activities of daily living status. In a multivariate analysis, older age, nursing home residency, low body mass index, elevated erythrocyte sedimentation rate, acute stroke, a high level of comorbidity as expressed by the Cumulative Illness Rating Scale score, polypharmacotherapy, cognitive decline, and a history of falls in the previous year were independent and significant predictors of BADL disability. CONCLUSION: Several factors may contribute to the loss of physical independence in hospitalized older persons. Preexisting conditions associated with the frailty syndrome, including physical and cognitive function, comorbidity, body composition, and inflammatory markers, characterize patients at high risk of functional decline.
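    For readers who want to see the shape of such a multivariate analysis, here is a minimal sketch of a multivariable logistic regression for new BADL disability using the predictors named above. The data file and variable names are assumptions for illustration only.

```python
# Minimal sketch: multivariable logistic regression for new BADL disability at
# discharge. "gifa_1998.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gifa_1998.csv")  # hypothetical extract, one row per patient
model = smf.logit(
    "new_badl_disability ~ age + nursing_home + body_mass_index + esr"
    " + acute_stroke + cirs_score + n_drugs + cognitive_decline + fall_history",
    data=df,
).fit()
print(model.summary())  # exponentiate coefficients for odds ratios
```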

    Long-term effects of an inpatient weight-loss program in obese children and the role of genetic predisposition: rationale and design of the LOGIC-trial

    Background: The prevalence of childhood obesity has increased worldwide, which is a serious concern because obesity is associated with many negative immediate and long-term health consequences. The treatment of overweight and obesity in children and adolescents is therefore strongly recommended. Inpatient weight-loss programs have been shown to be effective, particularly with regard to short-term weight loss, whilst little is known about either the long-term effects of this treatment or the determinants of successful weight loss and subsequent weight maintenance. The purpose of this study is to evaluate the short-, middle- and long-term effects of an inpatient weight-loss program for children and adolescents and to investigate the likely determinants of weight changes, with a primary focus on the potential role of differences in polymorphisms of adiposity-relevant genes. Methods/Design: The study involves overweight and obese children and adolescents aged 6 to 19 years who participate in an inpatient weight-loss program for 4 to 6 weeks. It started in 2006, and it is planned to include 1,500 participants by 2013. The intervention focuses on diet, physical activity and behavior therapy. Measurements are taken at the start and the end of the intervention and comprise blood analyses (DNA, lipid and glucose metabolism, adipokines and inflammatory markers), anthropometry (body weight, height and waist circumference), blood pressure, pubertal stage, and exercise capacity. Physical activity, dietary habits, quality of life, and family background are assessed by questionnaires. Follow-up assessments are performed 6 months, 1, 2, 5 and 10 years after the intervention: children will complete the same questionnaires at all time points and visit their general practitioner for examination of anthropometric parameters, blood pressure and assessment of pubertal stage. At the 5- and 10-year follow-ups, blood parameters and exercise capacity will additionally be measured. Discussion: Apart from illustrating the short-, middle- and long-term effects of an inpatient weight-loss program, this study will contribute to a better understanding of inter-individual differences in the regulation of body weight, taking into account the role of genetic predisposition and lifestyle factors. Trial Registration: NCT01067157 (http://www.clinicaltrials.gov/ct2/show/NCT01067157).

    The Prevalence and Cost of Unapproved Uses of Top-Selling Orphan Drugs

    Introduction: The Orphan Drug Act encourages drug development for rare conditions. However, some orphan drugs become top sellers for unclear reasons. We sought to evaluate the extent and cost of approved and unapproved uses of orphan drugs with the highest unit sales. Methods: We assessed prescription patterns for four top-selling orphan drugs: lidocaine patch (Lidoderm), approved for post-herpetic neuralgia; modafinil (Provigil), approved for narcolepsy; cinacalcet (Sensipar), approved for hypercalcemia of parathyroid carcinoma; and imatinib (Gleevec), approved for chronic myelogenous leukemia and gastrointestinal stromal tumor. We pooled patient-specific diagnosis and prescription data from two large US state pharmaceutical benefit programs for the elderly. We analyzed the number of new and total patients using each drug and patterns of reimbursement for approved and unapproved uses. For lidocaine patch, we subcategorized unapproved prescriptions into two subtypes of unapproved uses: neuropathic pain, for which some evidence of efficacy exists, and non-neuropathic pain. Results: We found that prescriptions for lidocaine patch, modafinil, and cinacalcet associated with non-orphan diagnoses rose at substantially higher rates (average monthly increases in number of patients of 14.6, 1.45, and 1.58) than prescriptions associated with their orphan diagnoses (3.12, 0.24, and 0.03, respectively; p < 0.001), with non-orphan diagnoses accounting for the majority of use (>75%). Increases in lidocaine patch use for non-neuropathic pain far exceeded those for neuropathic pain (10.2 vs. 3.6 patients, p < 0.001). Discussion: In our sample, three of four top-selling orphan drugs were used more commonly for non-orphan indications. These orphan drugs treated common clinical symptoms (pain and fatigue) or laboratory abnormalities. We should continue to monitor orphan drug use after approval to identify products that come to be widely used for non-FDA-approved indications, particularly those without adequate evidence of efficacy.
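    One simple way to estimate the "average monthly increase in number of patients" compared above is to regress monthly patient counts on time and compare slopes. The sketch below does this on invented placeholder counts; it is an illustration of the idea, not the study's analysis.

```python
# Minimal sketch: comparing monthly growth in patient counts for orphan vs.
# non-orphan diagnoses via linear regression slopes. Counts are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
months = np.arange(24)
orphan_counts = 50 + 3.1 * months + rng.normal(0, 5, months.size)
non_orphan_counts = 80 + 14.6 * months + rng.normal(0, 5, months.size)

for label, counts in [("orphan", orphan_counts), ("non-orphan", non_orphan_counts)]:
    slope, _, _, p_value, _ = stats.linregress(months, counts)
    print(f"{label}: {slope:.2f} new patients/month (p = {p_value:.3g})")
```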

    A predictive model for the early identification of patients at risk for a prolonged intensive care unit length of stay

    Background: Patients with a prolonged intensive care unit (ICU) length of stay account for a disproportionate amount of resource use. Early identification of patients at risk for a prolonged length of stay can lead to quality enhancements that reduce ICU stay. This study developed and validated a model that identifies patients at risk for a prolonged ICU stay. Methods: We performed a retrospective cohort study of 343,555 admissions to 83 ICUs in 31 U.S. hospitals from 2002-2007. We examined the distribution of ICU length of stay to identify a threshold at which clinicians might become concerned about a prolonged stay; this resulted in choosing a 5-day cut-point. From patients remaining in the ICU on day 5, we developed a multivariable regression model that predicted remaining ICU stay. Predictor variables included information gathered at admission, on day 1, and on ICU day 5. Data from 12,640 admissions during 2002-2005 were used to develop the model, and the remaining 12,904 admissions were used to internally validate it. Finally, we used data on 11,903 admissions during 2006-2007 to externally validate the model. Results: The variables that had the greatest impact on remaining ICU length of stay were those measured on day 5, not at admission or during day 1. Mechanical ventilation, PaO2:FiO2 ratio, other physiologic components, and sedation on day 5 accounted for 81.6% of the variation in predicted remaining ICU stay. In the external validation set, observed ICU stay was 11.99 days and predicted total ICU stay (5 days + day-5 predicted remaining stay) was 11.62 days, a difference of 8.7 hours. For the same patients, the difference between mean observed and mean predicted ICU stay using the APACHE day 1 model was 149.3 hours. The new model's r² was 20.2% across individuals and 44.3% across units. Conclusions: A model that uses patient data from ICU days 1 and 5 accurately predicts a prolonged ICU stay. These predictions are more accurate than those based on ICU day 1 data alone. The model can be used to benchmark ICU performance and to alert physicians to explore care alternatives aimed at reducing ICU stay.
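    A minimal sketch of the modeling strategy described above follows, assuming an ordinary least-squares model of remaining ICU stay from day-5 variables and a year-based development/validation split. The dataset and column names are hypothetical, and the study's actual model specification may well differ.

```python
# Minimal sketch: predicting remaining ICU stay for patients still in the ICU
# on day 5, then checking r^2 and calibration on later years.
# "icu_day5.csv" and all column names are hypothetical; features are numeric.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

df = pd.read_csv("icu_day5.csv")
day5_features = ["mech_vent_day5", "pao2_fio2_day5", "sedation_day5",
                 "acute_physiology_score_day5", "age"]

dev = df[df["year"] <= 2005]   # development cohort
val = df[df["year"] >= 2006]   # external validation cohort

model = LinearRegression().fit(dev[day5_features], dev["remaining_icu_days"])
pred = model.predict(val[day5_features])
print("validation r^2:", round(r2_score(val["remaining_icu_days"], pred), 3))
print("mean predicted total stay:", round(5 + pred.mean(), 2), "days")  # 5 days already spent
```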

    Differential neuromuscular training effects on ACL injury risk factors in "high-risk" versus "low-risk" athletes

    Background: Neuromuscular training may reduce risk factors that contribute to ACL injury incidence in female athletes. Multi-component ACL injury prevention training programs can be time- and labor-intensive, which may ultimately limit training program utilization or compliance. The purpose of this study was to determine the effect of neuromuscular training on athletes classified as "high-risk" compared to those classified as "low-risk." The hypothesis was that high-risk athletes would decrease knee abduction moments while low-risk and control athletes would not show measurable changes. Methods: Eighteen high school female athletes participated in neuromuscular training 3×/week over a 7-week period. Knee kinematics and kinetics were measured during a drop vertical jump (DVJ) test before and after training. External knee abduction moments were calculated using inverse dynamics. Logistic regression indicated maximal sensitivity and specificity for prediction of ACL injury risk using external knee abduction moment (25.25 Nm cutoff) during a DVJ. Based on these data, 12 study subjects (and 4 controls) were grouped into the high-risk category (knee abduction moment >25.25 Nm) and 6 subjects (and 7 controls) into the low-risk category (knee abduction moment <25.25 Nm), using mean right- and left-leg knee abduction moments. A mixed-design repeated-measures ANOVA was used to determine differences between athletes categorized as high- or low-risk. Results: Athletes classified as high-risk decreased their knee abduction moments by 13% following training (dominant pre: 39.9 ± 15.8 Nm to 34.6 ± 9.6 Nm; non-dominant pre: 37.1 ± 9.2 Nm to 32.4 ± 10.7 Nm; p = 0.033 for the training × risk-factor interaction). Athletes grouped into the low-risk category did not change their abduction moments following training (p > 0.05). Control subjects classified as either high- or low-risk also did not change significantly from pre- to post-testing. Conclusion: These results indicate that "high-risk" female athletes decreased the magnitude of this previously identified ACL injury risk factor following neuromuscular training. However, the mean values for the high-risk subjects were not reduced to levels similar to the low-risk group following training. Targeting female athletes who demonstrate high-risk knee abduction loads during dynamic tasks may improve the efficacy of neuromuscular training. Yet increased training volume or more specific techniques may be necessary for high-risk athletes to substantially decrease ACL injury risk.
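    The risk grouping above reduces to a simple threshold rule on the mean of the right- and left-leg knee abduction moments; a minimal sketch follows, with invented example measurements.

```python
# Minimal sketch: classifying athletes as high- vs. low-risk by the 25.25 Nm
# knee abduction moment cutoff reported in the abstract. Values are invented.
from statistics import mean

CUTOFF_NM = 25.25  # logistic-regression-derived cutoff from the study

def risk_group(right_nm: float, left_nm: float) -> str:
    """High-risk if the mean right/left knee abduction moment exceeds the cutoff."""
    return "high-risk" if mean([right_nm, left_nm]) > CUTOFF_NM else "low-risk"

print(risk_group(39.9, 37.1))  # -> high-risk
print(risk_group(18.0, 16.5))  # -> low-risk
```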

    A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning

    Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in the ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and the intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, in consideration of clinical applications. Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in the practical meaning of model assumptions, weaknesses and strengths from a user's point of view. Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic. Conclusion: Knowledge of model assumptions and of the theoretical strengths and weaknesses of different approaches is fundamental for designing models for estimating the probability of morbidity after heart surgery. However, a rational choice also requires evaluation and comparison of the actual performance of locally developed competing models in the clinical scenario, to obtain satisfactory agreement between local needs and model response. In the second part of this study, the above predictive models will therefore be tested on real data acquired in a specialized ICU.
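    To make the comparison concrete, the sketch below fits four of the surveyed model families (Bayes rule via Gaussian naive Bayes, k-nearest neighbour, logistic regression, and a small neural network) on synthetic data and reports cross-validated AUC; a scoring system would simply be an additive point total, so it is omitted. This illustrates the model classes only, not the study's evaluation.

```python
# Minimal sketch: side-by-side comparison of the model families surveyed in
# the abstract, on synthetic binary-outcome data (cross-validated ROC AUC).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
models = {
    "Bayes rule (Gaussian naive Bayes)": GaussianNB(),
    "k-nearest neighbour (k = 5)":       KNeighborsClassifier(n_neighbors=5),
    "logistic regression":               LogisticRegression(max_iter=1000),
    "artificial neural network (MLP)":   MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```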

    Quality of life in patients treated with first-line antiretroviral therapy containing nevirapine or efavirenz in Uganda: A prospective non-randomized study

    © 2015 Mwesigire et al. Background: The goal of antiretroviral therapy (ART) is to suppress viral replication, reduce morbidity and mortality, and improve quality of life (QoL). For resource-limited settings, the World Health Organization recommends a first-line regimen of two nucleoside reverse-transcriptase inhibitors and one non-nucleoside reverse-transcriptase inhibitor (nevirapine (NVP) or efavirenz (EFV)). There are few data comparing the QoL impact of NVP versus EFV. This study assessed the change in QoL and factors associated with QoL among HIV patients receiving ART regimens based on EFV or NVP. Methods: We enrolled 640 people with HIV eligible for ART who received regimens including either NVP or EFV. QoL was assessed at baseline, three months and six months using Physical Health Summary (PHS) and Mental Health Summary (MHS) scores and the Global Person Generated Index (GPGI). Data were analyzed using generalized estimating equations, with ART regimen as the primary exposure, to identify associations between patient and disease factors and QoL. Results: QoL increased on ART. The mean QoL scores did not differ significantly for regimens based on NVP versus EFV during follow-up for MHS and GPGI regardless of CD4 stratum, and for PHS among patients with a CD4 count >250 cells/μL. For patients with a CD4 count ≤250 cells/μL, the PHS-adjusted β coefficient for ART regimens based on EFV versus NVP was -1.61 (95% CI -2.74, -0.49), the corresponding MHS-adjusted β coefficient was -0.39 (-1.40, 0.62), and the GPGI-adjusted odds ratio for EFV versus NVP was 0.51 (0.25, 1.04). QoL improved among patients on EFV over the 6-month follow-up period.
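    As a rough illustration of the analysis described above, the sketch below fits a generalized estimating equation to repeated QoL measurements clustered by patient, with regimen as the exposure, using statsmodels. The data file and column names are assumptions, not the study's dataset.

```python
# Minimal sketch: GEE for repeated PHS scores with an exchangeable working
# correlation within patients; regimen (EFV vs. NVP) is the exposure.
# "qol_longitudinal.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("qol_longitudinal.csv")      # one row per patient visit
model = smf.gee(
    "phs ~ efv + visit_month + cd4_le_250",   # efv: 1 = EFV regimen, 0 = NVP
    groups="patient_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```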