Does adding risk-trends to survival models improve in-hospital mortality predictions? A cohort study
Background: Clinicians informally assess changes in patients' status over time to prognosticate their outcomes. The incorporation of trends in patient status into regression models could improve their ability to predict outcomes. In this study, we used a unique approach to measure trends in patient hospital death risk and determined whether the incorporation of these trend measures into a survival model improved the accuracy of its risk predictions.
Methods: We included all adult inpatient hospitalizations between 1 April 2004 and 31 March 2009 at our institution. We used the daily mortality risk scores from an existing time-dependent survival model to create five trend indicators: absolute and relative percent change in the risk score from the previous day; absolute and relative percent change in the risk score from the start of the trend; and number of days with a trend in the risk score. In the derivation set, we determined which trend indicators were associated with time to death in hospital, independent of the existing covariates. In the validation set, we compared the predictive performance of the existing model with and without the trend indicators.
Results: Three trend indicators were independently associated with time to hospital mortality: the absolute change in the risk score from the previous day; the absolute change in the risk score from the start of the trend; and the number of consecutive days with a trend in the risk score. However, adding these trend indicators to the existing model resulted in only small improvements in model discrimination and calibration.
Conclusions: We produced several indicators of trend in patient risk that were significantly associated with time to hospital death independent of the model used to create them. In other survival models, our approach of incorporating risk trends could be explored to improve their performance without the collection of additional data.
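The five trend indicators above can be derived mechanically from each patient's series of daily risk scores. A minimal sketch in Python/pandas, assuming a hypothetical table with `patient_id`, `day` and `risk_score` columns, and treating a "trend" as a run of consecutive same-direction daily changes (an assumption; the paper does not publish its code):

```python
import numpy as np
import pandas as pd

def add_trend_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the five trend indicators from daily mortality risk scores.

    Assumes hypothetical columns 'patient_id', 'day' and 'risk_score'
    (one row per patient-day); a sketch, not the authors' code.
    """
    df = df.sort_values(["patient_id", "day"]).copy()
    g = df.groupby("patient_id")["risk_score"]

    # Indicators 1-2: absolute and relative % change from the previous day.
    df["abs_change_prev"] = g.diff()
    df["rel_change_prev"] = g.pct_change() * 100

    # Treat a "trend" as a run of consecutive days moving in one direction.
    direction = np.sign(df["abs_change_prev"]).fillna(0.0)
    new_run = direction.ne(direction.groupby(df["patient_id"]).shift())
    run_id = new_run.groupby(df["patient_id"]).cumsum()

    # Indicators 3-4: absolute and relative % change from the trend's start.
    start = df.groupby(["patient_id", run_id])["risk_score"].transform("first")
    df["abs_change_start"] = df["risk_score"] - start
    df["rel_change_start"] = 100 * df["abs_change_start"] / start

    # Indicator 5: number of consecutive days with a trend in the risk score.
    df["trend_days"] = df.groupby(["patient_id", run_id]).cumcount()
    return df
```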
Performance of a multianalyte test as an aid for the diagnosis of ovarian cancer in symptomatic women
Background: Concomitant with the development of in vitro diagnostic multivariate index assays (IVDMIAs) to improve the diagnostic efficiency of ovarian cancer detection is the need to identify appropriate biostatistical approaches to assess improvements in risk prediction. In this study, we assessed the utility of three different approaches for comparing the diagnostic efficiency of an ovarian cancer multivariate assay in a retrospective case-control phase 2 biomarker trial. The control cohort included both disease-free women and women with benign gynecological conditions to more accurately reflect the target population of symptomatic women.
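Because the control cohort mixes disease-free women and women with benign conditions, discrimination can be reported against each control stratum as well as overall. A minimal sketch, assuming hypothetical arrays `y` (1 = ovarian cancer case, 0 = control), `score` (assay output) and `control_type`; this illustrates the general idea rather than the three specific biostatistical approaches compared in the study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_control_stratum(y, score, control_type):
    """AUC of a multianalyte score against each control subgroup.

    control_type labels controls as 'healthy' or 'benign'
    (hypothetical labels for the two control strata).
    """
    y = np.asarray(y)
    score = np.asarray(score)
    control_type = np.asarray(control_type)
    out = {}
    for stratum in ("healthy", "benign"):
        # Keep all cases plus controls from this stratum only.
        keep = (y == 1) | ((y == 0) & (control_type == stratum))
        out[stratum] = roc_auc_score(y[keep], score[keep])
    out["all_controls"] = roc_auc_score(y, score)
    return out
```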
ASCORE: an up-to-date cardiovascular risk score for hypertensive patients reflecting contemporary clinical practice developed using the (ASCOT-BPLA) trial data.
A number of risk scores already exist to predict cardiovascular (CV) events. However, scores developed with data collected some time ago might not accurately predict the CV risk of contemporary hypertensive patients who benefit from more modern treatments and management. Using data from the randomised clinical trial Anglo-Scandinavian Cardiac Outcomes Trial-BPLA, with 15,955 hypertensive patients without previous CV disease receiving contemporary preventive CV management, we developed a new risk score predicting the 5-year risk of a first CV event (CV death, myocardial infarction or stroke). Cox proportional hazards models were used to develop a risk equation from baseline predictors. The final risk model (ASCORE) included age, sex, smoking, diabetes, previous blood pressure (BP) treatment, systolic BP, total cholesterol, high-density lipoprotein cholesterol, fasting glucose and creatinine as baseline variables. A simplified model (ASCORE-S) excluding laboratory variables was also derived. Both models showed very good internal validity. User-friendly integer score tables are reported for both models. Applying the latest Framingham risk score to our data significantly overpredicted the observed 5-year risk of the composite CV outcome. We conclude that risk scores derived from older databases (such as Framingham) may overestimate the CV risk of patients receiving current BP treatments; therefore, 'updated' risk scores are needed for current patients.
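A Cox-based risk equation of this kind is typically applied as risk(5y) = 1 - S0(5)^exp(LP), where LP is the linear predictor centred at the cohort means. A minimal sketch with illustrative coefficients and baseline survival (not the published ASCORE values):

```python
import math

# Illustrative values only -- NOT the published ASCORE coefficients.
BASELINE_SURVIVAL_5Y = 0.95  # S0(5): 5-year survival for the "mean" patient
COEFS = {"age": 0.06, "male": 0.35, "smoker": 0.50, "sbp": 0.01}
MEANS = {"age": 63.0, "male": 0.77, "smoker": 0.33, "sbp": 164.0}

def five_year_cv_risk(x: dict) -> float:
    """5-year risk of a first CV event from a Cox risk equation:
    risk = 1 - S0(5) ** exp(sum(beta_i * (x_i - mean_i)))."""
    lp = sum(COEFS[k] * (x[k] - MEANS[k]) for k in COEFS)
    return 1.0 - BASELINE_SURVIVAL_5Y ** math.exp(lp)

# Example: a 70-year-old male smoker with systolic BP 180 mmHg.
print(f"{five_year_cv_risk({'age': 70, 'male': 1, 'smoker': 1, 'sbp': 180}):.1%}")
```

An integer score table, as reported for ASCORE and ASCORE-S, is then a discretised lookup approximation of this same equation.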
PredictABEL: an R package for the assessment of risk prediction models
The rapid identification of genetic markers for multifactorial diseases from genome-wide association studies is fuelling interest in investigating the predictive ability and health care utility of genetic risk models. Various measures are available for the assessment of risk prediction models, each addressing a different aspect of performance and utility. We developed PredictABEL, an R package that covers the descriptive tables, measures and figures used in the analysis of risk prediction studies, such as measures of model fit, predictive ability and clinical utility, as well as risk distributions, calibration plots and receiver operating characteristic plots. Tables and figures are saved as separate files in a user-specified format, including publication-quality EPS and TIFF formats. All figures are available in a ready-made layout, but they can be customized to the preferences of the user. The package has been developed for the analysis of genetic risk prediction studies, but can also be used for studies that include only non-genetic risk factors. PredictABEL is freely available at the websites of GenABEL (http://www.genabel.org) and CRAN (http://cran.r-project.org/).
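PredictABEL itself is an R package, so the following is only an analogy: a minimal Python sketch of the kind of discrimination and calibration summaries such packages report, for hypothetical outcomes `y` and predicted risks `pred_risk`:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def basic_performance(y, pred_risk, n_groups=10):
    """Discrimination (c-statistic) plus mean observed vs. expected risk
    per decile of predicted risk -- the raw material of a calibration plot."""
    y = np.asarray(y, dtype=float)
    pred_risk = np.asarray(pred_risk, dtype=float)
    order = np.argsort(pred_risk)               # sort patients by predicted risk
    groups = np.array_split(order, n_groups)    # near-equal-sized risk deciles
    calib = [(pred_risk[g].mean(), y[g].mean()) for g in groups]
    return {"auc": roc_auc_score(y, pred_risk),
            "calibration": calib}  # list of (expected, observed) pairs
```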
The Procedural Index for Mortality Risk (PIMR): an index calculated using administrative data to quantify the independent influence of procedures on risk of hospital death
Background: Surgeries and other procedures can influence the risk of death in hospital. All published scales that predict post-operative death risk require clinical data and cannot be measured using administrative data alone. This study derived and internally validated an index that can be calculated using administrative data to quantify the independent risk of hospital death after a procedure.
Methods: For all patients admitted to a single academic centre between 2004 and 2009, we estimated the risk of all-cause death using the Kaiser Permanente Inpatient Risk Adjustment Methodology (KP-IRAM). We determined whether each patient underwent one of 503 commonly performed therapeutic procedures using Canadian Classification of Interventions codes and whether each procedure was emergent or elective. Multivariate logistic regression modeling was used to measure the association of each procedure-urgency combination with death in hospital independent of the KP-IRAM risk of death. The final model was modified into a scoring system to quantify the independent influence of each procedure on the risk of death in hospital.
Results: 275,460 hospitalizations were included (137,730 derivation, 137,730 validation). In the derivation group, the median expected risk of death was 0.1% (IQR 0.01%-1.4%), with 4,013 (2.9%) dying during the hospitalization. 56 distinct procedure-urgency combinations entered our final model, resulting in Procedural Index for Mortality Risk (PIMR) scores ranging from -7 to +11. In the validation group, the PIMR score significantly predicted the risk of death by itself (c-statistic 67.3%, 95% CI 66.6-68.0%) and when added to the KP-IRAM model (c-index improved significantly from 0.929 to 0.938).
Conclusions: We derived and internally validated an index that uses administrative data to quantify the independent association of a broad range of therapeutic procedures with risk of death in hospital. This scale will improve risk adjustment when administrative data are used for analyses.
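Turning the fitted logistic coefficients into an integer score is usually a matter of dividing each log-odds ratio by a fixed unit and rounding. A minimal sketch of that step with hypothetical procedure-urgency coefficients (not the published PIMR values):

```python
import numpy as np

# Hypothetical log-odds ratios for procedure-urgency combinations,
# estimated independent of baseline (e.g. KP-IRAM) risk -- NOT published values.
log_odds = {
    ("hip_replacement", "elective"): -0.9,
    ("laparotomy", "emergent"): 1.4,
    ("cardiac_bypass", "elective"): 0.4,
}

def integer_scores(coefs: dict, unit: float = 0.2) -> dict:
    """Round each log-odds ratio to the nearest multiple of `unit`
    so every procedure maps to a small integer number of points."""
    return {k: int(np.round(v / unit)) for k, v in coefs.items()}

points = integer_scores(log_odds)
patient_score = points[("laparotomy", "emergent")]  # 7 points, added to the baseline model
```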
A reference relative time-scale as an alternative to chronological age for cohorts with long follow-up
Background: Epidemiologists have debated the appropriate time-scale for cohort survival studies, chronological age and time-on-study being two such time-scales. Importantly, assessment of risk factors may depend on the choice of time-scale. Recently, chronological (attained) age has gained support, but a case can be made for a 'reference relative time-scale' as an alternative that circumvents difficulties arising with this and other scales. The reference relative time of an individual participant is the integral of a reference population hazard function between the individual's time of entry and time of exit. The objective here is to describe the reference relative time-scale, illustrate its use, compare it with attained age by simulation and explain its relationship to modern and traditional epidemiologic methods.
Results: A comparison was made between two models: a stratified Cox model with age as the time-scale versus an unstratified Cox model using the reference relative time-scale. The illustrative comparison used a UK cohort of cotton workers, with differing ages at entry to the study, accrual over a time period and long follow-up. Additionally, exponential and Weibull models were fitted, since analysis on the reference relative time-scale need not be restricted to the Cox model. A simulation study showed that analysis using the reference relative time-scale and analysis using chronological age had very similar power to detect a significant risk factor, and both were equally unbiased. Further, the analysis using the reference relative time-scale supported fully parametric survival modelling and allowed percentile predictions and mortality curves to be constructed.
Conclusions: The reference relative time-scale was a viable alternative to chronological age, led to simplification of the modelling process and possessed the features of a good time-scale as defined in reliability theory. The reference relative time-scale has several interpretations and provides a unifying concept that links contemporary approaches in survival and reliability analysis to the traditional epidemiologic methods of Poisson regression and standardised mortality ratios. The community of practitioners has not previously made this connection.
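Concretely, the reference relative time defined in the Background is the reference population's cumulative hazard accrued between entry and exit, T_ref = H0(age at exit) - H0(age at entry). A minimal sketch, assuming a hypothetical reference hazard tabulated by single year of age:

```python
import numpy as np

def reference_relative_time(age_entry, age_exit, ref_hazard):
    """Integrate a piecewise-constant reference hazard h(a) over
    [age_entry, age_exit]; ref_hazard[a] is the hazard at age a
    (hypothetical life-table values)."""
    ages = np.arange(len(ref_hazard), dtype=float)
    # Cumulative hazard H(a) evaluated at each integer age boundary.
    H = np.concatenate([[0.0], np.cumsum(ref_hazard)])
    grid = np.concatenate([[0.0], ages + 1.0])  # boundaries 0, 1, 2, ...
    return np.interp(age_exit, grid, H) - np.interp(age_entry, grid, H)

# Usage: the result replaces chronological age as the survival time-scale.
ref_hazard = np.full(101, 0.01)  # toy flat 1%/year hazard, ages 0-100
t_ref = reference_relative_time(50.3, 63.8, ref_hazard)  # ~0.135
```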
Chapter 12: Systematic Review of Prognostic Tests
A number of new biological markers are being studied as predictors of disease or adverse medical events among those who already have a disease. Systematic reviews of this growing literature can help determine whether the available evidence supports use of a new biomarker as a prognostic test that can more accurately place patients into different prognostic groups to improve treatment decisions and the accuracy of outcome predictions. Exemplary reviews of prognostic tests are not widely available, and the methods used to review diagnostic tests do not necessarily address the most important questions about prognostic tests, which are used to predict the time-dependent likelihood of future patient outcomes. We provide suggestions for those interested in conducting systematic reviews of a prognostic test. The proposed use of the prognostic test should serve as the framework for a systematic review and help define the key questions. The outcome probabilities or level of risk and other characteristics of prognostic groups are the most salient statistics for review and perhaps meta-analysis. Reclassification tables can help determine how a prognostic test affects the classification of patients into different prognostic groups and hence their treatment. Review of studies of the association between a potential prognostic test and patient outcomes would have little impact other than to determine whether further development as a prognostic test might be warranted.
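A reclassification table is simply a cross-tabulation of each patient's risk category under the old and new prognostic models, examined separately in those with and without the outcome. A minimal sketch with hypothetical risk cut-points:

```python
import numpy as np
import pandas as pd

def reclassification_table(risk_old, risk_new, event,
                           cuts=(0.0, 0.05, 0.20, 1.0)):
    """Cross-tabulate risk categories from the old vs. new model,
    one table for patients with the event and one for those without.
    Cut-points are illustrative, not from any published review."""
    labels = [f"{a:.0%}-{b:.0%}" for a, b in zip(cuts[:-1], cuts[1:])]
    old = pd.Series(pd.cut(np.asarray(risk_old), cuts,
                           labels=labels, include_lowest=True))
    new = pd.Series(pd.cut(np.asarray(risk_new), cuts,
                           labels=labels, include_lowest=True))
    event = np.asarray(event, dtype=bool)
    return {
        "events": pd.crosstab(old[event], new[event]),
        "non_events": pd.crosstab(old[~event], new[~event]),
    }
```

Off-diagonal cells show patients whose category (and hence, potentially, treatment) the new test would change.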
Interleukin 6, lipopolysaccharide-binding protein and interleukin 10 in the prediction of risk and etiologic patterns in patients with community-acquired pneumonia: results from the German competence network CAPNETZ
Background: The aim of our study was to investigate the predictive value of the biomarkers interleukin 6 (IL-6), interleukin 10 (IL-10) and lipopolysaccharide-binding protein (LBP) compared with the clinical CRB and CRB-65 severity scores in patients with community-acquired pneumonia (CAP).
Methods: Samples and data were obtained from patients enrolled by the German CAPNETZ study group. Samples (blood, sputum and urine) were collected within 24 h of first presentation and inclusion in the CAPNETZ study, and CRB and CRB-65 scores were determined for all patients at the time of enrollment. The combined end point representative of a severe course of CAP was defined as mechanical ventilation, intensive care unit treatment and/or death within 30 days. Overall, a total of 1,000 patients were enrolled in the study. A severe course of CAP was observed in 105 (10.5%) patients.
Results: The highest IL-6, IL-10 and LBP concentrations were found in patients with CRB-65 scores of 3-4 or CRB scores of 2-3. IL-6 and LBP levels at enrollment in the study were significantly higher for patients with a severe course of CAP than for those who did not have severe CAP. In receiver operating characteristic analyses, the area under the curve values for IL-6 (0.689), IL-10 (0.665) and LBP (0.624) in predicting a severe course of CAP were lower than that of CRB-65 (0.764) and similar to that of CRB (0.69). The accuracy of both CRB and CRB-65 was increased significantly by including IL-6 measurements. In addition, higher cytokine concentrations were found in patients with typical bacterial infections compared with patients with atypical or viral infections and those with infection of unknown etiology. LBP showed the highest discriminatory power with respect to the etiology of infection.
Conclusions: IL-6, IL-10 and LBP concentrations were increased in patients with a CRB-65 score of 3-4 and a severe course of CAP. The concentrations of IL-6 and IL-10 reflected the severity of disease in patients with CAP. The predictive power of IL-6, IL-10 and LBP for a severe course of pneumonia was lower than that of CRB-65. Typical bacterial pathogens induced the highest LBP, IL-6 and IL-10 concentrations.
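The key comparison above (a clinical score alone versus the score plus a biomarker) amounts to comparing the AUC of the score with the AUC of a logistic model combining both. A minimal sketch on hypothetical data; this is not the CAPNETZ analysis code, and the log-transform of the cytokine is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_with_and_without_biomarker(score, biomarker, severe):
    """AUC for the clinical score alone vs. a logistic combination of
    score + log-biomarker (cytokine levels are typically log-transformed)."""
    severe = np.asarray(severe)
    X = np.column_stack([score, np.log(biomarker)])
    combined = LogisticRegression().fit(X, severe).predict_proba(X)[:, 1]
    return {"score_alone": roc_auc_score(severe, score),
            "score_plus_marker": roc_auc_score(severe, combined)}
```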