
    Use and misuse of multivariable approaches in interventional cardiology studies on drug-eluting stents: a systematic review.

    Aims: Randomized clinical trials (RCTs) provide the most reliable evidence, although they demand substantial resources and logistic effort. Large, freely available, real-world datasets can easily be accessed to conduct observational studies, but such analyses often yield problematic results in the absence of careful methods, especially from a statistical point of view. We aimed to appraise the performance of current multivariable approaches in the estimation of causal treatment effects in studies focusing on drug-eluting stents (DES). Methods and Results: Pertinent studies published in the literature were searched, selected, abstracted, and appraised for quality and validity features. Six studies with a logistic regression were included, all of them reporting more than 10 events per covariate and differing lengths of follow-up, with an overall low risk of bias. Most of the 15 studies with a Cox proportional hazards analysis had differing follow-up and fewer than 10 events per covariate, yielding an overall low or moderate risk of bias. Sixteen studies with propensity scores were included: the most frequent method for variable selection was logistic regression, with underlying differences in follow-up and fewer than 10 events per covariate in most of them. Calibration appraisal was usually not reported in these studies, whereas discrimination appraisal was performed more frequently. In 17 studies with propensity scores and matching, the latter was most commonly performed with a nearest-neighbor matching algorithm, yet most of these studies appraised neither calibration nor discrimination. Balance was evaluated in 46% of the studies and achieved for all variables in 48% of those. Conclusions: Better exploitation and methodological appraisal of multivariable analysis is needed to improve the clinical and research impact and reliability of nonrandomized studies. (J Interven Cardiol 2012;**:1-1
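The nearest-neighbor matching the review reports as the most common algorithm can be sketched as follows. This is an illustrative sketch only, not code from any of the reviewed studies: greedy 1:1 matching on precomputed propensity scores within a caliper, with all subject IDs and scores invented.

```python
def nn_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated, controls: dicts mapping subject id -> propensity score.
    Returns (treated_id, control_id) pairs whose score difference is
    within the caliper; each control is used at most once.
    """
    available = dict(controls)
    pairs = []
    # Match treated subjects in descending score order (a common heuristic,
    # since high-score subjects have the fewest acceptable controls).
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # matching without replacement
    return pairs

treated = {"t1": 0.80, "t2": 0.55, "t3": 0.30}
controls = {"c1": 0.78, "c2": 0.52, "c3": 0.90, "c4": 0.10}
print(nn_match(treated, controls))  # [('t1', 'c1'), ('t2', 'c2')]
```

Note that t3 stays unmatched: its closest remaining control (c4, 0.10) falls outside the caliper, which is exactly the situation in which balance appraisal, reported in only 46% of the reviewed studies, matters.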

    Hip fracture risk assessment: Artificial neural network outperforms conditional logistic regression in an age- and sex-matched case control study

    Copyright @ 2013 Tseng et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Background - Osteoporotic hip fractures, with significant morbidity and excess mortality among the elderly, have imposed huge health and economic burdens on societies worldwide. In this age- and sex-matched case control study, we examined the risk factors of hip fractures, assessed fracture risk by conditional logistic regression (CLR) and an ensemble artificial neural network (ANN), and compared the performance of these two classifiers. Methods - The study population consisted of 217 pairs (149 women and 68 men) of fracture cases and controls aged over 60 years. All participants were interviewed with the same standardized questionnaire, covering 66 risk factors in 12 categories. Univariate CLR analysis was initially conducted to examine the unadjusted odds ratio of each potential risk factor. The significant risk factors were then tested by multivariate analyses. For fracture risk assessment, the participants were randomly divided into modeling and testing datasets for 10-fold cross-validation analyses. The prediction models built by CLR and ANN in the modeling datasets were applied to the testing datasets to study generalization. Performance, including discrimination and calibration, was compared with non-parametric Wilcoxon tests. Results - In univariate CLR analyses, 16 variables reached significance, and six of them remained significant in multivariate analyses: low T score, low BMI, low MMSE score, milk intake, walking difficulty, and a significant fall at home. For discrimination, ANN outperformed CLR in both the 16- and 6-variable analyses in the modeling and testing datasets (p < 0.005). For calibration, ANN outperformed CLR only in the 16-variable analyses in the modeling and testing datasets (p = 0.013 and 0.047, respectively). Conclusions - The risk factors of hip fracture are more personal than environmental. With adequate model construction, ANN may outperform CLR in both discrimination and calibration. ANN seems not to have been developed to its full potential, and efforts should be made to improve its performance. National Health Research Institutes in Taiwan
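The discrimination statistic compared between the two classifiers (the c-statistic, or area under the ROC curve) equals the Mann-Whitney probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control. A minimal sketch with invented scores:

```python
def auc(case_scores, control_scores):
    """C-statistic: P(case score > control score), ties counting 1/2.

    Equivalent to the area under the ROC curve via the Mann-Whitney
    U statistic, computed here by brute force over all pairs.
    """
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases = [0.9, 0.8, 0.6]      # predicted risks for fracture cases (invented)
controls = [0.7, 0.4, 0.2]   # predicted risks for matched controls (invented)
print(auc(cases, controls))  # 8 of 9 pairs correctly ordered, ~0.889
```

The Wilcoxon comparison in the abstract then tests whether one classifier's per-fold AUC values are systematically higher than the other's across the 10 cross-validation folds.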

    Development of statistical methodologies and risk models to perform real-time safety monitoring in interventional cardiology

    Thesis (S.M.)--Harvard-MIT Division of Health Sciences and Technology, 2006. Vita. Includes bibliographical references (p. 52-56). Post-marketing surveillance of medical pharmaceuticals and devices has received a great deal of media, legislative, and academic attention in the last decade. Among medical devices, this attention has largely been due to a small number of highly publicized adverse events, some of them in the domain of cardiac surgery and interventional cardiology. Phase III clinical trials for these devices are generally underpowered to detect rare adverse event rates and are performed in near-optimal environments, and regulators face significant pressure to deliver important medical devices to the public in a timely fashion. All of these factors emphasize the importance of systematically monitoring these devices after their release to the public, and the FDA and other regulatory agencies continue to struggle to perform this duty using a variety of voluntary and mandatory adverse event reporting policies. Data quality and comprehensiveness have generally suffered in this environment, delaying awareness of potential problems. However, a number of mandatory reporting policies, combined with improved standardization of data collection and definitions in the field of interventional cardiology and other clinical domains, have provided recent opportunities for nearly "real-time" safety monitoring of medical device data. Existing safety monitoring methodologies are non-medical in nature and not well adapted to the relatively heterogeneous and noisy data common in medical applications. A web-based, database-driven computer application was designed, and a number of experimental statistical methodologies were adapted from non-medical monitoring techniques as a proof of concept for the utility of an automated safety monitoring application. This application was successfully evaluated by comparing a local institution's drug-eluting stent in-hospital mortality rates to the University of Michigan's bare-metal stent event rates. Sensitivity analyses of the experimental methodologies were performed, and a number of notable performance parameters were discovered. In addition, an evaluation of a number of well-validated external logistic regression models found that while population-level estimation was well preserved, individual estimation was compromised by application to external data. Subsequently, an alternative modeling technique, support vector machines, was explored in an effort to find a method with superior calibration performance for use in the safety monitoring application. by Michael E. Matheny. S.M.
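One standard industrial-monitoring technique adapted to this kind of event-rate surveillance is the Bernoulli CUSUM chart. The thesis does not name its exact methods here, so the sketch below is an assumed, generic example: it watches a stream of binary outcomes (1 = in-hospital death), tests a baseline rate p0 against an elevated rate p1, and signals when the cumulative log-likelihood ratio crosses a threshold h. All rates and the threshold are invented.

```python
import math

def bernoulli_cusum(outcomes, p0=0.01, p1=0.02, h=4.0):
    """Return the index at which the CUSUM first signals, or None.

    Each case adds the log-likelihood-ratio weight for its outcome;
    the statistic is clipped at zero so good runs reset the chart.
    """
    w_event = math.log(p1 / p0)             # weight added for a death
    w_none = math.log((1 - p1) / (1 - p0))  # (negative) weight for a survivor
    s = 0.0
    for i, y in enumerate(outcomes):
        s = max(0.0, s + (w_event if y else w_none))
        if s >= h:
            return i
    return None

# Two survivors followed by a run of deaths: the chart signals on case 8.
print(bernoulli_cusum([0, 0, 1, 1, 1, 1, 1, 1]))  # 7
```

The appeal for near-real-time surveillance is that each new case updates the statistic in O(1), so the chart can run continuously as registry data arrive.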

    SYNTAX score and Clinical SYNTAX score as predictors of very long-term clinical outcomes in patients undergoing percutaneous coronary interventions: a substudy of SIRolimus-eluting stent compared with pacliTAXel-eluting stent for coronary revascularization (SIRTAX) trial

    Aims: To investigate the ability of the SYNTAX score and the Clinical SYNTAX score (CSS) to predict very long-term outcomes in an all-comers population receiving drug-eluting stents. Methods and results: The SYNTAX score was retrospectively calculated in 848 patients enrolled in the SIRolimus-eluting stent compared with pacliTAXel-eluting stent for coronary revascularization (SIRTAX) trial. The CSS was calculated using age, baseline left ventricular ejection fraction, and creatinine clearance. A stratified post hoc comparison was performed for all-cause mortality, cardiac death, myocardial infarction (MI), ischaemia-driven target lesion revascularization (TLR), definite stent thrombosis, and major adverse cardiac events (MACE) at 1- and 5-year follow-up. Tertiles were defined as SSLOW ≤7, SSMID 7-14, and SSHIGH >14 for the SYNTAX score, and CSSLOW ≤8.0, CSSMID 8.0-17.0, and CSSHIGH >17.0 for the CSS. MACE rates were significantly higher in SSHIGH compared with SSLOW at 1- and 5-year follow-up, as were 5-year rates of all-cause mortality, cardiac death, MI, and TLR. Stratifying outcomes across CSS tertiles confirmed and augmented these results. Within CSSHIGH, 5-year MACE increased with use of paclitaxel- compared with sirolimus-eluting stents (34.7 vs. 21.3%, p = 0.008). SYNTAX score and CSS were independent predictors of 5-year MACE; CSS was an independent predictor of 5-year mortality. Areas under the curve for the SYNTAX score and CSS were 0.61 (0.56-0.65) and 0.62 (0.57-0.67) for 5-year MACE, 0.58 (0.51-0.65) and 0.66 (0.59-0.73) for 5-year all-cause mortality, and 0.63 (0.54-0.72) and 0.72 (0.63-0.81) for 5-year cardiac death, respectively. Conclusion: The SYNTAX score, and to a greater extent the CSS, stratified risk for very long-term adverse clinical outcomes in an all-comers population receiving drug-eluting stents. Predictive accuracy for 5-year all-cause mortality was improved using the CSS. Trial Registration Number: NCT0029766
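The stratification step can be sketched directly. This illustrative snippet assumes the tertile cut-offs as reconstructed from the abstract (≤7, >7 to 14, >14); the group labels are the abstract's own, and the example scores are invented.

```python
def syntax_tertile(score):
    """Assign a patient to a SYNTAX score tertile group.

    Cut-offs follow the substudy's reported tertiles: <=7 low,
    >7 to 14 intermediate, >14 high (assumed reconstruction).
    """
    if score <= 7:
        return "SS_LOW"
    if score <= 14:
        return "SS_MID"
    return "SS_HIGH"

print([syntax_tertile(s) for s in (5, 10, 22)])
# ['SS_LOW', 'SS_MID', 'SS_HIGH']
```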

    Derivation and Validation of a General Predictive Model for Long Term Risks for Mortality and Invasive Cardiovascular Interventions in Congenital Heart Disease

    Introduction. Accurate assessment of prognosis is a key driver of clinical decision making in congenital heart disease (CHD), but is complicated because CHD represents such a diverse collection of conditions. The aim of this investigation is to derive, validate, and calibrate multivariable predictive models for time to surgical or catheter-mediated intervention (INT) in CHD and for time to death in CHD. Methods. 4108 unique subjects were prospectively and consecutively enrolled, and randomized to derivation and validation cohorts. Total follow-up was 26,578 patient-years, with 102 deaths and 868 INTs. Accelerated failure time multivariable predictive models for the outcomes, based on primary and secondary diagnoses, pathophysiologic severity, age, gender, genetic comorbidities, and prior interventional history, were derived using piecewise exponential methodology. The model predictions were validated, calibrated, and evaluated for sensitivity to changes in the independent variables. Results. Model validity was excellent for prediction of both mortality and INT at 4 months, 1 year, 5 years, 10 years, and 22 years (areas under receiver operating characteristic curves ranged from 0.809 to 0.919), and predictions calibrated well with observed outcomes. Although age, gender, secondary diagnoses, and genetic comorbidities were significant independent contributors to the survival and/or freedom-from-intervention models, predicted outcomes were most sensitive to variations in a composite predictor incorporating primary diagnosis, pathophysiologic severity, and history of prior intervention. An active cohort effect was identified, in which predicted mortality and intervention rates both increased throughout the 22 years of the study. Conclusions. Time to INT and time to death in CHD can be predicted with accuracy based on clinical variables. The objective predictions available through these models could educate both patients and providers, and inform clinical decision making in CHD
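Under the piecewise exponential methodology named above, the hazard is constant within each time interval, so survival to time t is the exponential of the negative cumulative hazard accrued across intervals. A generic sketch of that calculation, with invented interval boundaries and hazard rates (not the study's fitted values):

```python
import math

def survival(t, pieces):
    """S(t) = exp(-sum of rate * time-at-risk over intervals up to t).

    pieces: list of (start, end, hazard_rate) with contiguous intervals.
    """
    cum = 0.0  # cumulative hazard
    for start, end, rate in pieces:
        if t <= start:
            break
        cum += rate * (min(t, end) - start)
    return math.exp(-cum)

# Hypothetical piecewise hazards (per patient-year): higher in the first
# year of life, lower in childhood, modestly higher afterwards.
pieces = [(0.0, 1.0, 0.05), (1.0, 10.0, 0.01), (10.0, 22.0, 0.02)]
print(survival(5.0, pieces))  # exp(-(0.05 + 0.04)) ~ 0.914
```

Fitting the model amounts to estimating one rate per interval (times covariate effects), which is why the piecewise exponential form handles the long, uneven follow-up horizons (4 months to 22 years) gracefully.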

    Postoperative Critical Care: Resource Availability, Patient Risk and Other Factors Influencing Referral and Admission

    Although intended for benefit, surgery exposes patients to potential complications. Critical care is thought to protect against the development of these complications, and is recommended for patients at higher risk. However, previous literature suggests that high-risk patients do not consistently receive postoperative critical care. In this PhD thesis, I investigate the supposed misallocation of critical care resources, and seek to answer the following research questions: 1. What is the availability of postoperative critical care? 2. How do clinicians estimate perioperative risk? 3. How accurate are currently available risk prediction tools? 4. How do clinicians decide which patients to admit for postoperative critical care? 5. What factors influence their admission? A survey of postoperative critical care availability was conducted in 309 hospitals across the United Kingdom, Australia and New Zealand (NZ). Then, in a subset of 274 of these hospitals, a cohort study enrolling 26,502 patients undergoing inpatient surgery was undertaken. Postoperative critical care availability was found to differ between countries. UK hospitals reported fewer critical care beds per 100 hospital beds (median = 2.7) compared with Australia (median = 3.7) and NZ (median = 3.5). Enhanced care/high-acuity beds used to manage some high-risk patients were identified in around 31% of hospitals. The estimated numbers of critical care beds per 100,000 population were 9.3, 14.1, and 9.1 in the UK, Australia, and NZ, respectively. The estimated per capita high-acuity bed capacities per 100,000 population were 1.2, 3.8, and 6.4 in the UK, Australia, and NZ, respectively. The risk profile of patients undergoing inpatient surgery and the incidence of short-term mortality and morbidity outcomes were then described.
    Fewer than 40% of predicted high-risk patients (defined as having a 5% or higher predicted 30-day mortality) in the cohort were admitted to critical care directly after surgery, regardless of the risk model used. Compared with objective risk tools, subjective clinical assessment performed similarly in terms of discrimination, but consistently overpredicted risk. The area under the receiver operating characteristic curve (AUROC) for subjective clinical assessment was 0.89, compared to 0.91 for the Surgical Outcome Risk Tool (SORT), the best-performing objective risk tool. However, a model combining information from both objective tools and subjective assessment improved the accuracy and clinical applicability of risk predictions (combined model AUROC = 0.93; continuous Net Reclassification Index [NRI] = 0.347, p < 0.001). Associations were identified between patient risk factors (e.g. increased comorbidities, more complex surgery, higher surgical urgency) and the likelihood of being recommended postoperative critical care admission. Increased critical care bed availability had a small but significant association with critical care recommendation (adjusted odds ratio [OR] = 1.05 per empty critical care bed at the time of surgery), suggesting a subtle effect of exogenous influences on clinical decision-making. These results will have value in informing policy around the delivery of postoperative care for high-risk patients undergoing surgery, both at a macroscopic level in planning services, and at a microscopic level in making clinical decisions for individual patients
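The continuous NRI reported for the combined model measures whether the new predictions move in the right direction relative to the old ones: risks for patients who died should move up, and risks for survivors should move down. A minimal sketch with invented risks (the thesis's NRI of 0.347 comes from its own cohort, not this toy data):

```python
def continuous_nri(old, new, outcomes):
    """Continuous NRI: (ups - downs)/n_events + (downs - ups)/n_nonevents.

    old, new: predicted risks from the old and new models.
    outcomes: 1 for an event (e.g. 30-day death), 0 otherwise.
    """
    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for o, nw, y in zip(old, new, outcomes):
        if y:
            n_e += 1
            up_e += nw > o      # event whose risk moved up: good
            down_e += nw < o    # event whose risk moved down: bad
        else:
            n_n += 1
            up_n += nw > o      # non-event moved up: bad
            down_n += nw < o    # non-event moved down: good
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n

old = [0.10, 0.20, 0.30, 0.40]
new = [0.15, 0.15, 0.35, 0.35]
outcomes = [1, 0, 1, 0]          # 1 = died within 30 days (invented)
print(continuous_nri(old, new, outcomes))  # 2.0, the best possible value
```

A value of 0 means no net improvement in reclassification; the theoretical range is -2 to +2.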

    Clinical machine learning


    Outcome prediction following transcatheter aortic valve implantation: Multiple risk scores comparison

    Background: Heart Team decision-making supported by risk score calculations is advisable to estimate individual procedural risk before transcatheter aortic valve implantation (TAVI). The aim of this study was to compare 7 available risk models in the prediction of 30-day mortality following TAVI. Methods: One hundred and fifty-six consecutive patients (n = 156, 48% female, mean age 80.03 ± 8.18 years) who underwent TAVI between March 2010 and October 2014 were included in the study. Thirty-day follow-up was performed and available in all patients. Baseline risk was calculated according to the EuroSCORE I, EuroSCORE II, STS, ACEF, Ambler's, OBSERVANT and SURTAVI scores. Results: In receiver operating characteristic analysis, none of the investigated scales was able to distinguish between patients with or without the endpoint, with areas under the curve (AUC) not exceeding 0.6, as follows: EuroSCORE I, AUC 0.55; 95% confidence interval (CI) 0.47–0.63, p = 0.59; EuroSCORE II, AUC 0.59; 95% CI 0.51–0.67, p = 0.23; STS, AUC 0.55; 95% CI 0.47–0.63, p = 0.52; ACEF, AUC 0.54; 95% CI 0.46–0.62, p = 0.69; Ambler's, AUC 0.54; 95% CI 0.46–0.62, p = 0.70; OBSERVANT, AUC 0.597; 95% CI 0.52–0.67, p = 0.21; SURTAVI, AUC 0.535; 95% CI 0.45–0.62, p = 0.65. The SURTAVI model was best calibrated in high-risk patients, showing coherence between expected and observed mortality (10.8% vs. 9.4%, p = 0.982). ACEF demonstrated the best classification accuracy (17.5% vs. 6.9%, p = 0.053, observed mortality in the high-risk vs. non-high-risk cohort, respectively). Conclusions: None of the investigated risk scales proved optimal in predicting 30-day mortality in an unselected, real-life population with aortic stenosis referred for TAVI. These data support the primary role of the Heart Team in the process of selecting patients for TAVI
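The calibration check behind the SURTAVI result above, comparing expected (mean predicted) mortality against observed mortality in a risk group, can be sketched as an observed/expected ratio. This is a generic illustration with invented patients, not the study's data:

```python
def observed_expected(pred_risks, outcomes):
    """Observed vs expected event rates for one risk group.

    pred_risks: predicted 30-day mortality probabilities.
    outcomes: 1 = died within 30 days, 0 = survived.
    Returns (observed rate, expected rate, O/E ratio); an O/E ratio
    near 1 indicates good calibration-in-the-large.
    """
    expected = sum(pred_risks) / len(pred_risks)
    observed = sum(outcomes) / len(outcomes)
    return observed, expected, observed / expected

preds = [0.12, 0.10, 0.08, 0.10]   # invented predicted risks
deaths = [0, 1, 0, 0]              # invented observed outcomes
obs, exp_, ratio = observed_expected(preds, deaths)
print(round(obs, 2), round(exp_, 2), round(ratio, 2))  # 0.25 0.1 2.5
```

An O/E ratio of 2.5 would mean the score underpredicts risk by more than half; the abstract's SURTAVI figures (expected 10.8% vs. observed 9.4%) correspond to an O/E ratio close to 1.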