
    Accounting for the mortality benefit of drug-eluting stents in percutaneous coronary intervention: a comparison of methods in a retrospective cohort study

    BACKGROUND: Drug-eluting stents (DES) reduce rates of restenosis compared with bare metal stents (BMS). A number of observational studies have also found lower rates of mortality and non-fatal myocardial infarction with DES than with BMS, findings not observed in randomized clinical trials. To explore reasons for this discrepancy, we compared outcomes after percutaneous coronary intervention (PCI) with DES or BMS using multiple statistical methods. METHODS: We compared short-term rates of all-cause mortality and myocardial infarction for patients undergoing PCI with DES or BMS using propensity-score adjustment, propensity-score matching, and a stent-era comparison in a large, integrated health system between 1998 and 2007. For the propensity-score adjustment and stent-era comparisons, we used multivariable logistic regression to assess the association of stent type with outcomes. We used McNemar's chi-square test to compare outcomes after propensity-score matching. RESULTS: Between 1998 and 2007, 35,438 PCIs with stenting were performed among health plan members (53.9% DES and 46.1% BMS). After propensity-score adjustment, DES was associated with significantly lower rates of death at 30 days (OR 0.49, 95% CI 0.39-0.63, P < 0.001) and one year (OR 0.58, 95% CI 0.49-0.68, P < 0.001), and a lower rate of myocardial infarction at one year (OR 0.72, 95% CI 0.59-0.87, P < 0.001). Thirty-day and one-year mortality were also lower with DES after propensity-score matching. However, a stent-era comparison, which eliminates potential confounding by indication, showed no difference in death or myocardial infarction between DES and BMS, similar to results from randomized trials. CONCLUSIONS: Although propensity-score methods suggested a mortality benefit with DES, consistent with prior observational studies, a stent-era comparison failed to support this conclusion. Unobserved factors influencing stent selection in observational studies likely account for the mortality benefit of DES not seen in randomized clinical trials.
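
    As a rough sketch of the propensity-score workflow this abstract describes (scores estimated by logistic regression, 1:1 matching, McNemar's test on the matched pairs), the following Python snippet uses invented data and column names, not the study's dataset or actual analysis code:

```python
# Minimal propensity-score matching sketch (hypothetical data and column names).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "diabetes": rng.integers(0, 2, n),
    "des": rng.integers(0, 2, n),  # 1 = drug-eluting stent, 0 = bare metal stent
})
# Simulated outcome depends on covariates only (illustration, not real data).
df["death_1yr"] = (rng.random(n) < 0.02 + 0.001 * (df["age"] - 65).clip(0)).astype(int)

# 1. Estimate propensity scores: P(DES | covariates).
ps_model = LogisticRegression().fit(df[["age", "diabetes"]], df["des"])
df["ps"] = ps_model.predict_proba(df[["age", "diabetes"]])[:, 1]

# 2. 1:1 nearest-neighbor matching on the propensity score (with replacement,
#    for simplicity; real analyses often match without replacement or use calipers).
treated = df[df["des"] == 1]
control = df[df["des"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

# 3. McNemar's test on the paired (treated, matched-control) outcomes.
a = treated["death_1yr"].to_numpy()
b = matched_control["death_1yr"].to_numpy()
table = [[np.sum((a == 0) & (b == 0)), np.sum((a == 0) & (b == 1))],
         [np.sum((a == 1) & (b == 0)), np.sum((a == 1) & (b == 1))]]
res = mcnemar(table, exact=False)
print(res.pvalue)
```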

    Reporting of harm in randomized controlled trials evaluating stents for percutaneous coronary intervention

    BACKGROUND: The aim of this study was to assess the reporting of harm in randomized controlled trials evaluating stents for percutaneous coronary intervention. METHODS: The study design was a methodological systematic review of randomized controlled trials. The data sources were MEDLINE and the Cochrane Central Register of Controlled Trials. All reports of randomized controlled trials assessing stent treatment for coronary disease published between January 1, 2003, and September 30, 2008 were selected. A standardized abstraction form was used to extract data. RESULTS: 132 articles were analyzed. Major cardiac adverse events (death, cardiac death, myocardial infarction or stroke) were reported as primary or secondary outcomes in 107 reports (81%); the remaining 19% of articles contained no data on cardiac events. The mode of data collection of adverse events was given in 29 reports (22%), and a definition of expected adverse events was provided in 47 (36%). The length of follow-up was reported in 95 reports (72%). Assessment of adverse events by an adjudication committee was described in 46 reports (35%), and adverse events were described as being followed up for 6 months in 24% of reports (n = 32), for 7 to 12 months in 42% (n = 55), and for more than 1 year in 4% (n = 5). In 115 reports (87%), numerical data on the nature of the adverse events were reported per treatment arm. Procedural complications were described in 30 articles (23%). The causality of adverse events was reported in only 4 articles. CONCLUSION: Several categories of harm-related data were not adequately reported in articles of randomized controlled trials assessing stents for percutaneous coronary intervention. TRIALS REGISTRATION: Trials manuscript: 5534201182098351 (T80802P)

    Drug-Eluting Stents in Patients with Chronic Kidney Disease: A Prospective Registry Study

    BACKGROUND: Chronic kidney disease (CKD) is strongly associated with adverse outcomes after percutaneous coronary intervention (PCI). There are limited data on the effectiveness of drug-eluting stents (DES) in patients with CKD. METHODOLOGY/PRINCIPAL FINDINGS: Of 3,752 consecutive patients enrolled in the Guthrie PCI Registry between 2001 and 2006, 436 patients with CKD - defined as a creatinine clearance <60 mL/min - were included in this study. Patients who received DES were compared with those who received bare metal stents (BMS). Patients were followed for a mean duration of 3 years after the index PCI to determine the prognostic impact of stent type. Study end-points were all-cause death, myocardial infarction (MI), target vessel revascularization (TVR), stent thrombosis (ST) and the composite of major adverse cardiovascular events (MACE), defined as death, MI or TVR. Patients receiving DES in our study, by virtue of physician selection, had more stable coronary artery disease and lower baseline risk of thrombotic or restenotic events. Kaplan-Meier estimates of the proportions of patients reaching the end-points were significantly lower for DES vs. BMS for all-cause death (p = 0.0008), TVR (p = 0.029) and MACE (p = 0.0015), but not MI (p = 0.945) or ST (p = 0.88). Multivariable analysis with propensity adjustment demonstrated that DES implantation was an independent predictor of lower rates of all-cause death (hazard ratio [HR] 0.48, 95% confidence interval [CI] 0.25-0.92), TVR (HR 0.50, 95% CI 0.27-0.94) and MACE (HR 0.62, 95% CI 0.41-0.94). CONCLUSIONS: In a contemporary PCI registry, selective use of DES in patients with CKD was safe and effective in the long term, with lower risk of all-cause death, TVR and MACE and similar risk of MI and ST compared with BMS. The mortality benefit may be a result of selection bias and residual confounding, or may represent a true finding, a hypothesis that warrants clarification by randomized clinical trials.
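
    For readers unfamiliar with the survival methods named here, the sketch below shows Kaplan-Meier estimation, a log-rank comparison and a propensity-adjusted Cox model on invented data; the lifelines library and all variable names are assumptions for illustration, not the registry's actual analysis code:

```python
# Kaplan-Meier comparison and propensity-adjusted Cox model (hypothetical data).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "des": rng.integers(0, 2, n),                    # 1 = DES, 0 = BMS
    "ps": rng.uniform(0.1, 0.9, n),                  # pre-computed propensity score
    "years": rng.exponential(3.0, n).clip(0.05, 5),  # follow-up time, capped at 5 y
})
df["death"] = (rng.random(n) < 0.25).astype(int)     # event indicator

# Kaplan-Meier estimates by stent type.
for grp, label in [(1, "DES"), (0, "BMS")]:
    sub = df[df["des"] == grp]
    kmf = KaplanMeierFitter().fit(sub["years"], sub["death"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank test comparing the two survival curves.
res = logrank_test(df.loc[df.des == 1, "years"], df.loc[df.des == 0, "years"],
                   event_observed_A=df.loc[df.des == 1, "death"],
                   event_observed_B=df.loc[df.des == 0, "death"])
print("log-rank p:", res.p_value)

# Cox model: all remaining columns (stent type and propensity score) enter as
# covariates, so exp(coef) for "des" is a propensity-adjusted hazard ratio.
cph = CoxPHFitter().fit(df, duration_col="years", event_col="death")
cph.print_summary()
```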

    Retention on Buprenorphine Is Associated with High Levels of Maximal Viral Suppression among HIV-Infected Opioid Dependent Released Prisoners

    HIV-infected prisoners lose viral suppression within the 12 weeks after release to the community. This prospective study evaluated the use of buprenorphine/naloxone (BPN/NLX) to reduce relapse to opioid use and sustain viral suppression among released HIV-infected prisoners meeting criteria for opioid dependence (OD). From 2005 to 2010, 94 subjects meeting DSM-IV criteria for OD were recruited from a 24-week prospective trial of directly administered antiretroviral therapy (DAART) for released HIV-infected prisoners; 50 (53%) selected BPN/NLX and were eligible to receive it for 6 months; the remaining 44 (47%) selected no BPN/NLX therapy. Maximal viral suppression (MVS), defined as HIV-1 RNA <50 copies/mL, was compared between the BPN/NLX (N = 50) and non-BPN/NLX (N = 44) groups. The two groups were similar, except that the BPN/NLX group was significantly more likely to be Hispanic (56.0% vs 20.4%), to be from Hartford (74.4% vs 47.7%) and to have higher mean global health quality-of-life scores (54.18 vs 51.40). MVS 24 weeks after release was significantly associated with 24-week retention on BPN/NLX [AOR = 5.37 (1.15, 25.1)] and with MVS at the time of prison release [AOR = 10.5 (3.21, 34.1)], and negatively associated with being Black [AOR = 0.13 (0.03, 0.68)]. Receiving DAART or methadone did not correlate with MVS. Recognizing that OD is a chronic relapsing disease, initiating and retaining HIV-infected prisoners with OD on BPN/NLX is an important community-transition strategy for improving HIV treatment outcomes.
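
    The adjusted odds ratios (AORs) above are the kind produced by multivariable logistic regression. A minimal sketch of that computation with statsmodels follows; the variables and simulated data are invented for illustration, not the study's records:

```python
# Adjusted odds ratios from multivariable logistic regression (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 94
df = pd.DataFrame({
    "retained_bpn": rng.integers(0, 2, n),    # retained on BPN/NLX at 24 weeks
    "mvs_at_release": rng.integers(0, 2, n),  # suppressed at prison release
    "black": rng.integers(0, 2, n),
})
# Simulated outcome: maximal viral suppression at 24 weeks (illustration only).
lin = 0.5 * df["retained_bpn"] + 1.0 * df["mvs_at_release"] - 0.8 * df["black"] - 0.2
df["mvs_24wk"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.logit("mvs_24wk ~ retained_bpn + mvs_at_release + black", data=df).fit()

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
ors = np.exp(model.params).rename("AOR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([ors, ci], axis=1))
```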

    Integrating Prevention of Mother-to-Child HIV Transmission Programs to Improve Uptake: A Systematic Review

    BACKGROUND: We performed a systematic review to assess the effect of integrated perinatal prevention of mother-to-child transmission (PMTCT) of HIV interventions, compared with non-integrated or partially integrated services, on uptake in low- and middle-income countries. METHODS: We searched for experimental, quasi-experimental and controlled observational studies in any language from 21 databases and grey literature sources. RESULTS: Of 28,654 citations retrieved, five studies met our inclusion criteria. A cluster randomized controlled trial reported a higher probability of nevirapine uptake at labor wards implementing HIV testing and structured nevirapine adherence assessment (RRR 1.37, bootstrapped 95% CI 1.04-1.77). A stepped-wedge design study showed marked improvement in antiretroviral therapy (ART) enrolment (44.4% versus 25.3%, p<0.001) and initiation (32.9% versus 14.4%, p<0.001) under integrated care, but the median gestational age at ART initiation (27.1 versus 27.7 weeks, p = 0.4), ART duration (10.8 versus 10.0 weeks, p = 0.3) and 90-day ART retention (87.8% versus 91.3%, p = 0.3) did not differ significantly. A cohort study reported no significant difference in either ART coverage (55% versus 48% versus 47%, p = 0.29) or eight weeks of ART before delivery (50% versus 42% versus 52%; p = 0.96) between integrated, proximal and distal partially integrated care. Two before-and-after studies assessed the impact of integration on HIV testing uptake in antenatal care. The first reported that significantly more women received information on PMTCT (92% versus 77%, p<0.001), were tested (76% versus 62%, p<0.001) and learned their HIV status (66% versus 55%, p<0.001) after integration. The second also reported a significant increase in HIV testing uptake after integration (98.8% versus 52.6%, p<0.001). CONCLUSION: Limited, non-generalizable evidence supports the effectiveness of integrated PMTCT programs. More research measuring coverage and other relevant outcomes is urgently needed to inform the design of services delivering PMTCT programs.
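
    Uptake comparisons like those above (for example, 76% versus 62% tested) are two-proportion contrasts. Below is a minimal sketch of such a test plus a risk ratio with a log-scale confidence interval, using made-up counts rather than the reviewed studies' data:

```python
# Two-proportion z-test and risk ratio with 95% CI (made-up counts).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

tested_integrated, n_integrated = 760, 1000  # hypothetical: 76% tested after integration
tested_standard, n_standard = 620, 1000      # hypothetical: 62% tested before

stat, p = proportions_ztest([tested_integrated, tested_standard],
                            [n_integrated, n_standard])
print(f"z = {stat:.2f}, p = {p:.2g}")

# Risk ratio; CI computed on the log scale, then exponentiated.
p1, p2 = tested_integrated / n_integrated, tested_standard / n_standard
rr = p1 / p2
se_log_rr = np.sqrt((1 - p1) / tested_integrated + (1 - p2) / tested_standard)
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```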

    Comparative Phylogeography of a Coevolved Community: Concerted Population Expansions in Joshua Trees and Four Yucca Moths

    Comparative phylogeographic studies have had mixed success in identifying common phylogeographic patterns among co-distributed organisms. Whereas some have found broadly similar patterns across a diverse array of taxa, others have found that the histories of different species are more idiosyncratic than congruent. The variation in the results of comparative phylogeographic studies could indicate that the extent to which sympatrically distributed organisms share common biogeographic histories varies with the strength and specificity of the ecological interactions between them. To test this hypothesis, we examined demographic and phylogeographic patterns in a highly specialized, coevolved community – Joshua trees (Yucca brevifolia) and their associated yucca moths. This tightly integrated, mutually interdependent community is known to have experienced significant range changes at the end of the last glacial period, so there is a strong a priori expectation that these organisms will show common signatures of demographic and distributional change over time. Using a database of >5000 GPS records for Joshua trees, and multi-locus DNA sequence data from Joshua trees and four species of yucca moth, we combined palaeodistribution modeling with coalescent-based analyses of demographic and phylogeographic history. We extensively evaluated the power of our methods to infer past population size and distributional changes by evaluating the effect of different inference procedures on our results, comparing our palaeodistribution models to Pleistocene-aged packrat midden records, and simulating DNA sequence data under a variety of alternative demographic histories. Together, the results indicate that these organisms have shared a common history of population expansion, and that these expansions were broadly coincident in time. However, contrary to our expectations, none of our analyses indicated significant range or population size reductions at the end of the last glacial period, and the inferred demographic changes substantially predate Holocene climate changes.
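
    As a hedged sketch of the coalescent machinery such analyses rely on (the paper's actual pipeline is not reproduced here), the msprime simulator can generate sequence data under a population-expansion history; every parameter value below is an arbitrary placeholder, not an estimate from the study:

```python
# Simulating sequence data under a population-expansion history (placeholder values).
import msprime

# Demography: a single population that grew 10-fold, 2,000 generations ago
# (msprime specifies history backwards in time from the present).
demography = msprime.Demography()
demography.add_population(name="pop0", initial_size=100_000)
demography.add_population_parameters_change(time=2_000, initial_size=10_000)

ts = msprime.sim_ancestry(
    samples={"pop0": 20},        # 20 diploid samples
    demography=demography,
    sequence_length=10_000,
    recombination_rate=1e-8,
    random_seed=42,
)
mts = msprime.sim_mutations(ts, rate=1e-8, random_seed=42)

# Recent expansions leave an excess of rare variants, i.e. negative Tajima's D.
print("segregating sites:", mts.num_sites)
print("Tajima's D:", mts.Tajimas_D())
```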

    Neoadjuvant chemotherapy prior to preoperative chemoradiation or radiation in rectal cancer: should we be more cautious?

    Neoadjuvant chemotherapy (NACT) is a term originally used to describe the administration of chemotherapy before surgery. The original rationale for administering NACT, or so-called induction chemotherapy, to shrink or downstage a locally advanced tumour, and thereby facilitate more effective local treatment with surgery or radiotherapy, has been extended with the introduction of more effective combinations of chemotherapy to include reducing the risks of metastatic disease. It seems logical that survival could be lengthened, or organ preservation rates increased, in resectable tumours by NACT. In rectal cancer, NACT is being increasingly used in locally advanced and nonmetastatic unresectable tumours. Randomised studies in advanced colorectal cancer show high response rates to combination cytotoxic therapy. This evidence of efficacy, coupled with the introduction of novel molecular targeted therapies (such as bevacizumab and cetuximab) and long waiting times for radiotherapy, has rekindled an interest in delivering NACT in locally advanced rectal cancer. In contrast, this enthusiasm is currently waning in other sites, such as head and neck and nasopharynx cancer, where traditionally NACT has been used. So, is NACT in rectal cancer a real advance or just history repeating itself? In this review, we aimed to explore the advantages and disadvantages of the separate approaches of neoadjuvant, concurrent and consolidation chemotherapy in locally advanced rectal cancer, drawing on theoretical principles, preclinical studies and clinical experience both in rectal cancer and other disease sites. Neoadjuvant chemotherapy may improve outcome in terms of disease-free or overall survival in selected groups in some disease sites, but this strategy has not been shown to be associated with better outcomes than postoperative adjuvant chemotherapy. In particular, there are insufficient data in rectal cancer. The evidence for benefit is strongest when NACT is administered before surgical resection. In contrast, the data in favour of NACT before radiation or chemoradiation (CRT) are inconclusive, despite the suggestion that response to induction chemotherapy can predict response to subsequent radiotherapy. The observation that spectacular responses to chemotherapy before radical radiotherapy did not translate into improved survival was made 25 years ago. Moreover, multiple trials in head and neck cancer, nasopharyngeal cancer, non-small-cell lung cancer, small-cell lung cancer and cervical cancer do not support the routine use of NACT either as an alternative to CRT or as an addition to it. The addition of NACT does not appear to enhance local control over concurrent CRT or radiotherapy alone. Neoadjuvant chemotherapy before CRT or radiation should be used with caution, and only in the context of clinical trials. The evidence base suggests that concurrent CRT with early positioning of radiotherapy appears to be the best option for patients with locally advanced rectal cancer, and in all disease sites where radiation is the primary local therapy.