
    Dogmatists Cannot Learn

    We wish to provide some rationale in defense of the title of this comment. If we agree that a dogmatist is one whose beliefs cannot be influenced by observations (i.e., data), and we define learning as having one's beliefs influenced by observations, then it follows that dogmatists cannot learn. While this statement has been made previously (p.47), we believe it is useful to expand on this point with a simple example. Indeed, some dangers of dogmatism in epidemiology have been documented, and one does not have to look far for trivial examples of dogmatic statements.
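The deduction is just Bayes' rule applied at the boundary: a prior probability of exactly 0 or 1 is unmoved by any observation. A minimal sketch (the numbers are illustrative, not from the comment):

```python
def posterior(prior, lik_if_true, lik_if_false):
    """Bayes' rule for a binary hypothesis H after one observation:
    P(H|data) = P(H)P(data|H) / [P(H)P(data|H) + P(not H)P(data|not H)]."""
    num = prior * lik_if_true
    den = num + (1.0 - prior) * lik_if_false
    return num / den if den > 0 else prior

# An open-minded prior moves with the evidence...
print(posterior(0.5, 0.9, 0.1))  # 0.9
# ...but a dogmatic prior of 0 or 1 is fixed, whatever is observed.
print(posterior(0.0, 0.9, 0.1))  # 0.0
print(posterior(1.0, 0.9, 0.1))  # 1.0
```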

    Safety of dynamic intravenous iron administration strategies in hemodialysis patients

    Background and objectives: Intravenous iron therapy for chronic anemia management is largely driven by dosing protocols that differ in intensity with respect to dosing approach (i.e., dose, frequency, and duration). Little is known about the safety of these protocols. Design, setting, participants, & measurements: Using clinical data from a large United States dialysis provider linked to health care utilization data from Medicare, we constructed a cohort of patients with ESKD aged ≥65 years who initiated and continued center-based hemodialysis for ≥90 days between 2009 and 2012, and initiated at least one of the five common intravenous iron administration strategies; ranked by intensity (the amount of iron given at moderate-to-high iron indices), the order of strategies was 3 (least intensive), 2 (less intensive), 1 (reference), 4 (more intensive), and 5 (most intensive). We estimated the effect of continuous exposure to these strategies on cumulative risks of mortality and infection-related events with dynamic Cox marginal structural models. Results: Of 13,249 eligible patients, 1320 (10%) died and 1627 (12%) had one or more infection-related events during the 4-month follow-up. The most and least commonly initiated strategies were strategies 2 and 5, respectively. Compared with the reference strategy 1, more intensive strategies (4 and 5) demonstrated a higher risk of all-cause mortality (e.g., most intensive strategy 5: 60-day risk difference: 1.3%; 95% confidence interval [95% CI], 0.8% to 2.1%; 120-day risk difference: 3.1%; 95% CI, 1.0% to 5.6%). Similarly, higher risks were observed for infection-related morbidity and mortality among more intensive strategies (e.g., strategy 5: 60-day risk difference: 1.8%; 95% CI, 1.2% to 2.6%; 120-day risk difference: 4.3%; 95% CI, 2.2% to 6.8%). Less intensive strategies (2 and 3) demonstrated lower risks of all-cause mortality and infection-related events. Conclusions: Among dialysis patients surviving 90 days, subsequent intravenous iron administration strategies promoting more intensive iron treatment at moderate-to-high iron indices are associated with higher risks of mortality and infection-related events.

    Analytic strategies to adjust confounding using exposure propensity scores and disease risk scores: Nonsteroidal antiinflammatory drugs and short-term mortality in the elderly

    Little is known about optimal application and behavior of exposure propensity scores (EPS) in small studies. In a cohort of 103,133 elderly Medicaid beneficiaries in New Jersey, the effect of nonsteroidal antiinflammatory drug use on 1-year all-cause mortality was assessed (1995-1997) based on the assumption that there is no protective effect and that the preponderance of any observed effect would be confounded. To study the comparative behavior of EPS, disease risk scores, and "conventional" disease models, the authors randomly resampled 1,000 subcohorts of 10,000, 1,000, and 500 persons. The number of variables was limited in disease models, but not EPS and disease risk scores. Estimated EPS were used to adjust for confounding by matching, inverse probability of treatment weighting, stratification, and modeling. The crude rate ratio of death was 0.68 for users of nonsteroidal antiinflammatory drugs. "Conventional" adjustment resulted in a rate ratio of 0.80 (95% confidence interval: 0.77, 0.84). The rate ratio closest to 1 (0.85) was achieved by inverse probability of treatment weighting (95% confidence interval: 0.82, 0.88). With decreasing study size, estimates remained further from the null value, which was most pronounced for inverse probability of treatment weighting (n = 500: rate ratio = 0.72, 95% confidence interval: 0.26, 1.68). In this setting, analytic strategies using EPS or disease risk scores were not generally superior to "conventional" models. Various ways to use EPS and disease risk scores behaved differently with smaller study size
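As a rough illustration of how estimated EPS feed into inverse probability of treatment weighting, here is a toy calculation with made-up propensity scores (not the study's data or fitted model; in practice the scores come from a regression of treatment on confounders):

```python
# Toy rows: (treated, died, propensity_score). The scores stand in for
# fitted EPS values from a logistic model of treatment on confounders.
rows = [
    (1, 1, 0.8), (1, 0, 0.8), (1, 0, 0.8), (1, 0, 0.8),
    (1, 1, 0.4), (1, 0, 0.4),
    (0, 1, 0.8),
    (0, 1, 0.4), (0, 1, 0.4), (0, 0, 0.4),
]

def iptw_risks(rows):
    """Weighted risks in the pseudo-population: each subject gets weight
    1/ps if treated and 1/(1 - ps) if untreated, so covariates that drive
    treatment are balanced across the two weighted arms."""
    num = {0: 0.0, 1: 0.0}  # weighted deaths per arm
    den = {0: 0.0, 1: 0.0}  # total weight per arm
    for treated, died, ps in rows:
        w = 1.0 / ps if treated else 1.0 / (1.0 - ps)
        num[treated] += w * died
        den[treated] += w
    return num[1] / den[1], num[0] / den[0]

risk_treated, risk_untreated = iptw_risks(rows)
print(risk_treated / risk_untreated)  # IPTW-adjusted risk ratio: 0.45
```

The crude risk ratio in the same toy data is (2/6)/(3/4) ≈ 0.44 of a very different composition; the weighting re-balances who contributes to each arm rather than discarding anyone, which is why its small-sample behavior (as the abstract notes) can differ from matching or stratification.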

    Left truncation bias to explain the protective effect of smoking on preeclampsia: potential, but how plausible?

    Background: An inverse association between maternal smoking and preeclampsia has been frequently observed in epidemiologic studies for several decades. In the May 2015 issue of this journal, Lisonkova and Joseph described a simulation study suggesting that bias from left truncation might explain the inverse association. The simulations were based on strong assumptions regarding the underlying mechanisms through which bias might occur. Methods: To examine the sensitivity of the previous authors' conclusions to these assumptions, we constructed a new Monte Carlo simulation using published estimates to frame our data-generating parameters. We estimated the association between smoking and preeclampsia across a range of scenarios that incorporated abnormal placentation and early pregnancy loss. Results: Our results confirmed that the previous authors' findings are highly dependent on assumptions regarding the strength of association between abnormal placentation and preeclampsia. Thus, the bias they described may be less pronounced than was suggested. Conclusions: Under empirically derived constraints of these critical assumptions, left truncation does not appear to fully explain the inverse association between smoking and preeclampsia. Furthermore, when considering processes in which left truncation may result from the exposure, it is important to precisely describe the target population and parameter of interest before assessing potential bias. We comment on the specification of a meaningful target population when assessing maternal smoking and preeclampsia as a public health issue. We describe considerations for defining a target population in studies of perinatal exposures when those exposures cause competing events (e.g., early pregnancy loss) for primary outcomes of interest
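The left-truncation mechanism at issue can be mimicked with a small Monte Carlo sketch. All parameters below are invented for illustration and are not the paper's estimates; the point is only the direction of the bias:

```python
import random

random.seed(1)

def simulate(n=200_000):
    """Toy data-generating process: abnormal placentation (A) raises both
    early pregnancy loss and preeclampsia; smoking (S) has NO direct effect
    on preeclampsia but raises early loss when A is present. Conditioning
    on surviving to cohort entry (left truncation) then depletes A among
    smokers, inducing an inverse smoking-preeclampsia association."""
    counts = {(1, 1): 0, (1, 0): 0, (0, 1): 0, (0, 0): 0}  # (smoker, preeclampsia)
    for _ in range(n):
        s = random.random() < 0.3            # smoker
        a = random.random() < 0.2            # abnormal placentation
        p_loss = 0.05 if not a else (0.8 if s else 0.4)
        if random.random() < p_loss:
            continue                          # pregnancy lost before cohort entry
        pe = random.random() < (0.5 if a else 0.02)
        counts[(int(s), int(pe))] += 1
    risk_s = counts[(1, 1)] / (counts[(1, 1)] + counts[(1, 0)])
    risk_ns = counts[(0, 1)] / (counts[(0, 1)] + counts[(0, 0)])
    return risk_s / risk_ns

rr = simulate()
print(rr)  # well below 1 despite a null effect of smoking on preeclampsia
```

How far below 1 the risk ratio falls depends directly on the assumed strength of the placentation-preeclampsia and placentation-loss links, which is exactly the sensitivity the abstract describes.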

    Approaches to Address Premature Death of Patients When Assessing Patterns of Use of Health Care Services after an Index Event

    Background: Studies of the use of health care after the onset of disease are important for assessing quality of care, treatment disparities, and guideline compliance. Cohort definition and analysis method are important considerations for the generalizability and validity of study results. We compared different approaches for cohort definition (restriction by survival time vs. comorbidity score) and analysis method [Kaplan-Meier (KM) vs. competing risk] when assessing patterns of guideline adoption in elderly patients. Methods: Medicare beneficiaries aged 65-95 years who had an acute myocardial infarction (AMI) in 2008 were eligible for this study. Beneficiaries with substantial frailty or an AMI in the prior year were excluded. We compared KM with competing risk estimates of guideline adoption during the first year post-AMI. Results: At 1 year post-AMI, 14.2% [95% confidence interval (CI), 14.0%-14.5%] of beneficiaries overall initiated cardiac rehabilitation when using competing risk analysis and 15.1% (95% CI, 14.8%-15.3%) when using the KM analysis. Guideline medication adoption was estimated as 52.3% (95% CI, 52.0%-52.7%) and 53.4% (95% CI, 53.1%-53.8%) for the competing risk and KM methods, respectively. Mortality was 17.0% (95% CI, 16.8%-17.3%) at 1 year post-AMI. The difference in cardiac rehabilitation initiation at 1 year post-AMI from the overall population was 0.1%, 1.7%, and 1.9% compared with the 30-day survivor, 1-year survivor, and comorbidity-score-restricted populations, respectively. Conclusions: In this study, the KM method consistently produced higher estimates than the competing risk method. Competing risk approaches avoid unrealistic mortality assumptions and lead to more meaningful interpretations of the estimates.
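The contrast between the two estimators can be sketched on a toy dataset. The data are invented, and ties are broken sequentially for simplicity; this is an illustrative sketch, not the study's implementation:

```python
# Each subject: (time, cause), where cause 1 = event of interest (e.g.,
# rehab initiation), cause 2 = competing death, 0 = censored.
data = [(1, 1), (2, 2), (3, 1), (3, 2), (4, 0), (5, 1), (6, 2), (7, 0)]

def one_minus_km(data, cause=1):
    """1 - Kaplan-Meier, (incorrectly) treating competing deaths as censoring."""
    surv = 1.0
    ordered = sorted(data)
    n = len(ordered)
    for i, (t, c) in enumerate(ordered):
        if c == cause:
            surv *= 1.0 - 1.0 / (n - i)  # n - i subjects still at risk
    return 1.0 - surv

def cuminc(data, cause=1):
    """Aalen-Johansen cumulative incidence: only events of interest add to
    the curve, but overall survival is depleted by ALL event types, so
    subjects who die can no longer be counted as future initiators."""
    cif, surv = 0.0, 1.0
    ordered = sorted(data)
    n = len(ordered)
    for i, (t, c) in enumerate(ordered):
        h = 1.0 / (n - i)
        if c == cause:
            cif += surv * h
        if c != 0:
            surv *= 1.0 - h
    return cif

print(one_minus_km(data), cuminc(data))  # 1-KM exceeds the cumulative incidence
```

On this toy cohort 1-KM gives about 0.514 while the cumulative incidence is about 0.417, the same direction of overstatement the abstract reports.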

    A Per-Protocol Analysis Using Inverse-Probability-of-Censoring Weights in a Randomized Trial of Initial Protease Inhibitor Versus Nonnucleoside Reverse Transcriptase Inhibitor Regimens in Children

    Protocol adherence may influence measured treatment effectiveness in randomized controlled trials. Using data from a multicenter trial (Europe and the Americas, 2002-2009) of children with human immunodeficiency virus type 1 who had been randomized to receive initial protease inhibitor (PI) versus nonnucleoside reverse transcriptase inhibitor (NNRTI) antiretroviral therapy regimens, we generated time-to-event intention-to-treat (ITT) estimates of treatment effectiveness, applied inverse-probability-of-censoring weights to generate per-protocol efficacy estimates, and compared shifts from ITT to per-protocol estimates across and within treatment arms. In ITT analyses, 263 participants experienced 4-year treatment failure probabilities of 41.3% for PIs and 39.5% for NNRTIs (risk difference = 1.8% (95% confidence interval (CI): -10.1, 13.7); hazard ratio = 1.09 (95% CI: 0.74, 1.60)). In per-protocol analyses, failure probabilities were 35.6% for PIs and 29.2% for NNRTIs (risk difference = 6.4% (95% CI: -6.7, 19.4); hazard ratio = 1.30 (95% CI: 0.80, 2.12)). Within-arm shifts in failure probabilities from ITT to per-protocol analyses were 5.7% for PIs and 10.3% for NNRTIs. Protocol nonadherence was nondifferential across arms, suggesting that possibly better NNRTI efficacy may have been masked by differences in within-arm shifts deriving from differential regimen forgiveness, residual confounding, or chance. A per-protocol approach using inverse-probability-of-censoring weights facilitated evaluation of relationships among adherence, efficacy, and forgiveness applicable to pediatric oral antiretroviral regimens
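A hedged sketch of how inverse-probability-of-censoring weights are constructed: at each visit, participants who remain on protocol are up-weighted by the inverse of their cumulative modeled probability of having remained adherent, re-creating the population that would have been observed under full adherence. The probabilities below are invented; in the trial analysis they come from models of nonadherence:

```python
# Modeled probability of remaining on protocol through each of 4 visits
# for one still-adherent participant (made-up numbers for illustration).
p_uncens = [0.95, 0.90, 0.85, 0.80]

def ipc_weights(p_uncens):
    """Cumulative inverse-probability-of-censoring weights: weight at visit k
    is 1 over the product of the per-visit probabilities of remaining
    uncensored through visit k. Weights grow over follow-up because the
    remaining adherent subjects must also stand in for those censored."""
    weights, cum = [], 1.0
    for p in p_uncens:
        cum *= p
        weights.append(1.0 / cum)
    return weights

print(ipc_weights(p_uncens))  # monotonically increasing weights
```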

    Influenza vaccine effectiveness in patients on hemodialysis: An analysis of a natural experiment

    Background: Although the influenza vaccine is recommended for patients with end-stage renal disease, little is known about its effectiveness. Observational studies of vaccine effectiveness (VE) are challenging because vaccinated subjects may be healthier than unvaccinated subjects. Methods: Using US Renal Data System data, we estimated VE for influenza-like illness, influenza/pneumonia hospitalization, and mortality in adult patients undergoing hemodialysis, exploiting a natural experiment created by the year-to-year variation in the match of the influenza vaccine to the circulating virus. We compared vaccinated patients in matched years (1998, 1999, and 2001) with a mismatched year (1997) using Cox proportional hazards models. Ratios of hazard ratios contrasted the between-year change in hazard among vaccinated patients with the corresponding change among unvaccinated patients. We calculated VE as 1 − effect measure. Results: Vaccination rates were less than 50% each year. Conventional analysis comparing vaccinated with unvaccinated patients produced average VE estimates of 13%, 16%, and 30% for influenza-like illness, influenza/pneumonia hospitalization, and mortality, respectively. When restricted to the preinfluenza period, results were even stronger, indicating bias. The pooled ratio of hazard ratios comparing matched seasons with a placebo season resulted in a VE of 0% (95% CI, −3% to 2%) for influenza-like illness, 2% (−2% to 5%) for hospitalization, and 0% (−3% to 3%) for death. Conclusions: Relative to a mismatched year, we found little evidence of increased VE in subsequent well-matched years, suggesting that the current influenza vaccine strategy may have a smaller effect on morbidity and mortality in the end-stage renal disease population than previously thought. Alternative strategies (e.g., high-dose vaccine, adjuvanted vaccine, and multiple doses) should be investigated.
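The ratio-of-hazard-ratios logic reduces to simple arithmetic. The numbers below are illustrative, not the study's estimates:

```python
def ve_from_ratio_of_hrs(hr_vaccinated, hr_unvaccinated):
    """Ratio-of-hazard-ratios design: the matched-vs-mismatched-season
    hazard ratio in vaccinated patients is divided by the same contrast in
    unvaccinated patients, cancelling season effects shared by both groups
    (including healthy-vaccinee bias); VE = 1 - effect measure."""
    return 1.0 - hr_vaccinated / hr_unvaccinated

# Illustrative: if vaccinated patients' hazard in a well-matched season is
# 0.80 of their hazard in the mismatched season, but unvaccinated patients'
# hazard also fell to 0.82 of theirs, the vaccine-attributable part is small.
ve = ve_from_ratio_of_hrs(0.80, 0.82)
print(ve)  # roughly 0.024, i.e., about 2% VE
```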

    Reply to: comparative effectiveness medicines research cannot assess efficacy

    We appreciate the insightful comments from Drs. Dal-Ré and Carcas in their letter to the editor regarding our paper “Publication of comparative effectiveness research (CER) has not increased in high-impact medical journals, 2004–2013.”

    Patterns of Rotavirus Vaccine Uptake and Use in Privately-Insured US Infants, 2006-2010

    Rotavirus vaccines are highly effective at preventing gastroenteritis in young children and are now universally recommended for infants in the US. We studied patterns of use of rotavirus vaccines among US infants with commercial insurance. We identified a large cohort of infants in the MarketScan Research Databases, 2006-2010. The analysis was restricted to infants residing in states without state-funded rotavirus vaccination programs. We computed summary statistics and used multivariable regression to assess the association of patient-, provider-, and ecologic-level variables with rotavirus vaccine receipt and series completion. Approximately 69% of 594,117 eligible infants received at least one dose of rotavirus vaccine from 2006-2010. Most infants received the rotavirus vaccines at the recommended ages, but more infants completed the series for the monovalent rotavirus vaccine than for the pentavalent rotavirus vaccine or a mix of the vaccines (87% versus 79% versus 73%, P<0.001). In multivariable analyses, the strongest predictors of rotavirus vaccine series initiation and completion were receipt of the diphtheria, tetanus, and acellular pertussis vaccine (initiation: RR = 7.91, 95% CI = 7.69-8.13; completion: RR = 1.26, 95% CI = 1.23-1.29), visiting a pediatrician versus a family physician (initiation: RR = 1.51, 95% CI = 1.49-1.52; completion: RR = 1.13, 95% CI = 1.11-1.14), and living in a large metropolitan versus a smaller metropolitan, urban, or rural area. We observed rapid diffusion of the rotavirus vaccine in routine practice; however, approximately one-fifth of infants did not receive at least one dose of vaccine as recently as 2010. Interventions to increase rotavirus vaccine coverage should consider targeting family physicians and encouraging completion of the vaccine series.

    Publication of comparative effectiveness research has not increased in high-impact medical journals, 2004-2013

    Objective: To explore the impact of increasing interest and investment in patient-centered research, this study sought to describe patterns of comparative effectiveness research (CER) and patient-reported outcomes (PROs) in pharmacologic intervention studies published in widely read medical journals from 2004-2013. Design and Setting: We identified 2335 articles published in five widely read medical journals from 2004-2013 with ≥1 intervention meeting the US Food and Drug Administration's definitions for a drug, biologic, or vaccine. Six trained reviewers extracted characteristics from a 20% random sample of articles (468 studies). We calculated the proportion of studies with CER and PROs. Trends were summarized using locally weighted means and 95% confidence intervals. Results: Of the 468 sampled studies, 30% used CER designs and 33% assessed PROs. The proportion of studies using CER designs did not meaningfully increase over the study period. However, we observed an increase in the use of PROs. Conclusions: Among pharmacologic intervention studies published in widely read medical journals from 2004-2013, we identified no increase in CER. Randomized, placebo-controlled trials continue to be the dominant study design for assessing pharmacologic interventions. Increasing trends in PRO use may indicate greater acceptance of these outcomes as evidence for clinical benefit.