    Clinical trial evidence supporting US Food and Drug Administration approval of novel cancer therapies between 2000 and 2016

    Importance: Clinical trial evidence used to support drug approval is typically the only information on benefits and harms that patients and clinicians can use for decision-making when novel cancer therapies become available. Various evaluations have raised concern about the uncertainty surrounding these data, and a systematic investigation of the available information on treatment outcomes for cancer drugs approved by the US Food and Drug Administration (FDA) is warranted. Objective: To describe the clinical trial data available on treatment outcomes at the time of FDA approval of all novel cancer drugs approved for the first time between 2000 and 2016. Design, Setting, and Participants: This comparative effectiveness study analyzed randomized clinical trials and single-arm clinical trials of novel drugs approved for the first time to treat any type of cancer. Approval packages were obtained from Drugs@FDA, a publicly available database containing information on drug and biologic products approved for human use in the US. Data from January 2000 to December 2016 were included in this study. Main Outcomes and Measures: Regulatory and clinical trial characteristics were described. For randomized clinical trials, summary treatment outcomes for overall survival, progression-free survival, and tumor response across all therapies were calculated, and median absolute survival increases were estimated. Tumor types and regulatory characteristics were assessed separately. Results: Between 2000 and 2016, 92 novel cancer drugs were approved by the FDA for 100 indications based on data from 127 clinical trials. The 127 clinical trials included a median of 191 participants (interquartile range [IQR], 106-448 participants). Overall, 65 clinical trials (51.2%) were randomized, and 95 clinical trials (74.8%) were open label. Of 100 indications, 44 indications underwent accelerated approval, 42 indications were for hematological cancers, and 58 indications were for solid tumors. Novel drugs had mean hazard ratios of 0.77 (95% CI, 0.73-0.81; I² = 46%) for overall survival and 0.52 (95% CI, 0.47-0.57; I² = 88%) for progression-free survival. The median tumor response, expressed as relative risk, was 2.37 (95% CI, 2.00-2.80; I² = 91%). The median absolute survival benefit was 2.40 months (IQR, 1.25-3.89 months). Conclusions and Relevance: In this study, data available at the time of FDA drug approval indicated that novel cancer therapies were associated with substantial tumor responses but prolonged median overall survival by only 2.40 months. Approval data from 17 years of clinical trials suggested that patients and clinicians typically had limited information available regarding the benefits of novel cancer treatments at market entry.
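
    The pooled hazard ratios, I² values, and median absolute survival benefit above are summary meta-analytic quantities. As a rough illustration of how such a pooled estimate and I² can be computed from trial-level results (a minimal sketch with invented hazard ratios, not the study's actual data or analysis code), the following Python example uses the DerSimonian-Laird random-effects estimator:

```python
import math

# Hypothetical trial-level hazard ratios with 95% CIs -- NOT data from the paper.
trials = [
    (0.70, 0.58, 0.85),
    (0.82, 0.69, 0.97),
    (0.75, 0.60, 0.94),
    (0.88, 0.74, 1.05),
]

# Work on the log scale; standard errors are recovered from the CI widths,
# assuming Wald-type intervals.
log_hr = [math.log(hr) for hr, lo, hi in trials]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in trials]
w = [1 / s ** 2 for s in se]  # inverse-variance (fixed-effect) weights

fixed = sum(wi * yi for wi, yi in zip(w, log_hr)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_hr))  # Cochran's Q
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 as a percentage

# DerSimonian-Laird between-trial variance tau^2, then random-effects pooling.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
w_re = [1 / (s ** 2 + tau2) for s in se]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_hr)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))

print(f"pooled HR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.2f}), I^2 = {i2:.0f}%")
```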

    Marginal structural models and other analyses allow multiple estimates of treatment effects in randomized clinical trials: meta-epidemiological analysis

    Objective: To determine how marginal structural models (MSMs), which are increasingly used to estimate causal effects, are used in randomized clinical trials (RCTs) and to compare their results with those from intention-to-treat (ITT) or other analyses. Design and Setting: We searched PubMed, Scopus, citations of key references, and ClinicalTrials.gov. Eligible RCTs reported clinical effects based on MSMs and at least one other analysis. Results: We included 12 RCTs reporting 138 analyses for 24 clinical questions. In 19/24 (79%), MSM-based and other effect estimates were all in the same direction; 22/22 had overlapping 95% CIs; and in 19/22 (86%), the MSM effect estimate lay within the 95% CIs of all other effects (in two cases no CIs were reported). For the same clinical question, the largest effect estimate from any analysis was 1.19-fold (median; IQR 1.13-1.34) larger than the smallest. All MSM and ITT effect estimates were in the same direction and had overlapping 95% CIs. In 71% (12/17), they also agreed on the presence of statistical significance. MSM-based effect estimates deviated more from the null than those based on ITT (P = 0.18). The effect estimates of both approaches differed 1.12-fold (median; IQR 1.02-1.22). Conclusions: MSMs provided effect estimates largely similar to those from other available analyses. Nevertheless, some of the differences in effect estimates or statistical significance may become important in clinical decision-making, and the multiple estimates require careful attention to possible selective reporting bias.
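
    The 1.19-fold figure summarizes, for each clinical question, how much the largest effect estimate produced by any analysis exceeded the smallest, with the median and IQR then taken across questions. A minimal sketch of that calculation (the effect estimates below are invented, purely to show the arithmetic):

```python
from statistics import median, quantiles

# Effect estimates (e.g., hazard ratios) from different analyses of the same
# clinical question (ITT, per-protocol, MSM, ...). Values are hypothetical.
estimates_per_question = {
    "question 1": [0.80, 0.84, 0.91],
    "question 2": [1.10, 1.25, 1.32],
    "question 3": [0.55, 0.60],
}

# Largest estimate divided by smallest estimate, per clinical question.
folds = [max(ests) / min(ests) for ests in estimates_per_question.values()]

q1, _, q3 = quantiles(folds, n=4)  # quartiles across clinical questions
print(f"median fold-difference {median(folds):.2f} (IQR {q1:.2f}-{q3:.2f})")
```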

    Current use and costs of electronic health records for clinical trial research: a descriptive study

    Background: Electronic health records (EHRs) may support randomized controlled trials (RCTs). We aimed to describe the current use and costs of EHRs in RCTs, with a focus on recruitment and outcome assessment. Methods: This descriptive study was based on a PubMed search of RCTs published since 2000 that evaluated any medical intervention with the use of EHRs. Cost information was obtained from RCT investigators who used EHR infrastructures for recruitment or outcome measurement but did not explore EHR technology itself. Results: We identified 189 RCTs, most of which (153 [81.0%]) were carried out in North America and were published recently (median year 2012 [interquartile range 2009–2014]). Seventeen RCTs (9.0%) involving a median of 732 (interquartile range 73–2513) patients explored interventions not related to EHRs, including quality improvement, screening programs, and collaborative care and disease management interventions. In these trials, EHRs were used for recruitment (14 [82%]) and outcome measurement (15 [88%]). Overall, in most of the trials (158 [83.6%]), the outcome (including many of the most patient-relevant clinical outcomes, from unscheduled hospital admission to death) was measured with the use of EHRs. The per-patient cost in the 17 EHR-supported trials varied from US$44 to US$2000, and total RCT costs from US$67 750 to US$5 026 000. In the remaining 172 RCTs (91.0%), EHRs were used as a modality of intervention. Interpretation: Randomized controlled trials are frequently and increasingly conducted with the use of EHRs, but mainly as part of the intervention. In some trials, EHRs were used successfully to support recruitment and outcome assessment. Costs may be reduced once the data infrastructure is established.

    Nonrandomized studies using causal-modeling may give different answers than RCTs: a meta-epidemiological study

    To evaluate how estimated treatment effects agree between nonrandomized studies using causal modeling with marginal structural models (MSM-studies) and randomized trials (RCTs). Meta-epidemiological study. MSM-studies providing effect estimates on any healthcare outcome of any treatment were eligible. We systematically sought RCTs on the same clinical question and compared the direction of treatment effects, effect sizes, and confidence intervals. The main analysis included 19 MSM-studies (1,039,570 patients) and 141 RCTs (120,669 patients). MSM-studies indicated effect estimates in the opposite direction from RCTs for eight clinical questions (42%), and their 95% CI (confidence interval) did not include the RCT estimate in nine clinical questions (47%). The effect estimates deviated 1.58-fold between the study designs (median absolute deviation OR [odds ratio] 1.58; IQR [interquartile range] 1.37 to 2.16). Overall, we found no systematic disagreement regarding benefit or harm, but confidence intervals were wide (summary ratio of odds ratios [sROR] 1.04; 95% CI 0.88 to 1.23). The subset of MSM-studies focusing on healthcare decision-making tended to overestimate experimental treatment benefits (sROR 1.44; 95% CI 0.99 to 2.09). Nonrandomized studies using causal modeling with MSM may give different answers than RCTs. Caution is still required when nonrandomized "real world" evidence is used for healthcare decisions.
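
    The ratio of odds ratios (ROR) compares, question by question, the MSM-study estimate with the corresponding pooled RCT estimate, and the 1.58-fold figure is the median deviation between the two designs ignoring direction. A minimal sketch of these per-question quantities (the odds ratios below are invented, not the study's data):

```python
import math
from statistics import median

# (OR from the MSM-study, OR pooled from RCTs) per clinical question -- hypothetical.
pairs = [(0.65, 0.80), (1.30, 1.05), (0.90, 0.95), (1.60, 0.98)]

# Ratio of odds ratios per question; ROR > 1 means the MSM estimate is larger.
rors = [msm / rct for msm, rct in pairs]

# Fold-deviation ignoring direction: exp(|log ROR|) equals max(ROR, 1/ROR).
abs_dev = [math.exp(abs(math.log(r))) for r in rors]

print("RORs:", [round(r, 2) for r in rors])
print(f"median absolute deviation: {median(abs_dev):.2f}-fold")
```

    Pooling the log RORs with their standard errors, as in the random-effects sketch earlier in this listing, would yield a summary ROR with a confidence interval.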

    Off-label treatments were not consistently better or worse than approved drug treatments in randomized trials

    Off-label drug use is highly prevalent but controversial, and it is often discouraged on the assumption that off-label use is associated with generally inferior medical effects. We searched PubMed, MEDLINE, PubMed Health, and the Cochrane Library up to May 2015 for systematic reviews including meta-analyses of randomized clinical trials (RCTs) comparing off-label and approved drugs head-to-head in any population and on any medical outcome. We combined the comparative effects in meta-analyses providing summary odds ratios (sOR) for each treatment comparison and outcome, and then calculated an overall summary of the sOR across all comparisons (ssOR). We included 25 treatment comparisons with 153 RCTs and 24,592 patients. In six of 25 comparisons (24%), off-label drugs were significantly superior (five of 25) or inferior (one of 25) to approved treatments. There was substantial statistical heterogeneity across comparisons (I² = 43%). Overall, off-label drugs were more favorable than approved treatments (ssOR 0.72; 95% CI = 0.54-0.95). Analyses of patient-relevant outcomes were similar (statistically significant differences in 24% [six of 25]; ssOR 0.74; 95% CI = 0.56-0.98; I² = 60%). Analyses of primary outcomes of the systematic reviews (n = 22 comparisons) indicated less heterogeneity and no statistically significant difference overall (ssOR 0.85; 95% CI = 0.67-1.06; I² = 0%). Approval status does not reliably indicate which drugs are more favorable in situations with clinical trial evidence comparing off-label with approved use. Drug effectiveness assessments that do not consider off-label use may provide incomplete information. To ensure that patients receive the best available care, funding, policy, reimbursement, and treatment decisions should be evidence based, considering the entire spectrum of available therapeutic choices.
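
    The ssOR is an overall pooling of the per-comparison summary odds ratios on the log scale. A minimal fixed-effect sketch of that second pooling step (the sORs and CIs below are invented, and the review's own pooling model may differ):

```python
import math

# Per-comparison summary odds ratios with 95% CIs -- hypothetical values.
sors = [
    (0.60, 0.40, 0.90),
    (0.85, 0.65, 1.11),
    (1.10, 0.80, 1.51),
]

# Inverse-variance pooling on the log scale (fixed effect, for brevity).
log_or = [math.log(o) for o, lo, hi in sors]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in sors]
w = [1 / s ** 2 for s in se]

pooled = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
se_pooled = math.sqrt(1 / sum(w))

print(f"ssOR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.2f})")
```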

    Interpretation of epidemiologic studies very often lacked adequate consideration of confounding

    Confounding bias is among the most pervasive threats to the validity of observational epidemiologic research. We assessed whether authors of observational epidemiologic studies considered confounding bias when interpreting their findings. We randomly selected 120 cohort or case-control studies published in 2011 and 2012 in the general medical, epidemiologic, and specialty journals with the highest impact factors. We used Web of Science to assess citation metrics through January 2017. Sixty-eight studies (56.7%; 95% confidence interval: 47.8-65.5%) mentioned "confounding" in the Abstract or Discussion sections, another 20 (16.7%; 10.0-23.3%) alluded to it, and there was no mention or allusion at all in 32 studies (26.7%; 18.8-34.6%). Authors often acknowledged that there was no adjustment for specific confounders (34 studies; 28.3%) or deemed it possible or likely that confounding affected their main findings (29 studies; 24.2%). However, only two studies (1.7%; 0-4.0%) specifically used the words "caution" or "cautious" in the interpretation for confounding-related reasons, and ultimately only four studies (3.3%; 0.1-6.5%) mentioned limitations related to confounding or any other bias in their Conclusions. Studies mentioning that the findings were possibly or likely affected by confounding were cited more frequently than studies stating that the findings were unlikely to be affected (median 6.3 vs. 4.0 citations per year; P = 0.04). Many observational studies lack a satisfactory discussion of confounding bias. Even when confounding bias is mentioned, authors are typically confident that it is largely irrelevant to their findings, and they rarely call for cautious interpretation. More careful acknowledgment of the possible impact of confounding was not associated with lower citation impact.
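
    The percentages with 95% confidence intervals reported above (e.g., 68/120 = 56.7%, 95% CI 47.8-65.5%) appear consistent with simple Wald (normal-approximation) intervals for a proportion; a minimal sketch of that calculation:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# 68 of 120 studies mentioned "confounding" in the Abstract or Discussion.
p, lo, hi = wald_ci(68, 120)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # -> 56.7% (95% CI 47.8%-65.5%)
```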