
    Estimating the sample size of sham-controlled randomized controlled trials using existing evidence [version 2; peer review: 2 approved].

    Background: In randomized controlled trials (RCTs), the power is often 'reverse engineered' based on the number of participants that can realistically be achieved. An attractive alternative is planning a new trial conditional on the available evidence; a design of particular interest in RCTs that use a sham control arm (sham-RCTs). Methods: We explore the design of sham-RCTs, the role of sequential meta-analysis and conditional planning in a systematic review of renal sympathetic denervation for patients with arterial hypertension. The main efficacy endpoint was mean change in 24-hour systolic blood pressure. We performed sequential meta-analysis to identify the time point at which the null hypothesis would be rejected in a prospective scenario. Evidence-based conditional sample size calculations were performed based on fixed-effect meta-analysis. Results: In total, six sham-RCTs (981 participants) were identified. The first RCT was considerably larger (535 participants) than those subsequently published (median sample size of 80). All trial sample sizes were calculated assuming an unrealistically large intervention effect, which resulted in low power when each study was considered as a stand-alone experiment. Sequential meta-analysis provided firm evidence against the null hypothesis with the synthesis of the first four trials (755 patients, cumulative mean difference -2.75 (95% CI -4.93 to -0.58) favoring the active intervention). Conditional planning resulted in much larger sample sizes than those in the original trials, owing to the overoptimistic effects assumed by the investigators of the individual trials and, potentially, a time-effect association. Conclusions: Sequential meta-analysis of sham-RCTs can reach conclusive findings earlier and hence avoid exposing patients to sham-related risks. Conditional planning of new sham-RCTs poses important challenges: because many surgical/minimally invasive procedures improve over time, the intervention effect is expected to increase in new studies, which violates the underlying assumptions. Unless this is accounted for, conditional planning will not improve the design of sham-RCTs.
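
    The cumulative-synthesis idea in this abstract can be illustrated with a minimal sketch of an inverse-variance fixed-effect meta-analysis updated trial by trial. The trial data below are hypothetical, not the renal-denervation data from the review, and the plain 1.96 threshold is a simplification: proper sequential meta-analysis adjusts the stopping boundaries for repeated testing.

```python
import math

# Hypothetical trial results: (mean difference in 24h systolic BP, standard error).
# Illustrative numbers only, not the data analysed in the paper.
trials = [(-2.0, 1.5), (-3.5, 2.0), (-4.0, 1.8), (-2.5, 1.2)]

def cumulative_meta(trials, z_crit=1.96):
    """Fixed-effect (inverse-variance) cumulative meta-analysis.

    After each new trial, returns the pooled mean difference, its
    standard error, and whether the (unadjusted) null is rejected."""
    results = []
    for k in range(1, len(trials) + 1):
        weights = [1 / se**2 for _, se in trials[:k]]
        pooled = sum(w * md for (md, _), w in zip(trials[:k], weights)) / sum(weights)
        se_pooled = math.sqrt(1 / sum(weights))
        z = pooled / se_pooled
        results.append((k, pooled, se_pooled, abs(z) > z_crit))
    return results

for k, md, se, rejected in cumulative_meta(trials):
    print(f"after {k} trial(s): MD={md:.2f} (SE {se:.2f}), reject H0: {rejected}")
```

    With these made-up trials the pooled estimate crosses the threshold after the second update, which is the kind of early-stopping point the sequential analysis in the paper identifies (there, after four trials).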

    CINeMA: Software for semi-automated assessment of the Confidence In the results of Network Meta-Analysis

    Network meta‐analysis (NMA) compares several interventions that are linked in a network of comparative studies and estimates the relative treatment effects between all treatments, using both direct and indirect evidence. NMA is increasingly used for decision making in health care; however, a user‐friendly system to evaluate the confidence that can be placed in the results of NMA is currently lacking. This paper is a tutorial describing the Confidence In Network Meta‐Analysis (CINeMA) web application, which is based on the framework developed by Salanti et al (2014, PLOS One, 9, e99682) and refined by Nikolakopoulou et al (2019, bioRxiv). Six domains that affect the level of confidence in the NMA results are considered: (a) within‐study bias, (b) reporting bias, (c) indirectness, (d) imprecision, (e) heterogeneity, and (f) incoherence. CINeMA is freely available and open‐source, and no login is required. In the configuration step users upload their data, produce network plots and define the analysis and effect measure. The dataset should include assessments of study‐level risk of bias and judgments on indirectness. CINeMA calls the netmeta routine in R to estimate relative effects and heterogeneity. Users are then guided through a systematic evaluation of the six domains. In this way, reviewers assess each relative treatment effect from NMA as giving rise to "no concerns," "some concerns," or "major concerns" in each of the six domains, which are graphically summarized on the report page for all effect estimates. Finally, judgments across the domains are summarized into a single confidence rating ("high," "moderate," "low," or "very low"). In conclusion, the user‐friendly web‐based CINeMA platform provides a transparent framework to evaluate evidence from systematic reviews with multiple interventions.

    The statistical importance of a study for a network meta-analysis estimate.

    BACKGROUND In pairwise meta-analysis, the contribution of each study to the pooled estimate is given by its weight, which is based on the inverse variance of the estimate from that study. For network meta-analysis (NMA), the contribution of direct (and indirect) evidence is easily obtained from the diagonal elements of a hat matrix. It is, however, not fully clear how to generalize this to the percentage contribution of each study to an NMA estimate. METHODS We define the importance of each study for an NMA estimate by the reduction of the estimate's variance when adding the given study to the others. An equivalent interpretation is the relative loss in precision when the study is left out. Importances are values between 0 and 1. An importance of 1 means that the study is an essential link of the pathway in the network connecting one of the treatments with another. RESULTS Importances can be defined for two-stage and one-stage NMA. These numbers in general do not add to one and thus cannot be interpreted as 'percentage contributions'. After briefly discussing other available approaches, we question whether it is possible to obtain unique percentage contributions for NMA. CONCLUSIONS Importances generalize the concept of weights in pairwise meta-analysis in a natural way. Moreover, they are uniquely defined, easily calculated, and have an intuitive interpretation. We give some real examples for illustration.
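
    The leave-one-out definition above is easy to sketch for the pairwise case, where precision is simply the sum of inverse variances. The variances below are hypothetical; in NMA the precisions would instead come from the hat-matrix-based variance of the network estimate, and the importances would generally not sum to 1.

```python
def pooled_precision(variances):
    """Precision (inverse variance) of the fixed-effect pooled estimate."""
    return sum(1.0 / v for v in variances)

def importance(variances, i):
    """Relative loss in precision of the pooled estimate when study i is left out."""
    full = pooled_precision(variances)
    without = pooled_precision(variances[:i] + variances[i + 1:])
    return (full - without) / full

variances = [1.0, 2.0, 4.0]  # hypothetical within-study variances
imps = [importance(variances, i) for i in range(len(variances))]
print(imps)
```

    In this pairwise setting the importances reduce to the familiar normalized inverse-variance weights and sum to 1, which is the sense in which the paper says importances generalize pairwise weights.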

    Introducing the Treatment Hierarchy Question in Network Meta-Analysis

    Comparative effectiveness research using network meta-analysis can present a hierarchy of competing treatments, from the most to the least preferable option. However, in published reviews, the research question associated with the hierarchy of multiple interventions is typically not clearly defined. Here we introduce the novel notion of a treatment hierarchy question that describes the criterion for choosing a specific treatment over one or more competing alternatives. For example, stakeholders might ask which treatment is most likely to improve mean survival by at least 2 years, or which treatment is associated with the longest mean survival. We discuss the most commonly used ranking metrics (quantities that compare the estimated treatment-specific effects), how the ranking metrics produce a treatment hierarchy, and the type of treatment hierarchy question that each ranking metric can answer. We show that the ranking metrics encompass the uncertainty in the estimation of the treatment effects in different ways, which results in different treatment hierarchies. When using network meta-analyses that aim to rank treatments, investigators should state the treatment hierarchy question they aim to address and employ the appropriate ranking metric to answer it. Following this new proposal will avoid some controversies that have arisen in comparative effectiveness research.
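
    That different ranking metrics handle uncertainty differently, and can therefore disagree, is easy to demonstrate by simulation. The effect summaries below are invented for illustration: treatment B has the best point estimate but a very imprecise one, so the probability-of-being-best metric and the mean-rank metric answer different hierarchy questions and crown different winners.

```python
import random

random.seed(1)
# Hypothetical effect summaries (higher = better): mean and SD of each estimate.
treatments = {"A": (1.0, 0.05), "B": (1.1, 2.0), "C": (0.9, 0.05)}

n_sims = 100_000
best_count = {t: 0 for t in treatments}
rank_sum = {t: 0 for t in treatments}

for _ in range(n_sims):
    draws = {t: random.gauss(mu, sd) for t, (mu, sd) in treatments.items()}
    ordered = sorted(draws, key=draws.get, reverse=True)  # best first
    best_count[ordered[0]] += 1
    for rank, t in enumerate(ordered, start=1):
        rank_sum[t] += rank

p_best = {t: c / n_sims for t, c in best_count.items()}
mean_rank = {t: s / n_sims for t, s in rank_sum.items()}
print(p_best, mean_rank)
```

    Here the imprecise treatment B has the highest probability of being best, while the precisely estimated A has the better (lower) mean rank: two defensible hierarchies answering two different treatment hierarchy questions.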

    netmeta: An R Package for Network Meta-Analysis Using Frequentist Methods

    Network meta-analysis compares different interventions for the same condition, by combining direct and indirect evidence derived from all eligible studies. Network meta-analysis is increasingly used by applied scientists and is a major research topic for methodologists. This article describes the R package netmeta, which adopts frequentist methods to fit network meta-analysis models. We provide a roadmap to perform network meta-analysis, along with an overview of the main functions of the package. We present three worked examples considering different types of outcomes and different data formats to facilitate researchers aiming to conduct network meta-analysis with netmeta.

    Planning a future randomized clinical trial based on a network of relevant past trials

    Background The important role of network meta-analysis of randomized clinical trials in health technology assessment and guideline development is increasingly recognized. This approach has the potential to obtain conclusive results earlier than with new standalone trials or conventional, pairwise meta-analyses. Methods Network meta-analyses can also be used to plan future trials. We introduce a four-step framework that aims to identify the optimal design for a new trial that will update the existing evidence while minimizing the required sample size. The new trial designed within this framework does not need to include all competing interventions and comparisons of interest, and can contribute direct and indirect evidence to the updated network meta-analysis. We present the method by virtually planning a new trial to compare biologics in rheumatoid arthritis and a new trial to compare two drugs for relapsing-remitting multiple sclerosis. Results A trial design based on updating the evidence from a network meta-analysis of relevant previous trials may require a considerably smaller sample size to reach the same conclusion compared with a trial designed and analyzed in isolation. Challenges of the approach include the complexity of the methodology and the need for a coherent network meta-analysis of previous trials with little heterogeneity. Conclusions When used judiciously, conditional trial design could significantly reduce the required resources for a new study and prevent experimentation with an unnecessarily large number of participants.
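
    The core arithmetic behind conditional planning can be sketched in a deliberately simplified pairwise, fixed-effect form: the new trial only needs to supply the precision that the existing evidence does not already provide. The numbers and the function below are illustrative assumptions (common outcome SD, no heterogeneity, normal approximation), not the paper's four-step framework, where the prior precision would come from a network meta-analysis.

```python
import math

def conditional_sample_size(delta, sigma, prior_precision=0.0,
                            z_alpha=1.96, z_beta=0.84):
    """Per-arm sample size for a new two-arm trial so that the *updated*
    fixed-effect meta-analysis has ~80% power to detect a mean difference
    of `delta`, given the precision of the existing evidence (sketch)."""
    needed_precision = ((z_alpha + z_beta) / delta) ** 2 - prior_precision
    if needed_precision <= 0:
        return 0  # the existing evidence alone already suffices
    # a two-arm trial with n per arm contributes precision n / (2 * sigma^2)
    return math.ceil(2 * sigma**2 * needed_precision)

standalone = conditional_sample_size(delta=5, sigma=15)
conditional = conditional_sample_size(delta=5, sigma=15, prior_precision=0.2)
print(standalone, conditional)
```

    With these made-up inputs the conditionally planned trial needs far fewer participants per arm than the standalone design, which is the resource saving the abstract describes.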

    Extensions of the probabilistic ranking metrics of competing treatments in network meta-analysis to reflect clinically important relative differences on many outcomes.

    One of the key features of network meta-analysis is the ranking of interventions according to outcomes of interest. Ranking metrics are prone to misinterpretation because of two limitations associated with the current ranking methods. First, differences in relative treatment effects might not be clinically important, and this is not reflected in the ranking metrics. Second, there are no established methods to include several health outcomes in the ranking assessments. To address these two issues, we extended the P-score method to allow for multiple outcomes and modified it to measure the mean extent of certainty that a treatment is better than the competing treatments by a certain amount, for example, the minimum clinically important difference. We suggest presenting the tradeoff between beneficial and harmful outcomes, allowing stakeholders to consider how much adverse effect they are willing to tolerate for specific gains in efficacy. We used a published network of 212 trials comparing 15 antipsychotics and placebo using a random effects network meta-analysis model, focusing on three outcomes: reduction in symptoms of schizophrenia on a standardized scale, all-cause discontinuation, and weight gain.
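
    The single-outcome version of the shifted P-score can be sketched directly from normal theory: for each competitor, compute the probability that the treatment is better by at least the minimum clinically important difference, then average. This is an illustrative reconstruction under normal approximations, not the authors' code, and the effect estimates and standard errors below are hypothetical.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def pscore(effects, ses, mcid=0.0):
    """P-score extended to require superiority by at least `mcid`.

    effects[t]  : estimated effect of treatment t on a common scale
                  (higher = better); ses[(a, b)] : SE of the difference a - b."""
    names = list(effects)
    scores = {}
    for a in names:
        probs = [phi((effects[a] - effects[b] - mcid) / ses[(a, b)])
                 for b in names if b != a]
        scores[a] = sum(probs) / len(probs)
    return scores

effects = {"drugA": 3.0, "drugB": 1.0, "placebo": 0.0}   # hypothetical
ses = {(a, b): 1.0 for a in effects for b in effects if a != b}
plain = pscore(effects, ses)             # usual P-score (mcid = 0)
clinical = pscore(effects, ses, mcid=2)  # require a benefit of >= 2 points
```

    Requiring a clinically important margin deflates every treatment's score; the ranking reflects certainty of a *meaningful* benefit rather than of any benefit at all.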

    CINeMA: An approach for assessing confidence in the results of a network meta-analysis.

    BACKGROUND The evaluation of the credibility of results from a meta-analysis has become an important part of the evidence synthesis process. We present a methodological framework to evaluate confidence in the results from network meta-analyses, Confidence in Network Meta-Analysis (CINeMA), when multiple interventions are compared. METHODOLOGY CINeMA considers 6 domains: (i) within-study bias, (ii) reporting bias, (iii) indirectness, (iv) imprecision, (v) heterogeneity, and (vi) incoherence. Key to judgments about within-study bias and indirectness is the percentage contribution matrix, which shows how much information each study contributes to the results from network meta-analysis. The contribution matrix can easily be computed using a freely available web application. In evaluating imprecision, heterogeneity, and incoherence, we consider the impact of these components of variability in forming clinical decisions. CONCLUSIONS Via 3 examples, we show that CINeMA improves transparency and avoids the selective use of evidence when forming judgments, thus limiting subjectivity in the process. CINeMA is easy to apply even in large and complicated networks.

    An investigation of the impact of using different methods for network meta-analysis: a protocol for an empirical evaluation.

    BACKGROUND: Network meta-analysis, a method to synthesise evidence from multiple treatments, has increased in popularity in the past decade. Two broad approaches are available to synthesise data across networks, namely, arm- and contrast-synthesis models, with a range of models that can be fitted within each. There has been recent debate about the validity of the arm-synthesis models, but to date, there has been limited empirical evaluation comparing results using the methods applied to a large number of networks. We aim to address this gap through the re-analysis of a large cohort of published networks of interventions using a range of network meta-analysis methods. METHODS: We will include a subset of networks from a database of network meta-analyses of randomised trials that have been identified and curated from the published literature. The subset of networks will include those where the primary outcome is binary, the number of events and participants are reported for each direct comparison, and there is no evidence of inconsistency in the network. We will re-analyse the networks using three contrast-synthesis methods and two arm-synthesis methods. We will compare the estimated treatment effects, their standard errors, the treatment hierarchy based on the surface under the cumulative ranking (SUCRA) curve, the SUCRA value, and the between-trial heterogeneity variance across the network meta-analysis methods. We will investigate whether differences in the results are affected by network characteristics and baseline risk. DISCUSSION: The results of this study will inform whether, in practice, the choice of network meta-analysis method matters, and if it does, in what situations differences in the results between methods might arise. The results from this research might also inform future simulation studies.
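
    The SUCRA value mentioned in this protocol has a compact definition: it is the average of a treatment's cumulative rank probabilities over the first K-1 ranks, so 1 means certainly best and 0 certainly worst. A minimal sketch (the rank probabilities below are hypothetical inputs, which in practice would come from the fitted model):

```python
def sucra(rank_probs):
    """SUCRA from one treatment's rank probabilities.

    rank_probs[k] = P(treatment has rank k+1), where rank 1 is best.
    Returns the mean of the cumulative probabilities over ranks 1..K-1."""
    K = len(rank_probs)
    cum = 0.0
    total = 0.0
    for p in rank_probs[:-1]:
        cum += p          # P(rank <= k)
        total += cum
    return total / (K - 1)

print(sucra([0.6, 0.3, 0.1]))  # a treatment that is probably, not certainly, best
```

    Sanity checks: a treatment certainly ranked first gets SUCRA 1, certainly last gets 0, and complete ranking uncertainty gives 0.5.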

    Estimating the contribution of studies in network meta-analysis: paths, flows and streams [version 1; referees: 2 approved, 1 approved with reservations]

    In network meta-analysis, it is important to assess the influence of the limitations or other characteristics of individual studies on the estimates obtained from the network. The percentage contribution matrix, which shows how much each direct treatment effect contributes to each treatment effect estimate from network meta-analysis, is crucial in this context. We use ideas from graph theory to derive the percentage that is contributed by each direct treatment effect. We start with the 'projection' matrix in a two-step network meta-analysis model, called the H matrix, which is analogous to the hat matrix in a linear regression model. We develop a method to translate H entries to percentage contributions based on the observation that the rows of H can be interpreted as flow networks, where a stream is defined as the composition of a path and its associated flow. We present an algorithm that identifies the flow of evidence in each path and decomposes it into direct comparisons. To illustrate the methodology, we use two published networks of interventions. The first compares no treatment, quinolone antibiotics, non-quinolone antibiotics and antiseptics for underlying eardrum perforations, and the second compares 14 antimanic drugs. We believe that this approach is a useful and novel addition to network meta-analysis methodology, which allows the consistent derivation of the percentage contributions of direct evidence from individual studies to network treatment effects.
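
    The path-decomposition idea can be sketched on a toy network. A row of H is viewed as a flow network from one treatment to another; streams are peeled off path by path, and each stream's flow is split over the direct comparisons on its path. This is an illustrative shortest-path-first sketch under those assumptions (the published algorithm's tie-breaking and ordering details may differ), and the triangle flows below are invented, not taken from the paper's examples.

```python
from collections import deque

def percentage_contributions(flows, source, sink):
    """Decompose evidence flows (one row of the hat matrix, as flows on
    directed edges) into streams and spread each stream's flow equally
    over the direct comparisons on its path. Flows out of `source`
    are assumed to sum to 1, so the contributions sum to 1 as well."""
    flows = dict(flows)                      # {(u, v): remaining flow}
    contrib = {e: 0.0 for e in flows}
    while True:
        # breadth-first search for a shortest path with positive remaining flow
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for (a, b), f in flows.items():
                if a == u and f > 1e-12 and b not in parent:
                    parent[b] = a
                    queue.append(b)
        if sink not in parent:
            break                            # nothing left to decompose
        path, node = [], sink
        while parent[node] is not None:
            path.append((parent[node], node))
            node = parent[node]
        phi = min(flows[e] for e in path)    # the stream's flow
        for e in path:
            flows[e] -= phi
            contrib[e] += phi / len(path)    # split the stream over its edges
    return contrib

# Hypothetical triangle: direct A-B evidence plus an indirect route via C.
contrib = percentage_contributions(
    {("A", "B"): 0.7, ("A", "C"): 0.3, ("C", "B"): 0.3}, "A", "B")
print(contrib)
```

    Here the direct comparison keeps its full 70% while the indirect stream's 30% is split equally between the two comparisons on its path, and the contributions total 100%.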