Value of Information for Clinical Trial Design: The Importance of Considering All Relevant Comparators
Value of Information (VOI) analyses calculate the economic value that could be generated by obtaining further information to reduce uncertainty in a health economic decision model. VOI has been suggested as a tool for research prioritisation and trial design, as it can highlight economically valuable avenues for future research. Recent methodological advances have made it increasingly feasible to use VOI in practice when designing research; however, there are critical differences between the VOI approach and the standard methods used to design research studies such as clinical trials. We aimed to highlight key differences between the research design approach based on VOI and standard clinical trial design methods, in particular the importance of considering the full decision context. We present two hypothetical examples to demonstrate that VOI methods are only accurate when (1) all feasible comparators are included in the decision model when designing research, and (2) all comparators are retained in the decision model once the data have been collected and a final treatment recommendation is made. Omitting comparators from either the design or analysis phase of research when using VOI methods can lead to incorrect trial designs and/or treatment recommendations. Overall, we conclude that incorrectly specifying the health economic model by ignoring potential comparators can lead to misleading VOI results and potentially waste scarce research resources.
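The dependence of VOI results on the comparator set can be seen in a small simulation. The sketch below is a minimal illustration, not taken from the paper: it assumes hypothetical probabilistic-analysis samples of net monetary benefit for three comparators and computes the per-decision expected value of perfect information (EVPI), E[max_d NB_d] - max_d E[NB_d], with and without one of the comparators.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical probabilistic-analysis samples of net monetary benefit for
# three comparators; the numbers are illustrative assumptions only.
n = 10_000
nb = np.column_stack([
    rng.normal(20_000, 3_000, n),   # comparator A
    rng.normal(20_500, 3_000, n),   # comparator B
    rng.normal(20_400, 4_000, n),   # comparator C
])

def evpi(net_benefit):
    """Per-decision EVPI: E[max_d NB_d] - max_d E[NB_d]."""
    return net_benefit.max(axis=1).mean() - net_benefit.mean(axis=0).max()

print("EVPI, all three comparators:", round(evpi(nb)))
print("EVPI, comparator C omitted: ", round(evpi(nb[:, :2])))

Omitting comparator C changes both terms of the calculation, so the apparent value of further research, and hence the implied trial design, changes with it.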
Evidence Synthesis for Decision Making 6: Embedding Evidence Synthesis in Probabilistic Cost-effectiveness Analysis
When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including the correlation structure, through the cost-effectiveness model. This tutorial paper sets out four generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling, where the other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation with WinBUGS software is the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package, where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling, a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages, and software that provides a user-friendly interface for integrated software platforms, offering investigators a flexible way of examining alternative scenarios, are reviewed.
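As a rough sketch of the third approach only (frequentist estimation followed by forward Monte Carlo), and using entirely hypothetical estimates rather than anything reported in the paper, parameters can be sampled jointly from a multivariate normal distribution centred at the maximum likelihood estimates with the estimated variance-covariance matrix, so that any correlation between them is carried through to the cost-effectiveness calculation:

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical maximum likelihood estimates and variance-covariance matrix
# for (baseline log-odds, log-odds ratio); the values are assumptions.
mle = np.array([-1.0, -0.5])
vcov = np.array([[0.04, -0.01],
                 [-0.01, 0.09]])

# Forward Monte Carlo: joint sampling preserves the correlation structure.
draws = rng.multivariate_normal(mle, vcov, size=10_000)
p_standard = 1 / (1 + np.exp(-draws[:, 0]))
p_new = 1 / (1 + np.exp(-(draws[:, 0] + draws[:, 1])))

# Toy incremental net benefit at a willingness to pay of 20,000 per QALY,
# with an illustrative QALY gain per event avoided and an extra cost.
wtp, qaly_per_event, extra_cost = 20_000, 0.3, 1_500
inb = wtp * qaly_per_event * (p_standard - p_new) - extra_cost
print("Mean incremental net benefit:", inb.mean().round())
print("Probability cost-effective:", (inb > 0).mean().round(2))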
Evidence Synthesis for Decision Making 2: A Generalized Linear Modeling Framework for Pairwise and Network Meta-analysis of Randomized Controlled Trials
We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
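For readers unfamiliar with the core model, a hedged illustration of the binomial-logit case (written in LaTeX, in our own notation, which may differ from the paper's) is:

\[
r_{ik} \sim \mathrm{Binomial}(p_{ik}, n_{ik}), \qquad
\operatorname{logit}(p_{ik}) = \mu_i + \delta_{ik}\,\mathbb{1}[k \neq 1],
\]
\[
\delta_{ik} \sim \mathrm{N}\!\bigl(d_{t_{i1} t_{ik}},\, \sigma^2\bigr), \qquad
d_{t_{i1} t_{ik}} = d_{1 t_{ik}} - d_{1 t_{i1}},
\]

where \(\mu_i\) is the baseline log-odds in arm 1 of trial \(i\), \(\delta_{ik}\) is the trial-specific log-odds ratio of the treatment in arm \(k\) versus arm 1, and \(d_{1t}\) is the pooled effect of treatment \(t\) relative to the reference treatment. The fixed effects version sets \(\delta_{ik} = d_{t_{i1} t_{ik}}\), and other likelihoods and link functions replace the binomial and logit terms for other outcome types.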
Evidence Synthesis for Decision Making 5: The Baseline Natural History Model
Most cost-effectiveness analyses consist of a baseline model that represents the absolute natural history under a standard treatment from the comparator set, and a model for relative treatment effects. We review synthesis issues that arise in the construction of the baseline natural history model. We cover both the absolute response to treatment on the outcome measures on which comparative effectiveness is defined and the other elements of the natural history model, usually “downstream” of the shorter-term effects reported in trials. We recommend that the same framework be used to model the absolute effects of a “standard treatment” or placebo comparator as that used for the synthesis of relative treatment effects, and that the baseline model be constructed independently of the model for relative treatment effects, to ensure that the latter are not affected by assumptions made about the baseline. However, simultaneous modeling of baseline and treatment effects could have some advantages when evidence is very sparse or when other research or study designs give strong reasons for believing in a particular baseline model. The predictive distribution, rather than the fixed effect or random effects mean, should be used to represent the baseline, to reflect the observed variation in baseline rates. Joint modeling of multiple baseline outcomes based on data from trials or from combinations of trial and observational data is recommended where possible, as this is likely to make better use of available evidence, produce more robust results, and ensure that the model is internally coherent.
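The role of the predictive distribution can be sketched as follows (a hedged illustration in our own notation rather than the paper's formulation). With study-specific baselines, for example log-odds of the outcome under standard treatment, modelled as

\[
\mu_i \sim \mathrm{N}(m, \tau^2),
\]

summarising the baseline by the posterior of the mean \(m\) reflects only the uncertainty in the average, whereas the recommended predictive distribution for a new setting,

\[
\mu_{\mathrm{new}} \mid m, \tau \sim \mathrm{N}(m, \tau^2),
\]

also carries the observed between-study variation in baseline rates, with posterior uncertainty in \(m\) and \(\tau\) propagated, for example by sampling \(\mu_{\mathrm{new}}\) alongside the other model parameters.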
Equity in access to total joint replacement of the hip and knee in England: cross sectional study
Objective To explore geographical and sociodemographic factors associated with variation in equity in access to total hip and knee replacement surgery.
- …