13 research outputs found

    A threshold analysis assessed the credibility of conclusions from network meta-analysis

    Objective: To assess the reliability of treatment recommendations based on network meta-analysis (NMA).
    Study design: We consider evidence in an NMA to be potentially biased. Taking each pair-wise contrast in turn, we use a structured series of threshold analyses to ask: (a) “How large would the bias in this evidence base have to be before it changed our decision?” and (b) “If the decision changed, what is the new recommendation?” We illustrate the method via two NMAs in which a GRADE assessment for NMAs has been implemented: weight-loss and osteoporosis.
    Results: Four of the weight-loss NMA estimates were assessed as “low” and 6 as “moderate” quality by GRADE; for osteoporosis, 6 were “low”, 9 “moderate” and 1 “high”. The threshold analysis suggests plausible bias in 3 of 10 estimates in the weight-loss network could have changed the treatment recommendation. For osteoporosis, plausible bias in 6 of 16 estimates could change the recommendation. There was no relation between plausible bias changing a treatment recommendation and the original GRADE assessments.
    Conclusions: Reliability judgements on individual NMA contrasts do not help decision makers understand whether a treatment recommendation is reliable. Threshold analysis reveals whether the final recommendation is robust against plausible degrees of bias in the data.
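    The threshold question in (a) can be sketched numerically: perturb one treatment's pooled estimate until the recommended treatment changes. The treatments, effect values, and decision rule below are hypothetical, illustrating only the mechanics, not the paper's actual models.

```python
# Hypothetical NMA point estimates: effect of each treatment vs. a common
# reference, where lower values are better (assumed decision rule).
effects = {"A": 0.0, "B": -0.35, "C": -0.50}

def recommendation(effects):
    """Recommend the treatment with the best (lowest) effect estimate."""
    return min(effects, key=effects.get)

def bias_threshold(effects, treatment, step=0.01, max_bias=2.0):
    """Smallest additive bias in one treatment's estimate that changes
    the recommendation; returns (bias, new recommendation)."""
    base = recommendation(effects)
    for sign in (+1, -1):
        bias = 0.0
        while bias <= max_bias:
            adjusted = dict(effects)
            adjusted[treatment] = effects[treatment] + sign * bias
            if recommendation(adjusted) != base:
                return sign * bias, recommendation(adjusted)
            bias += step
    return None, base  # no bias up to max_bias changes the decision

threshold, new_rec = bias_threshold(effects, "C")
```

    Here a bias of about +0.15 in C's estimate is enough to switch the recommendation to B; in the paper's terms, if a bias of that size is plausible for the evidence informing that contrast, the recommendation is not robust.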

    The Physics of the B Factories


    Evidence Synthesis, Parameter Correlation, and Probabilistic Sensitivity Analysis

    Over the last decade or so, there have been many developments in methods to handle uncertainty in cost-effectiveness studies. In decision modelling, it is widely accepted that there needs to be an assessment of how sensitive the decision is to uncertainty in parameter values. The rationale for probabilistic sensitivity analysis (PSA) is primarily based on a consideration of the needs of decision makers in assessing the consequences of decision uncertainty. In this paper, we highlight some further compelling reasons for adopting probabilistic methods for decision modelling and sensitivity analysis, and specifically for adopting simulation from a Bayesian posterior distribution. Our reasoning is as follows. Firstly, cost-effectiveness analyses need to be based on all the available evidence, not a selected subset, and the uncertainties in the data need to be propagated through the model in order to provide a correct analysis of the uncertainties in the decision. In many, perhaps most, cases the evidence structure requires a statistical analysis that inevitably induces correlations between parameters. Deterministic sensitivity analysis requires that models are run with parameters fixed at extreme values, but where parameter correlation exists it is not possible to identify sets of parameter values that can be considered extreme in a meaningful sense. However, a correct probabilistic analysis can be readily achieved by Monte Carlo sampling from the joint posterior distribution of the parameters. In this paper, we review some evidence structures commonly occurring in decision models, where analyses that correctly reflect the uncertainty in the data induce correlations between parameters. Frequently, this is because the evidence base includes information on functions of several parameters. It follows that, if health technology assessments are to be based on a correct analysis of all available data, then probabilistic methods must be used both for sensitivity analysis and for estimation of expected costs and benefits.
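    The argument about correlated parameters can be illustrated with a minimal PSA sketch: draw from a joint posterior (here approximated by a bivariate normal with negative correlation) and propagate each draw through a toy net-benefit model. All distributions, costs, and the willingness-to-pay value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed joint posterior for (baseline log-odds, log-odds ratio); the
# negative correlation mimics what a joint evidence synthesis can induce.
mean = np.array([-1.0, -0.5])
cov = np.array([[0.04, -0.03],
                [-0.03, 0.09]])

draws = rng.multivariate_normal(mean, cov, size=10_000)
p_control = 1.0 / (1.0 + np.exp(-draws[:, 0]))
p_treatment = 1.0 / (1.0 + np.exp(-(draws[:, 0] + draws[:, 1])))

# Toy net-benefit model (all values hypothetical).
wtp = 20_000            # willingness to pay per QALY
cost_treatment = 300    # incremental cost of treatment
qaly_gain = 0.05 * (p_control - p_treatment)  # QALYs from events averted
inb = wtp * qaly_gain - cost_treatment        # incremental net benefit

prob_cost_effective = float((inb > 0).mean())
```

    Because each draw respects the joint distribution, an extreme value of one parameter is automatically paired with plausible values of the other, which is exactly what fixing parameters one at a time in a deterministic sensitivity analysis cannot do.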

    Estimating the expected value of partial perfect information: a review of methods

    Background: Value of information analysis provides a framework for the analysis of uncertainty within economic analysis by focussing on the value of obtaining further information to reduce uncertainty. The mathematical definition of the expected value of perfect information (EVPI) is fixed, though there are different methods in the literature for its estimation. In this paper these methods are explored and compared.
    Methods: Analysis was conducted using a disease model for Parkinson’s disease. Five methods for estimating partial EVPIs (EVPPIs) were used: a single Monte Carlo simulation (MCS) method, the unit normal loss integral (UNLI) method, a two-stage method using MCS, a two-stage method using MCS and quadrature, and a difference method requiring two MCS runs. EVPPI was estimated for each individual parameter in the model, as well as for three groups of parameters (transition probabilities, costs and utilities).
    Results: Using 5,000 replications, four of the methods returned similar results for EVPPIs. With 5 million replications, the results were near identical. However, the difference method repeatedly gave estimates substantially different from the other methods.
    Conclusions: The difference method is not rooted in the mathematical definition of EVPI and is clearly an inappropriate method for estimating EVPPI. The single MCS and UNLI methods were the least complex to use, but are restricted in their appropriateness. The two-stage MCS and quadrature-based methods are complex and time consuming. Thus, where appropriate, EVPPI should be estimated using either the single MCS or UNLI method. However, where neither of these is appropriate, either the two-stage MCS or the quadrature-based method should be used.
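    The two-stage (nested) MCS method for a partial EVPI can be sketched as follows: an outer loop samples the parameter of interest, an inner loop averages net benefit over the remaining parameters, and the value of deciding under current information is subtracted. The net-benefit function and parameter distributions below are invented, not the Parkinson's model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(theta, phi):
    """Toy net benefit of two strategies (strategy 0 = do nothing)."""
    nb0 = np.zeros_like(theta)
    nb1 = 1000.0 * theta - 400.0 * phi
    return np.stack([nb0, nb1], axis=-1)

# Two-stage MCS: outer loop over theta, inner expectation over phi.
n_outer, n_inner = 2_000, 2_000
theta_outer = rng.normal(0.5, 0.2, n_outer)

conditional_maxima = []
for th in theta_outer:
    phi_inner = rng.normal(1.0, 0.3, n_inner)
    nb = net_benefit(np.full(n_inner, th), phi_inner)
    conditional_maxima.append(nb.mean(axis=0).max())  # best arm given theta

# Baseline: value of the best strategy under current information.
theta_all = rng.normal(0.5, 0.2, 100_000)
phi_all = rng.normal(1.0, 0.3, 100_000)
baseline = net_benefit(theta_all, phi_all).mean(axis=0).max()

evppi_theta = float(np.mean(conditional_maxima) - baseline)
```

    The single MCS and UNLI shortcuts avoid this nesting but rely on stronger assumptions (roughly, linearity or normality of net benefit in the parameters of interest), which is the sense in which the abstract calls them restricted in their appropriateness.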

    Preventive strategies for group B streptococcal and other bacterial infections in early infancy: cost effectiveness and value of information analyses

    Objective: To determine the cost effectiveness of strategies for preventing neonatal infection with group B streptococci and other bacteria in the UK, and the value of further information from research.
    Design: Use of a decision model to compare the cost effectiveness of prenatal testing for group B streptococcal infection (by polymerase chain reaction or culture), prepartum antibiotic treatment (intravenous penicillin or oral erythromycin), and vaccination during pregnancy (not yet available) for serious bacterial infection in early infancy across 12 maternal risk groups. Model parameters were estimated using multi-parameter evidence synthesis to incorporate all relevant data inputs.
    Data sources: 32 systematic reviews were conducted: 14 integrated results from published studies, 24 involved analyses of primary datasets, and five included expert opinion.
    Main outcome measures: Healthcare costs per quality adjusted life year (QALY) gained.
    Results: Current best practice (to treat only high risk women without prior testing for infection) and universal testing by culture or polymerase chain reaction were not cost effective options. Immediate extension of current best practice to treat all women with preterm and high risk term deliveries without testing (11% treated) would result in substantial net benefits. Currently, addition of culture testing for low risk term women, while treating all preterm and high risk term women, would be the most cost effective option (21% treated). If available in the future, vaccination combined with treating all preterm and high risk term women and no testing for low risk women would probably be marginally more cost effective and would limit antibiotic exposure to 11% of women. The value of information is highest (£67m) if vaccination is included as an option.
    Conclusions: Extension of current best practice to treat all women with preterm and high risk term deliveries is readily achievable and would be beneficial. The choice between adding culture testing for low risk women or vaccination for all should be informed by further research. Trials to evaluate vaccine efficacy should be prioritised.

    Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information

    Background: Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models, each assuming that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research.
    Methods: We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials.
    Results: The standard errors of incremental net benefit using the structured models are reduced by up to eight- or ninefold compared with the unstructured models, and the expected loss attaching to decision uncertainty is reduced by factors of several hundred. Model averaging had a considerable influence on the EVPI.
    Conclusions: Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models.
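    The model-averaging step can be sketched by mixing simulated incremental net-benefit draws across models in proportion to their posterior weights, then computing EVPI on the averaged distribution. The weights and per-model distributions below are invented for illustration, not the asthma example's results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical posterior model weights (e.g., derived from fit statistics)
# and per-model draws of incremental net benefit for a two-arm decision.
weights = np.array([0.6, 0.4])
inb_model = [rng.normal(150.0, 60.0, n),   # model 1: effect on one transition
             rng.normal(-30.0, 90.0, n)]   # model 2: effect on all transitions

# Model-averaged draws: pick a model per iteration, then use its draw.
idx = rng.choice(2, size=n, p=weights)
inb_avg = np.where(idx == 0, inb_model[0], inb_model[1])

# EVPI on the averaged model: expected gain from resolving all uncertainty,
# including uncertainty about which model structure is correct.
nb = np.stack([np.zeros_like(inb_avg), inb_avg], axis=1)
evpi = float(nb.max(axis=1).mean() - nb.mean(axis=0).max())
```

    Because the averaged distribution is wider than either single model's, and the two models here disagree about the sign of the incremental net benefit, the resulting EVPI can differ substantially from what any one structural assumption would give, consistent with the finding that averaging had considerable influence on the EVPI.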