259 research outputs found
Extending Treatment Networks in Health Technology Assessment: How Far Should We Go?
Abstract. Background: Network meta-analysis may require substantially more resources than a standard systematic review. One frequently asked question is "How far should I extend the network, and which treatments should I include?" Objective: To explore the increase in precision from including additional evidence. Methods: We assessed the benefit of extending treatment networks in terms of the precision of effect estimates and examined how this depends on network structure and the relative strength of the additional evidence. Starting from a "star"-shaped network, we increased network complexity by adding evidence connecting treatments under five evidence scenarios. We also examined the impact of heterogeneity and of the absence of evidence facilitating a "first-order" indirect comparison. Results: In all scenarios, extending the network increased the precision of the A versus B treatment effect. Under a fixed-effect model, the increase in precision was modest when the existing direct A versus B evidence was already strong, and substantial when the direct evidence was weak. Under a random-effects model, the gain in precision was lower when heterogeneity was high. When evidence is available for all "first-order" indirect comparisons, including second-order evidence has limited benefit for the precision of the A versus B estimate; we interpret this as a "ceiling effect." Conclusions: Including additional evidence increases the precision of a "focal" treatment comparison of interest. Once the comparison of interest is connected to all others via "first-order" indirect evidence, there is no additional benefit in including higher-order comparisons. This conclusion is generalizable to any number of treatment comparisons, which would then all be considered "focal." The increase in precision is modest when the direct evidence is already strong, or when there is a high degree of heterogeneity.
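For intuition about why indirect evidence helps most when direct evidence is weak, here is a minimal Python sketch of fixed-effect inverse-variance pooling. The variances are illustrative assumptions, not values from the paper:

```python
# Fixed-effect pooling of a direct A-vs-B estimate with an independent
# indirect one (e.g. A-vs-C combined with C-vs-B). Variances are hypothetical.

def pooled_variance(var_direct: float, var_indirect: float) -> float:
    """Variance of the inverse-variance-weighted combined estimate."""
    return 1.0 / (1.0 / var_direct + 1.0 / var_indirect)

var_indirect = 0.04  # variance of the indirect path: Var(d_AC) + Var(d_CB)

# Strong direct evidence: only a modest gain in precision.
print(pooled_variance(0.01, var_indirect))  # 0.008  (20% variance reduction)

# Weak direct evidence: a substantial gain.
print(pooled_variance(0.25, var_indirect))  # ~0.0345 (86% variance reduction)
```

Because precisions add under a fixed-effect model, the same indirect evidence shrinks a weak direct estimate's variance far more, in relative terms, than a strong one's, matching the paper's conclusion.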
Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.
BACKGROUND: This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and the prioritisation of future research, may be sensitive to the choice of state structure. For example, it may be uncertain whether to treat similar disease severities, or similar clinical events, as a single state or as separate states. Standard statistical methods for comparing models require a common reference dataset, but merging states in a model aggregates the data, rendering these methods invalid. METHODS: We propose a method that re-expresses a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. State structures can then be compared by comparing maximised likelihoods or information criteria between the constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence whether the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. APPLICATION: We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing antidepressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. CONCLUSIONS: State structures in cost-effectiveness models can be compared using standard methods for comparing constrained and unconstrained models.
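A small Python sketch of the core comparison, using hypothetical transition counts: the merged-state model is re-expressed as the full model with the two states' transition probabilities constrained to be equal, and the constrained and unconstrained fits are compared with a standard likelihood ratio test.

```python
import numpy as np
from scipy.stats import chi2

def loglik(counts, probs):
    # Multinomial log-likelihood, up to an additive constant.
    return np.sum(counts * np.log(probs))

# Hypothetical transition counts out of two candidate-mergeable states.
counts_a = np.array([50, 30, 20])   # state A -> (stay, progress, die)
counts_b = np.array([45, 35, 20])   # state B -> (stay, progress, die)

# Unconstrained: each state keeps its own transition probabilities.
ll_unconstrained = (loglik(counts_a, counts_a / counts_a.sum())
                    + loglik(counts_b, counts_b / counts_b.sum()))

# Constrained: probabilities forced equal (equivalent to merging the states),
# so the pooled proportions are the maximum likelihood estimates.
pooled = (counts_a + counts_b) / (counts_a + counts_b).sum()
ll_constrained = loglik(counts_a, pooled) + loglik(counts_b, pooled)

# Likelihood ratio test; the constraint removes 2 free parameters here.
lrt = 2 * (ll_unconstrained - ll_constrained)
print(f"LRT = {lrt:.2f}, p = {chi2.sf(lrt, df=2):.3f}")
```

A non-significant test suggests the data cannot distinguish the two states' transition behaviour, supporting the simpler merged structure; the same comparison could instead use AIC or BIC, as the abstract notes.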
Value of Information for Clinical Trial Design: The Importance of Considering All Relevant Comparators
Value of Information (VOI) analyses calculate the economic value that could be generated by obtaining further information to reduce uncertainty in a health economic decision model. VOI has been suggested as a tool for research prioritisation and trial design because it can highlight economically valuable avenues for future research. Recent methodological advances have made it increasingly feasible to use VOI in practice; however, there are critical differences between the VOI approach and the standard methods used to design research studies such as clinical trials. We aimed to highlight the key differences between research design based on VOI and standard clinical trial design methods, in particular the importance of considering the full decision context. We present two hypothetical examples to demonstrate that VOI methods are only accurate when (1) all feasible comparators are included in the decision model when designing research, and (2) all comparators are retained in the decision model once the data have been collected and a final treatment recommendation is made. Omitting comparators from either the design or the analysis phase of research when using VOI methods can lead to incorrect trial designs and/or treatment recommendations. Overall, we conclude that incorrectly specifying the health economic model by ignoring potential comparators can lead to misleading VOI results and potentially waste scarce research resources.
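To make the comparator point concrete, here is a minimal Monte Carlo sketch of the expected value of perfect information (EVPI) in Python. The three net-benefit distributions are hypothetical, chosen only to show that dropping a comparator changes the computed value of information:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical net-benefit draws reflecting decision-model uncertainty.
nb = np.column_stack([
    rng.normal(0.00, 1.0, n),   # comparator A (e.g. standard care)
    rng.normal(0.10, 1.0, n),   # comparator B
    rng.normal(0.05, 1.5, n),   # comparator C (the one at risk of omission)
])

def evpi(net_benefit):
    # Value of resolving all uncertainty: E[max over options] - max of E.
    return net_benefit.max(axis=1).mean() - net_benefit.mean(axis=0).max()

print(f"EVPI, all comparators:      {evpi(nb):.4f}")
print(f"EVPI, comparator C omitted: {evpi(nb[:, :2]):.4f}")
```

Because EVPI depends on the maximum across all options in each simulation, removing a comparator from the model changes both terms, which is the mechanism behind the misleading VOI results the abstract describes.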
Equity in access to total joint replacement of the hip and knee in England: cross sectional study
Objective: To explore geographical and sociodemographic factors associated with variation in equity in access to total hip and knee replacement surgery.
- …