Calculating the Expected Value of Sample Information using Efficient Nested Monte Carlo: A Tutorial
Objective: The Expected Value of Sample Information (EVSI) quantifies the
economic benefit of reducing uncertainty in a health economic model by
collecting additional information. This has the potential to improve the
allocation of research budgets. Despite this, practical EVSI evaluations are
limited, partly due to the computational cost of estimating this value using
the "gold-standard" nested simulation methods. Recently, however, Heath et al
developed an estimation procedure that reduces the number of simulations
required for this "gold-standard" calculation. Up to this point, this new
method has been presented in purely technical terms. Study Design: This study
presents the practical application of this new method to aid its
implementation. We use a worked example to illustrate the key steps of the EVSI
estimation procedure before discussing its optimal implementation using a
practical health economic model. Methods: The worked example is based on a
three-parameter linear health economic model. The more realistic model
evaluates the cost-effectiveness of a new chemotherapy treatment which aims to
reduce the number of side effects experienced by patients. We use a Markov
model structure to evaluate the health economic profile of experiencing side
effects. Results: This EVSI estimation method offers accurate estimation within
a feasible computation time (seconds rather than days), even for more complex
model structures. The EVSI estimation is more accurate if a greater number of
nested samples are used, even for a fixed computational cost. Conclusions: This
new method reduces the computational cost of estimating the EVSI by nested
simulation.
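To make the nested structure concrete, the sketch below implements plain nested Monte Carlo EVSI estimation for a simplified one-parameter conjugate normal model. It is not the three-parameter model or the Heath et al. moment-matching procedure described in the tutorial; the prior values, study size, and simulation sizes are all hypothetical.

```r
## Minimal sketch of "gold-standard" nested Monte Carlo EVSI estimation.
## Assumes a simplified conjugate model: incremental net benefit
## INB ~ Normal(mu0, sd0^2) a priori, and a proposed study of n patients
## observing X_i ~ Normal(INB, sd_x^2). All values are hypothetical.
set.seed(1)

mu0  <- 500     # prior mean incremental net benefit
sd0  <- 1000    # prior standard deviation
sd_x <- 2000    # per-patient sampling standard deviation
n    <- 50      # proposed sample size of the new study

M_outer <- 1000   # outer simulations (datasets)
M_inner <- 1000   # inner simulations (posterior draws per dataset)

## Value of deciding now: adopt the new treatment only if E[INB] > 0.
value_current <- max(0, mu0)

## Outer loop: simulate a dataset summary, update the prior analytically
## (conjugate normal), then estimate the post-study value by inner simulation.
## The inner loop is redundant for this conjugate example (the posterior mean
## is known in closed form) but mirrors the general nested structure.
value_with_data <- replicate(M_outer, {
  inb_true <- rnorm(1, mu0, sd0)                      # draw a "true" INB
  xbar     <- rnorm(1, inb_true, sd_x / sqrt(n))      # summary of new data
  post_var <- 1 / (1 / sd0^2 + n / sd_x^2)            # conjugate update
  post_mu  <- post_var * (mu0 / sd0^2 + n * xbar / sd_x^2)
  inner    <- rnorm(M_inner, post_mu, sqrt(post_var)) # posterior draws
  max(0, mean(inner))                                 # best decision given X
})

evsi <- mean(value_with_data) - value_current
evsi
```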
Methods for Population Adjustment with Limited Access to Individual Patient Data: A Review and Simulation Study
Population-adjusted indirect comparisons estimate treatment effects when
access to individual patient data is limited and there are cross-trial
differences in effect modifiers. Popular methods include matching-adjusted
indirect comparison (MAIC) and simulated treatment comparison (STC). There is
limited formal evaluation of these methods and whether they can be used to
accurately compare treatments. Thus, we undertake a comprehensive simulation
study to compare standard unadjusted indirect comparisons, MAIC and STC across
162 scenarios. This simulation study assumes that the trials investigate
survival outcomes and measure continuous covariates, with the log hazard ratio
as the measure of effect. MAIC yields unbiased treatment effect estimates under
no failures of assumptions. The typical usage of STC produces bias because it
targets a conditional treatment effect where the target estimand should be a
marginal treatment effect. The incompatibility of estimates in the indirect
comparison leads to bias as the measure of effect is non-collapsible. Standard
indirect comparisons are systematically biased, particularly under stronger
covariate imbalance and interaction effects. Standard errors and coverage rates
are often valid in MAIC but the robust sandwich variance estimator
underestimates variability where effective sample sizes are small. Interval
estimates for the standard indirect comparison are too narrow and STC suffers
from bias-induced undercoverage. MAIC provides the most accurate estimates and,
with lower degrees of covariate overlap, its bias reduction outweighs the loss
in effective sample size and precision under no failures of assumptions. An
important future objective is the development of an alternative formulation to
STC that targets a marginal treatment effect. Comment: 73 pages (34 are supplementary appendices and references), 8 figures,
2 tables. Full article (following Round 4 of minor revisions). arXiv admin
note: text overlap with arXiv:2008.0595
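As a rough illustration of the weighting step underlying MAIC, the sketch below estimates method-of-moments weights that balance the individual patient data covariate means against published aggregate means. The covariates, aggregate values, and sample sizes are hypothetical, and the code is a minimal sketch rather than the simulation study's implementation.

```r
## Minimal sketch of MAIC weight estimation by the method of moments
## (Signorovitch-style). Covariate values and aggregate means are hypothetical;
## in practice the aggregate means come from the comparator trial's publication.
set.seed(2)

## Individual patient data (IPD) from the index trial: two covariates.
ipd <- data.frame(age = rnorm(300, 60, 8), male = rbinom(300, 1, 0.6))

## Published means of the same covariates in the comparator trial.
agg_means <- c(age = 64, male = 0.5)

## Centre the IPD covariates at the aggregate means, then find weights
## w_i = exp(x_i' beta) whose weighted covariate means match agg_means.
X <- sweep(as.matrix(ipd), 2, agg_means)               # centred covariates
objective <- function(beta) sum(exp(X %*% beta))        # convex MoM objective
gradient  <- function(beta) colSums(X * as.vector(exp(X %*% beta)))
fit <- optim(par = rep(0, ncol(X)), fn = objective, gr = gradient,
             method = "BFGS")
w <- as.vector(exp(X %*% fit$par))

## Check balance: weighted means should now equal the aggregate means.
colSums(w * as.matrix(ipd)) / sum(w)

## Approximate effective sample size after weighting.
ess <- sum(w)^2 / sum(w^2)
ess
```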
Arab Spring Book Exhibit Bibliography and Call Numbers
An exhibit of books about the Arab Spring was held in the William T. Young Library from Oct. 2014 through Feb. 2015 in celebration of the Year of the Middle East at the University of Kentucky. An annotated bibliography for the exhibit is available for download.
BCEA: An R Package for Cost-Effectiveness Analysis
We describe in detail how to perform health economic cost-effectiveness
analyses (CEA) using the R package BCEA (Bayesian Cost-Effectiveness
Analysis). CEAs consist of analytic approaches for combining costs and health
consequences of intervention(s). These help to understand how much an
intervention may cost (per unit of health gained) compared to an alternative
intervention, such as a control or status quo. For resource allocation, a
decision maker may wish to know if an intervention is cost saving, and if not
then how much more would it cost to implement it compared to a less effective
intervention.
Current guidance for cost-effectiveness analyses advocates the quantification
of uncertainties which can be represented by random samples obtained from a
probabilistic sensitivity analysis or, more efficiently, a Bayesian model.
BCEA can be used to post-process the sampled costs and health
impacts to perform advanced analyses producing standardised and highly
customisable outputs. We present the features of the package, including its
many functions and their practical application. BCEA is valuable for
statisticians and practitioners working in the field of health economic
modelling wanting to simplify and standardise their workflow, for example in
the preparation of dossiers in support of marketing authorisation, or academic
and scientific publications.
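As a minimal illustration of the quantities such an analysis post-processes, the sketch below computes an incremental cost-effectiveness ratio (ICER) and a cost-effectiveness acceptability curve directly from hypothetical probabilistic sensitivity analysis samples in base R. In practice the cost and effectiveness matrices would be passed to the package's bcea() constructor, which standardises and extends this workflow.

```r
## Minimal sketch of cost-effectiveness post-processing from (hypothetical)
## probabilistic sensitivity analysis samples, in base R.
set.seed(3)
n_sim <- 5000

## Columns: intervention 1 (status quo) and intervention 2 (new treatment).
eff  <- cbind(rnorm(n_sim, 6.0, 0.5),   rnorm(n_sim, 6.3, 0.5))     # QALYs
cost <- cbind(rnorm(n_sim, 9000, 1500), rnorm(n_sim, 13000, 2000))  # costs

delta_e <- eff[, 2] - eff[, 1]     # incremental effectiveness
delta_c <- cost[, 2] - cost[, 1]   # incremental cost

## Incremental cost-effectiveness ratio (point estimate).
icer <- mean(delta_c) / mean(delta_e)

## Cost-effectiveness acceptability curve: probability the new treatment is
## cost-effective across willingness-to-pay thresholds k.
k <- seq(0, 50000, by = 1000)
ceac <- sapply(k, function(wtp) mean(wtp * delta_e - delta_c > 0))

icer
plot(k, ceac, type = "l",
     xlab = "Willingness to pay", ylab = "P(cost-effective)")
```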
Effect modification in anchored indirect treatment comparisons: Comments on "Matching-adjusted indirect comparisons: Application to time-to-event data"
This commentary regards a recent simulation study conducted by Aouni,
Gaudel-Dedieu and Sebastien, evaluating the performance of different versions
of matching-adjusted indirect comparison (MAIC) in an anchored scenario with a
common comparator. The simulation study uses survival outcomes and the Cox
proportional hazards regression as the outcome model. It concludes that using
the LASSO for variable selection is preferable to balancing a maximal set of
covariates. However, there are no treatment effect modifiers in imbalance in
the study. The LASSO is more efficient because it selects a subset of the
maximal set of covariates, but there are no cross-study imbalances in effect
modifiers inducing bias. We highlight the following points: (1) in the anchored
setting, MAIC is necessary where there are cross-trial imbalances in effect
modifiers; (2) the standard indirect comparison provides greater precision and
accuracy than MAIC if there are no effect modifiers in imbalance; (3) while the
target estimand of the simulation study is a conditional treatment effect, MAIC
targets a marginal or population-average treatment effect; (4) in MAIC,
variable selection is a problem of low dimensionality and sparsity-inducing
methods like the LASSO may be problematic. Finally, data-driven approaches do
not obviate the necessity for subject matter knowledge when selecting effect
modifiers. R code is provided in the Appendix to replicate the analyses and
illustrate our points. Comment: 14 pages, minor changes after conditional acceptance by Statistics in
Medicine. This is a response to `Matching-adjusted indirect comparisons:
Application to time-to-event data' by Aouni, Gaudel-Dedieu and Sebastien
(2020).
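For reference, the standard anchored indirect comparison mentioned in point (2) reduces to a simple (Bucher-style) contrast of the two published relative effects against the common comparator. The sketch below uses hypothetical log hazard ratio estimates and standard errors.

```r
## Minimal sketch of the standard anchored (Bucher) indirect comparison on the
## log hazard ratio scale. Inputs are hypothetical published estimates:
## trial 1 compares A vs C, trial 2 compares B vs C, and C is the anchor.
d_AC <- -0.40; se_AC <- 0.12   # log HR, A vs C (hypothetical)
d_BC <- -0.15; se_BC <- 0.10   # log HR, B vs C (hypothetical)

## Indirect estimate of A vs B and its standard error.
d_AB  <- d_AC - d_BC
se_AB <- sqrt(se_AC^2 + se_BC^2)

## 95% confidence interval on the log HR scale, then exponentiated.
ci <- d_AB + c(-1, 1) * qnorm(0.975) * se_AB
exp(c(estimate = d_AB, lower = ci[1], upper = ci[2]))
```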
Parametric G-computation for Compatible Indirect Treatment Comparisons with Limited Individual Patient Data
Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap and cannot extrapolate beyond the observed covariate space. Current outcome regression-based alternatives can extrapolate but target a conditional treatment effect that is incompatible in the indirect comparison. When adjusting for covariates, one must integrate or average the conditional estimate over the relevant population to recover a compatible marginal treatment effect. We propose a marginalization method based on parametric G-computation that can be easily applied where the outcome regression is a generalized linear model or a Cox model. The approach views the covariate adjustment regression as a nuisance model and separates its estimation from the evaluation of the marginal treatment effect of interest. The method can accommodate a Bayesian statistical framework, which naturally integrates the analysis into a probabilistic framework. A simulation study provides proof-of-principle and benchmarks the method's performance against MAIC and the conventional outcome regression. Parametric G-computation achieves more precise and more accurate estimates than MAIC, particularly when covariate overlap is poor, and yields unbiased marginal treatment effect estimates under no failures of assumptions. Furthermore, the marginalized regression-adjusted estimates provide greater precision and accuracy than the conditional estimates produced by the conventional outcome regression, which are systematically biased because the measure of effect is non-collapsible.
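A minimal sketch of the marginalisation step is given below, assuming a logistic outcome model and fully simulated data. It is illustrative only and does not reproduce the article's simulation study or its Bayesian implementation.

```r
## Minimal sketch of parametric G-computation for a marginal treatment effect,
## here a marginal log-odds ratio from a logistic outcome model. All data are
## simulated; in the limited-IPD setting the target-population covariates would
## instead be simulated from the comparator trial's published summaries.
set.seed(4)
n <- 2000

## IPD trial: covariate x, randomised treatment trt, binary outcome y.
x   <- rnorm(n)
trt <- rbinom(n, 1, 0.5)
y   <- rbinom(n, 1, plogis(-0.5 + 0.8 * x - 1.0 * trt))

fit <- glm(y ~ trt + x, family = binomial)   # covariate-adjusted outcome model

## Target-population covariates (a shifted distribution, standing in for the
## comparator trial's population).
x_target <- rnorm(5000, mean = 0.5)

## Predict outcome risks for the target population under each treatment,
## average them, and contrast the averages (marginalisation).
p1 <- mean(predict(fit, newdata = data.frame(trt = 1, x = x_target),
                   type = "response"))
p0 <- mean(predict(fit, newdata = data.frame(trt = 0, x = x_target),
                   type = "response"))

marginal_log_or <- qlogis(p1) - qlogis(p0)
marginal_log_or
```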
Conflating marginal and conditional treatment effects: Comments on 'Assessing the performance of population adjustment methods for anchored indirect comparisons: A simulation study'
In this commentary, we highlight the importance of: (1) carefully considering
and clarifying whether a marginal or conditional treatment effect is of
interest in a population-adjusted indirect treatment comparison; and (2)
developing distinct methodologies for estimating the different measures of
effect. The appropriateness of each methodology depends on the preferred target
of inference. Comment: 6 pages, submitted to Statistics in Medicine. Response to `Assessing
the performance of population adjustment methods for anchored indirect
comparisons: A simulation study' by Phillippo, Dias, Ades and Welton,
published in Statistics in Medicine (2020). Updated after Ph.D. proposal
defense/transfer viva comment
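The practical consequence of point (1) can be seen in a small simulation: because the odds ratio is non-collapsible, the conditional and marginal log-odds ratios differ even in a perfectly randomised trial with no confounding. The sketch below uses hypothetical parameter values.

```r
## Minimal sketch illustrating non-collapsibility of the odds ratio: the
## conditional (covariate-adjusted) and marginal (population-average) log-odds
## ratios differ even under randomisation. All values are hypothetical.
set.seed(5)
n   <- 200000
x   <- rnorm(n)                                   # prognostic covariate
trt <- rbinom(n, 1, 0.5)                          # randomised treatment
y   <- rbinom(n, 1, plogis(-1 + 2 * x - 1 * trt)) # true conditional log OR = -1

## Conditional estimate: covariate-adjusted logistic regression.
conditional <- coef(glm(y ~ trt + x, family = binomial))["trt"]

## Marginal estimate: unadjusted comparison of the randomised arms.
marginal <- coef(glm(y ~ trt, family = binomial))["trt"]

## The marginal log OR is attenuated towards zero relative to the conditional
## log OR, so methods targeting one estimand should not be benchmarked
## against the other.
c(conditional = unname(conditional), marginal = unname(marginal))
```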
Marginalization of Regression-Adjusted Treatment Effects in Indirect Comparisons with Limited Patient-Level Data
Population adjustment methods such as matching-adjusted indirect comparison
(MAIC) are increasingly used to compare marginal treatment effects when there
are cross-trial differences in effect modifiers and limited patient-level data.
MAIC is sensitive to poor covariate overlap and cannot extrapolate beyond the
observed covariate space. Current outcome regression-based alternatives can
extrapolate but target a conditional treatment effect that is incompatible in
the indirect comparison. When adjusting for covariates, one must integrate or
average the conditional estimate over the population of interest to recover a
compatible marginal treatment effect. We propose a marginalization method based
on parametric G-computation that can be easily applied where the outcome
regression is a generalized linear model or a Cox model. In addition, we
introduce a novel general-purpose method based on multiple imputation, which we
term multiple imputation marginalization (MIM) and is applicable to a wide
range of models. Both methods can accommodate a Bayesian statistical framework,
which naturally integrates the analysis into a probabilistic framework. A
simulation study provides proof-of-principle for the methods and benchmarks
their performance against MAIC and the conventional outcome regression. The
marginalized outcome regression approaches achieve more precise and more
accurate estimates than MAIC, particularly when covariate overlap is poor, and
yield unbiased marginal treatment effect estimates under no failures of
assumptions. Furthermore, the marginalized covariate-adjusted estimates provide
greater precision and accuracy than the conditional estimates produced by the
conventional outcome regression, which are systematically biased because the
measure of effect is non-collapsible. Comment: 86 pages (28 of supplementary appendices and references), 5 figures.
Updated after PhD viva comments. arXiv admin note: text overlap with
arXiv:2004.1480
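As a rough, non-Bayesian approximation of the multiple imputation marginalization (MIM) idea, the sketch below replaces the full posterior with a normal approximation to the outcome-model coefficients, imputes outcomes for a synthetic target population under each treatment, and pools the per-imputation marginal estimates with Rubin's rules. All data, settings, and names are illustrative only.

```r
## Rough sketch of the MIM idea using a normal approximation to the posterior
## of the outcome-model coefficients rather than the full Bayesian machinery
## described in the article. All data are simulated for illustration.
set.seed(6)
n   <- 2000
x   <- rnorm(n); trt <- rbinom(n, 1, 0.5)
y   <- rbinom(n, 1, plogis(-0.5 + 0.8 * x - 1.0 * trt))
fit <- glm(y ~ trt + x, family = binomial)       # IPD outcome model

x_target <- rnorm(1000, mean = 0.5)   # synthetic target-population covariates
M <- 50                               # number of imputations

ests <- vars <- numeric(M)
for (m in seq_len(M)) {
  ## 1. Draw coefficients (approximate posterior draw); MASS::mvrnorm() is
  ##    used for the multivariate normal draw.
  b <- MASS::mvrnorm(1, coef(fit), vcov(fit))
  ## 2. Impute binary outcomes in the target population under each arm.
  y1 <- rbinom(length(x_target), 1, plogis(b[1] + b[2] * 1 + b[3] * x_target))
  y0 <- rbinom(length(x_target), 1, plogis(b[1] + b[2] * 0 + b[3] * x_target))
  ## 3. Analyse the completed dataset with a simple marginal model.
  dat  <- data.frame(y = c(y1, y0), trt = rep(1:0, each = length(x_target)))
  marg <- glm(y ~ trt, family = binomial, data = dat)
  ests[m] <- coef(marg)["trt"]
  vars[m] <- vcov(marg)["trt", "trt"]
}

## 4. Pool across imputations with Rubin's rules.
qbar  <- mean(ests)                             # pooled marginal log-odds ratio
t_var <- mean(vars) + (1 + 1 / M) * var(ests)   # total variance
c(estimate = qbar, se = sqrt(t_var))
```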