
    Cost-effectiveness analysis in R using a multi-state modelling survival analysis framework: a tutorial

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modelling approach. Alongside the tutorial we provide easy-to-use functions in the statistics package R. We argue that this multi-state modelling approach, using a package such as R, has advantages over approaches where models are built in a spreadsheet package. In particular, a syntax-based approach leaves a written record of what was done and keeps the calculations transparent, and reproducing the analysis is straightforward: the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, with the added option of a state-arrival extended approach if the Markov property does not hold. In the state-arrival extended multi-state model, a covariate representing patients’ history is included, allowing the Markov property to be tested. We illustrate building multi-state survival models, making predictions from them and assessing their fit. We then perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create two common visualisations of the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R and is based on adaptations of functions in the existing R package mstate to accommodate parametric multi-state modelling, which facilitates extrapolation of survival curves.
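
    As a rough illustration of the kind of output the tutorial describes, the base-R sketch below draws a cost-effectiveness plane and a cost-effectiveness acceptability curve (CEAC) from probabilistic sensitivity analysis draws. The simulated incremental costs and QALYs, and all numbers in it, are placeholder assumptions rather than the tutorial's own functions or results.

```r
# Minimal base-R sketch: cost-effectiveness plane and CEAC from PSA output.
# The PSA draws below are simulated placeholders, not results from the tutorial.
set.seed(1)
n_psa  <- 1000
d_cost <- rnorm(n_psa, mean = 2000, sd = 800)   # incremental cost per patient
d_qaly <- rnorm(n_psa, mean = 0.15, sd = 0.10)  # incremental QALYs per patient

# Cost-effectiveness plane: each point is one PSA draw
plot(d_qaly, d_cost, xlab = "Incremental QALYs", ylab = "Incremental cost",
     main = "Cost-effectiveness plane")
abline(h = 0, v = 0, lty = 2)

# CEAC: probability of positive net monetary benefit across willingness-to-pay values
wtp  <- seq(0, 50000, by = 500)
ceac <- sapply(wtp, function(k) mean(k * d_qaly - d_cost > 0))
plot(wtp, ceac, type = "l", ylim = c(0, 1),
     xlab = "Willingness to pay per QALY", ylab = "P(cost-effective)",
     main = "Cost-effectiveness acceptability curve")
```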

    Methods for Population Adjustment with Limited Access to Individual Patient Data: A Review and Simulation Study

    Population-adjusted indirect comparisons estimate treatment effects when access to individual patient data is limited and there are cross-trial differences in effect modifiers. Popular methods include matching-adjusted indirect comparison (MAIC) and simulated treatment comparison (STC). There has been limited formal evaluation of these methods and of whether they can be used to compare treatments accurately. We therefore undertake a comprehensive simulation study comparing standard unadjusted indirect comparisons, MAIC and STC across 162 scenarios. The simulation study assumes that the trials investigate survival outcomes and measure continuous covariates, with the log hazard ratio as the measure of effect. MAIC yields unbiased treatment effect estimates when its assumptions hold. The typical usage of STC produces bias because it targets a conditional treatment effect where the target estimand should be a marginal treatment effect; the resulting incompatibility of estimates in the indirect comparison leads to bias because the measure of effect is non-collapsible. Standard indirect comparisons are systematically biased, particularly under stronger covariate imbalance and interaction effects. Standard errors and coverage rates are often valid for MAIC, but the robust sandwich variance estimator underestimates variability where effective sample sizes are small. Interval estimates for the standard indirect comparison are too narrow, and STC suffers from bias-induced undercoverage. MAIC provides the most accurate estimates and, with lower degrees of covariate overlap, its bias reduction outweighs the loss in effective sample size and precision, provided its assumptions hold. An important future objective is the development of an alternative formulation of STC that targets a marginal treatment effect.
    Comment: 73 pages (34 are supplementary appendices and references), 8 figures, 2 tables. Full article (following Round 4 of minor revisions). arXiv admin note: text overlap with arXiv:2008.0595
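
    To make the MAIC weighting step concrete, the base-R sketch below estimates Signorovitch-style method-of-moments weights so that the weighted individual-patient covariate means match the comparator trial's published means. The individual patient data, covariate names and aggregate means are simulated assumptions, not taken from the simulation study.

```r
# Minimal sketch of MAIC weight estimation via the method of moments, base R only.
# The IPD covariates and published aggregate means below are simulated assumptions.
set.seed(2)
ipd <- data.frame(age = rnorm(300, 60, 8), biomarker = rnorm(300, 1.2, 0.4))
agg_means <- c(age = 63, biomarker = 1.0)  # covariate means reported by the comparator trial

# Centre the IPD covariates at the aggregate means; minimising Q(a) = sum(exp(X a))
# yields weights whose weighted covariate means match the aggregate means.
X <- sweep(as.matrix(ipd), 2, agg_means)
Q <- function(a) sum(exp(X %*% a))
a_hat <- optim(rep(0, ncol(X)), Q, method = "BFGS")$par
w <- as.vector(exp(X %*% a_hat))

colSums(w * as.matrix(ipd)) / sum(w)  # weighted means: should be close to agg_means
sum(w)^2 / sum(w^2)                   # effective sample size after weighting
```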

    A General Framework for Updating Belief Distributions

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters that are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as the special case of using the self-information loss. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. Moreover, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions, and the Bayesian approach to learning about such parameters is therefore currently regarded as problematic. Our proposed framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision-theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known, yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
    Comment: This is the pre-peer-reviewed version of the article "A General Framework for Updating Belief Distributions", which has been accepted for publication in the Journal of the Royal Statistical Society, Series B. This article may be used for non-commercial purposes in accordance with the Wiley Terms and Conditions for Self-Archiving.
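
    The core idea is that the update replaces the likelihood with a loss: the posterior is proportional to exp(-loss) times the prior, reducing to standard Bayes when the loss is the negative log-likelihood. A minimal grid-based sketch for a population median is below; the data, prior, grid and learning rate are illustrative assumptions, not part of the paper.

```r
# Minimal sketch of a loss-based (general Bayesian / Gibbs-posterior) update for a
# population median on a grid; data, prior and learning rate are illustrative only.
set.seed(3)
x <- rexp(50, rate = 0.5)                   # observed data (placeholder)

theta <- seq(0, 10, length.out = 1000)      # grid of candidate medians
prior <- dnorm(theta, mean = 2, sd = 2)     # prior belief density on the grid
loss  <- sapply(theta, function(t) sum(abs(x - t)))  # absolute-error loss for a median
w     <- 1                                  # loss scale / learning rate (user choice)

# Posterior proportional to exp(-w * cumulative loss) times the prior; with the
# negative log-likelihood as the loss this reduces to standard Bayesian updating.
post <- exp(-w * (loss - min(loss))) * prior
post <- post / sum(post * diff(theta)[1])   # normalise on the grid
theta[which.max(post)]                      # posterior mode
```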

    Deductive semiparametric estimation in Double-Sampling Designs with application to PEPFAR

    Non-ignorable dropout is common in studies with long follow-up, and it can bias study results unless handled carefully. A double-sampling design allocates additional resources to pursue a subsample of the dropouts and ascertain their outcomes, which can address potential biases due to non-ignorable dropout. Semiparametric estimators are desirable for the double-sampling design because of their robustness properties. However, obtaining such estimators remains a challenge because it requires the analytic form of the efficient influence function (EIF), whose derivation can be ad hoc and difficult for the double-sampling design. Recent work has shown how the derivation of the EIF can be made deductive and computerizable using the functional derivative representation of the EIF in nonparametric models. This approach, however, requires deriving the mixture of a continuous distribution and a point mass, which can itself be challenging for complicated problems such as the double-sampling design. We propose semiparametric estimators for the survival probability in double-sampling designs by generalizing the deductive and computerizable estimation approach. In particular, we build the semiparametric estimators on a discretized support structure, which approximates the possibly continuous observed data distribution and circumvents the derivation of the mixture distribution. Our approach is deductive in the sense that it is expected to produce semiparametric locally efficient estimators within finitely many steps without knowledge of the EIF. We apply the proposed estimators to estimating the mortality rate in a double-sampling design component of the President's Emergency Plan for AIDS Relief (PEPFAR) program, and evaluate the impact of double-sampling selection criteria on the mortality rate estimates.
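
    For orientation only, the sketch below computes a naive plug-in estimate of mortality by a fixed time under a double-sampling design, where dropouts' outcomes are observed only for a randomly double-sampled subsample. It is a simplified illustration of the design, not the deductive semiparametric estimator proposed in the paper, and all data in it are simulated assumptions.

```r
# Simple plug-in estimate of mortality by a fixed time t under a double-sampling
# design. Illustration of the design only, NOT the paper's deductive semiparametric
# estimator; all data below are simulated assumptions.
set.seed(4)
n <- 2000
dropout <- rbinom(n, 1, 0.3)                               # 1 = dropped out of follow-up
status  <- rbinom(n, 1, ifelse(dropout == 1, 0.35, 0.20))  # death by time t (latent)
# A random subsample of dropouts is pursued and their status ascertained
double_sampled <- ifelse(dropout == 1, rbinom(n, 1, 0.4), 0)

p_drop   <- mean(dropout)
p_dead_0 <- mean(status[dropout == 0])                        # observed for completers
p_dead_1 <- mean(status[dropout == 1 & double_sampled == 1])  # observed for double-sampled dropouts

# Mortality combines completers and dropouts, with the dropout stratum estimated
# from the double-sampled subsample only
(1 - p_drop) * p_dead_0 + p_drop * p_dead_1
```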