39 research outputs found

    Long-term Comparative Effectiveness of Rheumatoid Arthritis Treatment Strategies

    University of Minnesota Ph.D. dissertation. August 2013. Major: Health Services Research, Policy and Administration. Advisor: Karen Kuntz. 1 computer file (PDF); xx, 110 pages.

    Rheumatoid arthritis (RA) is a chronic, debilitating disease characterized by progressive joint damage, reduced quality of life, loss of productivity, and premature death. It affects 1% of the adult US population and is among the diseases that place the greatest demands on healthcare resources. Biologic disease modifiers are new drugs that offer hope of improving the course of RA; however, biologics are among the most expensive specialty drugs. Although the treatment costs of RA have recently increased with the introduction of biologics, most of the economic and societal impacts are due to the consequences of RA rather than direct treatment costs. Thus, the cost-effectiveness of biologics in RA is a high priority, as recognized by many agencies, including the National Institutes of Health. This thesis addresses three limitations of current cost-effectiveness analyses (CEAs) of biologics in RA. First, most CEAs are based on randomized clinical trials (RCTs), whose results are rarely applicable to real-life clinical practice. This thesis examines the long-term comparative clinical effectiveness and cost-effectiveness of biologics using clinical practice data from a large registry of RA patients (the National Data Bank for Rheumatic Diseases). Second, we lack a meta-analytical approach specific to CEAs, and previous tools are deemed inappropriate. This thesis presents a novel meta-analytical approach specific to CEAs and uses it to examine whether prior CEAs of biologics in RA are consistent. Third, because of the high cost of biologics, RA treatment guidelines often recommend them as second-line agents after nonbiologics. However, early aggressive treatment is crucial to avoid permanent joint damage. This thesis uses Markov decision processes (MDPs) as an innovative approach to identify the optimal timing of biologics in RA. The results of this analysis have significant policy, clinical, and methodological implications. This work provides important insights into the comparative effectiveness of biologics in RA from a US societal perspective, which can inform health policy and medical insurance coverage decisions. Methodologically, the proposed meta-analytical approach can be applied to other conditions and has the potential to reconcile inconsistencies in published CEAs and improve the quality of future studies.
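
    The third aim treats the timing of biologics as a sequential decision problem. As a rough illustration of the MDP machinery involved (not the dissertation's actual model), the R sketch below runs value iteration over hypothetical disease-severity states and two treatment actions; every state, probability, and reward is invented for illustration.

        # Value iteration for a toy treatment-timing MDP (all numbers hypothetical)
        states  <- c("mild", "moderate", "severe")
        actions <- c("nonbiologic", "biologic")

        # Hypothetical transition matrices P[[action]][from, to]
        P <- list(
          nonbiologic = matrix(c(0.80, 0.15, 0.05,
                                 0.05, 0.75, 0.20,
                                 0.00, 0.05, 0.95), 3, 3, byrow = TRUE),
          biologic    = matrix(c(0.90, 0.08, 0.02,
                                 0.15, 0.75, 0.10,
                                 0.02, 0.08, 0.90), 3, 3, byrow = TRUE)
        )
        # Hypothetical per-cycle net rewards (health benefit minus treatment cost)
        Rew <- list(nonbiologic = c(50, 30, 5), biologic = c(45, 35, 15))

        gamma <- 0.97           # per-cycle discount factor
        V <- rep(0, 3)          # value function, initialized at zero
        for (iter in 1:500) {   # iterate until (approximate) convergence
          Q <- sapply(actions, function(a) Rew[[a]] + gamma * P[[a]] %*% V)
          V_new <- apply(Q, 1, max)
          if (max(abs(V_new - V)) < 1e-8) break
          V <- V_new
        }
        policy <- setNames(actions[apply(Q, 1, which.max)], states)
        policy  # optimal action in each disease-severity state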

    Estimating the EVSI with Gaussian Approximations and Spline-Based Series Methods

    Background. The Expected Value of Sample Information (EVSI) measures the expected benefit of collecting additional data. Estimating EVSI with the traditional nested Monte Carlo method is computationally expensive, but the recently developed Gaussian approximation (GA) approach can efficiently estimate EVSI across different sample sizes. However, the conventional GA may produce biased EVSI estimates when the decision model is highly nonlinear, and this bias may lead to suboptimal study designs when GA is used to optimize the value of different studies. We therefore extend the conventional GA approach to improve its performance for nonlinear decision models.

    Methods. Our method provides accurate EVSI estimates by approximating the conditional benefit in two steps. First, a Taylor series approximation expresses the conditional benefit as a function of the conditional moments of the parameters of interest, using a spline fitted to the parameter samples and their corresponding benefits. Next, the conditional moments of the parameters are approximated by the conventional GA and the Fisher information. The proposed approach is applied to several data collection exercises involving non-Gaussian parameters and nonlinear decision models, and its performance is compared with the nested Monte Carlo method, the conventional GA approach, and the nonparametric regression-based method for EVSI calculation.

    Results. The proposed approach provides accurate EVSI estimates across different sample sizes when the parameters of interest are non-Gaussian and the decision models are nonlinear, at a computational cost similar to that of other novel methods.

    Conclusions. The proposed approach estimates EVSI across sample sizes accurately and efficiently, which may help researchers determine an economically optimal study design using EVSI.

    Comment: 11 pages, 2 figures; presented at the 44th Medical Decision Making Annual North American Meeting.
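
    To convey the two-step logic, here is a deliberately simplified R sketch, not the authors' implementation: it fits a spline meta-model of each strategy's net benefit on the parameter of interest, then shrinks the probabilistic-analysis samples toward their mean with the Normal-Normal factor sqrt(n / (n + n0)) to mimic preposterior means. The toy model, the prior effective sample size n0, and all values are assumptions.

        # Toy two-strategy model with a nonlinear benefit in theta (all values invented)
        set.seed(1)
        n_psa <- 10000
        theta <- rnorm(n_psa, mean = 0.2, sd = 0.05)                # hypothetical parameter
        nb <- cbind(20000 * theta + rnorm(n_psa, 0, 500),           # strategy 1
                    3000 + 10000 * theta^2 + rnorm(n_psa, 0, 500))  # strategy 2 (nonlinear)

        # Step 1: spline meta-model of each strategy's benefit as a function of theta
        fits <- lapply(1:2, function(j) smooth.spline(theta, nb[, j]))

        # Step 2: Gaussian-approximation preposterior means for a study of size n
        evsi_ga <- function(n, n0 = 25) {   # n0: assumed prior effective sample size
          theta_pre <- mean(theta) + (theta - mean(theta)) * sqrt(n / (n + n0))
          cond_nb <- sapply(fits, function(f) predict(f, theta_pre)$y)
          mean(apply(cond_nb, 1, max)) - max(colMeans(nb))
        }
        sapply(c(10, 50, 200), evsi_ga)     # EVSI across candidate sample sizes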

    Calculating the Expected Value of Sample Information in Practice: Considerations from Three Case Studies

    Investing efficiently in future research to improve policy decisions is an important goal. The Expected Value of Sample Information (EVSI) can be used to select the specific design and sample size of a proposed study by assessing the benefit of a range of different studies. Estimating EVSI with the standard nested Monte Carlo algorithm has a notoriously high computational burden, especially when a complex decision model is used or when the study sample size and design are being optimized. A number of more efficient EVSI approximation methods have therefore been developed, but these methods have not been compared, so their relative advantages and disadvantages remain unclear. A consortium of EVSI researchers, including the developers of several approximation methods, compared four EVSI methods using three previously published health economic models. The examples were chosen to represent a range of real-world contexts, including situations with multiple study outcomes, missing data, and data from an observational rather than a randomized study. The computational speed and accuracy of each method were compared, and the relative advantages and implementation challenges of the methods were highlighted. In each example, the approximation methods took minutes or hours to achieve reasonably accurate EVSI estimates, whereas the traditional Monte Carlo method took weeks. Specific methods are particularly suited to problems in which multiple proposed sample sizes are compared, the proposed sample size is large, or the health economic model is computationally expensive. All the evaluated methods gave estimates similar to those of traditional Monte Carlo, suggesting that EVSI can now be computed efficiently and with confidence in realistic examples.

    Comment: 11 pages, 3 figures.
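
    For context, the nested Monte Carlo benchmark against which these methods were compared can be written compactly when the posterior has a closed form. The sketch below uses a hypothetical two-strategy model with a Beta-Binomial update; note that the inner-times-outer loop already implies a million model evaluations even for this toy example.

        # Standard nested Monte Carlo EVSI for a toy model (everything hypothetical)
        set.seed(2)
        nb_model <- function(p) cbind(0, 25000 * p - 10000)  # 2 strategies, parameter p
        n_outer <- 1000    # outer loop: simulated future datasets
        n_inner <- 1000    # inner loop: posterior samples per dataset
        n_study <- 50      # proposed study size
        a0 <- 2; b0 <- 2   # Beta prior on the response probability p

        max_post_nb <- numeric(n_outer)
        for (i in 1:n_outer) {
          p_true <- rbeta(1, a0, b0)                          # draw a "true" parameter
          x <- rbinom(1, n_study, p_true)                     # simulate the future study
          p_post <- rbeta(n_inner, a0 + x, b0 + n_study - x)  # conjugate posterior draws
          max_post_nb[i] <- max(colMeans(nb_model(p_post)))   # best strategy, post-study
        }
        p_prior <- rbeta(10000, a0, b0)
        evsi <- mean(max_post_nb) - max(colMeans(nb_model(p_prior)))
        evsi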

    Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial

    Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It can run microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance exists for implementing microsimulation models in R. In this tutorial, we provide a step-by-step guide to building microsimulation models in R and illustrate its use on a simple but transferable hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures, and we provide an efficient microsimulation approach that relies on vectorization.
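
    In the spirit of the tutorial (though with an invented three-state example rather than its published case study), the sketch below shows the vectorized style the guide advocates: one random draw per individual per cycle instead of an inner loop over individuals.

        # Vectorized microsimulation with hypothetical transition probabilities
        set.seed(3)
        n_i <- 10000                    # number of simulated individuals
        n_t <- 30                       # number of cycles
        p_HS <- 0.05; p_SD <- 0.10; p_HD <- 0.01   # illustrative probabilities

        m_state <- matrix(NA_integer_, n_i, n_t + 1)  # 1=Healthy, 2=Sick, 3=Dead
        m_state[, 1] <- 1L                            # everyone starts Healthy
        for (t in 1:n_t) {
          s <- m_state[, t]
          u <- runif(n_i)              # one draw per person (no inner loop)
          nxt <- s                     # default: stay in current state
          nxt[s == 1L & u < p_HD]                      <- 3L  # Healthy -> Dead
          nxt[s == 1L & u >= p_HD & u < p_HD + p_HS]   <- 2L  # Healthy -> Sick
          nxt[s == 2L & u < p_SD]                      <- 3L  # Sick -> Dead
          m_state[, t + 1] <- nxt
        }
        mean(rowSums(m_state != 3L))   # average number of cycles spent alive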

    A Need for Change! A Coding Framework for Improving Transparency in Decision Modeling

    The use of open-source programming languages, such as R, in health decision sciences is growing and has the potential to facilitate model transparency, reproducibility, and shareability. However, realizing this potential can be challenging. Models are complex and primarily built to answer a research question, with model sharing and transparency relegated to secondary goals. Consequently, code is often neither well documented nor systematically organized in a comprehensible and shareable way. Moreover, many decision modelers are not formally trained in computer programming and may lack good coding practices, further compounding the problem of model transparency. To address these challenges, we propose a high-level framework for model-based decision and cost-effectiveness analyses (CEAs) in R. The proposed framework consists of a conceptual, modular structure and coding recommendations for implementing model-based decision analyses in R. It defines a set of common decision model elements divided into five components: (1) model inputs, (2) decision model implementation, (3) model calibration, (4) model validation, and (5) analysis. The first four components form the model development phase. The analysis component is the application of the fully developed decision model to answer the policy or research question of interest, assess decision uncertainty, and/or determine the value of future research through value of information (VOI) analysis. We also make recommendations for good coding practices specific to decision modeling, such as file organization and variable naming conventions. We showcase the framework with a fully functional, testbed decision model hosted on GitHub for free download and easy adaptation to other applications. The use of this framework in decision modeling will improve code readability and model sharing, paving the way to an ideal, open-source world.
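
    A minimal sketch of the kind of modular layout the framework describes is shown below; the file and function names are illustrative placeholders, not the framework's prescribed identifiers (the GitHub testbed model contains the actual layout).

        ## 01_model_inputs.R     -- component (1): model inputs
        load_model_inputs <- function() {
          list(p_HS = 0.05, c_H = 400, u_H = 0.90)   # illustrative parameter values
        }

        ## 02_decision_model.R   -- component (2): decision model implementation
        run_decision_model <- function(l_inputs) {
          # ... build the cohort model from l_inputs and return outcomes ...
        }

        ## 03_calibration.R      -- component (3): model calibration
        ## 04_validation.R       -- component (4): model validation

        ## 05_analysis.R         -- component (5): CEA, PSA, and VOI on the final model
        l_inputs  <- load_model_inputs()
        l_results <- run_decision_model(l_inputs)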

    A Multidimensional Array Representation of State-Transition Model Dynamics

    Cost-effectiveness analyses often rely on cohort state-transition models (cSTMs). The primary outcome of a cSTM is the cohort trace, which captures the proportion of the cohort in each health state over time (state occupancy). However, the cohort trace is an aggregated measure that does not capture information about the specific transitions among health states (transition dynamics). In practice, these transition dynamics are crucial in many applications, such as incorporating transition rewards or computing various epidemiological outcomes that could be used for model calibration and validation (e.g., disease incidence and lifetime risk). In this article, we propose an alternative approach to computing and storing cSTM outcomes that captures both state occupancy and transition dynamics. This approach produces a multidimensional array from which both the state occupancy and the transition dynamics can be recovered. We highlight the advantages of the multidimensional array over the traditional cohort trace and describe potential applications of the proposed approach with an example coded in R to facilitate the implementation of our method.
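
    The following R sketch illustrates the core idea with an invented three-state example (the article's own R example should be consulted for the actual implementation): transitions are stored in a three-dimensional array indexed by from-state, to-state, and cycle, and the ordinary cohort trace falls out by summation.

        # Transition-dynamics array A[from, to, cycle] for a toy three-state cSTM
        v_names <- c("H", "S", "D")
        n_s <- length(v_names); n_t <- 10
        m_P <- matrix(c(0.85, 0.10, 0.05,      # hypothetical transition matrix
                        0.00, 0.80, 0.20,
                        0.00, 0.00, 1.00), n_s, n_s, byrow = TRUE,
                      dimnames = list(v_names, v_names))
        a_A <- array(0, dim = c(n_s, n_s, n_t + 1),
                     dimnames = list(v_names, v_names, 0:n_t))
        a_A["H", "H", 1] <- 1                  # whole cohort starts in H
        for (t in 1:n_t) {
          v_occ <- colSums(a_A[, , t])         # state occupancy at the current cycle
          a_A[, , t + 1] <- v_occ * m_P        # cohort leaving each state, by pathway
        }
        m_trace <- t(apply(a_A, 3, colSums))   # recover the standard cohort trace
        a_A["H", "S", ]                        # e.g., new H -> S transitions (incidence)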

    An Introductory Tutorial on Cohort State-Transition Models in R Using a Cost-Effectiveness Analysis Example

    Decision models can combine information from different sources to simulate the long-term consequences of alternative strategies in the presence of uncertainty. A cohort state-transition model (cSTM) is a decision model commonly used in medical decision making to simulate the transitions of a hypothetical cohort among various health states over time. This tutorial focuses on time-independent cSTMs, in which transition probabilities among health states remain constant over time. We implement a time-independent cSTM in R, an open-source mathematical and statistical programming language. We illustrate time-independent cSTMs using a previously published decision model, calculate costs and effectiveness outcomes, and conduct a cost-effectiveness analysis of multiple strategies, including a probabilistic sensitivity analysis. We provide open-source R code to facilitate wider adoption. In a second, more advanced tutorial, we illustrate time-dependent cSTMs.
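
    The flavor of such an implementation, shown here with an invented three-state example rather than the tutorial's published model, is a handful of lines: a transition matrix, a matrix-multiplication loop for the trace, and discounted rewards.

        # Time-independent cSTM with hypothetical probabilities, costs, and utilities
        v_names <- c("Healthy", "Sick", "Dead")
        n_t <- 40
        m_P <- matrix(c(0.845, 0.150, 0.005,
                        0.000, 0.900, 0.100,
                        0.000, 0.000, 1.000),
                      nrow = 3, byrow = TRUE, dimnames = list(v_names, v_names))
        m_trace <- matrix(NA, n_t + 1, 3, dimnames = list(0:n_t, v_names))
        m_trace[1, ] <- c(1, 0, 0)                 # cohort starts Healthy
        for (t in 1:n_t) m_trace[t + 1, ] <- m_trace[t, ] %*% m_P

        v_cost <- c(2000, 15000, 0)                # cost per cycle in each state
        v_util <- c(0.90, 0.60, 0)                 # utility per cycle in each state
        v_disc <- 1 / (1 + 0.03)^(0:n_t)           # 3% discounting per cycle
        tot_cost <- sum((m_trace %*% v_cost) * v_disc)
        tot_qaly <- sum((m_trace %*% v_util) * v_disc)
        c(cost = tot_cost, QALYs = tot_qaly)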

    A Tutorial on Time-Dependent Cohort State-Transition Models in R using a Cost-Effectiveness Analysis Example

    In an introductory tutorial, we illustrated building cohort state-transition models (cSTMs) in R, where the state transition probabilities were constant over time. In practice, however, many cSTMs require transitions, rewards, or both to vary over time (time dependency). This tutorial illustrates adding two types of time dependency, using a previously published cost-effectiveness analysis of multiple strategies as an example. The first is simulation-time dependency, which allows transition probabilities to vary as a function of time measured since the start of the simulation (e.g., an increasing probability of death as the cohort ages). The second is state-residence time dependency, which captures history by tracking the time spent in a particular health state using tunnel states. We use these time-dependent cSTMs to conduct cost-effectiveness and probabilistic sensitivity analyses. We also obtain various epidemiological outcomes of interest from the cSTM outputs, such as survival probability and disease prevalence, which are often used for model calibration and validation. We present the mathematical notation first, followed by the R code to execute the calculations. The full R code is provided in a public code repository for broader implementation.

    Comment: 34 pages, 7 figures. arXiv admin note: text overlap with arXiv:2001.0782
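
    A minimal sketch of the first type of time dependency (simulation-time dependence only; tunnel states are omitted for brevity) stores one transition matrix per cycle in a three-dimensional array. The mortality hazard and all other numbers below are invented for illustration.

        # Simulation-time-dependent cSTM: Healthy -> Dead risk grows with cohort age
        v_names <- c("Healthy", "Sick", "Dead")
        n_t <- 40
        p_HD <- 1 - exp(-0.002 * exp(0.08 * (0:(n_t - 1))))   # toy Gompertz-like risk
        a_P <- array(0, dim = c(3, 3, n_t), dimnames = list(v_names, v_names, NULL))
        for (t in 1:n_t) {
          a_P[, , t] <- matrix(c(1 - 0.15 - p_HD[t], 0.15, p_HD[t],
                                 0.00, 1 - 0.10 - p_HD[t], 0.10 + p_HD[t],
                                 0.00, 0.00, 1.00), 3, 3, byrow = TRUE)
        }
        m_trace <- matrix(NA, n_t + 1, 3, dimnames = list(0:n_t, v_names))
        m_trace[1, ] <- c(1, 0, 0)
        for (t in 1:n_t) m_trace[t + 1, ] <- m_trace[t, ] %*% a_P[, , t]
        head(m_trace)   # cohort trace under age-dependent mortality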