64 research outputs found

    Population-level antibiotic treatment policies in the setting of antibiotic resistance: A mathematical model of mass treatment of Helicobacter pylori in Mexico

    Conference poster presented at the 39th Annual Meeting of the Society for Medical Decision Making, Pittsburgh, Pennsylvania, October 22-25, 2017.
    Purpose: Helicobacter pylori (H. pylori) is the strongest known risk factor for gastric cancer and peptic ulcer disease. Programs under consideration in high-risk countries to prevent H. pylori-related diseases via broad population treatment could be complicated by increasing levels of antibiotic resistance (ABR). We evaluated the impact of different mass-treatment policies on H. pylori infection and ABR in Mexico using a mathematical model. Methods: We developed an age-structured, susceptible-infected-susceptible (SIS) transmission model of H. pylori infection in Mexico that included both treatment-sensitive and treatment-resistant strains. Antibiotic treatment was assumed to either clear sensitive strains or induce acquired resistance. In addition, the model included the effects of both background antibiotic use and antibiotic treatment specifically intended to treat H. pylori infection. Model parameters were derived from the published literature and estimated from primary data. Using the model, we projected H. pylori infection and resistance levels over 20 years without treatment and under three hypothetical population-wide treatment policies assumed to be implemented in 2018: (1) treat children only (2-6 year-olds); (2) treat older adults only (>40 years old); (3) treat everyone regardless of age. Clarithromycin, introduced in Mexico in 1991, was the antibiotic considered for the treatment policies. In sensitivity analyses, we considered different mixing patterns and trends in background antibiotic use. We validated the model against historical values of the prevalence of H. pylori infection and ABR. Results: In the absence of a mass-treatment policy, our model predicts that infection will begin to rise in 2021, driven mostly by resistant strains induced by background antibiotic use. The policies have an immediate impact, decreasing infection but also increasing ABR (see Figure). For example, policy 3 decreases infection by 11% but increases ABR by 23% after the first year of implementation. The decrease in infection is 50% of the size of the increase in ABR for policies 2 and 3, and 20% for policy 1. These results were consistent across all scenarios considered in the sensitivity analyses. Conclusions: Mass-treatment policies have a greater effect on increasing ABR, allowing resistant strains to dominate infection. Given the high proportion of ABR at the time of policy implementation, mass-treatment strategies are not recommended for Mexico.
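
    As a minimal sketch of the core dynamics described above, the following R code implements a two-strain SIS model without the age structure or background antibiotic use of the full model; all parameter values are illustrative placeholders rather than the calibrated values for Mexico. Treatment at rate tau is assumed to clear sensitive infections with probability 1 - p and to induce acquired resistance with probability p.

        # Two-strain SIS model: susceptible (S), infected with a
        # treatment-sensitive strain (Is), infected with a resistant strain (Ir)
        library(deSolve)

        two_strain_sis <- function(t, y, parms) {
          with(as.list(c(y, parms)), {
            dS  <- -beta * S * (Is + Ir) + tau * (1 - p) * Is
            dIs <-  beta * S * Is - tau * Is          # treatment removes Is
            dIr <-  beta * S * Ir + tau * p * Is      # acquired resistance
            list(c(dS, dIs, dIr))
          })
        }

        parms <- c(beta = 0.5,   # transmission rate (illustrative)
                   tau  = 0.2,   # antibiotic treatment rate (illustrative)
                   p    = 0.1)   # probability treatment induces resistance
        y0  <- c(S = 0.49, Is = 0.50, Ir = 0.01)
        out <- ode(y = y0, times = seq(0, 20, by = 0.1),
                   func = two_strain_sis, parms = parms)
        matplot(out[, 1], out[, -1], type = "l", lty = 1,
                xlab = "Years", ylab = "Proportion of population")

    Even in this stripped-down version, raising tau speeds the initial decline in infection while shifting the remaining infections toward the resistant strain, the trade-off the poster quantifies.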

    Statistical and mathematical modeling to evaluate the cost-effectiveness of Helicobacter pylori screening and treating strategies in Mexico in the setting of antibiotic resistance

    University of Minnesota Ph.D. dissertation, August 2017. Major: Health Services Research, Policy and Administration. Advisors: Karen Kuntz, Eva Enns. 1 computer file (PDF); ix, 128 pages.
    Helicobacter pylori (H. pylori), a bacterium present in the stomach of half of the world's population, with a disproportionate burden in developing countries, is the strongest known biological risk factor for gastric cancer. Gastric cancer is the fourth most common type of cancer and the second leading cause of cancer death in the world. In Mexico in particular, gastric cancer is the third leading cause of cancer death in adults, with some regions having cancer mortality rates twice the national average (8.0 vs. 3.9 per 100,000). H. pylori can be treated with antibiotics, but widespread treatment may lead to significant levels of antibiotic resistance (ABR). ABR is one of the main causes of H. pylori treatment failure and represents one of the greatest emerging global health threats. In this thesis, we use statistical and mathematical modeling to investigate the health benefits, harms, costs, and cost-effectiveness of screen-and-treat strategies for identifying and treating persons with H. pylori, to inform public health practice, in three steps. First, we estimated the age-specific force of infection of H. pylori, defined as the instantaneous per capita rate at which susceptible individuals acquire infection, using a novel hierarchical nonlinear Bayesian catalytic epidemic model with data from a national H. pylori seroepidemiology survey in Mexico. Second, we developed an age-structured, susceptible-infected-susceptible (SIS) transmission model of H. pylori infection in Mexico that included both treatment-sensitive and treatment-resistant strains. Model parameters were derived from the published literature and estimated from primary data. Using the model, we projected H. pylori infection and resistance levels over 20 years without treatment and under three hypothetical population-wide treatment policies assumed to be implemented in 2018. In sensitivity analyses, we considered different mixing patterns and trends in background antibiotic use. We validated the model against historical values of the prevalence of H. pylori infection and ABR. Third, we expanded the SIS model to incorporate the natural history of gastric carcinogenesis, including gastritis, intestinal metaplasia, dysplasia, and ultimately non-cardia gastric cancer. We then estimated the cost-effectiveness of various screen-and-treat strategies for H. pylori infection and ABR in the Mexican population from the health sector perspective.
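
    For intuition on the first step, the sketch below fits the simplest catalytic model, in which seroprevalence at age a is 1 - exp(-lambda * a) under a constant force of infection lambda. This is a deliberate simplification of the hierarchical nonlinear Bayesian model used in the thesis, and the age-grouped data are invented for illustration.

        # Catalytic model of seroconversion with a constant force of infection
        # Illustrative age-grouped seroprevalence data (made up for this sketch)
        age <- c(2, 5, 10, 15, 20, 30, 40)            # age-group midpoints
        n   <- c(120, 150, 180, 160, 140, 130, 110)   # sampled per group
        pos <- c(12, 34, 71, 85, 88, 101, 95)         # seropositive per group

        # Binomial negative log-likelihood; lambda estimated on the log scale
        negloglik <- function(log_lambda) {
          lambda <- exp(log_lambda)
          p <- 1 - exp(-lambda * age)
          -sum(dbinom(pos, size = n, prob = p, log = TRUE))
        }
        fit <- optim(log(0.05), negloglik, method = "BFGS")
        exp(fit$par)  # estimated force of infection (per person-year)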

    Potential bias associated with modeling the effectiveness of treatment using an overall hazard ratio

    Poster presented at the 36th Annual Meeting of the Society for Medical Decision Making in Miami, FL, October 2014.
    Purpose: Clinical trials often report treatment efficacy in terms of the reduction in all-cause mortality [i.e., an overall hazard ratio (OHR)] rather than the reduction in disease-specific mortality [i.e., a disease-specific hazard ratio (DSHR)]. Using an OHR to reduce all-cause mortality beyond the time horizon of the clinical trial may introduce bias if the relative proportion of other-cause mortality increases with age. We aim to quantify this bias. Methods: We simulated a hypothetical cohort of patients with a generic disease that increases the age-, sex-, and race-specific mortality rate (μASR) by a constant additive disease-specific rate (μDis). We assumed a DSHR of 0.75 (unreported) and an OHR of 0.80 (reported, derived from the DSHR and assumptions about the clinical trial population). We quantified the bias as the difference in life expectancy (LE) gains with treatment between using an OHR approach to reduce all-cause mortality over a lifetime [(μASR + μDis) × OHR] and using a DSHR approach to reduce disease-specific mortality only [μASR + μDis × DSHR]. We varied the starting age of the cohort from 40 to 70 years old. Results: The OHR bias increases as the DSHR decreases and with younger starting ages of the cohort. For a cohort of 60-year-old sick patients, the mortality rate under the OHR approach crosses μASR at the age of 90 (see figure), and the LE gain is overestimated by 0.6 years (a 3.7% increase). We also used the OHR as an estimate of the DSHR [μASR + μDis × OHR] (as the latter is often not reported). This resulted in a slight shift in the mortality rate compared with the DSHR approach (see figure), yielding an underestimation of the LE gain. Conclusions: Using an OHR approach to model treatment effectiveness beyond the time horizon of the trial overestimates the effectiveness of the treatment. Under an OHR approach, sick individuals may at some point face a lower mortality rate than healthy individuals. We recommend either deriving a DSHR from trials and using the DSHR approach, or using the OHR as an estimate of the DSHR in the model, which is a conservative assumption.
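
    The bias can be reproduced numerically. The sketch below compares life expectancy gains under the two approaches defined above, using an illustrative Gompertz background hazard; the parameter values are placeholders, not those underlying the poster's results.

        # Compare the OHR and DSHR approaches for a cohort starting at age 60
        ages   <- seq(60, 110, by = 0.1)
        dt     <- 0.1
        mu_asr <- 1e-4 * exp(0.09 * (ages - 30))  # illustrative Gompertz hazard
        mu_dis <- 0.02                            # added disease-specific rate
        DSHR   <- 0.75
        OHR    <- 0.80

        # Life expectancy as the integral of the survival curve
        life_exp <- function(mu) sum(exp(-cumsum(mu * dt)) * dt)

        le_untreated <- life_exp(mu_asr + mu_dis)
        le_dshr      <- life_exp(mu_asr + mu_dis * DSHR)   # DSHR approach
        le_ohr       <- life_exp((mu_asr + mu_dis) * OHR)  # OHR approach

        c(gain_dshr = le_dshr - le_untreated,
          gain_ohr  = le_ohr  - le_untreated)  # OHR approach overstates the gain

    Because the OHR also scales down the background hazard, which dominates at older ages, the treated hazard under the OHR approach eventually falls below μASR itself, exactly the crossing behavior the poster describes.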

    Characterization and valuation of uncertainty of calibrated parameters in stochastic decision models

    We evaluated how different approaches to characterizing the uncertainty of calibrated parameters of stochastic decision models (DMs) affect the quantified value of that uncertainty in decision making. We used a microsimulation DM of colorectal cancer (CRC) screening to conduct a cost-effectiveness analysis (CEA) of 10-year colonoscopy screening. We calibrated the natural history model of CRC to epidemiological data with different degrees of uncertainty and obtained the joint posterior distribution of the parameters using a Bayesian approach. We conducted a probabilistic sensitivity analysis (PSA) on all the model parameters under different characterizations of the uncertainty of the calibrated parameters and estimated the value of uncertainty of each characterization with a value of information analysis. All analyses were conducted using high-performance computing resources running the Extreme-scale Model Exploration with Swift (EMEWS) framework. The posterior distribution showed high correlation among some parameters; the parameters of the Weibull hazard function for the age of onset of adenomas had the highest posterior correlation, -0.958. When considering either the full posterior distribution or the maximum-a-posteriori estimate of the calibrated parameters, there was little difference in the spread of the distribution of the CEA outcomes, with similar expected values of perfect information (EVPI) of $653 and $685, respectively, at a willingness-to-pay (WTP) threshold of $66,000/QALY. Ignoring correlation in the posterior distribution of the calibrated parameters produced the widest distribution of CEA outcomes and the highest EVPI, $809, at the same WTP. Different characterizations of the uncertainty of calibrated parameters have implications for the expected value of reducing uncertainty in the CEA. Ignoring the inherent correlation among calibrated parameters in a PSA overestimates the value of uncertainty.
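
    For reference, the EVPI reported above is computed from PSA output as the expected net monetary benefit (NMB) of deciding with perfect information minus the NMB of the strategy that is best on average. A minimal sketch with simulated NMB values (illustrative, not drawn from the CRC model):

        # EVPI from a PSA sample: rows are parameter draws, columns are strategies,
        # entries are NMB at a given WTP threshold (values invented for this sketch)
        set.seed(1)
        n <- 10000
        nmb <- cbind(no_screen       = rnorm(n, 100000, 1500),
                     colonoscopy_10y = rnorm(n, 100500, 2000))
        evpi <- mean(apply(nmb, 1, max)) - max(colMeans(nmb))
        evpi

    Narrowing or widening the joint distribution of the PSA draws, as the different characterizations of calibrated-parameter uncertainty do, changes how often the best strategy switches across draws, and hence the EVPI.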

    Calculating the Expected Value of Sample Information in Practice: Considerations from Three Case Studies

    Investing efficiently in future research to improve policy decisions is an important goal. The Expected Value of Sample Information (EVSI) can be used to select the specific design and sample size of a proposed study by assessing the benefit of a range of different studies. Estimating EVSI with the standard nested Monte Carlo algorithm has a notoriously high computational burden, especially when using a complex decision model or when optimizing over study sample sizes and designs. Therefore, a number of more efficient EVSI approximation methods have been developed. However, these approximation methods had not been compared, so their relative advantages and disadvantages were unclear. A consortium of EVSI researchers, including the developers of several approximation methods, compared four EVSI methods using three previously published health economic models. The examples were chosen to represent a range of real-world contexts, including situations with multiple study outcomes, missing data, and data from an observational rather than a randomized study. The computational speed and accuracy of each method were compared, and the relative advantages and implementation challenges of the methods were highlighted. In each example, the approximation methods took minutes or hours to achieve reasonably accurate EVSI estimates, whereas the traditional Monte Carlo method took weeks. Specific methods are particularly suited to problems where multiple proposed sample sizes must be compared, where the proposed sample size is large, or where the health economic model is computationally expensive. All the evaluated methods gave estimates similar to those from traditional Monte Carlo, suggesting that EVSI can now be efficiently computed with confidence in realistic examples.
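
    For a sense of the underlying computation, the sketch below estimates EVSI for a toy two-strategy problem in which the incremental net benefit depends on a single parameter with a normal prior; conjugate updating makes the inner posterior expectation analytic, sidestepping the nested inner loop that makes the standard algorithm expensive. All values are illustrative.

        # EVSI for a study of size n_study measuring an incremental net benefit
        # theta ~ Normal(mu0, sd0); the comparator's net benefit is 0
        set.seed(1)
        mu0 <- 500; sd0 <- 1000   # prior on incremental net benefit (illustrative)
        sd_obs  <- 1500           # sampling sd of the study outcome
        n_study <- 50             # proposed study sample size
        n_outer <- 10000          # outer simulation loop

        theta <- rnorm(n_outer, mu0, sd0)                       # true values
        xbar  <- rnorm(n_outer, theta, sd_obs / sqrt(n_study))  # study results
        # Posterior mean of theta given each simulated study result (conjugacy)
        w <- (n_study / sd_obs^2) / (n_study / sd_obs^2 + 1 / sd0^2)
        post_mean <- w * xbar + (1 - w) * mu0

        # Value of deciding after the study minus value of deciding now
        evsi <- mean(pmax(post_mean, 0)) - max(mean(theta), 0)
        evsi

    With a realistic health economic model, post_mean has no closed form, which is why the approximation methods compared in this paper matter.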

    Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial

    Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to run microsimulation models more efficiently than software commonly used for decision modeling, to incorporate statistical analyses within decision models, and to produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to building microsimulation models in R and illustrate its use on a simple but transferable hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization.
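
    To give a flavor of the approach, below is a minimal vectorized microsimulation in R: individuals move among three health states over discrete cycles, with the time loop retained but all individuals sampled at once. The states and transition probabilities are illustrative placeholders, not taken from the tutorial.

        # Vectorized microsimulation: n_i individuals, n_t annual cycles,
        # states Healthy (H), Sick (S), Dead (D)
        set.seed(1)
        n_i <- 10000; n_t <- 30
        states <- c("H", "S", "D")
        p <- rbind(H = c(0.85, 0.10, 0.05),   # rows: from-state
                   S = c(0.10, 0.70, 0.20),   # columns: to H, S, D
                   D = c(0.00, 0.00, 1.00))
        colnames(p) <- states
        cumP <- t(apply(p, 1, cumsum))        # row-wise cumulative probabilities

        m <- matrix("H", nrow = n_i, ncol = n_t + 1)  # state of each individual
        for (t in 1:n_t) {
          u <- runif(n_i)  # one draw per individual; inverse-CDF sampling
          m[, t + 1] <- states[rowSums(u > cumP[m[, t], ]) + 1]
        }
        prop.table(table(m[, n_t + 1]))  # state distribution after 30 cycles

    The loop runs over cycles only; all individual-level sampling within a cycle is a single vectorized operation, which is the efficiency idea the tutorial develops.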

    A Multidimensional Array Representation of State-Transition Model Dynamics

    Cost-effectiveness analyses often rely on cohort state-transition models (cSTMs). The primary outcome of a cSTM is the cohort trace, which captures the proportion of the cohort in each health state over time (state occupancy). However, the cohort trace is an aggregated measure that does not capture information about the specific transitions among health states (transition dynamics). In practice, these transition dynamics are crucial in many applications, such as incorporating transition rewards or computing various epidemiological outcomes that could be used for model calibration and validation (e.g., disease incidence and lifetime risk). In this article, we propose an alternative approach to computing and storing cSTM outcomes that captures both state occupancy and transition dynamics. This approach produces a multidimensional array from which both the state occupancy and the transition dynamics can be recovered. We highlight the advantages of the multidimensional array over the traditional cohort trace and describe potential applications of the proposed approach with an example coded in R to facilitate the implementation of our method.
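
    A minimal sketch of the idea in R: each cycle's transitions are stored as a slice of a three-dimensional array, from which both the cohort trace and transition-specific quantities can be recovered. The transition matrix and state labels are illustrative placeholders.

        # A[i, j, t]: proportion of the cohort moving from state i to state j
        # during cycle t
        states <- c("H", "S", "D")
        P <- matrix(c(0.85, 0.10, 0.05,
                      0.10, 0.70, 0.20,
                      0.00, 0.00, 1.00),
                    nrow = 3, byrow = TRUE, dimnames = list(states, states))
        n_t <- 30
        m <- c(H = 1, S = 0, D = 0)   # initial state occupancy
        A <- array(0, dim = c(3, 3, n_t),
                   dimnames = list(from = states, to = states, cycle = 1:n_t))
        for (t in 1:n_t) {
          A[, , t] <- m * P          # row i of P scaled by occupancy of state i
          m <- colSums(A[, , t])     # state occupancy at the end of cycle t
        }
        trace     <- t(apply(A, 3, colSums))  # cohort trace recovered from A
        incidence <- A["H", "S", ]            # e.g., new H-to-S cases per cycle

    The trace is a simple marginal of the array, while quantities such as incidence, which the trace alone cannot provide, are read off directly.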

    A Need for Change! A Coding Framework for Improving Transparency in Decision Modeling

    The use of open-source programming languages, such as R, in health decision sciences is growing and has the potential to facilitate model transparency, reproducibility, and shareability. However, realizing this potential can be challenging. Models are complex and primarily built to answer a research question, with model sharing and transparency relegated to secondary goals. Consequently, code is often neither well documented nor systematically organized in a comprehensible and shareable way. Moreover, many decision modelers are not formally trained in computer programming and may lack good coding practices, further compounding the problem of model transparency. To address these challenges, we propose a high-level framework for model-based decision and cost-effectiveness analyses (CEA) in R. The proposed framework consists of a conceptual, modular structure and coding recommendations for the implementation of model-based decision analyses in R. This framework defines a set of common decision model elements divided into five components: (1) model inputs, (2) decision model implementation, (3) model calibration, (4) model validation, and (5) analysis. The first four components form the model development phase. The analysis component is the application of the fully developed decision model to answer the policy or research question of interest, assess decision uncertainty, and/or determine the value of future research through value of information (VOI) analysis. In this framework, we also make recommendations for good coding practices specific to decision modeling, such as file organization and variable naming conventions. We showcase the framework through a fully functional, testbed decision model, which is hosted on GitHub for free download and easy adaptation to other applications. The use of this framework in decision modeling will improve code readability and model sharing, paving the way to an ideal, open-source world.
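
    As one hypothetical illustration of the file-organization recommendations, a project might separate the five components into sequentially numbered scripts; the layout and file names below are invented for this sketch, not those of the framework's testbed model.

        R/
          01_model_inputs.R     # component 1: load and format model parameters
          02_decision_model.R   # component 2: decision model implementation
          03_calibration.R      # component 3: calibrate parameters to targets
          04_validation.R       # component 4: internal/external validation
          05_analysis.R         # component 5: CEA, PSA, and VOI analyses
        data/                   # raw parameter and calibration-target data
        output/                 # saved model results and figures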

    Emulator-based Bayesian calibration of the CISNET colorectal cancer models

    PURPOSE: To calibrate the Cancer Intervention and Surveillance Modeling Network (CISNET) SimCRC, MISCAN-Colon, and CRC-SPIN simulation models of the natural history of colorectal cancer (CRC) with an emulator-based Bayesian algorithm, and to internally validate the model-predicted outcomes against calibration targets. METHODS: We used Latin hypercube sampling to sample up to 50,000 parameter sets for each CISNET-CRC model and generated the corresponding outputs. We trained multilayer perceptron artificial neural networks (ANNs) as emulators using the input and output samples for each CISNET-CRC model. We selected ANN structures with corresponding hyperparameters (i.e., number of hidden layers, nodes, activation functions, epochs, and optimizer) that minimized the predicted mean squared error on the validation sample. We implemented the ANN emulators in a probabilistic programming language and calibrated the input parameters with Hamiltonian Monte Carlo-based algorithms to obtain the joint posterior distributions of the CISNET-CRC models' parameters. We internally validated each calibrated emulator by comparing the model-predicted posterior outputs against the calibration targets. RESULTS: The optimal ANN for SimCRC had four hidden layers and 360 hidden nodes, the one for MISCAN-Colon had four hidden layers and 114 hidden nodes, and the one for CRC-SPIN had one hidden layer and 140 hidden nodes. The total time for training and calibrating the emulators was 7.3, 4.0, and 0.66 hours for SimCRC, MISCAN-Colon, and CRC-SPIN, respectively. The mean of the model-predicted outputs fell within the 95% confidence intervals of the calibration targets for 98 of 110 targets for SimCRC, 65 of 93 for MISCAN-Colon, and 31 of 41 for CRC-SPIN. CONCLUSIONS: Using ANN emulators is a practical solution to reduce the computational burden and complexity of Bayesian calibration of individual-level simulation models used for policy analysis, like the CISNET CRC models.
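
    The sketch below illustrates the emulation step on a toy problem: parameters are sampled by Latin hypercube, run through a stand-in simulation model, and used to train a neural-network emulator. For simplicity it uses nnet, which fits a single hidden layer rather than the multilayer networks used for the CISNET models; all names and values are illustrative.

        # Latin hypercube design + neural-network emulator of a toy model
        library(lhs)
        library(nnet)
        set.seed(1)

        n <- 5000
        X <- randomLHS(n, 2)            # 2 model parameters scaled to [0, 1]
        run_model <- function(x) {      # stand-in for an expensive simulator
          sin(2 * pi * x[1]) + x[2]^2   # returns one "calibration target" output
        }
        y <- apply(X, 1, run_model)

        fit <- nnet(X, y, size = 20, linout = TRUE,
                    maxit = 500, trace = FALSE)   # train the emulator
        # The cheap emulator can now stand in for the simulator inside a
        # Bayesian calibration routine (e.g., Hamiltonian Monte Carlo)
        predict(fit, matrix(c(0.3, 0.7), nrow = 1))

    The payoff is that each posterior evaluation during calibration costs one emulator prediction instead of one full simulation run.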