    Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial

    Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to run microsimulation models more efficiently than software commonly used for decision modeling, to incorporate statistical analyses within decision models, and to produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to building microsimulation models in R and illustrate its use on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization.
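
    To make the vectorized microsimulation idea concrete, the following is a minimal R sketch of a hypothetical three-state model (Healthy, Sick, Dead); the states, transition probabilities, cohort size, and variable names are illustrative assumptions, not taken from the tutorial.

        # Minimal microsimulation sketch with vectorized state updates.
        # States and transition probabilities are illustrative only.
        set.seed(1)
        n_i      <- 1000                       # number of simulated individuals
        n_cycles <- 30                         # number of cycles
        v_states <- c("H", "S", "D")           # Healthy, Sick, Dead
        m_p <- matrix(c(0.85, 0.10, 0.05,      # rows: current state; columns: next state
                        0.00, 0.70, 0.30,
                        0.00, 0.00, 1.00),
                      nrow = 3, byrow = TRUE, dimnames = list(v_states, v_states))
        m_state <- matrix(NA_character_, nrow = n_i, ncol = n_cycles + 1)
        m_state[, 1] <- "H"                    # everyone starts healthy
        for (t in 1:n_cycles) {
          m_probs <- m_p[m_state[, t], , drop = FALSE]           # transition rows for all individuals
          m_cum   <- t(apply(m_probs, 1, cumsum))                # cumulative probabilities per individual
          u       <- runif(n_i)                                  # one uniform draw per individual
          m_state[, t + 1] <- v_states[rowSums(u > m_cum) + 1]   # vectorized state assignment
        }
        prop.table(table(m_state[, n_cycles + 1]))               # state occupancy at the final cycle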

    Addressing Pediatric HIV Pretreatment Drug Resistance and Virologic Failure in Sub-Saharan Africa: A Cost-Effectiveness Analysis of Diagnostic-Based Strategies in Children ≥3 Years Old

    Improvement of antiretroviral therapy (ART) regimen switching practices and implementation of pretreatment drug resistance (PDR) testing are two potential approaches to improve health outcomes for children living with HIV. We developed a microsimulation model of disease progression and treatment focused on children with perinatally acquired HIV in sub-Saharan Africa who initiate ART at 3 years of age. We evaluated the cost-effectiveness of diagnostic-based strategies (improved switching and PDR testing), over a 10-year time horizon, in settings without and with pediatric dolutegravir (DTG) availability as first-line ART. The improved switching strategy increases the probability of switching to second-line ART when virologic failure is diagnosed through viral load testing. The PDR testing strategy involves a one-time PDR test prior to ART initiation to guide choice of initial regimen. When DTG is not available, PDR testing is dominated by the improved switching strategy, which has an incremental cost-effectiveness ratio (ICER) of USD 579 per life-year gained (LY), relative to the status quo. If DTG is available, improved switching has a similar ICER (USD 591/LY) relative to the DTG status quo. Even when substantial financial investment is needed to achieve improved regimen switching practices, the improved switching strategy still has the potential to be cost-effective in a wide range of sub-Saharan African countries. Our analysis highlights the importance of strengthening existing laboratory monitoring systems to improve the health of children living with HIV.
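
    As a reminder of how the reported ICERs and the dominance result are constructed, here is a small R sketch of an incremental cost-effectiveness calculation; the costs and life-years are placeholder values, not the study's inputs.

        # Illustrative ICER calculation (placeholder values, not the study's results).
        df <- data.frame(
          strategy = c("status quo", "improved switching", "PDR testing"),
          cost     = c(4000, 4600, 4900),   # discounted cost per child, USD (hypothetical)
          ly       = c(8.0,  9.0,  9.0)     # discounted life-years (hypothetical)
        )
        # PDR testing costs more than improved switching for the same life-years,
        # so it is dominated; the remaining comparison is improved switching vs status quo.
        icer <- (df$cost[2] - df$cost[1]) / (df$ly[2] - df$ly[1])
        icer                                # incremental USD per life-year gained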

    A Multidimensional Array Representation of State-Transition Model Dynamics

    Cost-effectiveness analyses often rely on cohort state-transition models (cSTMs). The cohort trace is the primary outcome of cSTMs, which captures the proportion of the cohort in each health state over time (state occupancy). However, the cohort trace is an aggregated measure that does not capture information about the specific transitions among health states (transition dynamics). In practice, these transition dynamics are crucial in many applications, such as incorporating transition rewards or computing various epidemiological outcomes that could be used for model calibration and validation (e.g., disease incidence and lifetime risk). In this article, we propose an alternative approach to compute and store cSTM outcomes that captures both state occupancy and transition dynamics. This approach produces a multidimensional array from which both the state occupancy and the transition dynamics can be recovered. We highlight the advantages of the multidimensional array over the traditional cohort trace and provide potential applications of the proposed approach with an example coded in R to facilitate the implementation of our method.
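
    The idea can be sketched in a few lines of R: store the proportions making each transition in a from-state x to-state x cycle array, from which both the cohort trace and transition-level quantities are recoverable. The three-state model, transition probabilities, and object names below are illustrative assumptions, not the article's example.

        # Multidimensional array of transition dynamics for a hypothetical 3-state cSTM.
        states   <- c("H", "S", "D")
        n_s      <- length(states)
        n_cycles <- 50
        m_P <- matrix(c(0.85, 0.10, 0.05,
                        0.00, 0.70, 0.30,
                        0.00, 0.00, 1.00),
                      nrow = n_s, byrow = TRUE, dimnames = list(states, states))
        a_A <- array(0, dim = c(n_s, n_s, n_cycles),
                     dimnames = list(from = states, to = states, cycle = 1:n_cycles))
        occupancy <- c(H = 1, S = 0, D = 0)            # initial state occupancy
        for (t in 1:n_cycles) {
          a_A[, , t] <- occupancy * m_P                # proportion making each transition in cycle t
          occupancy  <- colSums(a_A[, , t])            # state occupancy after cycle t
        }
        m_trace     <- t(apply(a_A, 3, colSums))       # cohort trace recovered from the array
        incidence_S <- a_A["H", "S", ]                 # e.g., new Sick cases arising from Healthy each cycle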

    A Need for Change! A Coding Framework for Improving Transparency in Decision Modeling

    The use of open-source programming languages, such as R, in health decision sciences is growing and has the potential to facilitate model transparency, reproducibility, and shareability. However, realizing this potential can be challenging. Models are complex and primarily built to answer a research question, with model sharing and transparency relegated to being secondary goals. Consequently, code is often neither well documented nor systematically organized in a comprehensible and shareable manner. Moreover, many decision modelers are not formally trained in computer programming and may lack good coding practices, further compounding the problem of model transparency. To address these challenges, we propose a high-level framework for model-based decision and cost-effectiveness analyses (CEA) in R. The proposed framework consists of a conceptual, modular structure and coding recommendations for the implementation of model-based decision analyses in R. This framework defines a set of common decision model elements divided into five components: (1) model inputs, (2) decision model implementation, (3) model calibration, (4) model validation, and (5) analysis. The first four components form the model development phase. The analysis component is the application of the fully developed decision model to answer the policy or research question of interest, assess decision uncertainty, and/or determine the value of future research through value-of-information (VOI) analysis. In this framework, we also make recommendations for good coding practices specific to decision modeling, such as file organization and variable naming conventions. We showcase the framework through a fully functional, testbed decision model, which is hosted on GitHub for free download and easy adaptation to other applications. The use of this framework in decision modeling will improve code readability and model sharing, paving the way to an ideal, open-source world.
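
    One possible way to realize the five components as an R project layout is sketched below; the file names and top-level script are hypothetical illustrations, not the framework's actual repository structure.

        # Illustrative project layout for the five components (file names are hypothetical):
        #   R/01_model_inputs.R      # component 1: define and load all model inputs
        #   R/02_decision_model.R    # component 2: implement the decision model
        #   R/03_calibration.R       # component 3: calibrate unobserved parameters
        #   R/04_validation.R        # component 4: validate against external targets
        #   R/05_analysis.R          # component 5: CEA, decision uncertainty, and VOI analysis
        # A top-level script can then source the developed model and run the analysis phase:
        source("R/01_model_inputs.R")
        source("R/02_decision_model.R")
        source("R/05_analysis.R")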

    Albiglutide and cardiovascular outcomes in patients with type 2 diabetes and cardiovascular disease (Harmony Outcomes): a double-blind, randomised placebo-controlled trial

    Background: Glucagon-like peptide 1 receptor agonists differ in chemical structure, duration of action, and in their effects on clinical outcomes. The cardiovascular effects of once-weekly albiglutide in type 2 diabetes are unknown. We aimed to determine the safety and efficacy of albiglutide in preventing cardiovascular death, myocardial infarction, or stroke. Methods: We did a double-blind, randomised, placebo-controlled trial in 610 sites across 28 countries. We randomly assigned patients aged 40 years and older with type 2 diabetes and cardiovascular disease (at a 1:1 ratio) to groups that either received a subcutaneous injection of albiglutide (30–50 mg, based on glycaemic response and tolerability) or a matched volume of placebo once a week, in addition to their standard care. Investigators used an interactive voice or web response system to obtain treatment assignment, and patients and all study investigators were masked to their treatment allocation. We hypothesised that albiglutide would be non-inferior to placebo for the primary outcome of the first occurrence of cardiovascular death, myocardial infarction, or stroke, which was assessed in the intention-to-treat population. If non-inferiority was confirmed by an upper limit of the 95% CI for a hazard ratio of less than 1·30, closed testing for superiority was prespecified. This study is registered with ClinicalTrials.gov, number NCT02465515. Findings: Patients were screened between July 1, 2015, and Nov 24, 2016. 10 793 patients were screened and 9463 participants were enrolled and randomly assigned to groups: 4731 patients were assigned to receive albiglutide and 4732 patients to receive placebo. On Nov 8, 2017, it was determined that 611 primary endpoints and a median follow-up of at least 1·5 years had accrued, and participants returned for a final visit and discontinuation from study treatment; the last patient visit was on March 12, 2018. These 9463 patients, the intention-to-treat population, were evaluated for a median duration of 1·6 years and were assessed for the primary outcome. The primary composite outcome occurred in 338 (7%) of 4731 patients at an incidence rate of 4·6 events per 100 person-years in the albiglutide group and in 428 (9%) of 4732 patients at an incidence rate of 5·9 events per 100 person-years in the placebo group (hazard ratio 0·78, 95% CI 0·68–0·90), which indicated that albiglutide was superior to placebo (p<0·0001 for non-inferiority; p=0·0006 for superiority). The incidence of acute pancreatitis (ten patients in the albiglutide group and seven patients in the placebo group), pancreatic cancer (six patients in the albiglutide group and five patients in the placebo group), medullary thyroid carcinoma (zero patients in both groups), and other serious adverse events did not differ between the two groups. There were three (<1%) deaths in the placebo group and two (<1%) in the albiglutide group that investigators, who were masked to study drug assignment, assessed as treatment-related. Interpretation: In patients with type 2 diabetes and cardiovascular disease, albiglutide was superior to placebo with respect to major adverse cardiovascular events. Evidence-based glucagon-like peptide 1 receptor agonists should therefore be considered as part of a comprehensive strategy to reduce the risk of cardiovascular events in patients with type 2 diabetes. Funding: GlaxoSmithKline.

    Perfectionism and competitive anxiety in athletes: Differentiating striving for perfection and negative reactions to imperfection

    Whereas some researchers have argued that perfectionism in sports is maladaptive because it is related to dysfunctional characteristics such as higher competitive anxiety, the present article argues that striving for perfection is not maladaptive and is unrelated to competitive anxiety. Four samples of athletes (high school athletes, female soccer players, and two samples of university student athletes) completed measures of perfectionism during competitions and competitive anxiety. Across samples, results showed that overall perfectionism was associated with higher cognitive and somatic competitive anxiety. However, when striving for perfection and negative reactions to imperfection were differentiated, only the latter were associated with higher anxiety, whereas striving for perfection was unrelated to anxiety. Moreover, once the influence of negative reactions to imperfection was partialled out, striving for perfection was associated with lower anxiety and higher self-confidence. The present findings suggest that striving for perfection in sports is not maladaptive. On the contrary, athletes who strive for perfection and successfully control their negative reactions to imperfection may even experience less anxiety and more self-confidence during competitions.
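
    The "partialling out" step mentioned above amounts to a partial correlation. A minimal R sketch on simulated data follows; the variable names and effect sizes are invented for illustration and are not the study's measures or results.

        # Simulated illustration of partialling out negative reactions to imperfection.
        set.seed(123)
        n <- 200
        striving  <- rnorm(n)                              # striving for perfection
        reactions <- 0.5 * striving + rnorm(n)             # negative reactions to imperfection
        anxiety   <- 0.4 * reactions - 0.2 * striving + rnorm(n)
        cor(striving, anxiety)                             # zero-order correlation (near zero by construction)
        # partial correlation of striving with anxiety, controlling for negative reactions
        res_striving <- resid(lm(striving ~ reactions))
        res_anxiety  <- resid(lm(anxiety ~ reactions))
        cor(res_striving, res_anxiety)                     # negative here by construction, mirroring the article's pattern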

    Quantifying social contact patterns in Minnesota during stay-at-home social distancing order

    SARS-CoV-2 is primarily transmitted through person-to-person contact. It is important to collect information on age-specific contact patterns because SARS-CoV-2 susceptibility, transmission, and morbidity vary by age. To reduce the risk of infection, social distancing measures have been implemented. Social contact data, which identify who has contact with whom, especially by age and place, are needed to identify high-risk groups and to inform the design of non-pharmaceutical interventions. We used negative binomial regression to estimate and compare the number of daily contacts during the first round (April–May 2020) of the Minnesota Social Contact Study, based on respondents' age, gender, race/ethnicity, region, and other demographic characteristics. We used information on the age and location of contacts to generate age-structured contact matrices. Finally, we compared the age-structured contact matrices during the stay-at-home order to pre-pandemic matrices. During the statewide stay-at-home order, the mean daily number of contacts was 5.7. We found significant variation in contacts by age, gender, race, and region. Adults between 40 and 50 years of age had the highest number of contacts. The way race/ethnicity was coded influenced patterns between groups. Respondents living in Black households (which include many White respondents living in inter-racial households with Black family members) had 2.7 more contacts than respondents in White households; we did not find the same pattern when we focused on individuals' reported race/ethnicity. Asian or Pacific Islander (API) respondents, or respondents in API households, had approximately the same number of contacts as respondents in White households. Respondents in Hispanic households had approximately two fewer contacts than those in White households; likewise, Hispanic respondents had three fewer contacts than White respondents. Most contacts were with other individuals in the same age group. Compared with the pre-pandemic period, the largest declines occurred in contacts between children and in contacts between those over 60 and those below 60.
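
    A minimal R sketch of the regression step described above, using simulated survey data; the variable names, categories, and dispersion parameter are illustrative assumptions, and glm.nb from the MASS package stands in for whatever implementation the study used.

        # Negative binomial regression of daily contact counts on respondent characteristics.
        # Data are simulated; variables and categories are illustrative only.
        library(MASS)
        set.seed(42)
        df <- data.frame(
          n_contacts = rnbinom(500, size = 1.2, mu = 5.7),          # reported daily contacts
          age_grp    = sample(c("0-17", "18-39", "40-59", "60+"), 500, replace = TRUE),
          gender     = sample(c("female", "male"), 500, replace = TRUE)
        )
        fit <- glm.nb(n_contacts ~ age_grp + gender, data = df)     # compare contacts across groups
        exp(coef(fit))                                              # rate ratios relative to the reference group
        # An age-structured contact matrix additionally requires the age group of each
        # reported contact, so that entry (i, j) is the mean number of contacts a
        # respondent in group i reports with people in group j.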

    The illusion of personal health decisions for infectious disease management: disease spread in social contact networks

    Close contacts between individuals provide opportunities for the transmission of diseases, including COVID-19. While individuals take part in many different types of interactions, including those with classmates, co-workers, and household members, it is the conglomeration of all of these interactions that produces the complex social contact network interconnecting individuals across the population. Thus, while an individual might decide their own risk tolerance in response to a threat of infection, the consequences of such decisions are rarely so confined, propagating far beyond any one person. We assess the effect of different population-level risk-tolerance regimes, population structure in the form of age and household-size distributions, and different interaction types on epidemic spread in plausible human contact networks to gain insight into how contact network structure affects pathogen spread through a population. In particular, we find that behavioural changes by vulnerable individuals in isolation are insufficient to reduce those individuals' infection risk and that population structure can have varied and counteracting effects on epidemic outcomes. The relative impact of each interaction type was contingent on assumptions underlying contact network construction, stressing the importance of empirical validation. Taken together, these results promote a nuanced understanding of disease spread on contact networks, with implications for public health strategies.
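
    A minimal R sketch of disease spread on a contact network follows, using the igraph package with a random graph standing in for the layered, empirically informed networks described in the paper; the network, seeding, and transmission parameters are all illustrative assumptions.

        # Simple susceptible-infected-recovered spread on a placeholder contact network.
        library(igraph)
        set.seed(7)
        g <- sample_gnm(n = 500, m = 1500)               # stand-in for an empirical contact network
        status <- rep("S", vcount(g))                    # S = susceptible, I = infected, R = recovered
        status[sample(vcount(g), 5)] <- "I"              # seed infections
        p_transmit <- 0.05                               # per-step infection probability for an exposed node
        p_recover  <- 0.10                               # per-step recovery probability
        for (step in 1:100) {
          infected <- which(status == "I")
          if (length(infected) == 0) break
          exposed <- unique(unlist(adjacent_vertices(g, infected)))    # neighbours of infected nodes
          at_risk <- exposed[status[exposed] == "S"]
          status[at_risk[runif(length(at_risk)) < p_transmit]] <- "I"  # one exposure draw per node (a simplification)
          status[infected[runif(length(infected)) < p_recover]] <- "R"
        }
        table(status)                                    # final epidemic size by compartment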