
    Simulation in Practice: The Balancing Intercept

    Simulation is an important tool within epidemiology for both learning and developing new methodology (1–5). Unfortunately, few epidemiology training programs teach basic simulation methods. Briefly, when conducting a simulation experiment, we generally follow the same basic steps. We first decide which variables to include, as well as their distributions and associations—often aided by a causal diagram. We then generate those variables by sampling from their specified distributions and estimate whatever target parameter is of interest (e.g., sample average or causal effect). We finally repeat the process multiple times, building a distribution for the target parameter from the estimates obtained in each replicate.
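    The steps above translate directly into code. A minimal sketch in Python, where the confounder-exposure-outcome structure, effect sizes, and sample sizes are illustrative assumptions rather than anything specified in the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def one_replicate(n=1000):
        # Steps 1-2: generate variables from an assumed causal structure in
        # which a confounder L affects both exposure A and outcome Y.
        L = rng.binomial(1, 0.5, n)
        A = rng.binomial(1, 0.3 + 0.3 * L)
        Y = rng.normal(1.0 * A + 0.5 * L, 1.0)
        # Step 3: estimate the target parameter; here, a crude mean difference.
        return Y[A == 1].mean() - Y[A == 0].mean()

    # Step 4: repeat many times and summarize the estimator's distribution.
    estimates = np.array([one_replicate() for _ in range(2000)])
    print(estimates.mean(), np.percentile(estimates, [2.5, 97.5]))
    ```

    Each call to one_replicate() is one simulated study; the percentiles of the collected estimates summarize the sampling distribution of the estimator across replicates.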

    Lost Opportunities Concerning Loss-to-Follow-up: A Response to Elul et al

    My colleagues and I read with great interest the recent publication from Elul et al., which attempts to ‘untangle’ the relationship between antiretroviral therapy (ART) use and incident pregnancy among HIV-positive women in East Africa. While we applaud the authors’ use of competing risk analysis and marginal structural models, we have concerns about their decision to treat loss-to-follow-up (LTFU) as a competing risk and about their not considering the possibility of informative censoring.

    Estimating Human Immunodeficiency Virus (HIV) Prevention Effects in Low-incidence Settings

    Background: Randomized controlled trials (RCTs) for determining efficacy of preexposure prophylaxis (PrEP) in preventing human immunodeficiency virus (HIV) infection have not been conducted among US women because their lower HIV incidence requires impractically large studies. Results from higher-incidence settings, like Sub-Saharan Africa, may not apply to US women owing to differences in age, sexual behavior, coinfections, and adherence. Methods: We propose a novel strategy for evaluating PrEP efficacy in the United States using data from both settings to obtain four parameters: (1) intention-to-treat (ITT) and (2) per-protocol effects in the higher-incidence setting, (3) per-protocol effect generalized to the lower-incidence setting, and (4) back-calculated ITT effect using adherence data from the lower-incidence setting. To illustrate, we simulated two RCTs comparing PrEP against placebo: one in 4000 African women and another in 500 US women. We estimated all parameters using g-computation and report risk ratios averaged over 2000 simulations, alongside the 2.5th and 97.5th percentiles of the simulation results. Results: Twelve months after randomization, the African ITT and per-protocol risk ratios were 0.65 (0.47, 0.88) and 0.20 (0.08, 0.34), respectively. The US ITT and per-protocol risk ratios were 0.42 (0.20, 0.62) and 0.17 (0.03, 0.38), respectively. These results matched the simulated true effects well. Conclusions: Our simple demonstration informs the design of future studies seeking to estimate the effectiveness of a treatment (like PrEP) in lower-incidence settings where a traditional RCT would not be feasible. See the video abstract at http://links.lww.com/EDE/B506.
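    The estimator behind these numbers is g-computation: fit an outcome model, predict each participant's risk under both treatment assignments, and average. A minimal sketch, assuming a single baseline covariate W and a logistic outcome model (both illustrative simplifications, not the authors' exact specification):

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def g_computation_rr(df: pd.DataFrame) -> float:
        """Marginal risk ratio for binary treatment A on binary outcome Y,
        standardized over a covariate W via an outcome regression."""
        model = smf.glm("Y ~ A + W", data=df,
                        family=sm.families.Binomial()).fit()
        risk_treated = model.predict(df.assign(A=1)).mean()    # everyone on PrEP
        risk_untreated = model.predict(df.assign(A=0)).mean()  # everyone on placebo
        return risk_treated / risk_untreated
    ```

    Applying such a function to each of the 2000 simulated trials and taking the 2.5th and 97.5th percentiles of the returned ratios mirrors the summaries reported above.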

    Estimating Associations Between Annual Concentrations of Particulate Matter and Mortality in the United States, Using Data Linkage and Bayesian Maximum Entropy

    Background: Exposure to fine particulate matter (PM2.5) is an established risk factor for human mortality. However, previous US studies have been limited to select cities or regions or to population subsets (e.g., older adults). Methods: Here, we demonstrate how to use the novel geostatistical method Bayesian maximum entropy to obtain estimates of PM2.5 concentrations in all contiguous US counties, 2000–2016. We then demonstrate how one could use these estimates in a traditional epidemiologic analysis examining the association between PM2.5 and rates of all-cause, cardiovascular, respiratory, and (as a negative control outcome) accidental mortality. Results: We estimated that, for a 1 log(μg/m3) increase in PM2.5 concentration, the conditional all-cause mortality incidence rate ratio (IRR) was 1.029 (95% confidence interval [CI]: 1.006, 1.053). This implies that the rate of all-cause mortality at 10 µg/m3 would be 1.020 times the rate at 5 µg/m3. IRRs were larger for cardiovascular mortality than for all-cause mortality in all gender and race–ethnicity groups. We observed larger IRRs for all-cause, nonaccidental, and respiratory mortality in Black non-Hispanic Americans than in White non-Hispanic Americans. However, our negative control analysis indicated the possibility of unmeasured confounding. Conclusion: We used a novel method that allowed us to estimate PM2.5 concentrations in all contiguous US counties and obtained estimates of the association between PM2.5 and mortality comparable to those in previous studies. Our analysis provides one example of how Bayesian maximum entropy could be used in epidemiologic analyses; future work could explore other ways to use this approach to inform important public health questions.
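    The rescaling in the second Results sentence is simply exponentiation of the per-log-unit IRR by the difference in logs; a quick check of the arithmetic in Python:

    ```python
    import math

    irr_per_log_unit = 1.029                # IRR per 1-unit increase in log(PM2.5)
    delta_log = math.log(10) - math.log(5)  # = ln(2), about 0.693
    print(round(irr_per_log_unit ** delta_log, 3))  # prints 1.02
    ```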

    Using animations of risk functions to visualize trends in US all-cause and cause-specific mortality, 1968-2016

    Objectives. To use dynamic visualizations of mortality risk functions over both calendar year and age as a way to estimate and visualize patterns in US life spans. Methods. We built 49 synthetic cohorts, one per year from 1968 to 2016, using National Center for Health Statistics (NCHS) mortality and population data. Within each cohort, we estimated age-specific probabilities of dying from any cause (all-cause analysis) or from a particular cause (cause-specific analysis). We then used Kaplan–Meier (all-cause) or Aalen–Johansen (cause-specific) estimators to obtain risk functions. We illustrated risk functions using time-lapse animations. Results. Median age at death increased from 75 years in 1970 to 83 years in 2015. The risk of cardiovascular mortality by age 100 years decreased (from 55% in 1970 to 32% in 2015), whereas risk attributable to other (i.e., nonrespiratory and noncardiovascular) causes increased in compensation. Conclusions. Our findings were consistent with the trends published in the NCHS 2015 mortality report, and our dynamic animations added an efficient, interpretable tool for visualizing US mortality trends over age and calendar time.
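    A minimal sketch of the all-cause estimator, assuming individual-level ages at death or censoring for one synthetic cohort (an illustrative simplification; the NCHS inputs are tabulated death counts and population sizes rather than individual records):

    ```python
    import numpy as np

    def km_risk(age, died):
        """Kaplan-Meier risk function (1 - survival) by age for one cohort.
        age: age at death or censoring; died: 1 if a death was observed."""
        order = np.argsort(age)
        t, d = np.asarray(age)[order], np.asarray(died)[order]
        event_ages = np.unique(t[d == 1])
        at_risk = np.array([(t >= a).sum() for a in event_ages])
        deaths = np.array([((t == a) & (d == 1)).sum() for a in event_ages])
        survival = np.cumprod(1 - deaths / at_risk)
        return event_ages, 1 - survival
    ```

    Computing one such curve per calendar year and redrawing the curves frame by frame (e.g., with matplotlib.animation) yields the kind of time-lapse animation described above.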

    Demographic Trends in US HIV Diagnoses, 2008–2017: Data Movies

    In this editorial, we introduce the data movie as a tool for investigating and communicating changing patterns of disease, using the example of HIV in the United States. The Centers for Disease Control and Prevention currently tracks all new HIV diagnoses through the National HIV Surveillance System. Understanding what these data tell us is critical to the goal of ending the HIV epidemic in the United States (1). However, summarizing trends across multiple population characteristics simultaneously—for example, exploring how the age distribution of new diagnoses varies by geographic region and how that relationship has changed over time—can be difficult. Because data movies allow us to visualize complex relationships more easily than large tables or paneled figures, they can help us take full advantage of our increasingly rich national surveillance data.
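    A minimal sketch of a data movie in Python, using matplotlib's animation support; the year range matches the editorial, but the age groups and the randomly generated counts are placeholders standing in for the National HIV Surveillance System data:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    years = list(range(2008, 2018))
    age_bins = ["13-24", "25-34", "35-44", "45-54", "55+"]
    rng = np.random.default_rng(0)
    counts = {y: rng.integers(500, 5000, len(age_bins)) for y in years}  # placeholder data

    fig, ax = plt.subplots()

    def draw(frame):
        # One frame per calendar year: redraw the age distribution of diagnoses.
        year = years[frame]
        ax.clear()
        ax.bar(age_bins, counts[year])
        ax.set_title(f"New HIV diagnoses by age group, {year}")
        ax.set_ylabel("Diagnoses")

    anim = FuncAnimation(fig, draw, frames=len(years), interval=750)
    anim.save("hiv_diagnoses.gif", writer="pillow")
    ```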

    A gauge model for quantum mechanics on a stratified space

    In the Hamiltonian approach on a single spatial plaquette, we construct a quantum (lattice) gauge theory which incorporates the classical singularities. The reduced phase space is a stratified Kähler space, and we make explicit the requisite singular holomorphic quantization procedure on this space. On the quantum level, this procedure furnishes a costratified Hilbert space, that is, a Hilbert space together with a system which consists of the subspaces associated with the strata of the reduced phase space and of the corresponding orthoprojectors. The costratified Hilbert space structure reflects the stratification of the reduced phase space. For the special case where the structure group is SU(2), we discuss the tunneling probabilities between the strata, determine the energy eigenstates, and study the corresponding expectation values of the orthoprojectors onto the subspaces associated with the strata in the strong and weak coupling approximations.

    At-Risk Alcohol Use Among HIV-Positive Patients and the Completion of Patient-Reported Outcomes

    Heavy drinking is prevalent among people living with HIV. Studies use tools like patient-reported outcomes (PROs) to quantify alcohol use in a detailed, timely manner. However, if alcohol misuse influences PRO completion, selection bias may result. Our study included 14,145 adult HIV patients (133,036 visits) from CNICS who were eligible to complete PROs at an HIV primary care visit. We compared PRO completion proportions between patients with and without a clinical diagnosis of at-risk alcohol use in the prior year. We accounted for confounding by baseline and visit-specific covariates. PROs were completed at 20.8% of assessed visits. The adjusted difference in PRO completion proportions was -3.2% (95% CI -5.6 to -0.8%). The small association between receipt of an at-risk alcohol use diagnosis and decreased PRO completion suggests there could be modest selection bias in studies using the PRO alcohol measure.
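    The abstract does not state which adjustment method was used; one common choice for an adjusted difference in proportions is inverse probability weighting. A minimal sketch, where the column names and the covariates (age, cd4) are illustrative assumptions rather than the study's actual adjustment set:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def ipw_completion_difference(v: pd.DataFrame) -> float:
        """Difference in PRO completion proportions between visits with and
        without a prior at-risk alcohol diagnosis, reweighted so that the
        measured confounders are balanced across the two groups."""
        ps = smf.glm("atrisk ~ age + cd4", data=v,
                     family=sm.families.Binomial()).fit().predict(v)
        weights = np.where(v["atrisk"] == 1, 1 / ps, 1 / (1 - ps))
        p1 = np.average(v.loc[v["atrisk"] == 1, "completed"],
                        weights=weights[v["atrisk"] == 1])
        p0 = np.average(v.loc[v["atrisk"] == 0, "completed"],
                        weights=weights[v["atrisk"] == 0])
        return p1 - p0
    ```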

    Translating Message Sequence Charts to other Process Languages Using Process Mining

    Message Sequence Charts (MSCs) are often used by software analysts when discussing the behavior of a system with different stakeholders. Often such discussions lead to more complete behavioral models in the form of, e.g., Event-driven Process Chains (EPCs), Unified Modeling Language (UML) activity diagrams, Business Process Modeling Notation (BPMN) models, Petri nets, etc. Process mining, on the other hand, deals with the problem of constructing complete behavioral models by analyzing event logs of information systems. In contrast to existing process mining techniques, where logs are assumed to only contain implicit information, the approach presented in this paper combines the explicit knowledge captured in individual MSCs with the techniques and tools available in the process mining domain. This combination allows us to discover high-quality process models. To constructively add to the existing work on process mining, our approach has been implemented in the process mining framework ProM (www.processmining.org).
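    The paper's implementation lives in the ProM framework; as a rough modern analogue (a substitution for illustration, not the authors' toolchain), the Python library pm4py can discover a Petri net from an event log in a few lines:

    ```python
    import pm4py

    # Hypothetical event log in which each MSC scenario has been flattened
    # into one trace of message events.
    log = pm4py.read_xes("msc_scenarios.xes")

    # Discover a Petri net with the inductive miner and visualize it.
    net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
    pm4py.view_petri_net(net, initial_marking, final_marking)
    ```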

    PYTHIA 6.4 Physics and Manual

    The PYTHIA program can be used to generate high-energy-physics 'events', i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions, within and beyond the Standard Model, with emphasis on those where strong interactions play a role, directly or indirectly, and therefore multihadronic final states are produced. The physics is then not understood well enough to give an exact description; instead the program has to be based on a combination of analytical results and various QCD-based models. This physics input is summarized here, for areas such as hard subprocesses, initial- and final-state parton showers, underlying events and beam remnants, fragmentation and decays, and much more. Furthermore, extensive information is provided on all program elements: subroutines and functions, switches and parameters, and particle and process data. This should allow the user to tailor the generation task to the topics of interest. The code and further information may be found on the PYTHIA web page (http://www.thep.lu.se/~torbjorn/Pythia.html).