
    Enhancing Bayesian risk prediction for epidemics using contact tracing

    Contact tracing data collected from disease outbreaks have received relatively little attention in the epidemic modelling literature because they are thought to be unreliable: infection sources might be wrongly attributed, or data might be missing due to resource constraints in the questionnaire exercise. Nevertheless, these data might provide a rich source of information on disease transmission rates. This paper presents novel methodology for combining contact tracing data with rate-based contact network data to improve posterior precision, and therefore predictive accuracy. We present an advancement in Bayesian inference for epidemics that assimilates these data, and is robust to partial contact tracing. Using a simulation study based on the British poultry industry, we show how the presence of contact tracing data improves posterior predictive accuracy, and can directly inform a more effective control strategy. Comment: 40 pages, 9 figures. Submitted to Biostatistics.
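
    As a loose illustration of why traced infector-infectee pairs sharpen inference about a transmission rate, the following is a minimal conjugate-Bayes sketch in Python (a toy, not the paper's model): each traced contact is assumed to transmit at a constant rate, so observed transmission events over known exposure times update a Gamma prior, and the posterior standard deviation shrinks as more traced pairs are added. All parameter values are hypothetical.

```python
import numpy as np

# Toy conjugate-Bayes illustration (not the paper's model): each traced
# infector-infectee pair is assumed to transmit at a constant rate beta, so it
# contributes an exponential exposure time. With a Gamma(a, b) prior on beta,
# observing n transmission events over total exposure time T gives a
# Gamma(a + n, b + T) posterior.
rng = np.random.default_rng(1)
beta_true = 0.3            # hypothetical transmission rate (per contact-day)
a, b = 2.0, 10.0           # Gamma prior: shape a, rate b

for n_traced in (0, 10, 50, 200):
    exposure = rng.exponential(1.0 / beta_true, size=n_traced)  # traced exposure times
    a_post, b_post = a + n_traced, b + exposure.sum()
    mean, sd = a_post / b_post, np.sqrt(a_post) / b_post
    print(f"traced pairs = {n_traced:3d}   posterior mean = {mean:.3f}   sd = {sd:.3f}")
```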

    Vertically Shifted Mixture Models for Clustering Longitudinal Data by Shape

    Longitudinal studies play a prominent role in health, social and behavioral sciences as well as in the biological sciences, economics, and marketing. By following subjects over time, temporal changes in an outcome of interest can be directly observed and studied. An important question concerns the existence of distinct trajectory patterns. One way to determine these distinct patterns is through cluster analysis, which seeks to separate objects (subjects, patients, observational units) into homogeneous groups. Many methods have been adapted for longitudinal data, but almost all of them fail to explicitly group trajectories according to distinct pattern shapes. To fulfill the need for clustering based explicitly on shape, we propose vertically shifting the data by subtracting the subject-specific mean, which directly removes the level, prior to fitting a mixture model. This non-invertible transformation can result in singular covariance matrices, which makes mixture model estimation difficult. Despite these challenges, the method outperforms existing clustering methods in a simulation study.
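
    The shifting idea itself is simple to demonstrate. Below is a minimal Python sketch (not the authors' estimator): simulated trajectories with two distinct shapes but arbitrary vertical levels are mean-centred per subject and then clustered with an off-the-shelf Gaussian mixture; the reg_covar ridge is one pragmatic guard against the covariance singularity mentioned in the abstract. Data and settings are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch of shape-based clustering after vertical shifting (not the
# authors' estimator): trajectories share one of two shapes but have arbitrary
# vertical levels; subtracting each subject's mean removes the level before
# fitting a Gaussian mixture.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8)                            # common observation times
n = 50
rising = 2.0 * t                                    # shape 1: increasing trend
hump = np.sin(np.pi * t)                            # shape 2: rise then fall
Y = np.vstack([
    rising + rng.normal(0, 0.1, (n, t.size)) + rng.normal(0, 3, (n, 1)),
    hump   + rng.normal(0, 0.1, (n, t.size)) + rng.normal(0, 3, (n, 1)),
])

Y_shifted = Y - Y.mean(axis=1, keepdims=True)       # subtract subject-specific mean

# reg_covar adds a small ridge to each component covariance, a pragmatic guard
# against the singularity induced by the non-invertible mean-removal transform.
gm = GaussianMixture(n_components=2, covariance_type="full", reg_covar=1e-4, random_state=0)
labels = gm.fit_predict(Y_shifted)
print("cluster sizes:", np.bincount(labels))
```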

    An approach for benchmarking the numerical solutions of stochastic compartmental models

    An approach is introduced for comparing the estimated states of stochastic compartmental models for an epidemic or biological process with analytically obtained solutions from the corresponding system of ordinary differential equations (ODEs). Positive integer-valued samples from a stochastic model are generated numerically at discrete time intervals using either the Reed-Frost chain binomial or Gillespie algorithm. The simulated distribution of realisations is compared with an exact solution obtained analytically from the ODE model. Using this novel methodology, this work demonstrates that it is feasible to check that the realisations from the stochastic compartmental model adhere to the ODE model they represent. There is no requirement for the model to be in any particular state or limit. These techniques are developed using the stochastic compartmental model for a susceptible-infected-recovered (SIR) epidemic process. The Lotka-Volterra model is then used as an example of the generality of the principles developed here. This approach presents a way of testing/benchmarking the numerical solutions of stochastic compartmental models, e.g. using unit tests, to check that the computer code along with its corresponding algorithm adheres to the underlying ODE model. Comment: 21 pages, 3 figures.
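
    The benchmarking idea can be sketched in a few lines of Python (toy SIR parameters assumed; the paper's formal distributional comparison is not reproduced here): generate many Gillespie realisations, record the infectious count on a fixed time grid, and compare the ensemble mean with the deterministic ODE solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the comparison in spirit (toy parameters assumed): many Gillespie
# realisations of an SIR process are recorded on a fixed time grid and their
# ensemble mean is compared with the deterministic ODE solution.
beta, gamma, N = 0.3, 0.1, 1000
t_grid = np.linspace(0, 100, 21)
rng = np.random.default_rng(42)

def gillespie_sir(S, I, R):
    t, out, k = 0.0, [], 0
    while k < len(t_grid):
        r_inf, r_rec = beta * S * I / N, gamma * I
        r_tot = r_inf + r_rec
        t_next = t + rng.exponential(1.0 / r_tot) if r_tot > 0 else np.inf
        while k < len(t_grid) and t_grid[k] < t_next:   # state is constant until the next event
            out.append(I)
            k += 1
        if r_tot == 0:
            break
        t = t_next
        if rng.random() < r_inf / r_tot:
            S, I = S - 1, I + 1                         # infection event
        else:
            I, R = I - 1, R + 1                         # recovery event
    return np.array(out)

sims = np.array([gillespie_sir(N - 5, 5, 0) for _ in range(200)])

ode = solve_ivp(lambda t, y: [-beta * y[0] * y[1] / N,
                              beta * y[0] * y[1] / N - gamma * y[1],
                              gamma * y[1]],
                (0, 100), [N - 5, 5, 0], t_eval=t_grid)

print("max |mean stochastic I - ODE I| =", np.abs(sims.mean(axis=0) - ode.y[1]).max())
```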

    Application of Monte Carlo Algorithms to the Bayesian Analysis of the Cosmic Microwave Background

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; non-homogeneous, correlated instrumental noise; and foreground emission is a problem of central importance for the extraction of cosmological information from the cosmic microwave background. We develop a Monte Carlo approach for the maximum likelihood estimation of the power spectrum. The method is based on an identity for the Bayesian posterior as a marginalization over unknowns. Maximization of the posterior involves the computation of expectation values as a sample average from maps of the cosmic microwave background and foregrounds given some current estimate of the power spectrum or cosmological model, and some assumed statistical characterization of the foregrounds. Maps of the CMB are sampled by a linear transform of a Gaussian white noise process, implemented numerically with conjugate gradient descent. For time series data with $N_{t}$ samples, and $N$ pixels on the sphere, the method has a computational expense $KO[N^{2} + N_{t}\log N_{t}]$, where $K$ is a prefactor determined by the convergence rate of conjugate gradient descent. Preconditioners for conjugate gradient descent are given for scans close to great circle paths, and the method allows partial sky coverage for these cases by numerically marginalizing over the unobserved, or removed, region. Comment: submitted to Ap
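
    The map-sampling step has a simple one-dimensional analogue, sketched below in Python (assumed toy setup, not the authors' spherical implementation): for data d = s + n with diagonal signal covariance S and noise covariance N, a draw from the posterior of s solves (S^-1 + N^-1) x = N^-1 d + S^-1/2 w1 + N^-1/2 w2 by conjugate gradients, with a Jacobi preconditioner standing in for the great-circle preconditioners described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy 1-D analogue of the map-sampling step (assumed setup): data d = s + n
# with signal s ~ N(0, S) and noise n ~ N(0, N), both diagonal here. A draw
# from the posterior of s is a linear transform of Gaussian white noise,
# obtained by solving (S^-1 + N^-1) x = N^-1 d + S^-1/2 w1 + N^-1/2 w2.
rng = np.random.default_rng(0)
n_pix = 1000
S = np.full(n_pix, 4.0)                  # prior (signal) variance per pixel
N = rng.uniform(0.5, 2.0, n_pix)         # non-homogeneous noise variance
s_true = rng.normal(0.0, np.sqrt(S))
d = s_true + rng.normal(0.0, np.sqrt(N))

A = LinearOperator((n_pix, n_pix), matvec=lambda x: x / S + x / N)
b = d / N + rng.standard_normal(n_pix) / np.sqrt(S) + rng.standard_normal(n_pix) / np.sqrt(N)

# Jacobi (diagonal) preconditioner speeds up conjugate gradient convergence.
M = LinearOperator((n_pix, n_pix), matvec=lambda x: x / (1.0 / S + 1.0 / N))
sample, info = cg(A, b, M=M)
print("CG converged:", info == 0, "  posterior-sample rms:", round(float(sample.std()), 3))
```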

    The upper critical field of filamentary Nb3Sn conductors

    We have examined the upper critical field of a large and representative set of present multi-filamentary Nb3Sn wires and one bulk sample over a temperature range from 1.4 K up to the zero field critical temperature. Since all present wires use a solid-state diffusion reaction to form the A15 layers, inhomogeneities with respect to Sn content are inevitable, in contrast to some previously studied homogeneous samples. Our study emphasizes the effects that these inevitable inhomogeneities have on the field-temperature phase boundary. The property inhomogeneities are extracted from field-dependent resistive transitions, which we find broaden with increasing inhomogeneity. The upper 90-99 % of the transitions clearly separates alloyed and binary wires, but a pure, Cu-free binary bulk sample also exhibits a zero temperature critical field that is comparable to the ternary wires. The highest mu0Hc2 values detected in the ternary wires are remarkably constant: the highest zero temperature upper critical fields and zero field critical temperatures fall within 29.5 +/- 0.3 T and 17.8 +/- 0.3 K respectively, independent of the wire layout. The complete field-temperature phase boundary can be described very well with the relatively simple Maki-DeGennes model using a two-parameter fit, independent of composition, strain state, sample layout or applied critical state criterion. Comment: Accepted by Journal of Applied Physics. Few changes to shorten document, replaced eq. 7-

    Modelling the impact of social mixing and behaviour on infectious disease transmission: application to SARS-CoV-2

    For infectious diseases, socioeconomic determinants are strongly associated with differential exposure and susceptibility; however, they are seldom accounted for by standard compartmental infectious disease models. These associations are explored here with a novel compartmental infectious disease model which, stratified by deprivation and age, accounts for population-level behaviour including social mixing patterns. As an exemplar, using a fully Bayesian approach, our model is fitted, in real time if required, to the UKHSA COVID-19 community testing case data from England. Metrics including the reproduction number and forecasts of daily case incidence are estimated from the posterior samples. From this UKHSA dataset it is observed that during the initial period of the pandemic the most deprived groups reported the most cases; however, this trend reversed after the summer of 2021. Forward simulation experiments based on the fitted model demonstrate that this reversal can be accounted for by differential changes in population-level behaviours including social mixing and testing behaviour, but it is not explained by the depletion of susceptible individuals. In future epidemics, with a focus on socioeconomic factors, the approach outlined here provides the possibility of identifying those groups most at risk, with a view to helping policy-makers better target their support. Comment: Main article: 25 pages, 6 figures. Appendix: 2 pages, 1 figure. Supplementary Material: 15 pages, 14 figures. Version 2 - minor updates: fixed typos, updated mathematical notation and small quantity of descriptive text added. Version 3 - minor update: made colour coding consistent across all time series figures.
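
    To make the stratification concrete, here is a deliberately simple deterministic sketch in Python of a two-group model whose force of infection depends on a mixing matrix (all numbers hypothetical; this is not the fitted UKHSA model, which is stochastic, age- and deprivation-stratified, and estimated in a fully Bayesian framework).

```python
import numpy as np

# Deliberately simple deterministic sketch of a group-stratified model with a
# mixing matrix (hypothetical numbers, not the fitted UKHSA model): two strata
# whose force of infection depends on contacts with both groups.
beta, gamma, dt, days = 0.35, 0.2, 0.5, 120
N = np.array([3.0e6, 2.0e6])                 # group population sizes (assumed)
C = np.array([[0.7, 0.3],                    # assumed mixing matrix: row i gives the share of
              [0.4, 0.6]])                   # group i's contacts made with each group j
S, I, R = N - 100.0, np.full(2, 100.0), np.zeros(2)

for _ in range(int(days / dt)):
    lam = beta * C @ (I / N)                 # force of infection on each group
    new_inf = lam * S * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print("final attack rate by group:", np.round(R / N, 3))
```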

    Networks and the epidemiology of infectious disease

    The science of networks has revolutionised research into the dynamics of interacting elements. It could be argued that epidemiology in particular has embraced the potential of network theory more than any other discipline. Here we review the growing body of research concerning the spread of infectious diseases on networks, focusing on the interplay between network theory and epidemiology. The review is split into four main sections, which examine: the types of network relevant to epidemiology; the multitude of ways these networks can be characterised; the statistical methods that can be applied to infer the epidemiological parameters on a realised network; and finally simulation and analytical methods to determine epidemic dynamics on a given network. Given the breadth of areas covered and the ever-expanding number of publications, a comprehensive review of all work is impossible. Instead, we provide a personalised overview of the areas of network epidemiology that have seen the greatest progress in recent years or have the greatest potential to provide novel insights. As such, considerable importance is placed on analytical approaches and statistical methods, both of which are rapidly expanding fields. Throughout this review we restrict our attention to epidemiological issues.
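
    As a concrete illustration of epidemic dynamics on an explicit contact network (a toy discrete-time simulation, not drawn from the review), the Python sketch below runs an SIR process on an Erdős-Rényi random graph using networkx, with transmission allowed only along edges.

```python
import networkx as nx
import numpy as np

# Toy discrete-time SIR simulation on a random graph (illustrative only),
# showing how an epidemic unfolds over explicit contacts rather than under
# homogeneous mixing.
rng = np.random.default_rng(3)
G = nx.erdos_renyi_graph(n=2000, p=0.004, seed=3)   # mean degree around 8
p_transmit, p_recover = 0.06, 0.2

status = {v: "S" for v in G}                        # S, I or R per node
for v in rng.choice(G.number_of_nodes(), size=5, replace=False):
    status[int(v)] = "I"                            # seed a few initial infections

while any(s == "I" for s in status.values()):
    infected = [v for v, s in status.items() if s == "I"]
    for v in infected:
        for u in G.neighbors(v):                    # transmission only along edges
            if status[u] == "S" and rng.random() < p_transmit:
                status[u] = "I*"                    # newly infected this time step
        if rng.random() < p_recover:
            status[v] = "R"
    for v, s in status.items():                     # activate the new infections
        if s == "I*":
            status[v] = "I"

print("final size:", sum(s == "R" for s in status.values()), "of", G.number_of_nodes())
```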

    Bayesian inference for high-dimensional discrete-time epidemic models: spatial dynamics of the UK COVID-19 outbreak

    In the event of a disease outbreak emergency, such as COVID-19, the ability to construct detailed stochastic models of infection spread is key to determining crucial policy-relevant metrics such as the reproduction number, true prevalence of infection, and the contribution of population characteristics to transmission. In particular, the interaction between space and human mobility is key to prioritising outbreak control resources to appropriate areas of the country. Model-based epidemiological intelligence must therefore be provided in a timely fashion so that resources can be adapted to a changing disease landscape quickly. The utility of these models is reliant on fast and accurate parameter inference, with the ability to account for large amounts of censored data to ensure estimation is unbiased. Yet methods to fit detailed spatial epidemic models to national-level population sizes currently do not exist, due to the difficulty of marginalising over the censored data. In this paper we develop a Bayesian data-augmentation method which operates on a stochastic spatial metapopulation SEIR state-transition model, using model-constrained Metropolis-Hastings samplers to improve the efficiency of an MCMC algorithm. Coupling this method with state-of-the-art GPU acceleration enabled us to provide nightly analyses of the UK COVID-19 outbreak, with timely information made available for disease nowcasting and forecasting purposes.
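
    For orientation, the state-transition structure can be sketched as a single discrete-time step of a stochastic metapopulation SEIR model with a mobility coupling matrix (Python, toy numbers; the data augmentation, model-constrained Metropolis-Hastings samplers and GPU acceleration described in the abstract are not reproduced here).

```python
import numpy as np

# Structural sketch of one discrete-time step of a stochastic metapopulation
# SEIR model with a mobility coupling matrix (toy numbers; the inference
# machinery described in the abstract is not reproduced here).
rng = np.random.default_rng(7)
n_regions = 4
N = rng.integers(200_000, 1_000_000, n_regions).astype(float)
M = rng.dirichlet(np.ones(n_regions) * 5, size=n_regions)   # row-stochastic mobility matrix
beta, sigma, gamma, dt = 0.4, 1 / 3, 1 / 5, 1.0

S, E, I, R = N - 50, np.full(n_regions, 30.0), np.full(n_regions, 20.0), np.zeros(n_regions)

def step(S, E, I, R):
    lam = beta * M @ (I / N)                                 # spatially coupled force of infection
    new_E = rng.binomial(S.astype(int), 1 - np.exp(-lam * dt))
    new_I = rng.binomial(E.astype(int), 1 - np.exp(-sigma * dt))
    new_R = rng.binomial(I.astype(int), 1 - np.exp(-gamma * dt))
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

for _ in range(100):
    S, E, I, R = step(S, E, I, R)
print("infections so far by region:", (N - S).astype(int))
```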

    Chamber basis of the Orlik-Solomon algebra and Aomoto complex

    We introduce a basis of the Orlik-Solomon algebra labeled by chambers, the so-called chamber basis. We consider the structure constants of the Orlik-Solomon algebra with respect to the chamber basis and prove that these structure constants recover D. Cohen's minimal complex from the Aomoto complex. Comment: 16 pages.

    Nonparametric Estimation of the Case Fatality Ratio with Competing Risks Data: An Application to Severe Acute Respiratory Syndrome (SARS)

    For diseases with some level of associated mortality, the case fatality ratio measures the proportion of diseased individuals who die from the disease. In principle, it is straightforward to estimate this quantity from individual follow-up data that provide times from onset to death or recovery. In particular, in a competing risks context, the case fatality ratio is defined by the limiting value of the sub-distribution function, associated with death, at infinity. When censoring is present, however, estimation of this quantity is complicated by the possibility of little information in the right tail of the sub-distribution function, requiring use of estimators evaluated at large or the largest observed death times. With right censoring, the variability of such estimators is large in the tail, suggesting the possibility of using estimators evaluated at smaller death times, where bias may be increased but overall mean squared error may be smaller. These issues are investigated here for nonparametric estimators of the sub-distribution functions for both death and recovery. The ideas are illustrated on case fatality data for individuals infected with severe acute respiratory syndrome (SARS) in Hong Kong in 2003.
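
    The estimator at the heart of this problem is easy to state. Below is a minimal Python sketch of a nonparametric (Aalen-Johansen-style) cumulative incidence estimator for death and recovery under right censoring, with the case fatality ratio read off from the death sub-distribution function at a late time point; the simulated data are hypothetical and are not the Hong Kong SARS data.

```python
import numpy as np

# Minimal Aalen-Johansen-style estimator of the cumulative incidence of death
# and recovery under right censoring (a sketch of the general idea, not the
# paper's analysis). The case fatality ratio is read off as the value of the
# death sub-distribution function at a late time point.
def cumulative_incidence(time, event):
    """event coding: 0 = censored, 1 = death, 2 = recovery."""
    time, event = np.asarray(time), np.asarray(event)
    cif_death = cif_recov = 0.0
    surv = 1.0                                   # overall event-free survival S(t-)
    curve = []
    for t in np.unique(time[event > 0]):         # loop over observed event times
        n_risk = np.sum(time >= t)
        d_death = np.sum((time == t) & (event == 1))
        d_recov = np.sum((time == t) & (event == 2))
        cif_death += surv * d_death / n_risk
        cif_recov += surv * d_recov / n_risk
        surv *= 1.0 - (d_death + d_recov) / n_risk
        curve.append((t, cif_death, cif_recov))
    return curve

# Hypothetical onset-to-outcome times in days (not the Hong Kong SARS data).
rng = np.random.default_rng(5)
n = 500
dies = rng.random(n) < 0.17                      # true probability of death 0.17
t_event = np.where(dies, rng.gamma(4.0, 6.0, n), rng.gamma(5.0, 5.0, n))
t_cens = rng.uniform(5.0, 120.0, n)              # administrative right censoring
time = np.minimum(t_event, t_cens)
event = np.where(t_event <= t_cens, np.where(dies, 1, 2), 0)

t_last, cif_d, cif_r = cumulative_incidence(time, event)[-1]
print(f"death sub-distribution at t = {t_last:.0f} days: {cif_d:.3f} (simulation target 0.17)")
```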