
    Enhancing Bayesian risk prediction for epidemics using contact tracing

    Contact tracing data collected from disease outbreaks have received relatively little attention in the epidemic modelling literature because they are thought to be unreliable: infection sources might be wrongly attributed, or data might be missing due to resource constraints in the questionnaire exercise. Nevertheless, these data might provide a rich source of information on disease transmission rates. This paper presents novel methodology for combining contact tracing data with rate-based contact network data to improve posterior precision, and therefore predictive accuracy. We present an advancement in Bayesian inference for epidemics that assimilates these data and is robust to partial contact tracing. Using a simulation study based on the British poultry industry, we show how the presence of contact tracing data improves posterior predictive accuracy and can directly inform a more effective control strategy.
    Comment: 40 pages, 9 figures. Submitted to Biostatistics
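    As rough intuition for how tracing data inform the transmission rate, a deliberately simplified conjugate toy (not the paper's methodology) helps: infections occur as a Poisson process with per-contact rate beta, and a Gamma prior yields a closed-form posterior. When sources are correctly traced, only genuine exposure time enters the likelihood; attributing every plausible contact instead inflates the exposure and biases the rate estimate downward. All numbers below are invented.

        import numpy as np
        from scipy import stats

        true_beta = 0.2               # per contact-day transmission rate (invented)
        a0, b0 = 1.0, 1.0             # Gamma(shape, rate) prior on beta
        traced_exposure = 120.0       # contact-days from true source contacts
        spurious_exposure = 480.0     # exposure from contacts tracing would rule out

        rng = np.random.default_rng(0)
        n_infections = rng.poisson(true_beta * traced_exposure)

        # Gamma-Poisson conjugacy: posterior is Gamma(a0 + events, b0 + exposure).
        post_traced = stats.gamma(a0 + n_infections,
                                  scale=1 / (b0 + traced_exposure))
        post_blind = stats.gamma(a0 + n_infections,
                                 scale=1 / (b0 + traced_exposure + spurious_exposure))

        print(f"true beta                : {true_beta}")
        print(f"posterior mean (traced)  : {post_traced.mean():.3f}")
        print(f"posterior mean (untraced): {post_blind.mean():.3f}  # biased low")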

    Retention and Graduation Rates at Public Research Universities: Do Medical Centers Affect Rates?

    Retention and six-year graduation rates have increased in relevance and importance within the last decade. As costs for post-secondary education continue to rise, graduating on time becomes more important to both the student and the institution. Public, four-year research universities have averaged a 63 percent six-year graduation rate over the past decade (U.S. Department of Education), and an average of 20 percent of the students entering these same institutions leave after their freshman year (U.S. Department of Education). Institutions across the United States have started prioritizing these measures of success. The goal of this research study is to examine how much certain variables may affect student success in post-secondary education. The current issue facing institutions is how to increase first-year retention rates and maintain student enrollment through graduation. A variety of factors commonly associated with retention and graduation rates in the literature are included in the analysis. This study attempts to fill a gap in the literature concerning institutions with medical centers on campus. There are two research questions: 1) Does the existence of a medical center affect expenditure patterns? 2) Does a medical center on campus affect six-year graduation rates or retention rates, either directly or indirectly? This study included 137 four-year, public research universities in the United States, approximately half of which have a medical center on campus. The panel data set is from the Integrated Postsecondary Education Data System (IPEDS) and spans the years 2008 to 2013. This study used a between-effects regression analysis to estimate the effect of average levels of the cost of instruction on a variety of variables. I also completed both a between-effects regression analysis and a fixed-effects regression analysis to estimate the effects of average levels and of changes, respectively, on retention and graduation rates. The analysis shows that the existence of a medical center on campus affects expenditure patterns: institutions with medical centers spend on average $6,300 more on instruction per student. There were also statistically significant results for percent of students admitted and student-faculty ratio; a higher cost of instruction per student is associated with a lower student-faculty ratio. The results show no statistical evidence that medical centers affect six-year graduation rates or retention rates. Therefore, a student is no more likely to succeed at an institution with a medical center on campus than at one without. Student success often relates to other factors of the university: variables such as out-of-state cost, percent admitted, and ethnicity do impact retention and graduation rates.
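    For readers unfamiliar with the two estimators, the following sketch contrasts them on synthetic panel data shaped like the IPEDS sample (137 institutions, 2008-2013). It is illustrative only: variable names and effect sizes are invented, and it uses the linearmodels package rather than whatever software the study employed.

        import numpy as np
        import pandas as pd
        from linearmodels.panel import BetweenOLS, PanelOLS

        rng = np.random.default_rng(42)
        n_inst, years = 137, list(range(2008, 2014))
        idx = pd.MultiIndex.from_product([range(n_inst), years],
                                         names=["unitid", "year"])
        df = pd.DataFrame(index=idx)
        # Time-invariant indicator: does the institution have a medical center?
        df["medical_center"] = np.repeat(rng.integers(0, 2, n_inst), len(years))
        df["instr_per_student"] = (9000 + 6300 * df["medical_center"]
                                   + rng.normal(0, 1500, len(df)))
        df["grad_rate"] = (0.63 + 1e-6 * (df["instr_per_student"] - 9000)
                           + rng.normal(0, 0.05, len(df)))

        exog = df[["medical_center", "instr_per_student"]].assign(const=1.0)

        # Between effects: regress institution averages on institution averages.
        be = BetweenOLS(df["grad_rate"], exog).fit()
        # Fixed effects: within-institution changes over time; the time-invariant
        # medical_center indicator is absorbed by the entity effects and dropped.
        fe = PanelOLS(df["grad_rate"], df[["instr_per_student"]],
                      entity_effects=True).fit()
        print(be.params, fe.params, sep="\n\n")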

    Segmented coronagraph design and analysis (SCDA): an initial design study of apodized vortex coronagraphs

    The segmented coronagraph design and analysis (SCDA) study is a coordinated effort, led by Stuart Shaklan (JPL) and supported by NASA's Exoplanet Exploration Program (ExEP), to provide efficient coronagraph design concepts for exoplanet imaging with future segmented-aperture space telescopes. This document serves as an update on the apodized vortex coronagraph designs devised by the Caltech/JPL SCDA team. Apodized vortex coronagraphs come in two flavors, where the apodization is achieved either by 1) a gray-scale semi-transparent pupil mask or 2) a pair of deformable mirrors in series. Each approach has attractive benefits. This document presents a comprehensive review of the former type. Future theoretical investigations will further explore the use of deformable mirrors for apodization.
    Comment: White Paper (2016-2017)
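    The gray-scale mask option lends itself to a quick numerical illustration. The sketch below (an idealized circular pupil and a Gaussian transmission profile, not an SCDA design) shows the basic effect: multiplying the pupil by a smooth apodizer suppresses diffraction sidelobes in the FFT-computed point spread function, at some cost in throughput and resolution.

        import numpy as np

        n = 512
        x = np.linspace(-1, 1, n)
        xx, yy = np.meshgrid(x, x)
        r = np.hypot(xx, yy)

        aperture = (r <= 0.9).astype(float)     # idealized unobstructed pupil
        apodizer = np.exp(-(r / 0.45) ** 2)     # illustrative gray-scale mask

        def psf(pupil):
            """Peak-normalized intensity PSF via a zero-padded Fourier transform."""
            field = np.fft.fftshift(np.fft.fft2(pupil, s=(4 * n, 4 * n)))
            inten = np.abs(field) ** 2
            return inten / inten.max()

        plain, apod = psf(aperture), psf(aperture * apodizer)
        # Compare intensity (relative to the peak) along one axis, a few
        # resolution elements out, where the plain pupil's Airy sidelobes sit.
        c = 2 * n  # image center index after padding
        cut = slice(c + 40, c + 200)
        print("mean sidelobe level, plain   :", plain[c, cut].mean())
        print("mean sidelobe level, apodized:", apod[c, cut].mean())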

    Bubbles emerging from a submerged granular bed

    This paper explores the phenomena associated with the emergence of gas bubbles from a submerged granular bed. While there are many natural and industrial applications, we focus on the particular circumstances and consequences associated with the emergence of methane bubbles from the beds of lakes and reservoirs, since there are significant implications for the dynamics of those water bodies and for global warming. This paper describes an experimental study of the processes of bubble emergence from a granular bed. Two distinct emergence modes are identified: mode 1 is simply the percolation of small bubbles through the interstices of the bed, while mode 2 involves the cumulative growth of a larger bubble until its buoyancy overcomes the surface tension effects. We demonstrate the conditions dividing the two modes (primarily the grain size) and show that this accords with simple analytical evaluations. These observations are consistent with previous studies of the dynamics of bubbles within porous beds. The two emergence modes also induce quite different particle fluidization levels. The latter are measured and correlated with a diffusion model similar to that originally employed in river sedimentation models by Vanoni and others. Both the particle diffusivity and the particle flux at the surface of the granular bed are measured and compared with a simple analytical model. These mixing processes can be considered applicable not only to the grains themselves, but also to the nutrients and/or contaminants within the bed. In this respect they are shown to be much more powerful than other mixing processes (such as turbulence in the benthic boundary layer) and could, therefore, play a dominant role in the dynamics of lakes and reservoirs.
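    The mode 2 criterion quoted above (buoyancy overcoming surface tension) admits a back-of-envelope estimate. The sketch below balances the buoyancy of a growing bubble against a capillary pinning force scaled by the grain size; the prefactors, and the identification of pore-throat size with grain diameter, are our own simplifications, not the paper's analysis.

        import numpy as np

        rho_w, sigma, g = 998.0, 0.072, 9.81   # water density [kg/m^3],
                                               # surface tension [N/m], gravity [m/s^2]

        def critical_radius(d_grain):
            """Bubble radius R where (4/3) pi R^3 rho g ~ pi d_grain sigma."""
            return (3.0 * d_grain * sigma / (4.0 * rho_w * g)) ** (1.0 / 3.0)

        for d in [0.2e-3, 1e-3, 5e-3]:  # fine sand to gravel grain diameters [m]
            print(f"grain {d*1e3:.1f} mm -> "
                  f"critical bubble radius ~ {critical_radius(d)*1e3:.2f} mm")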

    Annotation of Heterogeneous Media Using OntoMedia

    While ontologies exist for the annotation of monomedia, interoperability between these schemes is an important issue. The OntoMedia ontology consists of a generic core, capable of representing a diverse range of media, as well as extension ontologies that focus on specific formats. This paper provides an overview of the OntoMedia ontologies, together with a detailed case study in which they are applied to video, a scripted form, and an associated short story.
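    To make the core-plus-extensions idea concrete, here is a hedged RDF sketch in the OntoMedia spirit: a single abstract event linked to both a video expression and a text expression. The namespace URI, class names, and properties below are hypothetical placeholders, not the published schema.

        from rdflib import Graph, Literal, Namespace, RDF

        # Hypothetical namespaces standing in for the OntoMedia core and media items.
        OM = Namespace("http://example.org/ontomedia/core#")
        EX = Namespace("http://example.org/media/")

        g = Graph()
        g.bind("om", OM)

        event = EX["duel-scene"]
        g.add((event, RDF.type, OM.Event))
        g.add((event, OM.title, Literal("The duel")))
        # The same abstract event is expressed in two concrete media items.
        for media, cls in [(EX["film.mp4"], OM.VideoExpression),
                           (EX["short-story.txt"], OM.TextExpression)]:
            g.add((media, RDF.type, cls))
            g.add((media, OM.expresses, event))

        print(g.serialize(format="turtle"))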

    Application of Monte Carlo Algorithms to the Bayesian Analysis of the Cosmic Microwave Background

    Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; non-homogeneous, correlated instrumental noise; and foreground emission is a problem of central importance for the extraction of cosmological information from the cosmic microwave background. We develop a Monte Carlo approach for the maximum likelihood estimation of the power spectrum. The method is based on an identity for the Bayesian posterior as a marginalization over unknowns. Maximization of the posterior involves the computation of expectation values as a sample average from maps of the cosmic microwave background and foregrounds, given some current estimate of the power spectrum or cosmological model and some assumed statistical characterization of the foregrounds. Maps of the CMB are sampled by a linear transform of a Gaussian white noise process, implemented numerically with conjugate gradient descent. For time series data with N_{t} samples, and N pixels on the sphere, the method has a computational expense $KO[N^{2} + N_{t} \log N_{t}]$, where K is a prefactor determined by the convergence rate of conjugate gradient descent. Preconditioners for conjugate gradient descent are given for scans close to great circle paths, and the method allows partial sky coverage for these cases by numerically marginalizing over the unobserved, or removed, region.
    Comment: submitted to ApJ
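    The sampling step described above has a standard structure that a one-dimensional toy can convey. Assuming diagonal stand-ins for the signal covariance S and noise covariance N (the real problem couples pixel and harmonic space and is far larger), a posterior map sample x solves (S^-1 + N^-1) x = N^-1 d + S^-1/2 w0 + N^-1/2 w1 with white-noise vectors w0, w1; the sketch solves this with SciPy's conjugate gradient.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(1)
        n = 4096
        S = 1.0 / (1.0 + np.arange(n))     # toy falling "power spectrum"
        N = np.full(n, 0.1)                # homogeneous toy noise variance
        d = rng.normal(0, np.sqrt(S)) + rng.normal(0, np.sqrt(N))  # simulated data

        # A x = b with A = S^-1 + N^-1, applied matrix-free as a LinearOperator.
        A = LinearOperator((n, n), matvec=lambda v: v / S + v / N)
        b = (d / N
             + rng.normal(size=n) / np.sqrt(S)
             + rng.normal(size=n) / np.sqrt(N))
        x, info = cg(A, b)
        print("CG converged:", info == 0)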

    A middleware for a large array of cameras

    Large arrays of cameras are increasingly being employed to produce the high-quality image sequences needed for motion analysis research. This leads to the logistical problem of coordinating and controlling a large number of cameras. In this paper, we use a lightweight multi-agent system for coordinating such camera arrays. The agent framework provides more than a remote sensor-access API: it allows reconfigurable and transparent access to cameras, as well as software agents capable of intelligent processing. Furthermore, it eases maintenance by encouraging code reuse. Additionally, our agent system includes an automatic discovery mechanism at startup, and multiple language bindings. Performance tests showed the lightweight nature of the framework while validating its correctness and scalability. Two different camera agents were implemented to provide access to a large array of distributed cameras. Correct operation of these camera agents was confirmed via several image processing agents.
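    The startup discovery mechanism is the most protocol-like element here. Below is a minimal sketch of one common approach (UDP broadcast, with a wire format we invented; this is not the paper's middleware): each camera agent answers a broadcast probe with its name and control port, so clients need no static camera list.

        import socket
        import threading
        import time

        DISCOVERY_PORT = 50000  # hypothetical port, not from the paper

        def camera_agent(name: str, control_port: int):
            """Answer discovery probes with 'name:control_port'."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", DISCOVERY_PORT))
            while True:
                msg, addr = sock.recvfrom(1024)
                if msg == b"DISCOVER_CAMERAS":
                    sock.sendto(f"{name}:{control_port}".encode(), addr)

        threading.Thread(target=camera_agent, args=("cam01", 6001),
                         daemon=True).start()
        time.sleep(0.1)  # let the agent bind before probing

        # Client: broadcast a probe and collect replies for half a second.
        client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        client.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        client.settimeout(0.5)
        client.sendto(b"DISCOVER_CAMERAS", ("255.255.255.255", DISCOVERY_PORT))
        try:
            while True:
                reply, _ = client.recvfrom(1024)
                print("found camera:", reply.decode())
        except socket.timeout:
            pass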

    The joint large-scale foreground-CMB posteriors of the 3-year WMAP data

    Using a Gibbs sampling algorithm for joint CMB estimation and component separation, we compute the large-scale CMB and foreground posteriors of the 3-yr WMAP temperature data. Our parametric data model includes the cosmological CMB signal and instrumental noise, a single power-law foreground component with free amplitude and spectral index for each pixel, a thermal dust template with a single free overall amplitude, and free monopoles and dipoles at each frequency. This simple model yields a surprisingly good fit to the data over the full frequency range from 23 to 94 GHz. We obtain a new estimate of the CMB sky signal and power spectrum, and a new foreground model, including a measurement of the effective spectral index over the high-latitude sky. A particularly significant result is the detection of a common spurious offset of ~ -13 muK in all frequency bands, as well as a dipole in the V-band data. Correcting for these is essential when determining the effective spectral index of the foregrounds. We find that our new foreground model is in good agreement with the template-based model presented by the WMAP team, but not with their MEM reconstruction. We believe the latter may be at least partially compromised by the residual offsets and dipoles in the data. Fortunately, the CMB power spectrum is not significantly affected by these issues, as our new spectrum is in excellent agreement with that published by the WMAP team. The corresponding cosmological parameters are also virtually unchanged.
    Comment: 5 pages, 4 figures, submitted to ApJL. Background data are available at http://www.astro.uio.no/~hke under the Research tab
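    The parametric model is compact enough to write down directly. The sketch below evaluates it on toy arrays (map size, amplitudes, and the pivot frequency are invented, and the per-frequency dipole term is omitted for brevity), just to show the shape of the per-pixel power-law component.

        import numpy as np

        # model(nu, p) = cmb(p) + A(p) * (nu / nu0)**beta(p) + offset(nu)
        rng = np.random.default_rng(3)
        bands = np.array([23.0, 33.0, 41.0, 61.0, 94.0])  # WMAP K-W centres [GHz]
        npix, nu0 = 3072, 23.0                            # toy map size, pivot choice

        cmb = rng.normal(0.0, 70.0, npix)          # CMB fluctuations [muK]
        A = np.abs(rng.normal(50.0, 20.0, npix))   # foreground amplitude at nu0 [muK]
        beta = rng.normal(-3.0, 0.2, npix)         # free per-pixel spectral index
        offsets = np.full(len(bands), -13.0)       # common offset of the kind reported

        model = cmb + A * (bands[:, None] / nu0) ** beta + offsets[:, None]
        print(model.shape)  # (n_bands, npix): one model map per frequency band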

    Vertically Shifted Mixture Models for Clustering Longitudinal Data by Shape

    Longitudinal studies play a prominent role in the health, social, and behavioral sciences, as well as in the biological sciences, economics, and marketing. By following subjects over time, temporal changes in an outcome of interest can be directly observed and studied. An important question concerns the existence of distinct trajectory patterns. One way to determine these distinct patterns is through cluster analysis, which seeks to separate objects (subjects, patients, observational units) into homogeneous groups. Many methods have been adapted for longitudinal data, but almost all of them fail to explicitly group trajectories according to distinct pattern shapes. To fulfill the need for clustering based explicitly on shape, we propose vertically shifting the data by subtracting the subject-specific mean, which directly removes the level, prior to fitting a mixture model. This non-invertible transformation can result in singular covariance matrices, which makes mixture model estimation difficult. Despite the challenges, this method outperforms existing clustering methods in a simulation study.
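    A compact sketch of the vertical-shift idea (our illustration, not the authors' estimator): generate flat and rising toy trajectories that differ in level, demean each subject so that only shape remains, and fit a Gaussian mixture. The reg_covar argument is one pragmatic guard against the singular covariances the transformation can induce.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)
        t = np.linspace(0, 1, 8)  # common measurement times
        # Two shape groups at varying levels: flat versus rising trajectories.
        flat = rng.normal(5, 2, (40, 1)) + rng.normal(0, 0.3, (40, len(t)))
        rising = rng.normal(1, 2, (40, 1)) + 4 * t + rng.normal(0, 0.3, (40, len(t)))
        Y = np.vstack([flat, rising])

        Y_shifted = Y - Y.mean(axis=1, keepdims=True)  # remove subject-specific level
        gmm = GaussianMixture(n_components=2, covariance_type="diag",
                              reg_covar=1e-4, random_state=0).fit(Y_shifted)
        print("cluster sizes:", np.bincount(gmm.predict(Y_shifted)))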