
    Evaluating hospital infection control measures for antimicrobial-resistant pathogens using stochastic transmission models: Application to vancomycin-resistant enterococci in intensive care units

    Nosocomial pathogens such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE) are a cause of significant morbidity and mortality among hospital patients. It is important to be able to assess the efficacy of control measures using data on patient outcomes. In this paper, we describe methods for analysing such data using patient-level stochastic models which seek to describe the underlying unobserved process of transmission. The methods are applied to detailed longitudinal patient-level data on vancomycin-resistant enterococci from a study in a US hospital with eight intensive care units (ICUs). The data comprise admission and discharge dates, dates and results of screening tests, and dates during which precautionary measures were in place for each patient during the study period. Results include estimates of the efficacy of the control measures, the proportion of unobserved patients colonized with vancomycin-resistant enterococci, and the proportion of patients colonized on admission.
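
    A minimal sketch (in Python, with illustrative parameter names; not the authors' code) of the kind of patient-level stochastic transmission model described above: each uncolonized patient acquires VRE on a given day at a rate proportional to the number of colonized patients present, reduced by an efficacy factor for carriers under precautionary measures. The unobserved colonization days, which the paper integrates out, are taken here as given.

```python
import numpy as np

def daily_acquisition_prob(n_colonized, n_isolated, beta, efficacy):
    """Probability that a susceptible patient acquires VRE on one day.

    beta     -- per-day transmission rate from one unisolated carrier
    efficacy -- proportional reduction in transmission from carriers
                under precautionary measures (0 = no effect, 1 = perfect)
    """
    rate = beta * (n_colonized - n_isolated) \
         + beta * (1.0 - efficacy) * n_isolated
    return 1.0 - np.exp(-rate)

def log_likelihood(patient_days, beta, efficacy):
    """Log-likelihood of (augmented) colonization data.

    patient_days: iterable of (n_colonized, n_isolated, acquired) tuples,
    one per susceptible patient-day; `acquired` flags the imputed day of
    colonization.
    """
    ll = 0.0
    for n_col, n_iso, acquired in patient_days:
        p = daily_acquisition_prob(n_col, n_iso, beta, efficacy)
        ll += np.log(p) if acquired else np.log1p(-p)
    return ll
```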

    Comparative Judgement Modeling to Map Forced Marriage at Local Levels

    Forcing someone into marriage against their will is a violation of their human rights. In 2021, the county of Nottinghamshire, UK, launched a strategy to tackle forced marriage and violence against women and girls. However, accessing information about where victims are located in the county could compromise their safety, so it is not possible to develop interventions for different areas of the county. Comparative judgement studies offer a way to map the risk of human rights abuses without collecting data that could compromise victim safety. Current methods require studies to have a large number of participants, so we develop a comparative judgement model that provides a more flexible spatial modelling structure and a mechanism for scheduling comparisons more effectively. The methods reduce the data collection burden on participants and make a comparative judgement study feasible with a small number of participants. Underpinning these methods is a latent variable representation that improves on the scalability of previous comparative judgement models. We use these methods to map the risk of forced marriage across Nottinghamshire, thereby supporting the county's strategy for tackling violence against women and girls.
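
    A minimal sketch of the comparative judgement machinery underlying the paper: each area i carries a latent risk level lambda_i, and the probability that a participant judges area i to be at higher risk than area j follows a Bradley-Terry model. The paper's model adds a spatial prior over the latent levels and an adaptive scheme for scheduling comparisons; the sketch below is plain maximum likelihood on fixed, hypothetical toy data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: (winner, loser) pairs, where "winner" is the area judged
# higher-risk in one pairwise comparison.
comparisons = [(0, 1), (2, 1), (0, 2), (0, 1)]
n_areas = 3

def neg_log_lik(lam):
    # log P(i beats j) = lam[i] - log(exp(lam[i]) + exp(lam[j]))
    ll = sum(lam[w] - np.logaddexp(lam[w], lam[l]) for w, l in comparisons)
    return -ll

res = minimize(neg_log_lik, x0=np.zeros(n_areas))
risk = res.x - res.x.mean()  # levels are identifiable only up to a shift
print(risk)                  # higher value = judged higher risk
```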

    Rank-based model selection for multiple ions quantum tomography

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large-dimensional quantum systems, one needs to exploit prior information and the "sparsity" properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods -- the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) -- to models consisting of states of fixed rank and to datasets of the kind currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of 4 ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a 4 ions experiment aimed at creating a Smolin state of rank 4. The two methods indicate that the optimal model for describing the data lies between ranks 6 and 9, and the Pearson χ² test is applied to validate this conclusion. Additionally, we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal estimator over all possible measurements.
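
    A minimal sketch of the selection recipe, assuming a user-supplied routine `fit_rank_r` that returns the maximised log-likelihood under a rank-r state (the hard part in practice): score each candidate rank with AIC or BIC, counting r(2d - r) - 1 free real parameters for a rank-r density matrix on a d-dimensional space, and keep the minimiser.

```python
import numpy as np

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k, n_obs):
    return k * np.log(n_obs) - 2 * loglik

def select_rank(data, d, n_obs, fit_rank_r):
    """Pick the rank minimising BIC (swap in `aic` for AIC selection).

    fit_rank_r(data, r) -> maximised log-likelihood under a rank-r state.
    """
    best_r, best_score = None, np.inf
    for r in range(1, d + 1):
        k = r * (2 * d - r) - 1          # free real parameters at rank r
        score = bic(fit_rank_r(data, r), k, n_obs)
        if score < best_score:
            best_r, best_score = r, score
    return best_r
```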

    Spectral thresholding quantum tomography for low rank states

    The estimation of high dimensional quantum states is an important statistical problem arising in current quantum technology applications. A key example is the tomography of multiple ions states, employed in the validation of state preparation in ion trap experiments (Häffner et al 2005 Nature 438 643). Since full tomography becomes unfeasible even for a small number of ions, there is a need to investigate lower dimensional statistical models which capture prior information about the state, and to devise estimation methods tailored to such models. In this paper we propose several new methods aimed at the efficient estimation of low rank states and analyse their performance for multiple ions tomography. All methods consist of first computing the least squares estimator, followed by its truncation to an appropriately chosen smaller rank. The latter is done by setting eigenvalues below a certain 'noise level' to zero, while keeping the rest unchanged, or normalizing them appropriately. We show that (up to logarithmic factors in the space dimension) the mean square error of the resulting estimators scales as rd/N, where r is the rank, d is the dimension of the Hilbert space, and N is the number of quantum samples. Furthermore, we establish a lower bound for the asymptotic minimax risk which shows that the above scaling is optimal. The performance of the estimators is analysed in an extensive simulation study, with emphasis on the dependence on the state rank and the number of measurement repetitions. We find that all estimators perform significantly better than the least squares estimator, with the 'physical estimator' (which is a bona fide density matrix) slightly outperforming the others.
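
    A minimal numpy sketch of the truncation step described above: take the least squares estimate (Hermitian, but generally not positive), zero out eigenvalues below a noise level nu, and renormalise the remainder so the result is a bona fide density matrix, as for the 'physical estimator'. The choice of nu, which the paper's theory calibrates to the statistical noise of the least squares estimator, is left to the caller.

```python
import numpy as np

def spectral_threshold(rho_ls, nu):
    """Truncate a least squares state estimate to an effective low rank.

    rho_ls -- Hermitian least squares estimate of the density matrix
    nu     -- noise level below which eigenvalues are treated as zero
    """
    vals, vecs = np.linalg.eigh(rho_ls)
    vals = np.where(vals >= nu, vals, 0.0)  # drop sub-noise and negative parts
    if vals.sum() > 0:
        vals = vals / vals.sum()            # renormalise to unit trace
    return (vecs * vals) @ vecs.conj().T    # sum_j vals_j |v_j><v_j|
```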

    Neuroimaging biomarkers predict brain structural connectivity change in a mouse model of vascular cognitive impairment

    Background and Purpose: Chronic hypoperfusion in the mouse brain has been suggested to mimic aspects of vascular cognitive impairment, such as white matter damage. Although this model has attracted attention, our group has struggled to generate a reliable cognitive and pathological phenotype. This study aimed to identify neuroimaging biomarkers of brain pathology in aged, more severely hypoperfused mice. Methods: We used magnetic resonance imaging to characterize brain degeneration in hypoperfused mice, refining the surgical procedure to use the smallest reported diameter of microcoils (160 μm). Results: Acute cerebral blood flow decreases were observed in the hypoperfused group that recovered over 1 month and coincided with arterial remodeling. Increased hypoperfusion resulted in a previously unreported reduction in spatial learning ability in the water maze. We were unable to observe severe white matter damage with histology, but a novel approach to analyzing the diffusion tensor imaging data, based on graph theory, revealed substantial reorganization of the hypoperfused brain network. A logistic regression model fitted to these data revealed that three network parameters were particularly efficient at predicting group membership (global efficiency, local efficiency, and degree), and the clustering coefficient was correlated with performance in the water maze. Conclusions: Overall, these findings suggest that, despite the autoregulatory abilities of the mouse brain to compensate for a sudden decrease in blood flow, there is evidence of change in the brain networks that can be used as neuroimaging biomarkers to predict outcome.
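
    A minimal sketch of the network analysis, assuming `networkx` and `scikit-learn` and an illustrative thresholding of the connectivity matrices: extract the three discriminative metrics named above (global efficiency, local efficiency, mean degree) from each animal's structural network and feed them to a logistic regression predicting group membership.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def network_features(conn, threshold=0.0):
    """Graph metrics from one (n_regions x n_regions) connectivity matrix."""
    adj = (conn > threshold).astype(int)
    np.fill_diagonal(adj, 0)              # no self-loops
    G = nx.from_numpy_array(adj)
    mean_degree = np.mean([d for _, d in G.degree()])
    return [nx.global_efficiency(G), nx.local_efficiency(G), mean_degree]

def fit_group_classifier(conn_matrices, labels):
    """labels: 1 = hypoperfused, 0 = sham-operated."""
    X = np.array([network_features(c) for c in conn_matrices])
    return LogisticRegression().fit(X, labels)
```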

    Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank.
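
    A minimal sketch of one estimator of this kind, a nuclear-norm-regularised least squares fit ("matrix Lasso") to noisy expectation values of a subset of Pauli operators, written with `cvxpy` for brevity; the paper's estimators and certification procedure differ in detail, and the penalty weight `lam` is an illustrative tuning parameter.

```python
import cvxpy as cp
import numpy as np

def matrix_lasso(paulis, y, lam):
    """Nuclear-norm-penalised fit to Pauli expectation data.

    paulis -- list of d x d Hermitian (Pauli) matrices that were measured
    y      -- corresponding noisy expectation values tr(P rho)
    lam    -- nuclear norm penalty weight
    """
    d = paulis[0].shape[0]
    X = cp.Variable((d, d), hermitian=True)
    resid = cp.hstack([cp.real(cp.trace(P @ X)) - yi
                       for P, yi in zip(paulis, y)])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(resid) + lam * cp.normNuc(X)))
    prob.solve()
    return X.value  # project onto density matrices afterwards if needed
```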

    Piecewise Approximate Bayesian Computation: fast inference for discretely observed Markov models using a factorised posterior distribution

    Many modern statistical applications involve inference for complicated stochastic models for which the likelihood function is difficult or even impossible to calculate, and hence conventional likelihood-based inferential techniques cannot be used. In such settings, Bayesian inference can be performed using Approximate Bayesian Computation (ABC). However, despite many recent developments in ABC methodology, in many applications the computational cost of ABC necessitates choices of summary statistics and tolerances that can severely bias the estimate of the posterior. We propose a new “piecewise” ABC approach suitable for discretely observed Markov models that involves writing the posterior density of the parameters as a product of factors, each a function of only a subset of the data, and then using ABC within each factor. The approach has the advantage of side-stepping the need to choose a summary statistic, and it enables a stringent tolerance to be set, making the posterior “less approximate”. We investigate two methods for estimating the posterior density based on the ABC samples for each of the factors: the first uses a Gaussian approximation for each factor, and the second uses a kernel density estimate. Both methods have their merits: the Gaussian approximation is simple, fast, and probably adequate for many applications, while the kernel density estimate has the benefit of consistently estimating the true piecewise ABC posterior as the number of ABC samples tends to infinity. We illustrate the piecewise ABC approach with four examples; in each case, the approach offers fast and accurate inference.
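
    A minimal sketch of the piecewise idea for a scalar-parameter, discretely observed Markov model, assuming user-supplied `simulate_step` and `prior_draw` functions: run a separate ABC accept/reject step for each one-step transition factor, then combine Gaussian approximations of the factors by a precision-weighted product. (With a non-flat prior, a correction for reusing the prior once per factor is needed; that bookkeeping is omitted here.)

```python
import numpy as np

def abc_factor(y_t, y_next, simulate_step, prior_draw, eps, n_draws=10000):
    """ABC sample for one transition factor f(y[t+1] | y[t], theta)."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if abs(simulate_step(y_t, theta) - y_next) < eps:
            accepted.append(theta)
    return np.array(accepted)

def combine_gaussian_factors(factor_samples):
    """Precision-weighted product of per-factor Gaussian approximations."""
    means = [np.mean(s) for s in factor_samples]
    precs = [1.0 / np.var(s) for s in factor_samples]
    post_var = 1.0 / np.sum(precs)
    post_mean = post_var * np.sum([p * m for p, m in zip(precs, means)])
    return post_mean, post_var

# Usage: one ABC run per observed transition, then combine.
# samples = [abc_factor(y[t], y[t + 1], simulate_step, prior_draw, eps=0.1)
#            for t in range(len(y) - 1)]
# mean, var = combine_gaussian_factors(samples)
```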

    Quantifying Type-Specific Reproduction Numbers for Nosocomial Pathogens: Evidence for Heightened Transmission of an Asian Sequence Type 239 MRSA Clone

    An important determinant of a pathogen's success is the rate at which it is transmitted from infected to susceptible hosts. Although there are anecdotal reports that methicillin-resistant Staphylococcus aureus (MRSA) clones vary in their transmissibility in hospital settings, attempts to quantify such variation are lacking for common subtypes, as are methods for addressing this question using routinely collected MRSA screening data in endemic settings. Here we present a method to quantify the time-varying transmissibility of different subtypes of common bacterial nosocomial pathogens using routine surveillance data. The method adapts approaches for estimating reproduction numbers based on the probabilistic reconstruction of epidemic trees, but uses relative hazards rather than serial intervals to assign probabilities to different sources for observed transmission events. The method is applied to data collected as part of a retrospective observational study of a concurrent MRSA outbreak in the United Kingdom involving dominant endemic MRSA clones (ST22 and ST36) and an Asian ST239 MRSA strain (ST239-TW) in two linked adult intensive care units, and is compared with an approach based on a fully parametric transmission model. The results provide support for the hypothesis that the clones responded differently to an infection control measure based on the use of topical antiseptics, which was more effective at reducing transmission of the endemic clones. They also suggest that in one of the two ICUs, patients colonized or infected with the ST239-TW MRSA clone had consistently higher risks of transmitting MRSA to patients free of MRSA. These findings represent some of the first quantitative evidence of enhanced transmissibility of a pandemic MRSA lineage, and highlight the potential value of tailoring hospital infection control measures to specific pathogen subtypes.
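
    A minimal sketch of the tree-reconstruction step with hypothetical inputs: each observed acquisition is attributed probabilistically to the candidate sources present at the time, with weights proportional to their relative hazards; summing these attribution probabilities over the cases a patient could have infected gives that patient's expected number of secondary cases, and averaging within a clone gives a crude clone-specific reproduction number.

```python
from collections import defaultdict

def expected_secondary_cases(acquisitions):
    """acquisitions: one dict per observed acquisition, mapping each
    candidate source's patient id to its relative hazard at that time."""
    expected = defaultdict(float)
    for hazards in acquisitions:
        total = sum(hazards.values())
        for source, h in hazards.items():
            expected[source] += h / total  # P(source caused this case)
    return expected

def clone_reproduction_number(expected, clone_of, clone):
    """Average expected secondary cases over carriers of one clone.

    clone_of: patient id -> clone label (e.g. 'ST22', 'ST239-TW')."""
    carriers = [pid for pid, c in clone_of.items() if c == clone]
    return sum(expected.get(pid, 0.0) for pid in carriers) / len(carriers)
```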