
    Quantum System Identification by Bayesian Analysis of Noisy Data: Beyond Hamiltonian Tomography

    We consider how to characterize the dynamics of a quantum system from a restricted set of initial states and measurements using Bayesian analysis. Previous work has shown that Hamiltonian systems can be well estimated from analysis of noisy data. Here we show how to generalize this approach to systems with moderate dephasing in the eigenbasis of the Hamiltonian. We illustrate the process for a range of three-level quantum systems. The results suggest that the Bayesian estimation of the frequencies and dephasing rates is generally highly accurate, and that the main source of error lies in the reconstructed Hamiltonian basis. (Comment: 6 pages, 3 figures)
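
    As a rough illustration of the approach, the sketch below performs a grid-based Bayesian estimation of a transition frequency and dephasing rate from simulated noisy data. The two-level signal model, noise level and parameter ranges are illustrative assumptions, not the three-level systems or measurement settings studied in the paper.

```python
# A minimal sketch (not the paper's code) of grid-based Bayesian estimation of
# a frequency and a dephasing rate from noisy measurements.  The signal model
# p(t) = 0.5 * (1 + exp(-gamma*t) * cos(omega*t)) is an assumed two-level
# analogue, and the noise is taken to be Gaussian with known sigma.
import numpy as np

rng = np.random.default_rng(0)

def signal(t, omega, gamma):
    """Excited-state population of a dephasing two-level system (assumed model)."""
    return 0.5 * (1.0 + np.exp(-gamma * t) * np.cos(omega * t))

# Simulated noisy data at a restricted set of measurement times.
omega_true, gamma_true, sigma = 2.0, 0.1, 0.05
t = np.linspace(0.0, 10.0, 50)
data = signal(t, omega_true, gamma_true) + sigma * rng.normal(size=t.size)

# Posterior over (omega, gamma) on a grid: flat prior, Gaussian likelihood.
omegas = np.linspace(1.0, 3.0, 200)
gammas = np.linspace(0.0, 0.5, 200)
log_post = np.empty((omegas.size, gammas.size))
for i, w in enumerate(omegas):
    for j, g in enumerate(gammas):
        resid = data - signal(t, w, g)
        log_post[i, j] = -0.5 * np.sum(resid ** 2) / sigma ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum()
i_max, j_max = np.unravel_index(post.argmax(), post.shape)
print(f"MAP estimate: omega = {omegas[i_max]:.3f}, gamma = {gammas[j_max]:.3f}")
```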

    Optimisation of NMR dynamic models I. Minimisation algorithms and their performance within the model-free and Brownian rotational diffusion spaces

    The key to obtaining the model-free description of the dynamics of a macromolecule is the optimisation of the model-free and Brownian rotational diffusion parameters using the collected R1, R2 and steady-state NOE relaxation data. The problem of optimising the chi-squared value is often assumed to be trivial; however, the long chain of dependencies required for its calculation complicates the model-free chi-squared space. Convolutions are induced by the Lorentzian form of the spectral density functions, the linear recombinations of certain spectral density values to obtain the relaxation rates, the calculation of the NOE using the ratio of two of these rates, and finally the quadratic form of the chi-squared equation itself. Two major topological features of the model-free space complicate optimisation. The first is a long, shallow valley which commences at infinite correlation times and gradually approaches the minimum. The second, and most severe, convolution occurs for motions on two timescales, in which the minimum is often located at the end of a long, deep, curved tunnel or multidimensional valley through the space. A large number of optimisation algorithms are investigated and their performance compared to determine which techniques are suitable for use in model-free analysis. Local optimisation algorithms are shown to be sufficient for minimisation not only within the model-free space but also for the minimisation of the Brownian rotational diffusion tensor. In addition, the performance of the programs Modelfree and Dasha is investigated. A number of model-free optimisation failures were identified: the inability to slide along the limits, the singular matrix failure of the Levenberg–Marquardt minimisation algorithm, the low precision of both programs, and a bug in Modelfree. Significantly, the singular matrix failure of the Levenberg–Marquardt algorithm occurs when internal correlation times are undefined and is greatly amplified in model-free analysis by both the grid search and constraint algorithms. The program relax (http://www.nmr-relax.com) is also presented as a new software package designed for the analysis of macromolecular dynamics through the use of NMR relaxation data, one which alleviates all of the problems inherent in model-free analysis.
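
    To make the dependency chain concrete, here is a minimal sketch (not the relax, Modelfree or Dasha code) of a model-free-style optimisation: a Lorentzian Lipari-Szabo spectral density feeds a quadratic chi-squared target, which is minimised by a coarse grid search followed by a local minimiser. The "observed" values, error estimates and the use of J(omega) itself as the data are illustrative assumptions; in practice the targets are R1, R2 and NOE values built from linear combinations and ratios of J(omega).

```python
# A minimal sketch of chi-squared optimisation over model-free parameters.
# The observed values and errors below are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def j_model_free(omega, s2, tau_m, tau_e):
    """Original Lipari-Szabo model-free spectral density (angular frequencies in rad/s)."""
    tau = tau_m * tau_e / (tau_m + tau_e)
    return 0.4 * (s2 * tau_m / (1.0 + (omega * tau_m) ** 2)
                  + (1.0 - s2) * tau / (1.0 + (omega * tau) ** 2))

# Hypothetical "observed" spectral density values with errors.
omegas = 2 * np.pi * np.array([0.0, 50.0e6, 500.0e6])   # rad/s
j_obs = j_model_free(omegas, 0.8, 8e-9, 50e-12)
j_err = 0.05 * j_obs + 1e-12

def chi2(params, tau_m=8e-9):
    """Quadratic chi-squared target over the model-free parameters (S2, tau_e)."""
    s2, tau_e = params
    j_calc = j_model_free(omegas, s2, tau_m, tau_e)
    return np.sum(((j_obs - j_calc) / j_err) ** 2)

# Coarse grid search followed by a local minimiser, mirroring the strategy
# discussed in the abstract.
grid = [(s2, te) for s2 in np.linspace(0.2, 1.0, 9)
                 for te in np.linspace(10e-12, 200e-12, 9)]
x0 = min(grid, key=chi2)
res = minimize(chi2, x0, method="Nelder-Mead")
print("S2 = %.3f, tau_e = %.1f ps, chi2 = %.3g"
      % (res.x[0], res.x[1] * 1e12, res.fun))
```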

    Understanding the adsorption process in ZIF-8 using high pressure crystallography and computational modelling

    Understanding host–guest interactions and structural changes within porous materials is crucial for enhancing gas storage properties. Here, the authors combine cryogenic loading of gases with high pressure crystallography and computational techniques to obtain atomistic detail of adsorption-induced structural and energetic changes in ZIF-8.

    An Open Source Simulation Model for Soil and Sediment Bioturbation

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches restricts their use to a narrow range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement, together with a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted to experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data are routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and a comparison with a commonly used model attest to the predictive power of the approach.
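
    By way of illustration, the sketch below implements the kind of rule-based lattice simulation described above: tracer particles on a one-dimensional depth lattice are displaced at each time step by draws from a parameterisable probability density function. The Gaussian displacement kernel, movement probability and grid dimensions are assumed values, not parameters of the published framework.

```python
# A minimal sketch (not the published framework) of rule-based lattice
# bioturbation: particles are displaced by draws from a parameterisable PDF.
import numpy as np

rng = np.random.default_rng(1)

n_cells, n_particles, n_steps = 100, 5000, 365   # e.g. 1 cell = 1 mm depth, 1 step = 1 day
p_move = 0.05   # daily probability that a particle is displaced (assumed)
step_sd = 3.0   # standard deviation of the displacement kernel, in cells (assumed)

# All tracer particles start in the top cell (a surface tracer pulse).
depth = np.zeros(n_particles, dtype=int)

for _ in range(n_steps):
    # Select the particles that move this step, draw their displacements from
    # a Gaussian kernel, and reflect them back into the lattice bounds.
    moving = rng.random(n_particles) < p_move
    step = np.rint(rng.normal(0.0, step_sd, size=moving.sum())).astype(int)
    depth[moving] = np.clip(depth[moving] + step, 0, n_cells - 1)

# Depth profile of the tracer after one simulated year.
profile, _ = np.histogram(depth, bins=np.arange(n_cells + 1))
print(profile[:10])   # particle counts in the top 10 cells
```

    Fitting p_move and step_sd to measured tracer (e.g. luminophore) depth profiles would yield the kind of species-specific parameters the abstract describes.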

    Estimation of allele frequency and association mapping using next-generation sequencing data

    Background: Estimation of allele frequency is of fundamental importance in population genetic analyses and in association mapping. In most studies using next-generation sequencing, a cost-effective approach is to use medium- or low-coverage data (e.g., <15X). However, SNP calling and allele frequency estimation in such studies are associated with substantial statistical uncertainty because of varying coverage and high error rates.

    Results: We evaluate a new maximum likelihood method for estimating allele frequencies in low and medium coverage next-generation sequencing data. The method is based on integrating over uncertainty in the data for each individual rather than first calling genotypes. This method can be applied to directly test for associations in case/control studies. We use simulations to compare the likelihood method to methods based on genotype calling, and show that the likelihood method outperforms the genotype calling methods in terms of: (1) accuracy of allele frequency estimation, (2) accuracy of the estimation of the distribution of allele frequencies across neutrally evolving sites, and (3) statistical power in association mapping studies. Using real re-sequencing data from 200 individuals obtained from an exon-capture experiment, we show that the patterns observed in the simulations are also found in real data.

    Conclusions: Overall, our results suggest that association mapping and estimation of allele frequencies should not be based on genotype calling in low to medium coverage data. Furthermore, if genotype calling methods are used, it is usually better not to filter genotypes based on the call confidence score.
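
    The core of the likelihood approach can be sketched in a few lines: for each individual, genotype likelihoods are combined with a Hardy-Weinberg prior and summed over genotypes, and the allele frequency maximising the total likelihood is reported, with a naive genotype-calling estimate shown for comparison. The simulated read depths, the 1% base error rate and the grid maximisation are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' software) of allele frequency estimation
# that integrates over genotype uncertainty instead of calling genotypes first.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
n, f_true, mean_depth, err = 200, 0.2, 4, 0.01   # assumed simulation settings

# Simulate genotypes (0, 1 or 2 copies of the alternative allele) and reads.
geno = rng.binomial(2, f_true, size=n)
depth = rng.poisson(mean_depth, size=n)
p_alt_given_g = np.array([err, 0.5, 1.0 - err])   # P(alt read | genotype)
alt_reads = rng.binomial(depth, p_alt_given_g[geno])

# Genotype likelihoods P(reads_i | G = g) for g = 0, 1, 2.
gl = np.stack([binom.pmf(alt_reads, depth, p) for p in p_alt_given_g], axis=1)

def log_lik(f):
    """Log-likelihood of allele frequency f, summing over genotypes per individual."""
    prior = np.array([(1 - f) ** 2, 2 * f * (1 - f), f ** 2])   # HWE genotype prior
    return np.sum(np.log(gl @ prior))

freqs = np.linspace(0.001, 0.999, 999)
f_ml = freqs[np.argmax([log_lik(f) for f in freqs])]

# Naive alternative: call the most likely genotype, then count alleles.
f_called = np.argmax(gl, axis=1).mean() / 2

print(f"true f = {f_true}, ML f = {f_ml:.3f}, genotype-calling f = {f_called:.3f}")
```

    Rerunning the sketch at lower mean depths should show the genotype-calling estimate drifting away from the truth while the likelihood-based estimate remains closer, in line with the simulations described above.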