2,090 research outputs found

    Dwell time symmetry in random walks and molecular motors

    The statistics of steps and dwell times in reversible molecular motors differ from those of cycle completion in enzyme kinetics. The reason is that a step is only one of several transitions in the mechanochemical cycle. As a result, theoretical results for cycle completion in enzyme kinetics do not apply to stepping data. To allow correct parameter estimation, and to guide data analysis and experiment design, a theoretical treatment is needed that takes this observation into account. In this paper, we model the distribution of dwell times and the number of forward and backward steps using first passage processes, based on the assumption that forward and backward steps correspond to different directions of the same transition. We extend recent results for systems with a single cycle and consider the full dwell time distributions as well as models with multiple pathways, detectable substeps, and detachments. Our main results are a symmetry relation for the dwell time distributions in reversible motors, and a relation between certain relative step frequencies and the free energy per cycle. We demonstrate our results by analyzing recent stepping data for a bacterial flagellar motor, and discuss the implications for the efficiency and reversibility of the force-generating subunits. Key words: motor proteins; single molecule kinetics; enzyme kinetics; flagellar motor; Markov process; non-equilibrium fluctuations. Comment: revtex, 15 pages, 8 figures, 2 tables. v2: Minor revision, corrected typos, added references, and moved mathematical parts to new appendices.
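
    A minimal sketch of the kind of model described above (my own illustration, not the authors' code): a single reversible cycle with a handful of hypothetical internal states and rate constants, simulated with the Gillespie algorithm, where the transition between the last and first internal state is taken to be the observable step. Sorting dwell times by the directions of the bracketing steps illustrates the symmetry: with the starting state fixed by the previous step, the dwell time distribution does not depend on the direction of the next step, even when forward and backward steps have very different probabilities.

```python
# Sketch only: a single-cycle reversible motor with N assumed internal states.
import numpy as np

rng = np.random.default_rng(0)
N = 4                                 # internal states per cycle (illustrative)
kf = np.array([5.0, 3.0, 4.0, 2.0])   # forward rates i -> i+1 (illustrative values)
kb = np.array([1.0, 0.5, 1.5, 0.8])   # backward rates i -> i-1 (illustrative values)

def simulate(n_steps=50000):
    """Gillespie simulation; returns dwell times keyed by (previous step, next step)."""
    dwells = {('+', '+'): [], ('+', '-'): [], ('-', '+'): [], ('-', '-'): []}
    state, t, t_last, prev, count = 0, 0.0, 0.0, '+', 0
    while count < n_steps:
        total = kf[state] + kb[state]
        t += rng.exponential(1.0 / total)
        forward = rng.random() < kf[state] / total
        stepped = (state == N - 1) if forward else (state == 0)  # crossing the step transition
        state = (state + 1) % N if forward else (state - 1) % N
        if stepped:
            cur = '+' if forward else '-'
            dwells[(prev, cur)].append(t - t_last)
            t_last, prev = t, cur
            count += 1
    return {k: np.array(v) for k, v in dwells.items()}

d = simulate()
# With the start state fixed by the previous (+) step, the dwell time distribution
# should be the same whether the next step is + or -, despite unequal frequencies.
print("mean dwell (+ then +):", d[('+', '+')].mean())
print("mean dwell (+ then -):", d[('+', '-')].mean())
print("P(next = + | prev = +):",
      len(d[('+', '+')]) / (len(d[('+', '+')]) + len(d[('+', '-')])))
```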

    On particle filters applied to electricity load forecasting

    We are interested in the online prediction of the electricity load, within the Bayesian framework of dynamic models. We offer a review of sequential Monte Carlo methods, and provide the calculations needed for the derivation of so-called particle filters. We also discuss the practical issues arising from their use, and some of the variants proposed in the literature to deal with them, giving detailed algorithms whenever possible for an easy implementation. We propose an additional step to help make basic particle filters more robust with regard to outlying observations. Finally, we use such a particle filter to estimate a state-space model that includes exogenous variables in order to forecast the electricity load for the customers of the French electricity company Électricité de France, and discuss the various results obtained.
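
    As a concrete reference point for the review above, here is a minimal bootstrap particle filter on a toy linear-Gaussian state-space model. This is an illustration only; the AR(1) model, the parameter values, and the plain multinomial resampling step are my own assumptions, not the load-forecasting model or the robustified variant proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.5, 1.0              # assumed AR(1) coefficient and noise std devs
T, n_particles = 200, 1000

# Synthetic data in place of a real load series
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + q * rng.standard_normal()
y = x_true + r * rng.standard_normal(T)

particles = rng.standard_normal(n_particles)    # initial particle cloud
estimates = np.zeros(T)
for t in range(T):
    # 1) propagate particles through the state equation
    particles = a * particles + q * rng.standard_normal(n_particles)
    # 2) weight each particle by the observation likelihood p(y_t | x_t)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)        # filtered mean E[x_t | y_1:t]
    # 3) multinomial resampling (the basic scheme; lower-variance variants exist)
    particles = rng.choice(particles, size=n_particles, p=w)

print("RMSE of filtered mean:", np.sqrt(np.mean((estimates - x_true) ** 2)))
```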

    Efficient Cosmological Parameter Estimation from Microwave Background Anisotropies

    We revisit the issue of cosmological parameter estimation in light of current and upcoming high-precision measurements of the cosmic microwave background power spectrum. Physical quantities which determine the power spectrum are reviewed, and their connection to familiar cosmological parameters is explicated. We present a set of physical parameters, analytic functions of the usual cosmological parameters, upon which the microwave background power spectrum depends linearly (or with some other simple dependence) over a wide range of parameter values. With such a set of parameters, microwave background power spectra can be estimated with high accuracy and negligible computational effort, vastly increasing the efficiency of cosmological parameter error determination. The techniques presented here allow calculation of microwave background power spectra 10^5 times faster than comparably accurate direct codes (after precomputing a handful of power spectra). We discuss various issues of parameter estimation, including parameter degeneracies, numerical precision, mapping between physical and cosmological parameters, and systematic errors, and illustrate these considerations with an idealized model of the MAP experiment. Comment: 22 pages, 12 figures
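
    A sketch of the speedup idea (my own illustration; the parameter names, step sizes, and placeholder spectra below are assumptions, not the authors' choices): once a fiducial spectrum and one offset spectrum per physical parameter have been precomputed with a Boltzmann code, new spectra in the linear regime follow from a first-order expansion in the physical parameters at negligible cost.

```python
import numpy as np

def build_emulator(theta0, cl0, offsets):
    """offsets: list of (theta_i, cl_i) pairs, one per parameter, each perturbing
    parameter i of theta0; cl arrays come from a Boltzmann code in practice."""
    derivs = []
    for i, (theta_i, cl_i) in enumerate(offsets):
        derivs.append((cl_i - cl0) / (theta_i[i] - theta0[i]))  # finite-difference dC_l/dtheta_i
    derivs = np.array(derivs)                                   # shape (n_params, n_ell)

    def emulate(theta):
        # first-order expansion around the fiducial model
        return cl0 + (np.asarray(theta) - theta0) @ derivs

    return emulate

# Usage with fake placeholder spectra standing in for real Boltzmann-code output.
n_ell = 1500
theta0 = np.array([0.14, 0.022, 0.7])   # e.g. physical densities and h (assumed labels)
cl0 = np.ones(n_ell)
offsets = [(theta0 + 0.01 * np.eye(3)[i], np.ones(n_ell) * (1 + 0.05 * (i + 1)))
           for i in range(3)]
emulate = build_emulator(theta0, cl0, offsets)
print(emulate([0.145, 0.0225, 0.71])[:3])
```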

    Hidden Markov Model with Binned Duration and Its Application

    Hidden Markov models (HMM) have been widely used in various applications such as speech processing and bioinformatics. However, the standard hidden Markov model requires state occupancy durations to be geometrically distributed, which can be inappropriate in some real-world applications where the distributions of state intervals deviate significantly from the geometric distribution, such as multi-modal distributions and heavy-tailed distributions. The hidden Markov model with duration (HMMD) avoids this limitation by explicitly incorporating the appropriate state duration distribution, at the price of significant computational expense. As a result, the applications of HMMD are still quite limited. In this work, we present a new algorithm, the Hidden Markov Model with Binned Duration (HMMBD), whose results show no loss of accuracy compared to the HMMD decoding performance, and a computational expense that differs from the much simpler and faster HMM decoding by only a constant factor. More precisely, we further improve the computational complexity of HMMD from O(TNN + TND) to O(TNN + TND'), where TNN stands for the computational complexity of the HMM, D is the maximum duration value allowed and can be very large, and D' generally is a small constant value.
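
    A sketch of the binning idea only (my own illustration, not the authors' decoding algorithm): a duration distribution supported on up to D values is grouped into D' roughly equal-mass bins, each keeping its total probability and a representative duration, so the duration-dependent term in decoding can scale with the small D' rather than with D.

```python
import numpy as np

def bin_duration_pmf(durations, pmf, n_bins):
    """Group consecutive durations into bins of roughly equal probability mass;
    each bin keeps its total mass and a representative (mean) duration."""
    cdf = np.cumsum(pmf)
    cuts = np.searchsorted(cdf, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    edges = np.concatenate(([0], cuts, [len(pmf)]))
    bins = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi <= lo:          # skip empty bins from duplicate cut points
            continue
        mass = pmf[lo:hi].sum()
        bins.append((mass, float(np.sum(durations[lo:hi] * pmf[lo:hi]) / mass)))
    return bins

# Example: a heavy-tailed duration distribution with a large maximum duration D
D = 5000
durations = np.arange(1, D + 1)
pmf = 1.0 / durations ** 1.5
pmf /= pmf.sum()
bins = bin_duration_pmf(durations, pmf, n_bins=16)
print(f"D = {D} durations reduced to D' = {len(bins)} bins")
print("first three bins (mass, mean duration):", bins[:3])
```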

    Uncertainty quantification in ocean state estimation

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2013. Quantifying uncertainty and error bounds is a key outstanding challenge in ocean state estimation and climate research. It is particularly difficult due to the large dimensionality of this nonlinear estimation problem and the number of uncertain variables involved. The “Estimating the Circulation and Climate of the Oceans” (ECCO) consortium has developed a scalable system for dynamically consistent estimation of the global time-evolving ocean state by optimal combination of an ocean general circulation model (GCM) with diverse ocean observations. The estimation system is based on the "adjoint method" solution of an unconstrained least-squares optimization problem formulated with the method of Lagrange multipliers for fitting the dynamical ocean model to observations. The dynamical consistency requirement of ocean state estimation necessitates this approach over sequential data assimilation and reanalysis smoothing techniques. In addition, it is computationally advantageous because calculation and storage of large covariance matrices is not required. However, this is also a drawback of the adjoint method, which lacks a native formalism for error propagation and quantification of assimilated uncertainty. The objective of this dissertation is to resolve that limitation by developing a feasible computational methodology for uncertainty analysis in dynamically consistent state estimation, applicable to the large dimensionality of global ocean models. Hessian (second-derivative-based) methodology is developed for Uncertainty Quantification (UQ) in large-scale ocean state estimation, extending the gradient-based adjoint method to employ the second-order geometry information of the model-data misfit function in a high-dimensional control space. Large error covariance matrices are evaluated by inverting the Hessian matrix with the developed scalable matrix-free numerical linear algebra algorithms. Hessian-vector product and Jacobian derivative codes of the MIT general circulation model (MITgcm) are generated by means of algorithmic differentiation (AD). Computational complexity of the Hessian code is reduced by tangent linear differentiation of the adjoint code, which preserves the speedup of adjoint checkpointing schemes in the second derivative calculation. A Lanczos algorithm is applied for extracting the leading eigenvectors and eigenvalues of the Hessian matrix. The eigenvectors represent the constrained uncertainty patterns, and the inverse eigenvalues are the corresponding uncertainties. The dimensionality of UQ calculations is reduced by eliminating the uncertainty null-space unconstrained by the supplied observations. Inverse and forward uncertainty propagation schemes are designed for assimilating observation and control variable uncertainties, and for projecting these uncertainties onto oceanographic target quantities. Two versions of these schemes are developed: one evaluates the reduction of prior uncertainties, while the other does not require prior assumptions. The analysis of uncertainty propagation in the ocean model is time-resolving. It captures the dynamics of uncertainty evolution and reveals transient and stationary uncertainty regimes. The system is applied to quantifying uncertainties of Antarctic Circumpolar Current (ACC) transport in a global barotropic configuration of the MITgcm.
The model is constrained by synthetic observations of sea surface height and velocities. The control space consists of two-dimensional maps of initial and boundary conditions and model parameters. The size of the Hessian matrix is O(10^10) elements, which would require O(60 GB) of uncompressed storage. It is demonstrated how the choice of observations and their geographic coverage determines the reduction in uncertainties of the estimated transport. The system also yields information on how well the control fields are constrained by the observations. The effects of control uncertainty reduction due to the decrease of diagonal covariance terms are compared to the dynamical coupling of controls through off-diagonal covariance terms. The correlations of controls introduced by observation uncertainty assimilation are found to dominate the reduction of transport uncertainty. An idealized analytical model of the ACC guides a detailed time-resolving understanding of uncertainty dynamics. This thesis was supported in part by the National Science Foundation (NSF) Collaboration in Mathematical Geosciences (CMG) grant ARC-0934404, and the Department of Energy (DOE) ISICLES initiative under LANL sub-contract 139843-1. Partial funding was provided by the Department of Mechanical Engineering at MIT and by the Academic Programs Office at WHOI. My participation in the IMA "Large-scale Inverse Problems and Quantification of Uncertainty" workshop was partially funded by IMA NSF grants.
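
    A toy, matrix-free sketch of the Hessian/Lanczos step described above (my own illustration with made-up dimensions, Jacobian, and error variances; the real system works through MITgcm adjoint and tangent-linear code rather than an explicit Jacobian): the data-misfit Hessian is exposed only through Hessian-vector products, a Lanczos solver extracts its leading eigenpairs, and the posterior variance of a scalar target quantity is assembled from those eigenpairs under an identity prior covariance.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(2)
n_ctrl, n_obs = 500, 40                      # control and observation dimensions (toy sizes)
J = rng.standard_normal((n_obs, n_ctrl))     # stand-in for the model Jacobian (from AD in practice)
obs_var = 0.1                                # assumed observation error variance

def hessian_vector_product(v):
    """Gauss-Newton data-misfit Hessian times v, i.e. J^T R^-1 J v, without forming the matrix."""
    return J.T @ (J @ v) / obs_var

H = LinearOperator((n_ctrl, n_ctrl), matvec=hessian_vector_product)

# Lanczos iteration for the leading eigenpairs: the directions best constrained by the data.
k = 20
eigvals, eigvecs = eigsh(H, k=k, which='LM')

# With a unit prior covariance, posterior covariance is (I + H)^-1, and for a
# scalar target q = g^T x the variance reduction comes from the leading k modes:
# var_post(q) ~= g^T g - sum_i lambda_i/(1+lambda_i) * (v_i^T g)^2.
g = rng.standard_normal(n_ctrl)              # gradient of a toy target quantity
prior_var = g @ g
reduction = sum(lam / (1.0 + lam) * (eigvecs[:, i] @ g) ** 2
                for i, lam in enumerate(eigvals))
print("prior variance:", prior_var, " approx posterior variance:", prior_var - reduction)
```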

    A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high-scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments.
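
    A short numerical illustration of the conjectured score distributions (my own sketch, not the paper's software; the location parameters mu and tau and the database size below are made up, since in practice they are calibrated per profile HMM): with λ fixed at log 2, converting a bit score into an E-value reduces to evaluating a Gumbel or exponential tail.

```python
import math

LAMBDA = math.log(2.0)   # conjectured constant lambda for bit scores

def viterbi_evalue(score_bits, mu, n_targets):
    """E-value from a Gumbel tail: E = N * (1 - exp(-exp(-lambda*(s - mu))))."""
    p = 1.0 - math.exp(-math.exp(-LAMBDA * (score_bits - mu)))
    return n_targets * p

def forward_evalue(score_bits, tau, n_targets):
    """E-value from an exponential tail: E = N * exp(-lambda*(s - tau)) for s > tau."""
    return n_targets * math.exp(-LAMBDA * (score_bits - tau))

# Example: a 30-bit hit against a database of 1e6 sequences, with assumed
# (hypothetical) location parameters mu and tau.
print(viterbi_evalue(30.0, mu=3.0, n_targets=1_000_000))
print(forward_evalue(30.0, tau=5.0, n_targets=1_000_000))
```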