
    Transition Path Theory for Markov Processes

    In this thesis, we present the framework of transition path theory (TPT) for time-continuous Markov processes with continuous and discrete state space. TPT provides statistical properties of the ensemble of reactive trajectories between some start and target sets and yields objects such as the committor function, the probability distribution of the reactive trajectories, their probability current, and their rate of occurrence. We show that knowing these objects allows one to arrive at a complete understanding of the mechanism of the reaction. The main objects of TPT for Markov diffusion processes are derived explicitly for the Langevin and Smoluchowski dynamics and illustrated on a variety of low-dimensional examples. Despite the simplicity of these examples compared to those encountered in real applications, they already demonstrate the ability of TPT to handle complex dynamical scenarios. The main challenge in TPT for diffusion processes is the numerical computation of the committor function as the solution of a Dirichlet-Neumann boundary value problem involving the generator of the process. Besides the derivation of TPT for Markov jump processes, we focus on the development of efficient graph algorithms to determine reaction pathways in discrete state space. One approach via shortest-path algorithms turns out to give only a rough picture of possible reaction channels, whereas the network approach allows a hierarchical decomposition of the set of reaction pathways such that the dominant channels can be identified. We successfully apply the latter approach to an example of the conformational dynamics of a biomolecule. In particular, we make use of a maximum likelihood method to estimate the infinitesimal generator of a jump process from an incomplete observation. Finally, we address the question of error propagation in the computation of the committor function for Markov chains.
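
    As a pointer to the boundary value problem mentioned above: for the Smoluchowski (overdamped Langevin) dynamics dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t on a domain \Omega, the forward committor q is the solution of

        \beta^{-1}\,\Delta q(x) - \nabla V(x)\cdot\nabla q(x) = 0, \qquad x \in \Omega \setminus (A \cup B),
        q|_{\partial A} = 0, \qquad q|_{\partial B} = 1, \qquad \partial_n q|_{\partial \Omega} = 0,

    i.e. the generator applied to q vanishes away from the start set A and target set B, with Dirichlet data on their boundaries and reflecting (Neumann) conditions on the outer boundary. This is the standard formulation, stated here only for orientation; the thesis derives the corresponding objects for both Smoluchowski and Langevin dynamics.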

    Estimating the Sampling Error: Distribution of Transition Matrices and Functions of Transition Matrices for Given Trajectory Data

    The problem of estimating a Markov transition matrix to statistically describe the dynamics underlying an observed process is frequently found in the physical and economic sciences. However, little attention has been paid to the fact that such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions between metastable states. In turn, this induces uncertainties in any property computed from the transition matrix, such as stationary probabilities, committor probabilities, or eigenvalues. Assessing these uncertainties is essential for testing the reliability of a given observation and also, where possible, for planning further simulations or measurements in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of functions of the transition matrix, provided that one can identify discrete states such that the transition process between them may be modeled as a memoryless jump process, i.e., Markov dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows the constraint of reversibility to be included, which is physically meaningful in many applications. The method is illustrated on molecular dynamics simulations of a hexapeptide that are modeled by a Markov transition process between the metastable states. For this model the distributions and uncertainties of the stationary probabilities of metastable states, the transition matrix elements, the committor probabilities, and the transition matrix eigenvalues are estimated. It is found that the detailed balance constraint can significantly alter the distribution of some observables.
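
    A minimal sketch of the sampling idea, given only as an illustration and not as the method of the paper: if reversibility is not enforced, the posterior over transition matrices induced by an observed count matrix factorizes into independent Dirichlet distributions per row, and the distribution of any derived quantity can be approximated by sampling. The uniform prior, all names, and the count matrix below are illustrative assumptions.

        import numpy as np

        def sample_transition_matrices(counts, n_samples=1000, prior=1.0, rng=None):
            """Draw transition matrices from the row-wise Dirichlet posterior
            induced by observed transition counts (no reversibility constraint)."""
            rng = np.random.default_rng(rng)
            n = counts.shape[0]
            samples = np.empty((n_samples, n, n))
            for k in range(n_samples):
                for i in range(n):
                    samples[k, i] = rng.dirichlet(counts[i] + prior)
            return samples

        def stationary_distribution(T):
            """Stationary distribution as the leading left eigenvector of T."""
            vals, vecs = np.linalg.eig(T.T)
            pi = np.real(vecs[:, np.argmax(np.real(vals))])
            return pi / pi.sum()

        # Hypothetical count matrix for transitions between 3 metastable states
        C = np.array([[90, 8, 2], [5, 80, 15], [1, 10, 70]], dtype=float)
        Ts = sample_transition_matrices(C, n_samples=500, rng=0)
        pis = np.array([stationary_distribution(T) for T in Ts])
        lam2 = np.array([np.sort(np.real(np.linalg.eigvals(T)))[-2] for T in Ts])
        print("stationary prob. of state 0: mean %.3f, std %.3f" % (pis[:, 0].mean(), pis[:, 0].std()))
        print("second eigenvalue:           mean %.3f, std %.3f" % (lam2.mean(), lam2.std()))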

    Observation Uncertainty in Reversible Markov Chains

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
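
    The paper derives a Gibbs sampler; the sketch below is not that sampler but a simpler Metropolis scheme over symmetric weights x_ij = x_ji, which enforces reversibility by construction since T_ij = x_ij / \sum_k x_ik is reversible with respect to \pi_i \propto \sum_k x_ik. The exponential prior on the weights, the count matrix, and all names are illustrative assumptions.

        import numpy as np

        def log_posterior(x, counts):
            """Multinomial log-likelihood of the induced transition matrix
            plus independent Exp(1) priors on the weights x_ij, i <= j."""
            T = x / x.sum(axis=1, keepdims=True)
            mask = counts > 0
            return np.sum(counts[mask] * np.log(T[mask])) - np.triu(x).sum()

        def sample_reversible(counts, n_steps=20000, step=0.1, thin=10, rng=None):
            """Random-walk Metropolis over symmetric weights; every sampled
            T_ij = x_ij / sum_k x_ik is reversible by construction."""
            rng = np.random.default_rng(rng)
            n = counts.shape[0]
            x = counts + counts.T + 1.0            # symmetric, strictly positive start
            lp = log_posterior(x, counts)
            samples = []
            for s in range(n_steps):
                i, j = rng.integers(n), rng.integers(n)
                x_new = x.copy()
                x_new[i, j] += rng.normal(scale=step)
                x_new[j, i] = x_new[i, j]
                if x_new[i, j] > 0.0:
                    lp_new = log_posterior(x_new, counts)
                    if np.log(rng.uniform()) < lp_new - lp:
                        x, lp = x_new, lp_new
                if s % thin == 0:
                    samples.append(x / x.sum(axis=1, keepdims=True))
            return np.array(samples)

        # Hypothetical count matrix; inspect the posterior spread of the matrix itself
        C = np.array([[80, 15, 5], [10, 60, 30], [5, 25, 70]], dtype=float)
        Ts = sample_reversible(C, rng=0)
        print("posterior mean T:\n", Ts.mean(axis=0))
        print("posterior std  T:\n", Ts.std(axis=0))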

    Generator Estimation of Markov Jump Processes Based on Incomplete Observations Nonequidistant in Time

    Markov jump processes can be used to model the effective dynamics of observables in applications ranging from molecular dynamics to finance. In this paper we present a method which allows the inverse modeling of Markov jump processes based on incomplete observations in time: we consider the case of a given time series of the discretely observed jump process. We show how to compute efficiently the maximum likelihood estimator of its infinitesimal generator and demonstrate in detail that the method allows us to handle observations nonequidistant in time. The method is based on the work of Bladt and Sørensen [J. R. Stat. Soc. Ser. B (Stat. Methodol.) 67, 395 (2005)], but scales much more favorably with the length of the time series and with the dimension and size of the state space of the jump process. We illustrate its performance on a toy problem as well as on data arising from simulations of biochemical kinetics of a genetic toggle switch.
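
    The paper's estimator is a maximum likelihood computation designed to scale with long time series; the sketch below only illustrates the underlying likelihood of a discretely observed jump process, namely the product of matrix-exponential transition probabilities over the (nonequidistant) observation gaps, maximized here with a generic derivative-free optimizer. The log-rate parameterization and the data are illustrative assumptions, and recomputing the matrix exponential per observation does not scale the way the paper's method does.

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        def build_generator(log_rates, n):
            """Generator with off-diagonal rates exp(log_rates); rows sum to zero."""
            Q = np.zeros((n, n))
            Q[~np.eye(n, dtype=bool)] = np.exp(log_rates)
            np.fill_diagonal(Q, -Q.sum(axis=1))
            return Q

        def neg_log_likelihood(log_rates, states, times, n):
            """Discrete-observation likelihood: product of transition probabilities
            [exp(Q * dt_k)]_{x_k, x_{k+1}} over irregular observation gaps dt_k."""
            Q = build_generator(log_rates, n)
            nll = 0.0
            for k in range(len(states) - 1):
                P = expm(Q * (times[k + 1] - times[k]))
                nll -= np.log(max(P[states[k], states[k + 1]], 1e-300))
            return nll

        # Hypothetical observation: states recorded at nonequidistant time points
        states = np.array([0, 0, 1, 2, 1, 0, 0, 1])
        times = np.array([0.0, 0.4, 1.1, 1.5, 2.6, 3.0, 3.9, 4.2])
        n = 3
        x0 = np.zeros(n * (n - 1))               # all off-diagonal rates start at 1
        res = minimize(neg_log_likelihood, x0, args=(states, times, n), method="Nelder-Mead")
        print("estimated generator:\n", build_generator(res.x, n))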

    Illustration of Transition Path Theory on a Collection of Simple Examples

    Transition path theory (TPT) has been recently introduced as a theoretical framework to describe the reaction pathways of rare events between long-lived states in complex systems. TPT gives detailed statistical information about the reactive trajectories involved in these rare events, which is beyond the realm of transition state theory or transition path sampling. In this paper the TPT approach is outlined, its distinction from other approaches is discussed, and, most importantly, the main insights and objects provided by TPT are illustrated in detail via a series of low-dimensional test problems.
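
    For orientation, one of the simplest such test problems has a closed-form answer: for one-dimensional overdamped dynamics in a potential V at inverse temperature \beta, with A = (-\infty, a] and B = [b, \infty), the forward committor is

        q(x) = \frac{\int_a^x e^{\beta V(y)}\,dy}{\int_a^b e^{\beta V(y)}\,dy}, \qquad a \le x \le b,

    which rises most steeply where e^{\beta V} is largest, i.e. near the top of the barrier. This is the standard one-dimensional result, quoted here independently of the paper as a check for low-dimensional examples.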

    Transition Path Theory for Markov Jump Processes

    The framework of transition path theory (TPT) is developed in the context of continuous-time Markov chains on discrete state-spaces. Under the assumption of ergodicity, TPT singles out any two subsets in the state-space and analyzes the statistical properties of the associated reactive trajectories, i.e., those trajectories by which the random walker transits from one subset to another. TPT gives properties such as the probability distribution of the reactive trajectories, their probability current and flux, their rate of occurrence, and the dominant reaction pathways. In this paper the framework of TPT for Markov chains is developed in detail, and the relation of the theory to electric resistor network theory and to data analysis tools such as Laplacian eigenmaps and diffusion maps is discussed as well. Algorithms for the numerical calculation of the various objects in TPT are also introduced. Finally, the theory and the algorithms are illustrated in several examples.
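
    A minimal sketch of the discrete objects, using a small hypothetical reversible generator rather than anything from the paper: the forward committor solves the linear system obtained by restricting the generator to states outside A \cup B, the backward committor equals 1 - q for a reversible process, and the reactive current and the transition rate follow from the usual TPT expressions.

        import numpy as np

        # Hypothetical generator of a 4-state reversible jump process (rows sum to zero)
        L = np.array([[-1.0,  1.0,  0.0,  0.0],
                      [ 0.5, -1.5,  1.0,  0.0],
                      [ 0.0,  1.0, -1.5,  0.5],
                      [ 0.0,  0.0,  1.0, -1.0]])
        A, B = [0], [3]
        n = L.shape[0]
        C = [i for i in range(n) if i not in A + B]     # intermediate states

        # Forward committor: sum_j L_ij q_j = 0 on C, q = 0 on A, q = 1 on B
        q = np.zeros(n)
        q[B] = 1.0
        rhs = -L[np.ix_(C, B)].sum(axis=1)
        q[C] = np.linalg.solve(L[np.ix_(C, C)], rhs)

        # Stationary distribution (left null vector of L); backward committor = 1 - q
        w, V = np.linalg.eig(L.T)
        pi = np.real(V[:, np.argmin(np.abs(w))]); pi /= pi.sum()
        qm = 1.0 - q                                     # valid for reversible processes

        # Probability current of reactive trajectories f_ij = pi_i q-_i L_ij q+_j, rate k_AB
        F = np.outer(pi * qm, q) * L
        np.fill_diagonal(F, 0.0)
        k_AB = F[A, :].sum()                             # reactive flux out of A
        print("committor:", q)
        print("rate k_AB:", k_AB)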

    A Structure-Preserving Numerical Discretization of Reversible Diffusions

    We propose a robust and efficient numerical discretization scheme for the infinitesimal generator of a diffusion process based on a finite volume approximation. The resulting discrete-space operator can be interpreted as a jump process on the mesh whose invariant measure is precisely the cell approximation of the Boltzmann distribution of the original process. Moreover, the resulting jump process preserves the detailed balance property of the original stochastic process.
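
    A one-dimensional illustration of the detailed balance property, with a rate choice that is only an assumption and not necessarily the finite volume scheme of the paper: taking nearest-neighbour jump rates proportional to \sqrt{\pi_j/\pi_i} makes \pi_i k_{ij} = \pi_j k_{ji} hold exactly on the mesh, so the discrete jump process is reversible with respect to the cell-wise Boltzmann weights.

        import numpy as np

        beta, D = 1.0, 1.0
        x = np.linspace(-2.0, 2.0, 81)                   # cell centers
        h = x[1] - x[0]
        V = (x**2 - 1.0)**2                              # double-well potential
        pi = np.exp(-beta * V); pi /= pi.sum()           # cell-wise Boltzmann weights

        # Nearest-neighbour rates k_{i,i+-1} = (D/h^2) * sqrt(pi_{i+-1}/pi_i):
        # this choice satisfies pi_i k_ij = pi_j k_ji exactly on the mesh.
        n = len(x)
        L = np.zeros((n, n))
        for i in range(n - 1):
            L[i, i + 1] = D / h**2 * np.sqrt(pi[i + 1] / pi[i])
            L[i + 1, i] = D / h**2 * np.sqrt(pi[i] / pi[i + 1])
        np.fill_diagonal(L, -L.sum(axis=1))

        # Sanity checks: detailed balance and invariance of pi under the generator
        flux = pi[:, None] * L
        print("detailed balance violation:", np.abs(flux - flux.T).max())
        print("invariance violation:      ", np.abs(pi @ L).max())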

    Conformation Dynamics

    This article surveys the present state of the transfer operator approach to the effective dynamics of metastable dynamical systems and the variety of algorithms associated with it.

    Macroscopic Dynamics of Complex Metastable Systems: Theory, Algorithms, and Application to B-DNA

    This article is a survey of the present state of the transfer operator approach to the effective dynamics of metastable complex systems and the variety of algorithms associated with it. We introduce new methods, and we emphasize both the conceptual foundations and the concrete application to the conformation dynamics of a biomolecular system. The algorithmic aspects are illustrated by means of several examples of various degrees of complexity, culminating in their application to a full-scale molecular dynamics simulation of a B-DNA oligomer.