
    Slow collective variables and molecular kinetics from short off-equilibrium simulations

    Markov state models (MSMs) and master equation models are popular approaches to approximate molecular kinetics, equilibria, metastable states, and reaction coordinates in terms of a state space discretization usually obtained by clustering. Recently, a powerful generalization of MSMs has been introduced, the variational approach to conformation dynamics / molecular kinetics (VAC) and its special case, time-lagged independent component analysis (TICA), which allow slow collective variables and molecular kinetics to be approximated by linear combinations of smooth basis functions or order parameters. While it is known how to estimate MSMs from trajectories whose starting points are not sampled from an equilibrium ensemble, this has not yet been the case for TICA and the VAC; previous estimates from short trajectories have been strongly biased and thus not variationally optimal. Here, we employ Koopman operator theory and ideas from dynamic mode decomposition to extend the VAC and TICA to non-equilibrium data. The main insight is that the VAC and TICA provide a coefficient matrix that we call a Koopman model, as it approximates the underlying dynamical (Koopman) operator in conjunction with the basis set used. This Koopman model can be used to compute a stationary vector that reweights the data to equilibrium. From such a Koopman-reweighted sample, equilibrium expectation values and variationally optimal reversible Koopman models can be constructed even from short simulations. The Koopman model can be used to propagate densities, and its eigenvalue decomposition provides estimates of relaxation timescales and slow collective variables for dimension reduction. Koopman models are generalizations of Markov state models, TICA, and the linear VAC, and allow molecular kinetics to be described without a cluster discretization.
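
    As a rough illustration of what such a Koopman model is, the sketch below (a minimal numpy version with names of our choosing; it omits the reweighting to equilibrium and the reversible re-estimation described above) builds the coefficient matrix K = C00^{-1} C01 from time-lagged basis-function values and reads implied relaxation timescales off its eigenvalues.

        import numpy as np

        def koopman_model(X, lag, dt=1.0):
            """Minimal Koopman/TICA-style estimate from one trajectory.

            X   : (T, n) array of basis-function values along the trajectory.
            lag : lag time in steps; dt : time per step.
            """
            X0, X1 = X[:-lag], X[lag:]                # instantaneous / time-lagged pairs
            C00 = X0.T @ X0 / len(X0)                 # instantaneous covariance
            C01 = X0.T @ X1 / len(X0)                 # time-lagged covariance
            K = np.linalg.solve(C00, C01)             # Koopman model K = C00^{-1} C01
            evals = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
            ts = -lag * dt / np.log(evals[1:])        # implied timescales of the slow modes
            return K, ts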

    Efficient Magnus-type integrators for solar energy conversion in Hubbard models

    Strongly interacting electrons in solids are generically described by Hubbard-type models, and the impact of solar light can be modeled by an additional time dependence. This yields a finite-dimensional system of ordinary differential equations (ODEs) of Schrödinger type, which can be solved numerically by exponential time integrators of Magnus type. The efficiency may be enhanced by combining these with operator splittings. We discuss several different approaches to employing exponential-based methods in conjunction with an adaptive Lanczos method for the evaluation of matrix exponentials, and compare their accuracy and efficiency. For each integrator, we use defect-based local error estimators to enable adaptive time-stepping. This serves to reliably control the approximation error and reduce the computational effort.
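
    For orientation, here is a minimal sketch of one step of a second-order Magnus-type (exponential midpoint) integrator for i dpsi/dt = H(t) psi, assuming SciPy's expm_multiply for the action of the matrix exponential; the paper's schemes additionally use operator splittings, an adaptive Lanczos evaluation, and defect-based step-size control, none of which appear here.

        import numpy as np
        from scipy.sparse.linalg import expm_multiply

        def magnus_midpoint_step(psi, H, t, h):
            """Advance psi by one step of size h for i dpsi/dt = H(t) psi.

            H is a callable returning the (possibly sparse) Hamiltonian at a
            given time; the midpoint evaluation is the first Magnus term.
            """
            Hm = H(t + 0.5 * h)                      # Hamiltonian at the interval midpoint
            return expm_multiply(-1j * h * Hm, psi)  # exp(-i h H_m) applied to psi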

    A relative entropy rate method for path space sensitivity analysis of stationary complex stochastic dynamics

    We propose a new sensitivity analysis methodology for complex stochastic dynamics based on the Relative Entropy Rate. The method becomes computationally feasible at the stationary regime of the process and involves the calculation of suitable observables in path space for the Relative Entropy Rate and the corresponding Fisher Information Matrix. The stationary regime is crucial for stochastic dynamics, and here it allows us to address the sensitivity analysis of complex systems, including examples of processes with complex landscapes that exhibit metastability, non-reversible systems from a statistical mechanics perspective, and high-dimensional, spatially distributed models. All these systems typically exhibit non-Gaussian stationary probability distributions, while in the high-dimensional case histograms are impossible to construct directly. Our proposed methods bypass these challenges by relying on the direct Monte Carlo simulation of rigorously derived observables for the Relative Entropy Rate and Fisher Information in path space, rather than on the stationary probability distribution itself. We demonstrate the capabilities of the proposed methodology by focusing on two classes of problems: (a) Langevin particle systems with either reversible (gradient) or non-reversible (non-gradient) forcing, highlighting the ability of the method to carry out sensitivity analysis in non-equilibrium systems; and (b) spatially extended Kinetic Monte Carlo models, showing that the method can handle high-dimensional problems.
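
    As an illustration of the kind of path-space observable involved, the sketch below estimates a path-space Fisher Information Matrix for an overdamped Langevin model dX = b_theta(X) dt + sigma dW by an ergodic average along a stationary trajectory; the function names and this particular estimator are our assumptions for illustration, not code from the paper.

        import numpy as np

        def path_space_fim(traj, grad_drift, sigma):
            """Monte Carlo estimate of the path-space Fisher Information Matrix.

            traj       : (T, d) stationary trajectory samples.
            grad_drift : callable x -> (k, d) array of parameter sensitivities
                         d b_theta(x) / d theta.
            sigma      : constant noise intensity.
            """
            k = grad_drift(traj[0]).shape[0]
            fim = np.zeros((k, k))
            for x in traj:                 # ergodic (time) average over the stationary path
                g = grad_drift(x)
                fim += g @ g.T
            return fim / (len(traj) * sigma**2)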

    Graph similarity through entropic manifold alignment

    In this paper we decouple the problem of measuring graph similarity into two sequential steps. The first step is the linearization of the quadratic assignment problem (QAP) in a low-dimensional space, given by the embedding trick. The second step is the evaluation of an information-theoretic distributional measure, which relies on deformable manifold alignment. The proposed measure is a normalized conditional entropy, which induces a positive definite kernel when symmetrized. We use bypass entropy estimation methods to compute an approximation of the normalized conditional entropy. Our approach, which is purely topological (i.e., it does not rely on node or edge attributes, although it can potentially accommodate them as additional sources of information), is competitive with state-of-the-art graph matching algorithms as a source of correspondence-based graph similarity, but its complexity is linear instead of cubic (although the complexity of the similarity measure itself is quadratic). We also determine that the best embedding strategy for graph similarity is provided by commute time embedding, and we conjecture that this is related to its invertibility property, since the inverse of the embeddings obtained using our method can be used as a generative sampler of graph structure. The work of the first and third authors was supported by the projects TIN2012-32839 and TIN2015-69077-P of the Spanish Government. The work of the second author was supported by a Royal Society Wolfson Research Merit Award.
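
    Since commute time embedding is singled out as the best embedding strategy, here is a minimal sketch of that standard construction from the graph Laplacian (not the authors' implementation, and omitting the subsequent manifold alignment and entropy estimation): squared Euclidean distances between the returned rows equal the commute times between the corresponding nodes.

        import numpy as np

        def commute_time_embedding(A):
            """Commute-time embedding of an undirected graph (adjacency matrix A)."""
            d = A.sum(axis=1)
            vol = d.sum()                           # graph volume
            L = np.diag(d) - A                      # combinatorial Laplacian
            w, phi = np.linalg.eigh(L)              # ascending eigenvalues, w[0] ~ 0
            w, phi = w[1:], phi[:, 1:]              # drop the trivial constant eigenvector
            return np.sqrt(vol) * phi / np.sqrt(w)  # one row of coordinates per node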

    Deep learning Markov and Koopman models with physical constraints

    The long-timescale behavior of complex dynamical systems can be described by linear Markov or Koopman models in a suitable latent space. Recent variational approaches allow the latent space representation and the linear dynamical model to be optimized via unsupervised machine learning methods. Incorporation of physical constraints such as time-reversibility or stochasticity into the dynamical model has been established for linear, but not for arbitrarily nonlinear (deep learning) representations of the latent space. Here we develop theory and methods for deep learning Markov and Koopman models that can incorporate such physical constraints. We prove that the model is a universal approximator for reversible Markov processes and that it can be optimized with either maximum likelihood or the variational approach for Markov processes (VAMP). We demonstrate that the model performs equally well for equilibrium data and systematically better for biased data compared to existing approaches, thus providing a tool to study the long-timescale processes of dynamical systems.
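
    To make "optimized via the variational approach" concrete, the sketch below computes a plain VAMP-2 score from latent features chi(x_t) and chi(x_{t+tau}); in the paper this kind of score (or a likelihood) is maximized over the network weights, and additional structure enforces reversibility and stochasticity, which this unconstrained numpy version leaves out.

        import numpy as np

        def vamp2_score(chi0, chi1, eps=1e-10):
            """VAMP-2 score of a latent representation.

            chi0, chi1 : (T, m) network outputs at times t and t + tau.
            Higher scores indicate a latent space that better resolves the
            slow dynamics.
            """
            chi0 = chi0 - chi0.mean(axis=0)
            chi1 = chi1 - chi1.mean(axis=0)
            T = len(chi0)
            C00 = chi0.T @ chi0 / T + eps * np.eye(chi0.shape[1])
            C11 = chi1.T @ chi1 / T + eps * np.eye(chi1.shape[1])
            C01 = chi0.T @ chi1 / T

            def inv_sqrt(C):
                w, V = np.linalg.eigh(C)
                return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

            K = inv_sqrt(C00) @ C01 @ inv_sqrt(C11)  # whitened time-lagged correlation matrix
            return 1.0 + np.sum(K**2)                # +1 accounts for the constant singular function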

    Molecular Dynamics Simulation

    Condensed matter systems, ranging from simple fluids and solids to complex multicomponent materials and even biological matter, are governed by well-understood laws of physics, within the formal theoretical framework of quantum theory and statistical mechanics. On the relevant scales of length and time, the appropriate ‘first-principles’ description needs only the Schrödinger equation together with Gibbs averaging over the relevant statistical ensemble. However, this program cannot be carried out straightforwardly: dealing with electron correlations is still a challenge for the methods of quantum chemistry. Similarly, standard statistical mechanics makes precise explicit statements only on the properties of systems for which the many-body problem can be effectively reduced to one of independent particles or quasi-particles. [...]
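
    As a concrete counterpart to this high-level description, here is a toy sketch of the classical propagation step (velocity Verlet) on which molecular dynamics simulations rest; the force field, names, and parameters are placeholders, not taken from the chapter.

        import numpy as np

        def velocity_verlet(x, v, force, mass, dt, n_steps):
            """Propagate positions x and velocities v with the velocity Verlet scheme."""
            f = force(x)
            traj = [x.copy()]
            for _ in range(n_steps):
                v = v + 0.5 * dt * f / mass   # half kick
                x = x + dt * v                # drift
                f = force(x)                  # recompute forces
                v = v + 0.5 * dt * f / mass   # second half kick
                traj.append(x.copy())
            return np.array(traj)

        # usage: 1D harmonic oscillator with force(x) = -x
        # velocity_verlet(np.array([1.0]), np.array([0.0]), lambda x: -x, 1.0, 0.01, 1000)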

    Information Geometry

    This Special Issue of the journal Entropy, titled “Information Geometry I”, contains a collection of 17 papers concerning the foundations and applications of information geometry. Based on a geometrical interpretation of probability, information geometry has become a rich mathematical field employing the methods of differential geometry. It has numerous applications to data science, physics, and neuroscience. Presenting original research, yet written in an accessible, tutorial style, this collection of papers will be useful for scientists who are new to the field, while providing an excellent reference for the more experienced researcher. Several papers are written by authorities in the field, and topics cover the foundations of information geometry, as well as applications to statistics, Bayesian inference, machine learning, complex systems, physics, and neuroscience.