
    Combinatorial Mori-Zwanzig Theory

    We introduce a combinatorial version of Mori-Zwanzig theory and develop from it a family of self-consistent evolution equations for the correlation function or Green's function of interacting many-body systems. The core idea is to use an ansatz to rewrite the memory kernel (self-energy) of the regular Mori-Zwanzig equation as a function composition of the correlation (Green's) function. A series of algebraic and combinatorial tools, in particular the commutative and noncommutative Bell polynomials, is then used to determine the exact Taylor series expansion of the composition. The resulting combinatorial Mori-Zwanzig equation (CMZE) yields novel non-perturbative expansions of the equation of motion for the correlation (Green's) function. The structural equation for deriving such a combinatorial expansion resembles the combinatorial Dyson-Schwinger equation and may be viewed as its temporal-domain analogue. After introducing abstract word and tree representations of the CMZE, we show its wide-ranging applicability in classical, stochastic, and quantum many-body systems. In all these examples, the new self-consistent expansions obtained with the CMZE are similar to the diagrammatic skeleton expansions used in quantum many-body theory and lattice statistical field theory. We expect that this new framework can be used to calculate the correlation (Green's) function of strongly correlated/interacting many-body systems.
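    The combinatorial machinery referred to here is Faa di Bruno-type composition calculus: partial Bell polynomials organize the exact derivatives of a composed function. The short SymPy sketch below is illustrative only; the placeholder symbols f1..f4 and c1..c4 for the derivatives of a generic outer and inner function are assumptions, not the paper's notation.

    import sympy as sp

    # Faa di Bruno's formula: d^n/dt^n f(C(t)) = sum_{k=1..n} f^(k)(C) * B_{n,k}(C', C'', ...),
    # where B_{n,k} are the partial (incomplete) Bell polynomials.
    n_max = 4
    f = sp.symbols(f"f1:{n_max + 1}")   # f1..f4: derivatives of a generic outer function f
    c = sp.symbols(f"c1:{n_max + 1}")   # c1..c4: derivatives of a generic inner function C

    for n in range(1, n_max + 1):
        expr = sum(f[k - 1] * sp.bell(n, k, c[: n - k + 1]) for k in range(1, n + 1))
        print(f"d^{n}/dt^{n} f(C(t)) =", sp.expand(expr))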

    Hypoellipticity and the Mori-Zwanzig formulation of stochastic differential equations

    We develop a thorough mathematical analysis of the effective Mori-Zwanzig (EMZ) equation governing the dynamics of noise-averaged observables in stochastic differential equations driven by multiplicative Gaussian white noise. Building upon recent work on hypoelliptic operators, we prove that the EMZ memory kernel and fluctuation terms converge exponentially fast in time to a unique equilibrium state which admits an explicit representation. We apply the new theoretical results to the Langevin dynamics of a high-dimensional particle system with a smooth interaction potential.
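    For orientation, the EMZ equation builds on the classical Mori-Zwanzig (Dyson) decomposition of the dynamics into a streaming term, a memory term, and a fluctuation term. A generic form of that identity is sketched below in the assumed notation \mathcal{L} for the generator, \mathcal{P} for a projection onto resolved observables, and \mathcal{Q} = \mathcal{I} - \mathcal{P}; it is not the paper's exact EMZ statement.

    % Standard Mori-Zwanzig identity: streaming + memory + fluctuation terms
    \frac{\partial}{\partial t}\, e^{t\mathcal{L}} u_0
      = e^{t\mathcal{L}} \mathcal{P}\mathcal{L} u_0
      + \int_0^t e^{(t-s)\mathcal{L}}\, \mathcal{P}\mathcal{L}\, e^{s\mathcal{Q}\mathcal{L}} \mathcal{Q}\mathcal{L} u_0 \,\mathrm{d}s
      + e^{t\mathcal{Q}\mathcal{L}} \mathcal{Q}\mathcal{L} u_0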

    Learning Stochastic Dynamics with Statistics-Informed Neural Network

    We introduce a machine-learning framework named statistics-informed neural network (SINN) for learning stochastic dynamics from data. This new architecture was theoretically inspired by a universal approximation theorem for stochastic systems, which we introduce in this paper, and by the projection-operator formalism for stochastic modeling. We devise mechanisms for training the neural network model to reproduce the correct statistical behavior of a target stochastic process. Numerical simulation results demonstrate that a well-trained SINN can reliably approximate both Markovian and non-Markovian stochastic dynamics. We demonstrate the applicability of SINN to coarse-graining problems and to the modeling of transition dynamics. Furthermore, we show that the obtained reduced-order model can be trained on temporally coarse-grained data and hence is well suited for rare-event simulations.
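    As a rough illustration of the statistics-matching idea (a generic sketch, not the authors' SINN architecture; the Ornstein-Uhlenbeck target, the LSTM size, and all hyperparameters are assumptions), one can train a noise-driven recurrent network so that the autocorrelation of its output matches that of a target process:

    import torch

    T, batch, hidden, lags = 200, 64, 32, 40
    dt, theta, sigma = 0.05, 1.0, 0.7

    def autocorr(x, lags):
        # x: (batch, n) trajectories; mean lagged products for lags 0..lags-1
        n = x.shape[1]
        x = x - x.mean(dim=1, keepdim=True)
        return torch.stack([(x[:, : n - k] * x[:, k:]).mean() for k in range(lags)])

    # Target statistics from Ornstein-Uhlenbeck trajectories (Euler-Maruyama)
    x = torch.zeros(batch, T)
    for t in range(1, T):
        x[:, t] = x[:, t - 1] - theta * x[:, t - 1] * dt + sigma * dt**0.5 * torch.randn(batch)
    target_acf = autocorr(x, lags)

    rnn = torch.nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
    head = torch.nn.Linear(hidden, 1)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

    for step in range(500):
        noise = torch.randn(batch, T, 1)       # i.i.d. Gaussian white-noise input
        y = head(rnn(noise)[0]).squeeze(-1)    # generated trajectories, shape (batch, T)
        loss = torch.mean((autocorr(y, lags) - target_acf) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()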

    Detecting Label Noise via Leave-One-Out Cross-Validation

    We present a simple algorithm for identifying and correcting real-valued noisy labels in a mixture of clean and corrupted sample points using Gaussian process regression (GPR). A heteroscedastic noise model is employed, in which an additive Gaussian noise term with its own variance is associated with each observed label. Optimizing the noise model by maximum likelihood estimation ensures that the GPR model's predictive error in leave-one-out cross-validation is bounded by the corresponding posterior standard deviation. A multiplicative update scheme is proposed for solving the maximum likelihood estimation problem under non-negativity constraints. While we provide a proof of convergence only for certain special cases, the multiplicative scheme exhibits monotonic convergence in virtually all of our numerical experiments. We show that the presented method can pinpoint corrupted sample points and lead to better regression models when trained on synthetic and real-world scientific data sets.
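    A minimal numerical illustration of the leave-one-out idea follows (a sketch only; the paper's multiplicative maximum-likelihood update for the per-label variances is not reproduced, and the kernel, lengthscale, and fixed noise variances are assumptions). With one noise variance per label, the leave-one-out predictive mean and standard deviation of a GPR model have a closed form and can be used to flag suspect labels.

    import numpy as np

    def rbf_kernel(X, lengthscale=1.0):
        # Squared-exponential kernel on the rows of X
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(80, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
    y[::10] += 2.0                                    # inject corrupted labels

    noise_var = np.full(80, 0.05**2)                  # fixed per-point noise variances (assumption)
    K = rbf_kernel(X) + np.diag(noise_var)
    K_inv = np.linalg.inv(K)
    alpha = K_inv @ y

    # Closed-form leave-one-out predictive mean and variance (Rasmussen & Williams, Sec. 5.4.2)
    loo_var = 1.0 / np.diag(K_inv)
    loo_mean = y - alpha * loo_var
    z = np.abs(y - loo_mean) / np.sqrt(loo_var)       # standardized LOO residuals
    print("suspect indices:", np.where(z > 3.0)[0])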