
    Smoothing-based Compressed State Kalman Filter for Joint State-parameter Estimation: Applications in Reservoir Characterization and CO2 Storage Monitoring

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a non-ensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared the sCSKF with commonly used data assimilation methods and showed that, for the same computational cost, combining one-step-ahead smoothing and non-ensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
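
The one-step-ahead smoothing idea described in the abstract (the current observation corrects the previous state before re-propagation) can be sketched for a linear-Gaussian toy model; the function name and all matrices here are illustrative, not the paper's implementation:

```python
import numpy as np

def osa_smoothing_update(x_prev, P_prev, F, Q, H, R, y):
    """One-step-ahead smoothing for a linear-Gaussian model:
    the current observation y_k corrects the previous analysis
    x_{k-1}, and the smoothed state is then re-propagated to time k."""
    # Forecast from the previous analysis
    x_f = F @ x_prev
    P_f = F @ P_prev @ F.T + Q
    # Innovation and its covariance
    innov = y - H @ x_f
    S = H @ P_f @ H.T + R
    # Smoothing gain from the cross-covariance of x_{k-1} and y_k
    K_s = P_prev @ F.T @ H.T @ np.linalg.inv(S)
    x_prev_s = x_prev + K_s @ innov           # smoothed previous state
    P_prev_s = P_prev - K_s @ S @ K_s.T
    # Re-propagate the smoothed state to the current time
    return F @ x_prev_s, F @ P_prev_s @ F.T + Q
```

For a scalar system with F = H = 1, a single update pulls the re-propagated state toward the observation while the covariance stays positive.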

    The Compressed State Kalman Filter for Nonlinear State Estimation: Application to Large-Scale Reservoir Monitoring

    Reservoir monitoring aims to provide snapshots of reservoir conditions and their uncertainties to assist operation management and risk analysis. These snapshots may contain millions of state variables, e.g., pressures and saturations, which can be estimated by assimilating data in real time using the Kalman filter (KF). However, the KF has a computational cost that scales quadratically with the number of unknowns, m, due to the cost of computing and storing the covariance and Jacobian matrices, along with their products. The compressed state Kalman filter (CSKF) adapts the KF for solving large-scale monitoring problems. The CSKF uses N preselected orthogonal bases to compute an accurate rank-N approximation of the covariance that is close to the optimal spectral approximation given by SVD. The CSKF has a computational cost that scales linearly in m and uses an efficient matrix-free approach that propagates uncertainties using N + 1 forward model evaluations, where N ≪ m. Here we present a generalized CSKF algorithm for nonlinear state estimation problems such as CO2 monitoring. For simultaneous estimation of multiple types of state variables, the algorithm allows selecting bases that represent the variability of each state type. Through synthetic numerical experiments of CO2 monitoring, we show that the CSKF can reproduce the Kalman gain accurately even for large compression ratios (m/N). For a given computational cost, the CSKF uses a robust and flexible compression scheme that gives more reliable uncertainty estimates than the ensemble Kalman filter, which may display loss of ensemble variability leading to suboptimal uncertainty estimates.
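
The matrix-free propagation step can be sketched as follows, under the assumption that the covariance is kept in compressed form P ≈ U C Uᵀ for a preselected orthonormal basis U (m × N), so that the Jacobian's action on the basis is obtained with N + 1 forward runs; the function name `cskf_propagate` and the finite-difference scheme are illustrative, not the paper's code:

```python
import numpy as np

def cskf_propagate(model, x, U, C, Q_r, eps=1e-4):
    """Matrix-free covariance propagation with a compressed
    covariance P ~= U C U^T. One base run plus N perturbed runs
    approximate the Jacobian applied to each basis vector."""
    m, N = U.shape
    fx = model(x)                         # 1 base run
    JU = np.empty((m, N))
    for i in range(N):                    # N perturbed runs
        JU[:, i] = (model(x + eps * U[:, i]) - fx) / eps
    # Reduced covariance update: C_new = (U^T J U) C (U^T J U)^T + Q_r
    A = U.T @ JU
    return fx, A @ C @ A.T + Q_r
```

For a linear model x ↦ Fx the finite differences are exact and A = UᵀFU, so the reduced covariance update reproduces the full one restricted to the basis.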

    Constraint methods for determining pathways and free energy of activated processes

    Activated processes from chemical reactions up to conformational transitions of large biomolecules are hampered by barriers which are overcome only by the input of some free energy of activation. Hence, the characteristic and rate-determining barrier regions are not sufficiently sampled by usual simulation techniques. Constraints on a reaction coordinate r have turned out to be a suitable means to explore difficult pathways without changing potential function, energy or temperature. For a dense sequence of values of r, the corresponding sequence of simulations provides a pathway for the process. As only one coordinate among thousands is fixed during each simulation, the pathway essentially reflects the system's internal dynamics. From mean forces the free energy profile can be calculated to obtain reaction rates and insight into the reaction mechanism. In the last decade, theoretical tools and computing capacity have been developed to a degree where simulations give impressive qualitative insight into the processes, in quantitative agreement with experiments. Here, we give an introduction to reaction pathways and coordinates, and develop the theory of free energy as the potential of mean force. We clarify the connection between mean force and constraint force, which is the central quantity evaluated, and discuss the mass metric tensor correction. Well-behaved coordinates without tensor correction are considered. We discuss the theoretical background and practical implementation using the example of the reaction coordinate of targeted molecular dynamics simulation. Finally, we compare applications of constraint methods and other techniques developed for the same purpose, and discuss the limits of the approach.
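
The step from mean forces to the free energy profile is thermodynamic integration along r; a minimal sketch follows, with an illustrative function name and trapezoidal quadrature (sign conventions for the mean force vary in the literature, and the metric-tensor correction discussed above is omitted):

```python
import numpy as np

def pmf_from_mean_forces(r_values, mean_forces):
    """Free energy profile by thermodynamic integration:
    A(r) - A(r_0) = integral of the mean force <dV/dr> along the
    reaction coordinate, here with the trapezoidal rule."""
    increments = 0.5 * (mean_forces[1:] + mean_forces[:-1]) * np.diff(r_values)
    return np.concatenate(([0.0], np.cumsum(increments)))
```

For a harmonic test potential the integration recovers the parabola exactly, since the trapezoidal rule is exact for a linear force.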

    Dimension reduction for systems with slow relaxation

    We develop reduced, stochastic models for high dimensional, dissipative dynamical systems that relax very slowly to equilibrium and can encode long term memory. We present a variety of empirical and first principles approaches for model reduction, and build a mathematical framework for analyzing the reduced models. We introduce the notions of universal and asymptotic filters to characterize 'optimal' model reductions for sloppy linear models. We illustrate our methods by applying them to the practically important problem of modeling evaporation in oil spills. (Comment: 48 pages, 13 figures. Paper dedicated to the memory of Leo Kadanoff.)
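
As a toy illustration of reduction for slowly relaxing linear systems (the function name and selection rule are mine, not the paper's): for x' = A x with a dissipative A, one keeps the k modes whose eigenvalues have the least-negative real parts, since these decay slowest and dominate the long-time behavior:

```python
import numpy as np

def slow_mode_reduction(A, k):
    """Project a linear dissipative system x' = A x onto its k
    slowest-decaying eigenmodes (real parts closest to zero)."""
    w, V = np.linalg.eig(A)
    idx = np.argsort(-w.real)[:k]   # least-negative real part decays slowest
    return w[idx], V[:, idx]
```
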

    A nonlinear Lagrangian particle model for grains assemblies including grain relative rotations

    We formulate a discrete Lagrangian model for a set of interacting grains, which is purely elastic. The considered degrees of freedom for each grain include placement of barycenter and rotation. Further, we limit the study to the case of planar systems. A representative grain radius is introduced to express the deformation energy to be associated to relative displacements and rotations of interacting grains. We distinguish inter-grain elongation/compression energy from inter-grain shear and rotation energies, and we consider an exact finite kinematics in which grain rotations are independent of grain displacements. The equilibrium configurations of the grain assembly are calculated by minimization of deformation energy for selected imposed displacements and rotations at the boundaries. The behaviour of grain assemblies arranged in regular patterns, with and without defects, and with similar mechanical properties is simulated. The values of shear, rotation, and compression elastic moduli are varied to investigate the shapes and thicknesses of the layers where deformation energy, relative displacement, and rotations are concentrated. It is found that these concentration bands are close to the boundaries and in correspondence with grain voids. The obtained results question the possibility of introducing first-gradient continuum models for granular media and justify the development of both numerical and theoretical methods for including frictional, plasticity, and damage phenomena in the proposed model.
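
The energy-minimisation step (equilibrium found by minimising deformation energy under imposed boundary displacements) can be illustrated on a drastically simplified 1-D chain of grains with elongation springs only, ignoring rotations and shear; the function and parameters are illustrative assumptions, not the paper's model:

```python
import numpy as np

def relax_chain(n, k_el, u_left, u_right, steps=2000, lr=0.2):
    """Minimise the elongation energy E = (k_el/2) * sum (u[i+1]-u[i])^2
    of a 1-D chain of n grains by gradient descent, with displacements
    imposed at the two boundary grains."""
    u = np.zeros(n)
    u[0], u[-1] = u_left, u_right
    for _ in range(steps):
        # gradient of E with respect to the interior displacements
        grad = k_el * (2.0 * u[1:-1] - u[:-2] - u[2:])
        u[1:-1] -= lr * grad
    return u
```

For a homogeneous chain the minimiser is the linear ramp between the imposed boundary values, the 1-D analogue of a uniform strain state.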

    Using metadynamics to explore complex free-energy landscapes

    Metadynamics is an atomistic simulation technique that allows, within the same framework, acceleration of rare events and estimation of the free energy of complex molecular systems. It is based on iteratively 'filling' the potential energy of the system by a sum of Gaussians centred along the trajectory followed by a suitably chosen set of collective variables (CVs), thereby forcing the system to migrate from one minimum to the next. The power of metadynamics is demonstrated by the large number of extensions and variants that have been developed. The first scope of this Technical Review is to present a critical comparison of these variants, discussing their advantages and disadvantages. The effectiveness of metadynamics, and that of the numerous alternative methods, is strongly influenced by the choice of the CVs. If an important variable is neglected, the resulting estimate of the free energy is unreliable, and predicted transition mechanisms may be qualitatively wrong. The second scope of this Technical Review is to discuss how the CVs should be selected, how to verify whether the chosen CVs are sufficient or redundant, and how to iteratively improve the CVs using machine learning approaches.
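
The hill-deposition mechanism can be sketched in one dimension with overdamped Langevin dynamics and the coordinate itself as the CV; the function name and every parameter here are illustrative choices, not a reference implementation:

```python
import numpy as np

def metadynamics_1d(force, x0, n_steps, dt, stride, w, sigma, beta=1.0, seed=0):
    """Minimal 1-D metadynamics sketch: overdamped Langevin dynamics
    with Gaussian hills of height w and width sigma deposited every
    `stride` steps along the trajectory of the collective variable."""
    rng = np.random.default_rng(seed)
    x, centers = x0, []
    for step in range(n_steps):
        # bias force: minus the derivative of the deposited Gaussians
        bias_f = sum(w * (x - c) / sigma**2 * np.exp(-(x - c)**2 / (2 * sigma**2))
                     for c in centers)
        x += dt * (force(x) + bias_f) + np.sqrt(2.0 * dt / beta) * rng.normal()
        if step % stride == 0:
            centers.append(x)
    return x, centers
```

In the long-time limit the negative of the deposited bias approximates the free energy along the CV, which is what makes the hill record useful beyond mere acceleration.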

    Accurate multiple time step in biased molecular simulations

    Many recently introduced enhanced sampling techniques are based on biasing coarse descriptors (collective variables) of a molecular system on the fly. Sometimes the calculation of such collective variables is expensive and becomes a bottleneck in molecular dynamics simulations. An algorithm to treat smooth biasing forces within a multiple time step framework is discussed here. The implementation is simple and allows a speed up when expensive collective variables are employed. The gain can be substantial when using massively parallel or GPU-based molecular dynamics software. Moreover, a theoretical framework to assess the sampling accuracy is introduced, which can be used to assess the choice of the integration time step in both single and multiple time step biased simulations.
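
The multiple-time-step idea, applying the expensive bias force only every n inner steps and scaling it accordingly, as in impulse (r-RESPA-style) schemes, can be sketched as follows; the function name and force splitting are illustrative, not the paper's algorithm:

```python
import numpy as np

def respa_step(x, v, fast_force, bias_force, dt, n_inner, mass=1.0):
    """One outer multiple-time-step cycle: the slow (bias) force is
    applied as a half-kick impulse scaled by n_inner at the start and
    end, with n_inner velocity-Verlet steps of the fast force between."""
    v += 0.5 * n_inner * dt * bias_force(x) / mass    # outer half-kick
    for _ in range(n_inner):                          # inner fast loop
        v += 0.5 * dt * fast_force(x) / mass
        x += dt * v
        v += 0.5 * dt * fast_force(x) / mass
    v += 0.5 * n_inner * dt * bias_force(x) / mass    # outer half-kick
    return x, v
```

With a zero bias the scheme reduces to plain velocity Verlet, so it conserves energy to the usual second-order accuracy.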

    Gamma estimator of Jarzynski equality for recovering binding energies from noisy dynamic data sets

    A fundamental problem in thermodynamics is the recovery of macroscopic equilibrated interaction energies from experimentally measured single-molecular interactions. The Jarzynski equality forms a theoretical basis for recovering the free energy difference between two states from the exponentially averaged work performed to switch the states. In practice, the exponentially averaged work value is estimated as the mean of a finite sample. Numerical simulations have shown that samples having thousands of measurements are not large enough for the mean to converge when the fluctuation of external work is above 4 kBT, which is easily observable in biomolecular interactions. We report the first example of a statistical gamma work distribution applied to single molecule pulling experiments. The Gibbs free energy of surface adsorption can be accurately evaluated even for a small sample size. The values obtained are comparable to those derived from multi-parametric surface plasmon resonance measurements and molecular dynamics simulations.
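
The gamma estimator itself is not reproduced here, but the standard Jarzynski exponential-average estimator it improves upon, ΔF = -kBT ln⟨exp(-W/kBT)⟩, can be sketched with a log-sum-exp trick for numerical stability (the function name is illustrative):

```python
import numpy as np

def jarzynski_free_energy(work, kBT=1.0):
    """Jarzynski estimator: dF = -kBT * ln <exp(-W/kBT)> over repeated
    nonequilibrium work measurements, computed via log-sum-exp so that
    large work values do not underflow the exponential."""
    w = np.asarray(work) / kBT
    wmin = w.min()
    return kBT * (wmin - np.log(np.mean(np.exp(-(w - wmin)))))
```

For Gaussian work with mean mu and variance sigma^2, the estimate approaches mu - sigma^2 / (2 kBT) for large samples; the slow convergence of this sample mean at large work fluctuations is exactly the problem the gamma estimator targets.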