
    Adaptive Thermostats for Noisy Gradient Systems

    We study numerical methods for sampling probability measures in high dimension where the underlying model is only approximately identified with a gradient system. Extended stochastic dynamical methods are discussed which have application to multiscale models, nonequilibrium molecular dynamics, and Bayesian sampling techniques arising in emerging machine learning applications. In addition to providing a more comprehensive discussion of the foundations of these methods, we propose a new numerical method for the adaptive Langevin/stochastic gradient Nosé–Hoover thermostat that achieves a dramatic improvement in numerical efficiency over the most popular stochastic gradient methods reported in the literature. We also demonstrate that the newly established method inherits a superconvergence property (fourth-order convergence to the invariant measure for configurational quantities) recently demonstrated in the setting of Langevin dynamics. Our findings are verified by numerical experiments.
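    To make the mechanics concrete, here is a minimal Python sketch of a stochastic gradient Nosé–Hoover thermostat of the kind this abstract builds on. The simple Euler-type discretization and the toy Gaussian target with artificially noisy gradients are illustrative assumptions; the paper's contribution is precisely a more accurate splitting scheme than the one shown here.

# Minimal sketch of a stochastic gradient Nose-Hoover thermostat (SGNHT).
# The Euler-type update and the toy target below are assumptions for
# illustration, not the authors' benchmark setup.
import numpy as np

def sgnht(grad_noisy, theta0, n_steps=10_000, h=1e-2, A=1.0):
    """Sample with an SGNHT thermostat using a simple Euler-type update.

    grad_noisy(theta) returns a noisy estimate of grad U(theta); A is the
    injected-noise amplitude. The thermostat variable xi adapts the
    friction so that the target temperature kT = 1 is still sampled even
    when the gradient noise level is unknown.
    """
    d = theta0.size
    theta, p, xi = theta0.copy(), np.random.randn(d), A
    samples = []
    for _ in range(n_steps):
        p += -h * grad_noisy(theta) - h * xi * p \
             + np.sqrt(2.0 * A * h) * np.random.randn(d)
        theta += h * p
        xi += h * (p @ p / d - 1.0)   # adapt friction toward kT = 1
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: standard Gaussian target with artificially noisy gradients.
rng = np.random.default_rng(0)
noisy_grad = lambda th: th + 0.5 * rng.standard_normal(th.shape)
out = sgnht(noisy_grad, np.zeros(2))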

    Hamiltonian ABC

    Approximate Bayesian computation (ABC) is a powerful and elegant framework for performing inference in simulation-based models. However, due to the difficulty of scaling likelihood estimates, ABC remains useful only for relatively low-dimensional problems. We introduce Hamiltonian ABC (HABC), a set of likelihood-free algorithms that apply recent advances in scaling Bayesian learning using Hamiltonian Monte Carlo (HMC) and stochastic gradients. We find that a small number of forward simulations can effectively approximate the ABC gradient, allowing Hamiltonian dynamics to efficiently traverse parameter spaces. We also describe a new, simple yet general approach for incorporating random seeds into the state of the Markov chain, further reducing the random-walk behavior of HABC. We demonstrate HABC on several typical ABC problems, and show that HABC samples comparably to regular Bayesian inference using true gradients on a high-dimensional problem from machine learning. (Comment: submission to UAI 2015.)
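    A hedged Python sketch of the core idea follows: approximate the gradient of a surrogate ABC log-likelihood by finite differences of forward simulations run under common random numbers (fixed seeds), then take leapfrog steps on that surrogate. The simulator interface, the squared-discrepancy surrogate, and the finite-difference estimator are assumptions for illustration; the paper considers several gradient estimators and treats the seeds as part of the chain's state.

# Sketch of a Hamiltonian ABC step under stated assumptions; no
# Metropolis correction is shown, in the spirit of stochastic-gradient HMC.
import numpy as np

def abc_loglik(theta, y_obs, simulate, seeds, eps=0.5):
    # Crude surrogate for a kernel ABC log-likelihood: minus the average
    # squared discrepancy between simulations and the observed data.
    sims = np.array([simulate(theta, s) for s in seeds])
    return -0.5 * np.mean((sims - y_obs) ** 2) / eps**2

def grad_fd(theta, y_obs, simulate, seeds, h=1e-4):
    # Finite-difference gradient; reusing the same seeds on both sides
    # keeps the simulator noise correlated and the estimate low-variance.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = h
        g[i] = (abc_loglik(theta + e, y_obs, simulate, seeds)
                - abc_loglik(theta - e, y_obs, simulate, seeds)) / (2 * h)
    return g

def habc_step(theta, y_obs, simulate, n_leap=10, step=0.05, n_seeds=5):
    seeds = np.random.randint(0, 2**31, size=n_seeds)  # refresh seeds
    p = np.random.randn(theta.size)
    for _ in range(n_leap):  # leapfrog on the approximate ABC gradient
        p += 0.5 * step * grad_fd(theta, y_obs, simulate, seeds)
        theta = theta + step * p
        p += 0.5 * step * grad_fd(theta, y_obs, simulate, seeds)
    return theta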

    Least-biased correction of extended dynamical systems using observational data

    We consider dynamical systems evolving near an equilibrium statistical state where the interest is in modelling long-term behavior that is consistent with thermodynamic constraints. We adjust the distribution using an entropy-optimizing formulation that can be computed on-the-fly, making possible partial corrections using incomplete information, for example measured data or data computed from a different model (or the same model at a different scale). We employ a thermostatting technique to sample the target distribution with the aim of capturing relevant statistical features while introducing only a mild dynamical perturbation. The method is tested for a point vortex fluid model on the sphere, and we demonstrate both convergence of equilibrium quantities and the ability of the formulation to balance stationary- and transient-regime errors. (Comment: 27 pages.)
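    The entropy-optimizing formulation has a standard least-biased (maximum-entropy) closed form, reconstructed here under the assumption that the correction is posed as minimizing relative entropy against a reference density \rho_0 subject to measured moment constraints with values c_i:

\rho^*(x) \propto \rho_0(x)\,\exp\!\Big(-\sum_i \lambda_i\, C_i(x)\Big),
\qquad \int C_i(x)\,\rho^*(x)\,dx = c_i,

    with the Lagrange multipliers \lambda_i fixed by the observed values c_i; the thermostatting step then samples \rho^* instead of \rho_0.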

    High-Order Stochastic Gradient Thermostats for Bayesian Learning of Deep Models

    Learning in deep models using Bayesian methods has generated significant attention recently, largely because modern Bayesian methods can deliver scalable learning and inference while maintaining a measure of uncertainty in the model parameters. Stochastic gradient MCMC (SG-MCMC) algorithms are a family of diffusion-based sampling methods for large-scale Bayesian learning. Among them, multivariate stochastic gradient Nosé–Hoover thermostats (mSGNHT) augment each parameter of interest with a momentum and a thermostat variable so that the stationary distribution matches the target posterior. As the number of variables in the continuous-time diffusion increases, its numerical approximation error becomes a practical bottleneck, so a better numerical integrator is desirable. To this end, we propose an efficient symmetric splitting integrator for mSGNHT in place of the traditional Euler integrator. We demonstrate that the proposed scheme is more accurate, more robust, and faster to converge, properties that are desirable in Bayesian deep learning. Extensive experiments on two canonical models and their deep extensions demonstrate that the proposed scheme improves general Bayesian posterior sampling, particularly for deep models. (Comment: AAAI 2016.)
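    As an illustration of the contrast with a plain Euler update, the following Python sketch performs one symmetric (Strang) splitting step for a multivariate SGNHT with a per-parameter thermostat variable. The palindromic sub-step ordering shown is one plausible choice, not necessarily the exact scheme of the paper.

# One symmetric-splitting mSGNHT step under stated assumptions (kT = 1,
# unit masses); each parameter carries its own thermostat variable xi[i].
import numpy as np

def msgnht_split_step(theta, p, xi, grad_noisy, h, A=1.0):
    # Palindromic order: half thermostat, half friction, half kick,
    # full drift, then the mirror image of the first three sub-steps.
    xi += 0.5 * h * (p * p - 1.0)
    p  *= np.exp(-0.5 * h * xi)                       # friction, half step
    p  += -0.5 * h * grad_noisy(theta) \
          + np.sqrt(A * h) * np.random.randn(p.size)  # kick + noise
    theta += h * p                                    # drift, full step
    p  += -0.5 * h * grad_noisy(theta) \
          + np.sqrt(A * h) * np.random.randn(p.size)
    p  *= np.exp(-0.5 * h * xi)
    xi += 0.5 * h * (p * p - 1.0)
    return theta, p, xi

    Note that the two half-kicks each inject noise of variance A*h, so the total injected variance per step is 2*A*h, matching a single Euler noise increment.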

    On the effect of the thermostat in non-equilibrium molecular dynamics simulations

    The numerical investigation of the statics and dynamics of nonequilibrium systems in general, and of systems under shear flow in particular, has become more and more common. However, not all the numerical methods developed to simulate equilibrium systems can be successfully adapted to out-of-equilibrium cases. This is especially true for thermostats: even though thermostats developed for equilibrium conditions sometimes display good agreement with rheology experiments, their performance rapidly degrades beyond weak dissipation and small shear rates. Here we gauge the relative performance of three thermostats, Langevin, dissipative particle dynamics (DPD), and Bussi-Donadio-Parrinello, under varying parameters and external conditions. We compare their effectiveness by looking at different observables and clearly demonstrate that choosing the right thermostat (and its parameters) requires a careful evaluation of, at least, the temperature, density, and velocity profiles. We also show that small modifications of the Langevin and DPD thermostats greatly enhance their performance over a wide range of parameters. (Comment: 13 pages, 9 figures.)
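    For concreteness, here is a hedged Python sketch of one such modification: a Langevin thermostat that acts only on the peculiar velocity (velocity minus the local streaming profile), assuming a linear shear profile u_x = gamma_dot * y, unit masses, and kT = 1. Whether this matches the specific modification studied in the paper is an assumption; it illustrates the general fix of not thermostatting the flow itself.

# Exact Ornstein-Uhlenbeck update of the peculiar velocities under a
# linear shear profile; the profile and parameters are assumptions.
import numpy as np

def langevin_thermostat(v, y, gamma_dot, friction, h, kT=1.0):
    u = np.zeros_like(v)
    u[:, 0] = gamma_dot * y              # streaming velocity u_x = gd * y
    pec = v - u                          # peculiar (thermal) velocity
    c = np.exp(-friction * h)
    pec = c * pec + np.sqrt((1 - c**2) * kT) * np.random.randn(*pec.shape)
    return u + pec                       # add the streaming flow back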