58,705 research outputs found

    Near-optimal protocols in complex nonequilibrium transformations

    The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. We describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased towards a low average dissipation. We show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon. Comment: 6 pages and 3 figures plus 4 pages and 5 figures of supplemental material.
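    The biased protocol ensemble described above can be illustrated with a minimal sketch. Here the paper's 2D Ising setting is replaced, for illustration only, by a toy assumption: a control parameter driven from 0 to 1 in N steps, with dissipated work taken as the sum of squared increments (the linear-response form, minimized by the uniform-speed protocol). A Metropolis walk in protocol space then weights each protocol by exp(-W/eps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setting (an assumption, not the Ising example):
# a control parameter lam driven from 0 to 1 in N steps.  In linear response
# the mean dissipated work of each step scales as (delta lam)^2, so we take
# W[protocol] = sum of squared increments, minimized by equal steps.
N = 10

def dissipation(lam):
    return np.sum(np.diff(lam) ** 2)

def sample_protocols(n_samples, eps=0.02, step=0.05, burn=500, thin=10):
    """Metropolis walk in protocol space, weighting protocols by exp(-W/eps)."""
    lam = np.linspace(0.0, 1.0, N + 1)   # start from the linear protocol
    w = dissipation(lam)
    out = []
    for it in range(burn + n_samples * thin):
        i = rng.integers(1, N)           # perturb one interior control point
        prop = lam.copy()
        prop[i] += step * rng.normal()
        w_prop = dissipation(prop)
        if rng.random() < np.exp(-(w_prop - w) / eps):
            lam, w = prop, w_prop
        if it >= burn and (it - burn) % thin == 0:
            out.append(w)
    return np.array(out)

ws = sample_protocols(2000)
print(f"mean dissipation {ws.mean():.3f} (optimum {1.0 / N:.3f})")
```

    Lowering eps concentrates the ensemble on near-optimal protocols; raising it reveals the diversity of low-dissipation protocols that the abstract describes.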

    Reweighting for Nonequilibrium Markov Processes Using Sequential Importance Sampling Methods

    We present a generic reweighting method for nonequilibrium Markov processes. With nonequilibrium Monte Carlo simulations at a single temperature, one calculates the time evolution of physical quantities at different temperatures, which greatly saves computational time. Using the dynamical finite-size scaling analysis for the nonequilibrium relaxation, one can study the dynamical properties of phase transitions together with the equilibrium ones. We demonstrate the procedure for the Ising model with the Metropolis algorithm, but the present formalism is general and can be applied to a variety of systems as well as with different Monte Carlo update schemes. Comment: accepted for publication in Phys. Rev. E (Rapid Communications).
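    The core idea — simulate trajectories at one temperature and reweight each by the ratio of its transition probabilities under the target temperature — can be sketched on a system far simpler than the paper's Ising demonstration. The example below is an assumption for illustration: a two-level system relaxing under Metropolis dynamics from its excited state, with trajectories generated at beta0 and reweighted to beta1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal illustration (not the paper's Ising demonstration): a two-level
# system with energies E = (0, 1), evolved by Metropolis dynamics at inverse
# temperature beta_sim, then reweighted to beta_target by multiplying each
# trajectory by the ratio of its transition probabilities.
beta0, beta1 = 1.0, 1.5
L, M = 10, 20000                     # trajectory length, number of trajectories

def flip_prob(state, beta):
    """Metropolis probability of accepting the flip to the other level."""
    return 1.0 if state == 1 else np.exp(-beta)

def run(beta_sim, beta_target):
    energies = np.empty(M)
    weights = np.empty(M)
    for m in range(M):
        s, w = 1, 1.0                # every trajectory starts excited
        for _ in range(L):
            a_sim = flip_prob(s, beta_sim)
            a_tgt = flip_prob(s, beta_target)
            if rng.random() < a_sim:             # flip accepted
                w *= a_tgt / a_sim
                s = 1 - s
            else:                                # flip rejected
                w *= (1.0 - a_tgt) / (1.0 - a_sim)
        energies[m], weights[m] = s, w
    return np.sum(weights * energies) / np.sum(weights)

e_reweighted = run(beta0, beta1)     # simulate at beta0, reweight to beta1
e_direct = run(beta1, beta1)         # reference: simulate directly at beta1
print(f"<E(t={L})> reweighted {e_reweighted:.3f} vs direct {e_direct:.3f}")
```

    As with all sequential importance sampling, the weight variance grows with trajectory length, which is why the paper pairs the method with finite-size scaling rather than arbitrarily long runs.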

    Boosting Monte Carlo simulations of spin glasses using autoregressive neural networks

    Autoregressive neural networks are emerging as a powerful computational tool to solve relevant problems in classical and quantum mechanics. One of their appealing functionalities is that, after they have learned a probability distribution from a dataset, they allow exact and efficient sampling of typical system configurations. Here we employ a neural autoregressive distribution estimator (NADE) to boost Markov chain Monte Carlo (MCMC) simulations of a paradigmatic classical model of spin-glass theory, namely the two-dimensional Edwards-Anderson Hamiltonian. We show that a NADE can be trained to accurately mimic the Boltzmann distribution using unsupervised learning from system configurations generated using standard MCMC algorithms. The trained NADE is then employed as a smart proposal distribution for the Metropolis-Hastings algorithm. This allows us to perform efficient MCMC simulations, which provide unbiased results even if the expectation value corresponding to the probability distribution learned by the NADE is not exact. Notably, we implement a sequential tempering procedure, whereby a NADE trained at a higher temperature is iteratively employed as proposal distribution in a MCMC simulation run at a slightly lower temperature. This allows one to efficiently simulate the spin-glass model even in the low-temperature regime, avoiding the divergent correlation times that plague MCMC simulations driven by local-update algorithms. Furthermore, we show that the NADE-driven simulations quickly sample ground-state configurations, paving the way to their future utilization to tackle binary optimization problems. Comment: 13 pages, 14 figures.
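    The key mechanism — a learned model used as a global proposal inside Metropolis-Hastings, with the Hastings correction guaranteeing unbiased sampling even when the model is imperfect — can be sketched without a trained network. Below, a fixed independent Bernoulli model stands in for the NADE (an assumption for illustration; a real NADE factorizes q(x) as a product of conditionals), targeting a tiny 4-spin Ising ring whose exact averages are available by enumeration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Metropolis-Hastings with a model-based global proposal.  A fixed independent
# Bernoulli(p_up) model stands in for the trained NADE (an assumption for
# illustration), applied to a 4-spin Ising ring at beta = 0.5.
n, beta, p_up = 4, 0.5, 0.6

def energy(s):
    return -np.sum(s * np.roll(s, 1))          # periodic nearest neighbours

def log_q(s):
    """Log-probability of configuration s under the stand-in proposal model."""
    return np.sum(np.where(s == 1, np.log(p_up), np.log(1.0 - p_up)))

def propose():
    return np.where(rng.random(n) < p_up, 1, -1)

def mh_chain(n_steps):
    s = propose()
    total_e = 0.0
    for _ in range(n_steps):
        s_new = propose()
        # Hastings ratio: pi(x') q(x) / (pi(x) q(x')) corrects proposal bias.
        log_alpha = (-beta * energy(s_new) + log_q(s)
                     - (-beta * energy(s) + log_q(s_new)))
        if rng.random() < np.exp(log_alpha):
            s = s_new
        total_e += energy(s)
    return total_e / n_steps

# Exact average energy by enumerating all 2^4 configurations.
states = np.array(list(itertools.product([-1, 1], repeat=n)))
e_all = np.array([energy(s) for s in states])
boltz = np.exp(-beta * e_all)
e_exact = np.sum(e_all * boltz) / np.sum(boltz)

e_mcmc = mh_chain(50000)
print(f"<E> exact {e_exact:.3f}, model-proposal MH {e_mcmc:.3f}")
```

    The better the proposal approximates the Boltzmann distribution, the higher the acceptance rate; the correctness of the estimate, however, never depends on the model being exact — which is the property the abstract emphasizes.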

    Hamiltonian Monte Carlo Without Detailed Balance

    We present a method for performing Hamiltonian Monte Carlo that largely eliminates sample rejection for typical hyperparameters. In situations that would normally lead to rejection, a longer trajectory is instead computed until a new state is reached that can be accepted. This is achieved using Markov chain transitions that satisfy the fixed-point equation but do not satisfy detailed balance. The resulting algorithm significantly suppresses the random-walk behavior and wasted function evaluations that are typically the consequence of update rejection. We demonstrate an improvement in mixing time of more than a factor of two on three test problems. We release the source code as Python and MATLAB packages. Comment: accepted conference submission to ICML 2014 and also featured in a special edition of JMLR. Since updated to include additional literature citations.
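    For context, the following sketch shows the conventional, detailed-balance HMC baseline that the paper modifies, on a standard normal target (an assumption for illustration, not the paper's test problems). The rejection branch marked in the comments is exactly the step the paper replaces with a longer trajectory:

```python
import numpy as np

rng = np.random.default_rng(3)

# Conventional HMC with leapfrog integration and a Metropolis accept/reject
# step, targeting a standard normal with potential U(x) = x^2 / 2.  The paper
# replaces the rejection branch below with an extended trajectory satisfying
# the fixed-point equation; this sketch shows only the standard version.

def grad_u(x):
    return x                                   # gradient of U(x) = x^2 / 2

def leapfrog(x, p, eps, n_steps):
    p = p - 0.5 * eps * grad_u(x)              # initial half kick
    for _ in range(n_steps - 1):
        x = x + eps * p
        p = p - eps * grad_u(x)
    x = x + eps * p
    p = p - 0.5 * eps * grad_u(x)              # final half kick
    return x, p

def hmc(n_samples, eps=0.2, n_steps=10):
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()                       # resample momentum
        x_new, p_new = leapfrog(x, p, eps, n_steps)
        h_old = 0.5 * x ** 2 + 0.5 * p ** 2
        h_new = 0.5 * x_new ** 2 + 0.5 * p_new ** 2
        if rng.random() < np.exp(h_old - h_new):
            x = x_new                          # accept
        # else: the rejection the paper's method avoids
        samples.append(x)
    return np.array(samples)

xs = hmc(20000)
print(f"mean {xs.mean():.3f}, var {xs.var():.3f} (target: 0, 1)")
```

    Each rejection leaves the chain at its previous state and wastes the whole trajectory's gradient evaluations, which is the cost the paper's non-reversible transitions are designed to recover.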