
    Boosting Monte Carlo simulations of spin glasses using autoregressive neural networks

    Autoregressive neural networks are emerging as a powerful computational tool for solving relevant problems in classical and quantum mechanics. One of their appealing functionalities is that, after they have learned a probability distribution from a dataset, they allow exact and efficient sampling of typical system configurations. Here we employ a neural autoregressive distribution estimator (NADE) to boost Markov chain Monte Carlo (MCMC) simulations of a paradigmatic classical model of spin-glass theory, namely the two-dimensional Edwards-Anderson Hamiltonian. We show that a NADE can be trained to accurately mimic the Boltzmann distribution using unsupervised learning from system configurations generated with standard MCMC algorithms. The trained NADE is then employed as a smart proposal distribution for the Metropolis-Hastings algorithm. This allows us to perform efficient MCMC simulations that provide unbiased results even when the distribution learned by the NADE does not exactly reproduce the Boltzmann distribution. Notably, we implement a sequential tempering procedure, whereby a NADE trained at a higher temperature is iteratively employed as the proposal distribution in an MCMC simulation run at a slightly lower temperature. This allows one to efficiently simulate the spin-glass model even in the low-temperature regime, avoiding the divergent correlation times that plague MCMC simulations driven by local-update algorithms. Furthermore, we show that the NADE-driven simulations quickly sample ground-state configurations, paving the way for their future use in tackling binary optimization problems.
    Comment: 13 pages, 14 figures
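    To make the proposal mechanism concrete, here is a minimal Python sketch of one independence Metropolis-Hastings step driven by a learned autoregressive model. The callables sample_q and log_q abstract the trained NADE and energy_fn the Edwards-Anderson energy; these are illustrative names assumed for the sketch, not the paper's API.

        import numpy as np

        def nade_metropolis_step(spins, energy_fn, log_q, sample_q, beta, rng):
            """One independence Metropolis-Hastings step with a learned proposal.

            spins     : current configuration, array of +/-1 spins
            energy_fn : Edwards-Anderson energy of a configuration
            log_q     : log-probability under the trained NADE (assumed API)
            sample_q  : draws one configuration from the NADE (assumed API)
            beta      : inverse temperature 1/T
            """
            proposal = sample_q(rng)
            # Independence-sampler acceptance ratio with target p ~ exp(-beta E):
            #   alpha = min(1, p(x') q(x) / (p(x) q(x')))
            log_alpha = (-beta * (energy_fn(proposal) - energy_fn(spins))
                         + log_q(spins) - log_q(proposal))
            if np.log(rng.random()) < log_alpha:
                return proposal, True   # move accepted
            return spins, False         # move rejected

    In the sequential tempering procedure described in the abstract, configurations accepted at one temperature would be used to retrain the NADE before the simulation moves to a slightly lower temperature.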

    Incremental and Decremental Maintenance of Planar Width

    We present an algorithm for maintaining the width of a planar point set dynamically, as points are inserted or deleted. Our algorithm takes time O(k n^ε) per update, where k is the amount of change the update causes in the convex hull, n is the number of points in the set, and ε is an arbitrarily small constant. For incremental or decremental update sequences, the amortized time per update is O(n^ε).
    Comment: 7 pages, 2 figures. A preliminary version of this paper was presented at the 10th ACM/SIAM Symp. Discrete Algorithms (SODA '99); this is the journal version, to appear in J. Algorithms.
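    For reference, the quantity being maintained is the width: the minimum distance between two parallel lines enclosing the point set. Below is a minimal static Python sketch that brute-forces over convex hull edges (for a convex polygon the minimum is attained with one supporting line flush against an edge). This recomputes from scratch; the paper's contribution is maintaining the value under insertions and deletions much faster than that.

        import numpy as np
        from scipy.spatial import ConvexHull

        def planar_width(points):
            """Width of a planar point set (an (n, 2) array): the minimum,
            over all directions, of the distance between two parallel
            supporting lines. Checks every hull edge, since the minimum
            is attained with one line flush against an edge."""
            hull = points[ConvexHull(points).vertices]   # hull vertices, CCW
            best = np.inf
            for i in range(len(hull)):
                a, b = hull[i], hull[(i + 1) % len(hull)]
                edge = b - a
                normal = np.array([-edge[1], edge[0]]) / np.linalg.norm(edge)
                # farthest hull vertex from the line through edge (a, b)
                best = min(best, np.abs((hull - a) @ normal).max())
            return best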

    Sequential Bayesian inference for implicit hidden Markov models and current limitations

    Hidden Markov models can describe time series arising in various fields of science, by treating the data as noisy measurements of an arbitrarily complex Markov process. Sequential Monte Carlo (SMC) methods have become standard tools for estimating the hidden Markov process given the observations and a fixed parameter value. We review some of the recent developments that allow the inclusion of parameter uncertainty as well as model uncertainty. The shortcomings of the currently available methodology are emphasised from an algorithmic complexity perspective. The statistical objects of interest for time series analysis are illustrated on a toy "Lotka-Volterra" model used in population ecology. Some open challenges are discussed regarding the scalability of the reviewed methodology to longer time series, higher-dimensional state spaces and more flexible models.
    Comment: Review article written for ESAIM: Proceedings and Surveys. 25 pages, 10 figures.
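    As a concrete illustration of the SMC machinery such reviews build on, here is a minimal Python sketch of a bootstrap particle filter for a generic hidden Markov model with fixed parameters. The callables init, transition and log_lik are placeholders for a user-supplied model (e.g. a stochastic Lotka-Volterra process), not an API from the paper.

        import numpy as np

        def bootstrap_particle_filter(y, n_particles, init, transition, log_lik, rng):
            """Bootstrap particle filter: propagate particles through the
            Markov model, weight them by the observation likelihood, resample.
            Returns filtering means and a log marginal likelihood estimate."""
            x = init(n_particles, rng)                   # (N, dim) particles
            log_evidence, means = 0.0, []
            for t in range(len(y)):
                x = transition(x, rng)                   # x_t ~ f(. | x_{t-1})
                logw = log_lik(y[t], x)                  # log p(y_t | x_t) per particle
                m = logw.max()
                w = np.exp(logw - m)
                log_evidence += m + np.log(w.mean())     # log p(y_t | y_{1:t-1}) estimate
                w /= w.sum()
                means.append(w @ x)                      # filtering mean E[x_t | y_{1:t}]
                idx = rng.choice(n_particles, size=n_particles, p=w)
                x = x[idx]                               # multinomial resampling
            return np.array(means), log_evidence

    Including parameter uncertainty would wrap such a filter inside an outer-level algorithm such as particle MCMC, which is where the computational limitations discussed in the review arise.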