Boosting Monte Carlo simulations of spin glasses using autoregressive neural networks
Autoregressive neural networks are emerging as a powerful computational
tool to solve relevant problems in classical and quantum mechanics. One of
their appealing functionalities is that, after they have learned a probability
distribution from a dataset, they allow exact and efficient sampling of typical
system configurations. Here we employ a neural autoregressive distribution
estimator (NADE) to boost Markov chain Monte Carlo (MCMC) simulations of a
paradigmatic classical model of spin-glass theory, namely the two-dimensional
Edwards-Anderson Hamiltonian. We show that a NADE can be trained to accurately
mimic the Boltzmann distribution using unsupervised learning from system
configurations generated using standard MCMC algorithms. The trained NADE is
then employed as a smart proposal distribution for the Metropolis-Hastings
algorithm. This allows us to perform efficient MCMC simulations that provide
unbiased results even if the probability distribution learned by the NADE only
approximates the true Boltzmann distribution. Notably, we implement a
sequential tempering procedure, whereby a NADE trained at a higher temperature
is iteratively employed as the proposal distribution in an MCMC simulation run at a
slightly lower temperature. This allows one to efficiently simulate the
spin-glass model even in the low-temperature regime, avoiding the divergent
correlation times that plague MCMC simulations driven by local-update
algorithms. Furthermore, we show that the NADE-driven simulations quickly
sample ground-state configurations, paving the way to their future utilization
to tackle binary optimization problems.
Comment: 13 pages, 14 figures
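As a rough illustration of the proposal mechanism this abstract describes, below is a minimal Python sketch of one NADE-driven Metropolis-Hastings step for the 2D Edwards-Anderson model. The model.sample() and model.log_prob() interfaces, and the coupling-array shapes, are hypothetical stand-ins for the trained autoregressive network; the acceptance rule is the standard independence-sampler correction that keeps the chain unbiased even when the learned distribution is only approximate.

```python
import numpy as np

def ea_energy(spins, J_h, J_v):
    """Energy of a 2D Edwards-Anderson configuration with open boundaries.

    spins: (L, L) array of +/-1 values.
    J_h:   (L, L-1) horizontal couplings; J_v: (L-1, L) vertical couplings.
    """
    energy = -np.sum(J_h * spins[:, :-1] * spins[:, 1:])
    energy -= np.sum(J_v * spins[:-1, :] * spins[1:, :])
    return energy

def nade_mh_step(spins, log_q_old, beta, model, J_h, J_v, rng):
    """One Metropolis-Hastings step with the trained network as a global,
    configuration-independent proposal (model.* names are illustrative)."""
    proposal = model.sample()            # propose an entire configuration
    log_q_new = model.log_prob(proposal)
    delta_e = ea_energy(proposal, J_h, J_v) - ea_energy(spins, J_h, J_v)
    # Independence-sampler acceptance for the target pi(s) ~ exp(-beta*E(s)):
    # alpha = min(1, pi(s') q(s) / (pi(s) q(s')))
    log_alpha = -beta * delta_e + log_q_old - log_q_new
    if np.log(rng.random()) < log_alpha:
        return proposal, log_q_new       # move accepted
    return spins, log_q_old              # move rejected
```

Because the proposal does not depend on the current configuration, an accepted move replaces the whole spin configuration at once, which is what sidesteps the long correlation times of local single-spin updates.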
Generalized Simulated Annealing
We propose a new stochastic algorithm (generalized simulated annealing) for
computationally finding the global minimum of a given (not necessarily convex)
energy/cost function defined in a continuous D-dimensional space. This
algorithm recovers, as particular cases, the so-called classical ("Boltzmann
machine") and fast ("Cauchy machine") simulated annealings, and can be quicker
than both. Key-words: simulated annealing; nonconvex optimization; gradient
descent; generalized statistical mechanics.
Comment: 13 pages, LaTeX, 4 figures available upon request from the authors
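The claim that the generalized scheme contains the Boltzmann and Cauchy machines as special cases can be made concrete with a toy annealer. The Python sketch below uses Cauchy-distributed visiting steps (the "fast annealing" special case) together with the Tsallis acceptance rule, which reduces to the Boltzmann rule exp(-dE/T) as qa approaches 1; the cooling schedule and all names are illustrative, not the paper's full q-dependent visiting distribution.

```python
import numpy as np

def tsallis_acceptance(delta_e, temp, qa):
    """Generalized (Tsallis) acceptance probability for an uphill move."""
    if delta_e <= 0.0:
        return 1.0
    if abs(qa - 1.0) < 1e-12:            # classical Boltzmann limit
        return np.exp(-delta_e / temp)
    arg = 1.0 + (qa - 1.0) * delta_e / temp
    return arg ** (-1.0 / (qa - 1.0)) if arg > 0.0 else 0.0

def gsa_minimize(cost, x0, n_steps=10_000, qa=1.5, t0=10.0, seed=0):
    """Toy annealer with heavy-tailed (Cauchy-like) visiting steps, a
    sketch of the generalized scheme rather than the paper's algorithm."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e = cost(x)
    best_x, best_e = x.copy(), e
    for step in range(1, n_steps + 1):
        temp = t0 / step                 # simple illustrative cooling
        trial = x + temp * rng.standard_cauchy(x.shape)
        e_trial = cost(trial)
        if rng.random() < tsallis_acceptance(e_trial - e, temp, qa):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e
```

For example, gsa_minimize(lambda v: np.sum(v * v), np.full(4, 3.0)) drives a 4-dimensional quadratic toward its global minimum at the origin; heavier-tailed jumps at high temperature let the walker escape local minima of nonconvex costs.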
Evolving stochastic learning algorithm based on Tsallis entropic index
In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight-decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that the new evolving stochastic learning algorithm does improve convergence speed, making learning faster than the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and the temperature T on the convergence speed and stability of the proposed method.
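A schematic Python sketch of the kind of update the abstract describes follows, assuming the standard q-Gaussian-to-Student-t mapping for the q-dependent noise; the paper's exact noise form and its time-dependent T-q formula are not reproduced here, so every name and constant below is illustrative.

```python
import numpy as np

def q_noise(shape, q, temp, rng):
    """Heavy-tailed noise whose tail index tracks the nonextensive q,
    via the q-Gaussian <-> Student-t mapping nu = (3-q)/(q-1), valid for
    1 < q < 3. An illustrative stand-in, not the paper's exact term."""
    if q <= 1.0:
        return np.sqrt(temp) * rng.standard_normal(shape)  # Gaussian limit
    nu = (3.0 - q) / (q - 1.0)
    return np.sqrt(temp) * rng.standard_t(nu, shape)

def evolving_update(w, grad, stepsizes, temp, q, decay=1e-4, rng=None):
    """One hybrid update: a deterministic step with a per-weight adaptive
    stepsize, plus temperature-scaled q-noise damped by a weight-decay
    term. Higher temp / larger q makes the search more stochastic;
    the paper additionally links temp and q through a time-dependent
    formula, which is omitted in this sketch."""
    rng = rng or np.random.default_rng()
    return w - stepsizes * grad + q_noise(w.shape, q, temp, rng) - decay * w
```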
Versatile stochastic dot product circuits based on nonvolatile memories for high performance neurocomputing and neurooptimization.
The key operation in stochastic neural networks, which have become a state-of-the-art approach for solving problems in machine learning, information theory, and statistics, is the stochastic dot-product. While there have been many demonstrations of dot-product circuits and, separately, of stochastic neurons, an efficient hardware implementation combining both functionalities is still missing. Here we report compact, fast, energy-efficient, and scalable stochastic dot-product circuits based on either passively integrated metal-oxide memristors or embedded floating-gate memories. The circuits' high performance is due to their mixed-signal implementation, while efficient stochastic operation is achieved by utilizing noise intrinsic and/or extrinsic to the memory cell array. The dynamic scaling of weights, enabled by analog memory devices, allows for efficient realization of different annealing approaches to improve functionality. The proposed approach is experimentally verified on two representative applications, namely a neural network solving a four-node graph-partitioning problem and a Boltzmann machine with 10 input and 8 hidden neurons.
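A minimal software model may help picture the operation: the analog array delivers the dot product in a single step, additive noise randomizes it, and a comparator thresholds the result into a binary neuron output. Everything below (the function name, the Gaussian noise stand-in, and the gain knob modeling dynamic weight scaling) is an assumption for illustration, not the paper's circuit.

```python
import numpy as np

def stochastic_dot_product(weights, x, noise_sigma=0.1, gain=1.0, rng=None):
    """Software model of a stochastic dot-product neuron.

    The weighted sum stands in for the analog array's one-step
    multiply-accumulate; Gaussian noise stands in for the circuit's
    intrinsic and/or injected noise; the final comparison models the
    thresholding comparator. `gain` mimics dynamic weight scaling used
    for annealing: a large gain swamps the noise (near-deterministic,
    "cold"), a small gain makes the neuron effectively "hot".
    """
    rng = rng or np.random.default_rng()
    analog_sum = gain * np.dot(weights, x)
    return 1 if analog_sum + rng.normal(0.0, noise_sigma) > 0.0 else -1
```

With Gaussian noise the firing probability of this binary neuron is a smooth sigmoidal function of the weighted sum, with an effective temperature set by the noise-to-gain ratio, which is precisely the knob that annealing schemes exploit.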