Exact Algorithm for Sampling the 2D Ising Spin Glass
A sampling algorithm is presented that generates spin glass configurations of
the 2D Edwards-Anderson Ising spin glass at finite temperature, with
probabilities proportional to their Boltzmann weights. Such an algorithm
overcomes the slow dynamics of direct simulation and can be used to study
long-range correlation functions and coarse-grained dynamics. The algorithm
uses a correspondence between spin configurations on a regular lattice and
dimer (edge) coverings of a related graph: Wilson's algorithm [D. B. Wilson,
Proc. 8th Symp. Discrete Algorithms, 258 (1997)] for sampling dimer coverings
on a planar lattice is adapted to generate samplings for the dimer problem
corresponding to both planar and toroidal spin glass samples. This algorithm is
recursive: it computes probabilities for spins along a "separator" that divides
the sample in half. Given the spins on the separator, sample configurations for
the two separated halves are generated by further division and assignment. The
algorithm is simplified by using Pfaffian elimination, rather than Gaussian
elimination, for sampling dimer configurations. For n spins and given floating
point precision, the algorithm has an asymptotic run-time of O(n^{3/2}); it is
found that the required precision scales as inverse temperature and grows only
slowly with system size. Sample applications and benchmarking results are
presented for samples of size up to n=128^2, with fixed and periodic boundary
conditions.
Comment: 18 pages, 10 figures, 1 table; minor clarifications
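The "direct simulation" whose slow dynamics the exact sampler overcomes is single-spin-flip Metropolis on the Edwards-Anderson model. As context, here is a minimal sketch of that baseline for a +/-J disorder realization with periodic boundaries; this is an illustrative assumption of the comparison point, not the paper's Pfaffian-based algorithm.

```python
import math
import random

def ea_metropolis(L, beta, sweeps, seed=0):
    """Single-spin-flip Metropolis for a 2D +/-J Edwards-Anderson model
    with periodic boundaries. Sketch of the 'direct simulation' baseline,
    NOT the exact Pfaffian-based sampler described in the abstract."""
    rng = random.Random(seed)
    # Quenched +/-1 couplings: Jh[i][j] on bond (i,j)-(i,j+1),
    # Jv[i][j] on bond (i,j)-(i+1,j).
    Jh = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    Jv = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                # Local field on site (i, j) from its four neighbors.
                h = (Jh[i][j] * s[i][(j + 1) % L]
                     + Jh[i][(j - 1) % L] * s[i][(j - 1) % L]
                     + Jv[i][j] * s[(i + 1) % L][j]
                     + Jv[(i - 1) % L][j] * s[(i - 1) % L][j])
                dE = 2 * s[i][j] * h  # energy cost of flipping s[i][j]
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i][j] = -s[i][j]
    return s
```

At low temperature this chain decorrelates extremely slowly in a spin glass, which is exactly why an exact O(n^{3/2}) sampler is valuable.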
Sampling algorithms for validation of supervised learning models for Ising-like systems
In this paper, we build and explore supervised learning models of
ferromagnetic system behavior, using Monte Carlo sampling of the spin
configuration space generated by the 2D Ising model. Given the enormous size of
the space of all possible Ising model realizations, the question arises as to
how to choose a reasonable number of samples that will form physically
meaningful and non-intersecting training and testing datasets. Here, we propose
a sampling technique called ID-MH that uses the Metropolis-Hastings algorithm
to create a Markov process across energy levels within a predefined
configuration subspace. We show that applying this method retains the phase
transitions in both training and testing datasets and serves the purpose of
validation of a machine learning algorithm. For larger lattice dimensions,
ID-MH is not feasible as it requires knowledge of the complete configuration
space. As such, we develop a new "block-ID" sampling strategy: it decomposes
the given structure into square blocks with lattice dimension no greater than 5
and uses ID-MH sampling of candidate blocks. Further comparison of the
performance of commonly used machine learning methods such as random forests,
decision trees, k-nearest neighbors, and artificial neural networks shows that
the PCA-based Decision Tree regressor is the most accurate predictor of
magnetizations of the Ising model. For energies, however, the accuracy of
prediction is not satisfactory, highlighting the need to consider more
algorithmically complex methods (e.g., deep learning).
Comment: 43 pages and 16 figures
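The abstract notes that ID-MH needs the complete configuration space, which is only enumerable for small lattices. The sketch below illustrates that idea under plain Metropolis-Hastings with uniform proposals over an exhaustively enumerated subspace; the function names and the ferromagnetic energy are assumptions, and this is not the paper's exact ID-MH recipe.

```python
import itertools
import math
import random

def enumerate_configs(L):
    """Exhaustively list every configuration of an L x L ferromagnetic
    Ising lattice (periodic boundaries) with its energy. Feasible only
    for small L, which is why block-ID caps blocks at lattice dimension 5."""
    out = []
    for bits in itertools.product((-1, 1), repeat=L * L):
        s = [bits[r * L:(r + 1) * L] for r in range(L)]
        E = 0
        for i in range(L):
            for j in range(L):
                # Count each bond once: right and down neighbors.
                E -= s[i][j] * (s[i][(j + 1) % L] + s[(i + 1) % L][j])
        out.append((bits, E))
    return out

def mh_over_subspace(configs, beta, steps, seed=0):
    """Metropolis-Hastings over the enumerated subspace: propose a
    uniform random configuration, accept with min(1, exp(-beta*dE)).
    A plain-MH sketch of the idea behind ID-MH, not the paper's recipe."""
    rng = random.Random(seed)
    cur = rng.choice(configs)
    chain = []
    for _ in range(steps):
        prop = rng.choice(configs)
        dE = prop[1] - cur[1]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            cur = prop
        chain.append(cur)
    return chain
```

Even for L = 5 the subspace already has 2^25 ≈ 3.4e7 configurations, which makes the enumeration step the bottleneck that block-ID works around.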
Sampling the ground-state magnetization of d-dimensional p-body Ising models
We demonstrate that a recently introduced heuristic optimization algorithm
[Phys. Rev. E 83, 046709 (2011)] that combines a local search with triadic
crossover genetic updates is capable of sampling nearly uniformly among
ground-state configurations in spin-glass-like Hamiltonians with p-spin
interactions in d space dimensions that have highly degenerate ground states.
Using this algorithm, we probe the zero-temperature ferromagnet-to-spin-glass
transition point q_c of two example models, the disordered version of the
two-dimensional three-spin Baxter-Wu model [q_c = 0.1072(1)] and the
three-dimensional Edwards-Anderson model [q_c = 0.2253(7)], by computing the
Binder ratio of the ground-state magnetization.
Comment: 8 pages, 6 figures, 3 tables
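Given a pool of sampled ground-state magnetizations, the Binder ratio is a short moment computation. A minimal sketch, using the common convention g = (3 - ⟨m⁴⟩/⟨m²⟩²)/2; the paper's normalization may differ.

```python
def binder_ratio(mags):
    """Binder ratio g = (3 - <m^4>/<m^2>^2) / 2 over a list of
    per-sample magnetizations. One common convention; other works
    normalize the cumulant differently."""
    n = len(mags)
    m2 = sum(m * m for m in mags) / n
    m4 = sum(m ** 4 for m in mags) / n
    return 0.5 * (3.0 - m4 / (m2 * m2))
```

For fully ordered samples (m = ±1) the ratio is 1; its crossing point versus disorder strength q locates q_c.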
Deep neural networks for direct, featureless learning through observation: the case of 2d spin models
We demonstrate the capability of a convolutional deep neural network in
predicting the nearest-neighbor energy of the 4x4 Ising model. Using its
success at this task, we motivate the study of the larger 8x8 Ising model,
showing that the deep neural network can learn the nearest-neighbor Ising
Hamiltonian after only seeing a vanishingly small fraction of configuration
space. Additionally, we show that the neural network has learned both the
energy and magnetization operators with sufficient accuracy to replicate the
low-temperature Ising phase transition. We then demonstrate the ability of the
neural network to learn other spin models, teaching the convolutional deep
neural network to accurately predict the long-range interaction of a screened
Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian,
and a modified Potts model Hamiltonian. In the case of the long-range
interaction, we demonstrate the ability of the neural network to recover the
phase transition with equivalent accuracy to the numerically exact method.
For this long-range interaction, moreover, the benefits of the
neural network become apparent: it makes predictions with a high
degree of accuracy, and does so 1600 times faster than a CUDA-optimized exact
calculation. Additionally, we demonstrate how the neural network succeeds at
these tasks by examining the weights learned in a simplified demonstration.
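The regression target in the 4x4 and 8x8 experiments is the nearest-neighbor energy of a spin configuration. A minimal sketch of that target function, assuming periodic boundaries and the standard sign convention E = -Σ_⟨ij⟩ s_i s_j (both are assumptions about the paper's setup):

```python
def nn_energy(spins):
    """Nearest-neighbor Ising energy E = -sum_<ij> s_i s_j on an L x L
    grid of +/-1 spins with periodic boundaries: a plausible form of the
    network's regression target (sign and normalization assumed)."""
    L = len(spins)
    E = 0
    for i in range(L):
        for j in range(L):
            # Count each bond once: right and down neighbors.
            E -= spins[i][j] * (spins[i][(j + 1) % L] + spins[(i + 1) % L][j])
    return E
```

A network that learns this operator from a vanishingly small fraction of the 2^64 configurations of the 8x8 lattice has, in effect, recovered the Hamiltonian from examples.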
Adaptive cluster expansion for the inverse Ising problem: convergence, algorithm and tests
We present a procedure to solve the inverse Ising problem, that is to find
the interactions between a set of binary variables from the measure of their
equilibrium correlations. The method consists in constructing and selecting
specific clusters of variables, based on their contributions to the
cross-entropy of the Ising model. Small contributions are discarded to avoid
overfitting and to make the computation tractable. The properties of the
cluster expansion and its performances on synthetic data are studied. To make
the implementation easier, we give the pseudo-code of the algorithm.
Comment: Paper submitted to Journal of Statistical Physics
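The smallest cluster in such an expansion is a single pair of variables, for which the coupling follows in closed form from the pair frequencies. A sketch of that two-variable contribution, with an assumed pseudocount `eps` to guard against empty cells; the full adaptive expansion sums corrections from larger clusters, and this is not the paper's algorithm.

```python
import math

def pair_coupling(samples, i, j, eps=1e-9):
    """Independent-pair estimate of the coupling J_ij from samples of
    +/-1 variables:
        J_ij = (1/4) * ln[ p(+,+) p(-,-) / (p(+,-) p(-,+)) ],
    the exact inverse of a two-spin pairwise model. 'eps' is a small
    pseudocount to avoid log(0) on unobserved pair states."""
    counts = {(a, b): eps for a in (-1, 1) for b in (-1, 1)}
    for s in samples:
        counts[(s[i], s[j])] += 1
    total = sum(counts.values())
    p = {k: v / total for k, v in counts.items()}
    return 0.25 * math.log((p[(1, 1)] * p[(-1, -1)])
                           / (p[(1, -1)] * p[(-1, 1)]))
```

Uncorrelated variables give J ≈ 0, so small-cluster contributions like this one can be discarded, which is the overfitting-control step the abstract describes.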