Numerical methods for the determination of the properties and critical behaviour of percolation and the Ising model
For this thesis, numerical methods based on Monte Carlo techniques were developed that allow percolation and the Ising model to be investigated with high precision. The emphasis is on using modern parallel computers with high efficiency. Two basic parallelization approaches were chosen, replication and domain decomposition, in conjunction with suitable algorithms. For percolation, the Hoshen-Kopelman cluster-counting algorithm was adapted to different needs. For studying fluctuations of cluster numbers, its traditional version (i.e., the one already published in the literature) was used with simple replication. For simulating huge lattices, the Hoshen-Kopelman algorithm was adapted to domain decomposition by dividing the hyperplane of investigation into strips assigned to different processors. This form of domain decomposition makes it feasible to simulate huge lattices (of world-record size), even in dimensions d greater than 2, on massively parallel computers with distributed memory and message passing. For studying the properties of percolation as a function of system size, the Hoshen-Kopelman algorithm was modified to work on changing domains, i.e., growing lattices. With this method, one can simulate a lattice of linear size Lmax and investigate all lattices of size L less than Lmax at no extra cost. Here again, replication is a viable parallelization strategy. For the Ising model, the standard Monte Carlo method of importance sampling with Glauber kinetics and multi-spin coding was adapted to parallel computers by domain decomposition of the lattice into strips. This parallelization makes it possible to use massively parallel computers with distributed memory and message passing to study huge lattices (again of world-record size) over many Monte Carlo steps, in order to investigate the dynamical critical behaviour in two dimensions.
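The cluster counting at the heart of these percolation methods can be sketched in a minimal serial form: a single Hoshen-Kopelman pass over a 2D site-percolation lattice with a union-find structure over provisional labels. All function and variable names here are illustrative, not taken from the thesis:

```python
def hoshen_kopelman(grid):
    """Label occupied sites of a 2D grid into clusters (serial Hoshen-Kopelman).

    grid[i][j] is truthy for an occupied site. Uses a union-find structure
    over provisional labels; returns the number of distinct clusters.
    """
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = [0]  # parent[k] = parent of label k; label 0 means "empty"

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    next_label = 1
    for i in range(rows):
        for j in range(cols):
            if not grid[i][j]:
                continue
            up = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if up == 0 and left == 0:          # new cluster
                parent.append(next_label)
                labels[i][j] = next_label
                next_label += 1
            elif up and left:                   # merge two clusters
                ru, rl = find(up), find(left)
                parent[max(ru, rl)] = min(ru, rl)
                labels[i][j] = min(ru, rl)
            else:                               # extend the existing one
                labels[i][j] = find(up or left)
    return len({find(k) for row in labels for k in row if k})
```

In the strip decomposition described above, each processor would run such a pass over its own strip, with cluster labels subsequently reconciled along the strip boundaries.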
Parallelization of the Wolff Single-Cluster Algorithm
A parallel OpenMP (Open Multiprocessing) implementation of the Wolff single-cluster algorithm has been developed and tested for the three-dimensional (3D) Ising model. The procedure generalizes to other lattice spin models, and its effectiveness depends on the specific application at hand. Its applicability is discussed in the context of two applications, in which a sophisticated shuffling scheme is used to generate pseudorandom numbers of high quality, and an iterative method is applied to determine the critical temperature of the 3D Ising model with great accuracy. For a lattice of linear size L = 1024, we reach a speedup of about 1.79 on two processors and about 2.67 on four processors, compared to the serial code. According to our estimates, a speedup of about 3 on four processors is reachable for the O(n) models with n ≥ 2. Furthermore, the OpenMP code allows us to simulate larger lattices owing to the greater shared memory available.
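The serial algorithm being parallelized can be sketched as a single Wolff cluster update for the (here two-dimensional, for brevity) Ising model; the OpenMP decomposition of the cluster growth is beyond this minimal sketch, and the names are illustrative:

```python
import math
import random

def wolff_step(spins, L, beta, rng=random):
    """Grow and flip one Wolff cluster on an L x L periodic Ising lattice.

    spins: flat list, index i*L + j holds +1 or -1. Bonds to aligned
    neighbours are added with probability p = 1 - exp(-2*beta).
    Returns the size of the flipped cluster.
    """
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    stack, cluster = [seed], {seed}
    while stack:
        site = stack.pop()
        i, j = divmod(site, L)
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            nbr = (ni % L) * L + (nj % L)   # periodic boundaries
            if nbr not in cluster and spins[nbr] == s0 and rng.random() < p_add:
                cluster.add(nbr)
                stack.append(nbr)
    for site in cluster:
        spins[site] = -s0                   # flip the whole cluster at once
    return len(cluster)
```

At low temperature (large beta) the cluster spans nearly the whole aligned lattice, which is why single-cluster updates beat local Metropolis dynamics near criticality.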
Bayesian optimization for computationally extensive probability distributions
An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use the extreme values of acquisition functions computed from Gaussian processes to choose the next training point, which should lie near a local or global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, Bayesian optimization improves the results efficiently when combined with the steepest descent method, and it is thus a powerful tool for finding a better maximizer of computationally extensive probability distributions.
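The core loop of such a method — fit a Gaussian-process surrogate to the points evaluated so far, then evaluate next wherever the acquisition function peaks — can be sketched with an upper-confidence-bound acquisition. The kernel, acquisition choice, and all names below are illustrative assumptions, not the paper's specific construction:

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two 1-d point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=12, kappa=2.0,
              noise=1e-6, seed=0):
    """Maximize f on an interval via a GP surrogate + UCB acquisition."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(*bounds, 200)          # candidate points
    for _ in range(n_iter):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, y)       # GP posterior mean
        v = np.linalg.solve(K, Ks.T)
        var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
        # next training point: maximum of the acquisition function
        x_next = grid[np.argmax(mu + kappa * np.sqrt(var))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    best = np.argmax(y)
    return X[best], y[best]
```

The acquisition maximum balances exploiting the current posterior mean against exploring regions of high uncertainty, so relatively few evaluations of the expensive distribution are needed.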
Re-examining the directional-ordering transition in the compass model with screw-periodic boundary conditions
We study the directional-ordering transition in the two-dimensional classical and quantum compass models on the square lattice by means of Monte Carlo simulations. An improved algorithm is presented that builds on the Wolff cluster algorithm in one-dimensional subspaces of the configuration space; this improvement allows us to study substantially larger classical systems. Based on the new algorithm, we give evidence for the presence of strongly anomalous scaling for periodic boundary conditions, which is much worse than anticipated before. We propose and study alternative boundary conditions for the compass model that do not make use of extended configuration spaces, and show that they completely remove the problem with finite-size scaling. In the last part, we apply these boundary conditions to the quantum problem and present a considerably improved estimate for the critical temperature, which should be of interest for future studies of the compass model. Our investigation identifies a strong one-dimensional magnetic ordering tendency with a large correlation length as the cause of the unusual scaling, and moreover allows for a precise quantification of the anomalous length scale involved.
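The flavour of screw-periodic boundary conditions can be conveyed by a neighbour-indexing helper: wrapping around the lattice in one direction additionally shifts the transverse coordinate, producing a helical winding. This is an illustrative definition under assumed conventions, not taken verbatim from the paper:

```python
def screw_neighbor(i, j, di, dj, L, S=1):
    """Neighbour of site (i, j) on an L x L lattice with screw-periodic BCs.

    Crossing the lattice edge in the x-direction wraps around and shifts the
    y-coordinate by the screw parameter S; the y-direction wraps ordinarily.
    """
    ni, nj = i + di, j + dj
    if ni < 0:
        ni += L
        nj -= S      # helical shift when wrapping backwards in x
    elif ni >= L:
        ni -= L
        nj += S      # helical shift when wrapping forwards in x
    return ni, nj % L
```

Under such boundary conditions all rows are threaded onto one long helix, which changes how one-dimensional ordering tendencies interact with the finite system size.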
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
We present a mathematical framework for constructing and analyzing parallel
algorithms for lattice Kinetic Monte Carlo (KMC) simulations. The resulting
algorithms have the capacity to simulate a wide range of spatio-temporal scales
in spatially distributed, non-equilibrium physicochemical processes with complex
chemistry and transport micro-mechanisms. The algorithms can be tailored to
specific hierarchical parallel architectures such as multi-core processors or
clusters of Graphical Processing Units (GPUs). The proposed parallel algorithms
are controlled-error approximations of kinetic Monte Carlo algorithms,
departing from the predominant paradigm of creating parallel KMC algorithms
with exactly the same master equation as the serial one.
Our methodology relies on a spatial decomposition of the Markov operator
underlying the KMC algorithm into a hierarchy of operators corresponding to the
processors' structure in the parallel architecture. Based on this operator
decomposition, we formulate Fractional Step Approximation schemes by employing
the Trotter Theorem and its random variants; these schemes (a) determine the
communication schedule between processors, and (b) are run independently on
each processor through a serial KMC simulation, called a kernel, on each
fractional-step time window.
Furthermore, the proposed mathematical framework allows us to rigorously
justify the numerical and statistical consistency of the proposed algorithms,
showing the convergence of our approximating schemes to the original serial
KMC. The approach also provides a systematic evaluation of different processor communication schedules.
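The fractional-step idea can be seen in a tiny master-equation toy: split the Markov generator into two non-commuting pieces, exponentiate each over a short time window, and alternate them, with the splitting error controlled by the window size. This is a schematic illustration under assumed conventions (dp/dt = L p, columns of L summing to zero), not the paper's construction:

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by truncated Taylor series (fine for small norms)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out += term
    return out

def flip_generator(spin, rate_fn):
    """Generator flipping one of two spins; rate depends on the other spin."""
    L = np.zeros((4, 4))                 # states 00, 01, 10, 11 (bit i = spin i)
    for s in range(4):
        t = s ^ (1 << spin)              # state after flipping `spin`
        other = (s >> (1 - spin)) & 1
        r = rate_fn(other)
        L[t, s] += r
        L[s, s] -= r                     # columns sum to zero
    return L

L1 = flip_generator(0, lambda other: 1.0 + other)  # non-commuting pieces
L2 = flip_generator(1, lambda other: 1.0 + other)

def lie_split(p0, t, n):
    """Fractional-step (Lie-Trotter) evolution: alternate the two kernels."""
    dt = t / n
    E1, E2 = expm(L1 * dt), expm(L2 * dt)
    p = p0.copy()
    for _ in range(n):
        p = E2 @ (E1 @ p)
    return p

p0 = np.array([1.0, 0.0, 0.0, 0.0])
exact = expm((L1 + L2) * 1.0) @ p0       # evolution under the full generator
```

In the parallel algorithms, `E1` and `E2` correspond to serial KMC kernels run independently on different processor domains within each time window, and refining the window plays the role of increasing `n` here.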
Performance potential for simulating spin models on GPU
Graphics processing units (GPUs) are increasingly being used for general computational purposes. This development is motivated by their theoretical peak performance, which significantly exceeds that of broadly available CPUs. For practical purposes, however, it is far from clear how much of this theoretical performance can be realized in actual scientific applications. As discussed here for the case of studying classical spin models of statistical mechanics by Monte Carlo simulations, only an explicit tailoring of the algorithms involved to the specific architecture under consideration allows one to harvest the computational power of GPU systems. A number of examples are discussed, ranging from Metropolis simulations of ferromagnetic Ising models through continuous Heisenberg and disordered spin-glass systems to parallel-tempering simulations. Significant speedups by factors of up to 1000 compared to serial CPU code, as well as to previous GPU implementations, are observed.
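The standard architectural tailoring for Ising-type models is the checkerboard decomposition: sites of equal parity share no bonds, so an entire sublattice can be updated simultaneously — the same decomposition that maps Metropolis updates onto thousands of GPU threads. A vectorized NumPy sketch of one such sweep (a CPU analogue; names are illustrative):

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D +/-1 Ising lattice in two half-sweeps.

    Even and odd sublattices are updated in turn; within a sublattice all
    sites can be updated simultaneously because they are mutually unbonded.
    """
    L = spins.shape[0]
    ii, jj = np.indices((L, L))
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                       # energy cost of a flip
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        spins[mask & accept] *= -1                  # flip accepted sublattice sites
    return spins
```

On a GPU, each thread would own one (or a tile of) sublattice site(s); the NumPy masking here plays the role of that thread-level parallelism.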
Comparison of Different Parallel Implementations of the 2+1-Dimensional KPZ Model and the 3-Dimensional KMC Model
We show that efficient simulations of Kardar-Parisi-Zhang interface growth in 2+1 dimensions and of thermally activated diffusion by 3-dimensional kinetic Monte Carlo can be realized both on GPUs and on modern CPUs. In this article we present results of different implementations on GPUs using CUDA and OpenCL, as well as on CPUs using OpenCL and MPI. We investigate the runtime and scaling behavior on different architectures to find optimal solutions for solving current simulation problems in the fields of statistical physics and materials science. (To be published in a forthcoming EPJST special issue on "Computer simulations on GPU".)
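To convey the kind of growth dynamics being benchmarked, here is a serial 1+1-dimensional restricted-solid-on-solid (RSOS) deposition toy — a lower-dimensional analogue of the 2+1-dimensional models the article simulates on GPUs, with all names illustrative:

```python
import random

def rsos_growth(L, steps, rng=random):
    """Serial 1+1-d RSOS deposition on a periodic substrate of L sites.

    A particle lands on a random site and sticks only if both neighbouring
    height differences stay within the RSOS bound |dh| <= 1, which keeps the
    interface in the KPZ universality class.
    """
    h = [0] * L                          # initially flat interface
    for _ in range(steps):
        i = rng.randrange(L)
        if (h[i] + 1 - h[(i - 1) % L] <= 1
                and h[i] + 1 - h[(i + 1) % L] <= 1):
            h[i] += 1                    # deposition accepted
    return h
```

The GPU implementations discussed above parallelize updates of this kind over spatial domains, taking care that simultaneously updated sites never violate the local height constraint.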
Adaptive variational quantum minimally entangled typical thermal states for finite temperature simulations
Scalable quantum algorithms for the simulation of quantum many-body systems
in thermal equilibrium are important for predicting properties of quantum
matter at finite temperatures. Here we describe and benchmark a quantum
computing version of the minimally entangled typical thermal states (METTS)
algorithm for which we adopt an adaptive variational approach to perform the
required quantum imaginary time evolution. The algorithm, which we name
AVQMETTS, dynamically generates compact and problem-specific quantum circuits,
which are suitable for noisy intermediate-scale quantum (NISQ) hardware. We
benchmark AVQMETTS on statevector simulators and perform thermal energy
calculations of integrable and nonintegrable quantum spin models in one and two
dimensions and demonstrate an approximately linear system-size scaling of the
circuit complexity. We further map out the finite-temperature phase transition
line of the two-dimensional transverse field Ising model. Finally, we study the
impact of noise on AVQMETTS calculations using a phenomenological noise model.
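The METTS loop underlying the algorithm can be emulated classically for a small system with exact statevector arithmetic: imaginary-time evolve a product state, measure the energy, then collapse to a new basis state and repeat. AVQMETTS replaces the exact imaginary-time evolution below with adaptive variational circuits; this sketch also simplifies the collapse to the computational basis only, and all names are illustrative:

```python
import numpy as np

def tfi_hamiltonian(n, J=1.0, g=1.0):
    """Dense transverse-field Ising H = -J sum Z_i Z_{i+1} - g sum X_i (open chain)."""
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])
    def site_op(op, i):
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, op if k == i else I)
        return out
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        H -= J * site_op(Z, i) @ site_op(Z, i + 1)
    for i in range(n):
        H -= g * site_op(X, i)
    return H

def metts_energy(H, beta, n_samples=100, burn_in=10, seed=0):
    """METTS estimate of the thermal energy at inverse temperature beta."""
    rng = np.random.default_rng(seed)
    w, V = np.linalg.eigh(H)
    dim = H.shape[0]
    state = 0                                    # start from a basis state
    energies = []
    for _ in range(n_samples):
        psi = np.zeros(dim)
        psi[state] = 1.0
        phi = V @ (np.exp(-0.5 * beta * w) * (V.T @ psi))  # e^{-beta H/2}|i>
        phi /= np.linalg.norm(phi)
        energies.append(phi @ H @ phi)           # thermal-state energy sample
        state = rng.choice(dim, p=phi ** 2)      # collapse to a new basis state
    return np.mean(energies[burn_in:])
```

The ensemble of collapsed states reproduces thermal averages exactly, which is why the sample mean converges to the canonical energy; the quantum-hardware version trades the exact `e^{-beta H/2}` for compact adaptive circuits.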