Probabilistic Computability and Choice
We study the computational power of randomized computations on infinite
objects, such as real numbers. In particular, we introduce the concept of a Las
Vegas computable multi-valued function, which is a function that can be
computed on a probabilistic Turing machine that receives a random binary
sequence as auxiliary input. The machine can take advantage of this random
sequence, but it always has to produce a correct result or to stop the
computation after finite time if the random advice is not successful. With
positive probability the random advice has to be successful. We characterize
the class of Las Vegas computable functions in the Weihrauch lattice with the
help of probabilistic choice principles and Weak Weak Kőnig's Lemma. Among
other things we prove an Independent Choice Theorem that implies that Las Vegas
computable functions are closed under composition. In a case study we show that
Nash equilibria are Las Vegas computable, while zeros of continuous functions
with sign changes cannot be computed on Las Vegas machines. However, we show
that the latter problem admits randomized algorithms with weaker failure
recognition mechanisms. These last results can be interpreted as saying that the Intermediate Value Theorem is reducible to the jump of Weak Weak Kőnig's Lemma, but not to Weak Weak Kőnig's Lemma itself. These
examples also demonstrate that Las Vegas computable functions form a proper
superclass of the class of computable functions and a proper subclass of the
class of non-deterministically computable functions. We also study the impact
of specific lower bounds on the success probabilities, which leads to a strict
hierarchy of classes. In particular, the classical technique of probability
amplification fails for computations on infinite objects. We also investigate
the dependency on the underlying probability space.
Comment: Information and Computation (accepted for publication)
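As an aside, the defining pattern of a Las Vegas computation — random advice that either yields a certifiably correct answer or a recognized failure after finitely many steps, with positive success probability — is easy to illustrate in the finite setting. The Python sketch below is only an analogy under assumed names (the paper's subject is computations on infinite objects, where, as the abstract notes, probability amplification fails):

```python
import random

FAIL = object()  # explicit failure token: the machine recognizes failure

def las_vegas_find_one(bits, advice, probes=20):
    """Finite analogue of a Las Vegas computation: use random `advice`
    to probe for an index i with bits[i] == 1.  The answer is either
    provably correct or an explicit FAIL after finitely many steps; if
    at least half the bits are 1, the success probability exceeds
    1 - 2**(-probes)."""
    n = len(bits)
    for _ in range(probes):
        i = advice(n)        # consume one chunk of the random advice
        if bits[i] == 1:
            return i         # correct by construction
    return FAIL              # recognized failure, positive success prob.

# Hypothetical usage: advice drawn from a uniform random source.
rng = random.Random(0)
result = las_vegas_find_one([0, 1] * 8, advice=lambda n: rng.randrange(n))
```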
Phase Structure of the Random-Plaquette Z_2 Gauge Model: Accuracy Threshold for a Toric Quantum Memory
We study the phase structure of the random-plaquette Z_2 lattice gauge model
in three dimensions. In this model, the "gauge coupling" for each plaquette is
a quenched random variable that takes the value \beta with the probability 1-p
and -\beta with the probability p. This model is relevant for the recently
proposed quantum memory based on the toric code. The parameter p is the concentration of plaquettes with "wrong-sign" couplings -\beta, and is interpreted as the error probability per qubit in the quantum code. In the gauge system with p=0, i.e., with uniform gauge couplings \beta, it is known that there exists a second-order phase transition at a certain critical "temperature" T (\equiv \beta^{-1}) = T_c = 1.31, which separates an ordered (Higgs) phase at T<T_c from a disordered (confinement) phase at T>T_c. As p increases, the critical
temperature T_c(p) decreases. In the p-T plane, the curve T_c(p) intersects the Nishimori line T_{N}(p) at a certain point (p_c, T_{N}(p_c)). The
value p_c is precisely the accuracy threshold for a fault-tolerant quantum memory and the associated quantum computations. By Monte Carlo simulations, we calculate the specific heat and the expectation values of the Wilson loop to obtain the phase-transition line T_c(p) numerically. The accuracy threshold is estimated as p_c \simeq 0.033.
Comment: 24 pages, 14 figures, some clarifications
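For illustration, a minimal (and deliberately naive) Metropolis sketch of the three-dimensional random-plaquette Z_2 gauge model might look as follows. The lattice size and the full-action recomputation per link flip are illustrative choices, not the paper's actual simulation code, which would use the local energy change of the four plaquettes containing each link:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical demonstration parameters (not the paper's values).
L, beta, p = 8, 1.0 / 1.31, 0.03

# Z_2 link variables sigma[mu][x,y,z] in {+1,-1}, mu = 0,1,2.
sigma = np.ones((3, L, L, L), dtype=np.int8)

# Quenched signs: -1 ("wrong sign") with probability p, +1 otherwise.
# J[k] holds the signs of plaquettes in plane (mu,nu), enumerated below.
planes = [(0, 1), (0, 2), (1, 2)]
J = np.where(rng.random((3, L, L, L)) < p, -1, 1)

def plaquette(s, mu, nu):
    """U_P = sigma_mu(x) sigma_nu(x+mu) sigma_mu(x+nu) sigma_nu(x), all x."""
    return (s[mu] * np.roll(s[nu], -1, axis=mu)
            * np.roll(s[mu], -1, axis=nu) * s[nu])

def action(s):
    """S = -beta * sum_P J_P U_P, with the quenched signs J_P = +/-1."""
    return -beta * sum((J[k] * plaquette(s, mu, nu)).sum()
                       for k, (mu, nu) in enumerate(planes))

def metropolis_sweep(s):
    """One sweep of single-link Metropolis updates (naive: recomputes the
    full action; a real code uses only the 4 plaquettes per link)."""
    for mu in range(3):
        for site in np.ndindex(L, L, L):
            S_old = action(s)
            s[(mu, *site)] *= -1
            if rng.random() >= np.exp(-(action(s) - S_old)):
                s[(mu, *site)] *= -1  # reject: undo the flip
    return s
```

The specific heat would then be estimated from the fluctuations of this action over the generated configurations.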
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm on the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or the overall MFLOPS rate. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits), or 101.5 MFLOPS.
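A standard structural device in vectorized Monte Carlo codes of this kind is a checkerboard (even/odd) decomposition, so that all sites or links of one parity can be updated simultaneously, combined with several Metropolis "hits" per variable per iteration. The NumPy sketch below shows that pattern for a simple 2D Ising model rather than SU(3), purely as an illustration of the vectorization structure (all parameters are assumed, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

L, beta, n_hits = 64, 0.4, 8  # illustrative size, coupling, hits/site
spins = rng.choice([-1, 1], size=(L, L)).astype(np.int8)

ij = np.add.outer(np.arange(L), np.arange(L))
masks = [(ij % 2 == par) for par in (0, 1)]  # even / odd sublattices

def sweep(s):
    """Checkerboard Metropolis sweep: sites of one parity have no mutual
    neighbors, so a whole sublattice is updated at once (vectorizable)."""
    for mask in masks:
        for _ in range(n_hits):  # multiple hits per site per iteration
            nn = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                  + np.roll(s, 1, 1) + np.roll(s, -1, 1))
            dE = 2 * beta * s * nn                 # cost of flipping s
            flip = mask & (rng.random((L, L)) < np.exp(-dE))
            s[flip] *= -1
    return s

for _ in range(100):
    spins = sweep(spins)
```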
Regge gravity on general triangulations
We investigate quantum gravity in four dimensions using the Regge approach on
triangulations of the four-torus with general, non-regular incidence matrices.
We find that the simplicial lattice tends to develop spikes for vertices with
low coordination numbers, even for vanishing gravitational coupling. In contrast to the regular, hypercubic lattices used almost exclusively in previous studies, we now find that the observables depend on the measure. Computations
with nonvanishing gravitational coupling still reveal the existence of a region
with well-defined expectation values. However, the phase structure depends on
the triangulation. Even with additional higher-order terms in the action, the critical behavior of the system changes with varying (local) coordination numbers.
Comment: uuencoded postscript file, 16 pages
An Algorithmic Approach to Quantum Field Theory
The lattice formulation provides a way to regularize, define and compute the
Path Integral in a Quantum Field Theory. In this paper we review the
theoretical foundations and the most basic algorithms required to implement a
typical lattice computation, including the Metropolis and Gibbs sampling update algorithms, as well as the Minimal Residual and Stabilized Biconjugate Gradient inverters. The main emphasis
is on gauge theories with fermions such as QCD. We also provide examples of
typical results from lattice QCD computations for quantities of
phenomenological interest.
Comment: 44 pages, to be published in IJMP
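As a flavor of the inverter algorithms mentioned, here is a minimal sketch of the Minimal Residual iteration for solving A x = b, with A supplied as a matrix-vector routine (as it would be for a lattice Dirac operator). The function name, tolerances, and test matrix are illustrative, not taken from the paper:

```python
import numpy as np

def minimal_residual_solve(apply_A, b, tol=1e-8, max_iter=1000):
    """Minimal Residual iteration for A x = b.  Each step moves along
    the current residual r by the coefficient a = <Ar, r> / <Ar, Ar>,
    which minimizes ||b - A x|| along the direction r."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    for _ in range(max_iter):
        Ar = apply_A(r)
        a = np.vdot(Ar, r) / np.vdot(Ar, Ar)
        x = x + a * r
        r = r - a * Ar
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

# Hypothetical usage on a random, well-conditioned test matrix.
rng = np.random.default_rng(0)
A = np.eye(50) + 0.05 * rng.normal(size=(50, 50))
b = rng.normal(size=50)
x = minimal_residual_solve(lambda v: A @ v, b)
```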
SU(2) Lattice Gauge Theory Simulations on Fermi GPUs
In this work we explore the performance of CUDA in quenched lattice SU(2)
simulations. CUDA, NVIDIA Compute Unified Device Architecture, is a hardware
and software architecture developed by NVIDIA for computing on the GPU. We
present an analysis and performance comparison between the GPU and CPU in
single and double precision. Analyses with multiple GPUs and two different
architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model.
We produce codes for the Monte Carlo generation of SU(2) lattice gauge
configurations, for the mean plaquette, for the Polyakov Loop at finite T and
for the Wilson loop. We also present results for the static quark-antiquark potential, using a large number of configurations without smearing as well as configurations with APE smearing. With two Fermi GPUs we achieve excellent performance compared to one CPU: in single precision, around 110 Gflops/s. We also find that, on the Fermi architecture, double precision computations of the static quark-antiquark potential are not much slower than single precision computations.
Comment: 20 pages, 11 figures, 3 tables, accepted in Journal of Computational Physics
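One ingredient that makes SU(2) particularly GPU-friendly is that a group element can be stored as four real numbers, U = a_0 I + i a.sigma with a_0^2 + |a|^2 = 1, halving memory traffic relative to a 2x2 complex matrix; the plaquette trace then follows from quaternion-style multiplication. A small sketch of that representation (function names are illustrative; this is not the paper's code):

```python
import numpy as np

def su2_mul(u, v):
    """Product of SU(2) elements stored as real 4-vectors (a0, a1, a2, a3)
    for U = a0*I + i*(a . sigma).  Uses the Pauli-matrix identity
    (a.sigma)(b.sigma) = (a.b) I + i (a x b).sigma."""
    a0, a = u[0], u[1:]
    b0, b = v[0], v[1:]
    c0 = a0 * b0 - np.dot(a, b)
    c = a0 * b + b0 * a - np.cross(a, b)
    return np.concatenate(([c0], c))

def su2_dag(u):
    """Hermitian conjugate: (a0, a) -> (a0, -a)."""
    return np.concatenate(([u[0]], -u[1:]))

def half_trace_plaquette(u1, u2, u3, u4):
    """(1/2) Re Tr [U1 U2 U3^dag U4^dag]: simply the 0-component of the
    product, since (1/2) Tr U = a0 for U in SU(2)."""
    return su2_mul(su2_mul(u1, u2),
                   su2_mul(su2_dag(u3), su2_dag(u4)))[0]

def random_su2(rng):
    """Uniform random SU(2) element: a unit 4-vector."""
    a = rng.normal(size=4)
    return a / np.linalg.norm(a)

# Quick self-check: a plaquette of identical links, U U U^dag U^dag = I.
rng = np.random.default_rng(0)
u = random_su2(rng)
assert np.isclose(half_trace_plaquette(u, u, u, u), 1.0)
```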
Efficient generation and optimization of stochastic template banks by a neighboring cell algorithm
Placing signal templates (grid points) as efficiently as possible to cover a
multi-dimensional parameter space is crucial in computing-intensive
matched-filtering searches for gravitational waves, but also in similar
searches in other fields of astronomy. To generate efficient coverings of
arbitrary parameter spaces, stochastic template banks have been advocated,
where templates are placed at random while rejecting those too close to others.
However, in this simple scheme, the distance from each new random point to every template in the existing bank must be computed. This rapidly increasing number of distance computations can render the acceptance of new templates
computationally prohibitive, particularly for wide parameter spaces or in large
dimensions. This work presents a neighboring cell algorithm that can
dramatically improve the efficiency of constructing a stochastic template bank.
The parameter space is divided into sub-volumes (cells), and for an arbitrary point an efficient hashing technique yields the index of its enclosing cell along with the parameters of the neighboring templates. Hence
only distances to these neighboring templates in the bank are computed,
massively lowering the overall computing cost, as demonstrated in simple
examples. Furthermore, we propose a novel method based on this technique to
increase the fraction of covered parameter space solely by directed template
shifts, without adding any templates. As is demonstrated in examples, this
method can be highly effective.
Comment: PRD accepted
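A minimal sketch of the neighboring-cell idea follows (hypothetical function names; Euclidean distance stands in for the metric distance actually used in template-bank construction). Cubic cells of side d_min guarantee that any template closer than d_min to a proposal lies in the proposal's own cell or one of its immediate neighbors, so only 3^dim cells are ever searched instead of the whole bank:

```python
import numpy as np

def build_stochastic_bank(n_proposals, bounds, d_min, rng):
    """Stochastic template bank with a neighboring-cell search.

    Proposals are drawn uniformly in the box `bounds` (shape (dim, 2))
    and accepted if no existing template lies within distance d_min.
    Only the 3**dim cells around a proposal are searched."""
    dim = bounds.shape[0]
    cells = {}  # cell index tuple -> list of templates in that cell

    def cell_of(x):
        return tuple(((x - bounds[:, 0]) // d_min).astype(int))

    bank = []
    for _ in range(n_proposals):
        x = rng.uniform(bounds[:, 0], bounds[:, 1])
        c = np.array(cell_of(x))
        too_close = any(
            np.linalg.norm(x - t) < d_min
            for off in np.ndindex(*(3,) * dim)   # offsets over neighbors
            for t in cells.get(tuple(c + np.array(off) - 1), ())
        )
        if not too_close:
            bank.append(x)
            cells.setdefault(cell_of(x), []).append(x)
    return bank

# Hypothetical usage: cover the unit square at minimal distance 0.05.
rng = np.random.default_rng(42)
bounds = np.array([[0.0, 1.0], [0.0, 1.0]])
bank = build_stochastic_bank(20000, bounds, d_min=0.05, rng=rng)
```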