Evidence for hard chiral logarithms in quenched lattice QCD
We present the first direct evidence that quenched QCD differs from full QCD in the chiral limit, as predicted by chiral perturbation theory, from our quenched lattice QCD simulations. We measured the spectrum of light hadrons on several spatial lattice volumes, using staggered quarks at several masses. The pion masses showed clear evidence for logarithmic violations of the PCAC relation, as predicted by quenched chiral perturbation theory. The dependence on spatial lattice volume precludes this being a finite-size effect. No evidence was seen for such chiral logarithms in the behaviour of the chiral condensate.
Comment: 10 pages, 4 figures, uuencoded compressed postscript file
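As context for the prediction invoked above, a sketch of the standard quenched chiral perturbation theory result (the symbols $A$, $\delta$, and $\Lambda_\chi$ are generic notation for this sketch, not taken from this abstract):

```latex
% Ordinary PCAC: the squared pion mass vanishes linearly with the quark mass,
\[ m_\pi^2 = A\, m_q . \]
% In the quenched theory the \eta' loop is not suppressed, producing a "hard"
% chiral logarithm, so that m_\pi^2 / m_q diverges logarithmically as m_q \to 0
% instead of approaching a constant:
\[ m_\pi^2 = A\, m_q \left[ 1 - \delta \ln \frac{m_\pi^2}{\Lambda_\chi^2} + \cdots \right] . \]
```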
Monte Carlo tomographic reconstruction in SPECT: impact of bootstrapping and number of generated events
In Single Photon Emission Computed Tomography (SPECT), 3D images, usually reconstructed by performing a set of bidimensional (2D) analytical or iterative reconstructions, can also be reconstructed using an iterative reconstruction algorithm involving a fully 3D projector. Accurate Monte Carlo (MC) simulations
modeling all the physical effects that affect the imaging process can be used
to estimate this projector. However, the accuracy of the projector is affected
by the stochastic nature of MC simulations. In this paper, we study the
accuracy of the reconstructed images with respect to the number of simulated
histories used to estimate the MC projector. Furthermore, we study the impact
of applying the bootstrapping technique when estimating the projector.
Comment: 15 pages, 9 figures, 2 tables
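As an illustration of the kind of procedure studied above, here is a minimal bootstrap sketch of a Monte Carlo projector estimate (all array sizes, variable names, and the toy geometry are invented for illustration; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each simulated photon history starts in one of
# n_vox source voxels and is detected in one of n_bin projection bins.
n_vox, n_bin, n_hist = 4, 6, 20_000
true_p = rng.dirichlet(np.ones(n_bin), size=n_vox)  # true detection probabilities

# Simulate MC histories: counts[v, b] = photons emitted in voxel v, detected in bin b.
counts = np.stack([rng.multinomial(n_hist, true_p[v]) for v in range(n_vox)])

# MC estimate of the projector: row-normalized counts.
projector = counts / counts.sum(axis=1, keepdims=True)

# Parametric bootstrap: redraw each voxel's detected events from the estimated
# projector row, to gauge the statistical uncertainty of each projector element.
n_boot = 200
boots = np.empty((n_boot, n_vox, n_bin))
for i in range(n_boot):
    for v in range(n_vox):
        resampled = rng.multinomial(counts[v].sum(), projector[v])
        boots[i, v] = resampled / resampled.sum()
proj_std = boots.std(axis=0)  # element-wise bootstrap standard error

print(projector.shape, proj_std.mean())
```

Raising `n_hist` shrinks `proj_std`, which is exactly the trade-off between the number of generated events and projector accuracy that the abstract describes.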
Effects of spatial size, lattice doubling and source operator on the hadron spectrum with dynamical staggered quarks
We have extended our previous study of the lattice QCD spectrum with 2 flavors of staggered dynamical quarks, at quark masses of 0.01 and a second, larger value, to larger lattices, with better statistics and with additional sources for the propagators. The additional sources allowed us to estimate further hadron masses and to measure the masses of all mesons whose operators are local in time. These mesons show good evidence for flavor symmetry restoration, except for the masses of the Goldstone and non-Goldstone pions. PCAC is observed in that the squared pion mass is proportional to the quark mass, and the proportionality constant is estimated. Use of undoubled lattices removes problems with the pion propagator found in our earlier work. Previously we found a large change in the nucleon mass at the smaller quark mass when we increased the spatial size from 12 to 16. No such effect is observed at the larger quark mass. Two kinds of wall source were used, and we have found difficulties in getting consistent results for the nucleon mass between the two sources.
Comment: 30 pages, PostScript file
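Hadron masses such as those discussed above are conventionally extracted from the exponential fall-off of two-point correlation functions. A minimal sketch on synthetic data (the correlator model and all numbers are illustrative, not from the paper):

```python
import numpy as np

# Synthetic two-point correlator C(t) ~ A * exp(-m t) on n_t time slices,
# a toy stand-in for a measured hadron propagator (A and m are illustrative).
n_t, A, m = 16, 2.5, 0.8
t = np.arange(n_t)
corr = A * np.exp(-m * t)

# Effective mass: m_eff(t) = ln( C(t) / C(t+1) ).
# For a single decaying exponential this plateaus at the hadron mass m;
# on real (noisy, periodic) lattices one fits the plateau region instead.
m_eff = np.log(corr[:-1] / corr[1:])

print(m_eff[:4])
```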
Hadron Spectrum in QCD with Valence Wilson Fermions and Dynamical Staggered Fermions at $6/g^2=5.6$
We present an analysis of hadronic spectroscopy for Wilson valence quarks with dynamical staggered fermions at lattice coupling $6/g^2=5.6$, at two sea quark masses (one of them 0.025), and of Wilson valence quarks in the quenched approximation at two couplings (one of them 5.95), both on lattices of the same size. We make comparisons with our previous results with dynamical staggered fermions at the same parameter values but on lattices doubled in the temporal direction.
Comment: 32 pages
A Study of the Nambu--Jona-Lasinio Model on the Lattice
We present our full analysis of the two-flavor Nambu--Jona-Lasinio model with chiral symmetry on the four-dimensional hypercubic lattice with naive and Wilson fermions. We find that this model is an excellent toy field theory for investigating issues related to lattice QCD. We use the large-$N$ approximation to leading order in $1/N$ to obtain non-perturbative analytical results over almost the whole parameter range. Using numerical simulations we estimate that the size of the $1/N$ corrections for most of the quantities we consider is small, and in this way we strengthen the validity of the leading-order large-$N$ calculations. We obtain results regarding the approach to the continuum chiral limit, the effects of the zero-momentum fermionic modes on finite lattices, and the scalar and pseudoscalar spectrum.
Note: The full ps file of this preprint is also available via anonymous ftp to ftp.scri.fsu.edu. To get the ps file, ftp to this address and use for username "anonymous" and for password your complete E-mail address. The file is in the directory pub/vranas (to go to that directory type: cd pub/vranas) and is called NJL_long.ps (to get it type: get NJL_long.ps)
Comment: 35 pages, LaTeX file. (Added section with title: "The zero pion mass line on a finite lattice at large $N$".)
QCD thermodynamics with two flavors of Wilson quarks at N_t=6
We report on a study of hadron thermodynamics with two flavors of Wilson
quarks on 12^3x6 lattices. We have studied the crossover between the high and
low temperature regimes for three values of the hopping parameter, kappa=0.16,
0.17, and 0.18. At each of these values of kappa we have carried out spectrum
calculations on 12^3x24 lattices for two values of the gauge coupling in the
vicinity of the crossover in order to set an energy scale for our
thermodynamics calculations and to determine the critical value of the gauge
coupling for which the pion and quark masses vanish. For kappa=0.17 and 0.18 we find coexistence between the high and low temperature regimes over 1,000 simulation time units, indicating either that the equilibration time is extremely long or that there is a possibility of a first order phase transition. The pion mass is large at the crossover values of the gauge coupling, but the crossover curve has moved closer to the critical curve, along which the pion and quark masses vanish, than it was on lattices with four time slices. In addition, values of the dimensionless quantity T_c/m_rho are in closer agreement with those for staggered quarks than was the case at N_t=4. (A POSTSCRIPT VERSION OF THIS PAPER IS AVAILABLE BY ANONYMOUS FTP FROM sarek.physics.ucsb.edu (128.111.8.250) IN THE FILE pub/wilson_thermo.ps)
Comment: 24 pages
Effect of noise and modeling errors on the reliability of fully 3D Monte Carlo reconstruction in SPECT
We recently demonstrated the value of reconstructing SPECT data with fully 3D
Monte Carlo reconstruction (F3DMC), in terms of spatial resolution and
quantification. This was shown on a small cubic phantom (64 projections of 10 x 10 pixels) in some idealized configurations. The goals of the present study were to
assess the effect of noise and modeling errors on the reliability of F3DMC, to
propose and evaluate strategies for reducing the noise in the projector, and to
demonstrate the feasibility of F3DMC for a dataset with realistic dimensions. A
small cubic phantom and a realistic Jaszczak phantom dataset were considered.
Projections and projectors for both phantoms were calculated using the Monte
Carlo simulation code GATE. Projectors with different statistics were
considered and two methods for reducing noise in the projector were
investigated: one based on principal component analysis (PCA) and the other
consisting of setting small probability values to zero. Energy and spatial
shifts in projection sampling with respect to projector sampling were also
introduced to test F3DMC in realistic conditions. Experiments with the cubic
phantom showed the importance of using simulations with high statistics for
calculating the projector, and the value of filtering the projector using a PCA
approach. F3DMC was shown to be robust with respect to energy shift and small
spatial sampling offset between the projector and the projections. Images of
the Jaszczak phantom were successfully reconstructed and also showed promising
results in terms of spatial resolution recovery and quantitative accuracy in
small structures. It is concluded that the promising results of F3DMC hold on realistic data sets.
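The two noise-reduction strategies described above can be sketched as follows (the matrix sizes, noise level, and variable names are illustrative assumptions, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy low-rank "projector" plus MC-like noise (sizes are illustrative).
n_bin, n_vox, rank = 40, 30, 3
clean = rng.random((n_bin, rank)) @ rng.random((rank, n_vox))
noisy = clean + 0.05 * rng.standard_normal((n_bin, n_vox))

# Strategy 1 -- PCA-style filtering via truncated SVD: keep only the leading
# components, discarding small singular values dominated by noise.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 3
filtered = (U[:, :k] * s[:k]) @ Vt[:k]

# Strategy 2 -- thresholding: set small (noise-level) probability values to zero.
thresholded = np.where(noisy < 0.05, 0.0, noisy)

err_noisy = np.linalg.norm(noisy - clean)
err_filt = np.linalg.norm(filtered - clean)
print(err_filt < err_noisy)
```

Because the noise is spread across all components while the signal lives in a few, the truncated reconstruction is closer to the clean matrix than the raw noisy estimate.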
Efficient distributed machine learning via combinatorial multi-armed bandits
We consider the distributed stochastic gradient descent problem, where a main node distributes gradient calculations among n workers, of which at most b ≤ n can be utilized in parallel. By assigning tasks to all the workers and waiting only for the k fastest ones, the main node can trade off the error of the algorithm against its runtime by gradually increasing k as the algorithm evolves. However, this strategy, referred to as adaptive k-sync, can incur additional costs since it ignores the computational efforts of slow workers. We propose a cost-efficient scheme that assigns tasks only to k workers and gradually increases k. As the response times of the available workers are unknown to the main node a priori, we utilize a combinatorial multi-armed bandit model to learn which workers are the fastest while assigning gradient calculations, and to minimize the effect of slow workers. Assuming that the mean response times of the workers are independent and exponentially distributed with different means, we give empirical and theoretical guarantees on the regret of our strategy, i.e., the extra time spent to learn the mean response times of the workers. Compared to adaptive k-sync, our scheme achieves significantly lower errors with the same computational efforts while being inferior in terms of speed.
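A minimal sketch of the kind of confidence-bound bandit strategy described above, for learning the k fastest of n workers with exponentially distributed response times (all parameter values and names are illustrative assumptions; this is not the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: n workers with exponential response times of unknown means;
# each round the main node assigns the gradient task to k of them.
n, k, rounds = 10, 3, 500
true_means = rng.uniform(0.5, 3.0, size=n)

counts = np.zeros(n)  # times each worker was assigned a task
totals = np.zeros(n)  # total observed response time per worker

for t in range(1, rounds + 1):
    # Lower-confidence-bound score for a minimization problem: prefer workers
    # that look fast, with a bonus encouraging exploration of rarely used ones.
    means = totals / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    score = np.where(counts == 0, -np.inf, means - bonus)  # try unseen workers first
    chosen = np.argsort(score)[:k]                         # k smallest scores

    obs = rng.exponential(true_means[chosen])  # observed response times this round
    counts[chosen] += 1
    totals[chosen] += obs

# After enough rounds, assignments concentrate on the truly fastest workers,
# and the time wasted on slow workers (the regret) stops growing quickly.
fastest = set(np.argsort(true_means)[:k])
most_used = set(np.argsort(-counts)[:k])
print(fastest, most_used)
```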