One-Dimensional Directed Sandpile Models and the Area under a Brownian Curve
We derive the steady state properties of a general directed "sandpile"
model in one dimension. Using a central limit theorem for dependent random
variables we find the precise conditions for the model to belong to the
universality class of the Totally Asymmetric Oslo model, thereby identifying a
large universality class of directed sandpiles. We map the avalanche size to
the area under a Brownian curve with an absorbing boundary at the origin,
motivating us to solve this Brownian curve problem. Thus, we are able to
determine the moment generating function for the avalanche-size probability in
this universality class, explicitly calculating amplitudes of the leading order
terms.
Comment: 24 pages, 5 figures
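The mapping described above can be sketched with a toy Monte Carlo (an illustration of the correspondence only, not the paper's exact solution): sample the area under a Brownian path started above an absorbing boundary at the origin. The starting point, step size, sample count, and the truncation at `max_steps` (needed because the first-passage time is heavy tailed) are all made-up illustration values.

```python
import random

rng = random.Random(0)

def area_until_absorption(x0=1.0, dt=0.01, max_steps=100_000):
    """Area under one Brownian path started at x0 > 0, accumulated until the
    path first hits the absorbing boundary at 0 (or max_steps is reached)."""
    x, area = x0, 0.0
    for _ in range(max_steps):
        if x <= 0.0:
            break
        area += x * dt                   # rectangle-rule area increment
        x += rng.gauss(0.0, dt ** 0.5)   # Brownian increment with variance dt
    return area

# Crude Monte Carlo sample of the absorbed-area distribution
samples = [area_until_absorption() for _ in range(100)]
mean_area = sum(samples) / len(samples)
```

The avalanche-size distribution of the universality class corresponds to the distribution of this area; the paper obtains its moment generating function in closed form rather than by sampling.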
Critical behavior of the Random-Field Ising Magnet with long range correlated disorder
We study the correlated-disorder driven zero-temperature phase transition of
the Random-Field Ising Magnet using exact numerical ground-state calculations
for cubic lattices. We consider correlations of the quenched disorder decaying
proportionally to r^a, where r is the distance between two lattice sites and a<0.
To obtain exact ground states, we use a well established mapping to the
graph-theoretical maximum-flow problem, which allows us to study large system
sizes of more than two million spins. We use finite-size scaling analyses for
values a={-1,-2,-3,-7} to calculate the critical point and the critical
exponents characterizing the behavior of the specific heat, magnetization,
susceptibility and of the correlation length close to the critical point. We
find basically the same critical behavior as for the RFIM with delta-correlated
disorder, except for the finite-size exponent of the susceptibility and for the
case a=-1, where the results are also compatible with a phase transition at
infinitesimal disorder strength.
A summary of this work can be found at the papercore database at
www.papercore.org.
Comment: 9 pages, 13 figures
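The optimization problem behind the ground-state calculation can be illustrated on a toy scale (a sketch only: the paper's max-flow mapping finds the same minimizer but scales to millions of spins, and it uses power-law correlated rather than the delta-correlated fields drawn here). For a 2x2x2 cube we can simply brute-force the minimum of the random-field Ising Hamiltonian H = -J sum_<ij> s_i s_j - sum_i h_i s_i:

```python
import itertools
import random

rng = random.Random(1)
L = 2                                    # 2x2x2 toy cube, 8 spins
sites = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]
# Nearest-neighbour bonds (open boundaries), each pair counted once
bonds = [(s, t) for s in sites for t in sites
         if s < t and sum(abs(a - b) for a, b in zip(s, t)) == 1]
# Delta-correlated Gaussian random fields (a simplification of the
# long-range correlated disorder studied in the paper)
h = {s: rng.gauss(0.0, 1.0) for s in sites}

def energy(spins):
    """Random-field Ising energy for a spin configuration {site: +-1}."""
    J = 1.0
    exchange = -J * sum(spins[a] * spins[b] for a, b in bonds)
    field = -sum(h[s] * spins[s] for s in sites)
    return exchange + field

# Exhaustive search over all 2^8 configurations for the exact ground state
ground = min(
    (dict(zip(sites, conf))
     for conf in itertools.product((-1, 1), repeat=len(sites))),
    key=energy,
)
```

The graph-cut formulation replaces this exponential search with a polynomial-time maximum-flow computation, which is what makes systems of more than two million spins tractable.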
One-dimensional infinite component vector spin glass with long-range interactions
We investigate zero and finite temperature properties of the one-dimensional
spin-glass model for vector spins in the limit of an infinite number m of spin
components where the interactions decay with a power, \sigma, of the distance.
A diluted version of this model is also studied, but is found to deviate
significantly from the fully connected model. At zero temperature, defect
energies are determined from the difference in ground-state energies between
systems with periodic and antiperiodic boundary conditions to determine the
dependence of the defect-energy exponent \theta on \sigma. A good fit to this
dependence is \theta =3/4-\sigma. This implies that the upper critical value of
\sigma is 3/4, corresponding to the lower critical dimension in the
d-dimensional short-range version of the model. For finite temperatures the
large m saddle-point equations are solved self-consistently, which gives access
to the correlation function, the order parameter and the spin-glass
susceptibility. Special attention is paid to the different forms of finite-size
scaling effects below and above the lower critical value, \sigma =5/8, which
corresponds to the upper critical dimension 8 of the hypercubic short-range
model.
Comment: 27 pages, 27 figures, 4 tables
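The defect-energy exponent quoted above comes from a power-law fit of the kind sketched below (synthetic, noiseless data with made-up amplitude; shown only to make the fitting step concrete). With sigma = 0.5, the quoted relation theta = 3/4 - sigma gives theta = 0.25, and a log-log least-squares fit of Delta E(L) ~ L^theta recovers it:

```python
import math

# Synthetic defect energies Delta E(L) = amp * L**theta_true (illustration
# values, not data from the paper)
theta_true, amp = 0.25, 1.3          # e.g. sigma = 0.5 -> theta = 3/4 - sigma
sizes = [16, 32, 64, 128, 256]
dE = [amp * L ** theta_true for L in sizes]

# Ordinary least squares on the log-log scale: slope = exponent theta
xs = [math.log(L) for L in sizes]
ys = [math.log(e) for e in dE]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
theta_fit = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
```

On real defect-energy data the fit additionally carries statistical errors and finite-size corrections, which is where the quality of the theta = 3/4 - sigma description is judged.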
The Generic, Incommensurate Transition in the two-dimensional Boson Hubbard Model
The generic transition in the boson Hubbard model, occurring at an
incommensurate chemical potential, is studied in the link-current
representation using the recently developed directed geometrical worm
algorithm. We find clear evidence for a multi-peak structure in the energy
distribution for finite lattices, usually indicative of a first order phase
transition. However, this multi-peak structure is shown to disappear in the
thermodynamic limit revealing that the true phase transition is second order.
These findings cast doubt on the conclusions drawn in a number of previous
works concerning the relevance of disorder at this transition.
Comment: 13 pages, 10 figures
Lattice QCD study of a five-quark hadronic molecule
We compute the ground-state energies of a heavy-light K-Lambda like system as
a function of the relative distance r of the hadrons. The heavy quarks, one in
each hadron, are treated as static. Then, the energies give rise to an
adiabatic potential Va(r) which we use to study the structure of the five-quark
system. The simulation is based on an anisotropic and asymmetric lattice with
Wilson fermions. Energies are extracted from spectral density functions
obtained with the maximum entropy method. Our results are meant to give
qualitative insight: Using the resulting adiabatic potential in a Schroedinger
equation produces bound state wave functions which indicate that the ground
state of the five-quark system resembles a hadronic molecule, whereas the first
excited state, having a very small rms radius, is probably better described as
a five-quark cluster, or a pentaquark. We hypothesize that an all light-quark
pentaquark may not exist, but in the heavy-quark sector it might, albeit only
as an excited state.
Comment: 11 pages, 15 figures, 4 tables
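The bound-state step described above can be sketched numerically (an illustration only: the toy attractive square well below stands in for the lattice-extracted adiabatic potential Va(r), and the depth V0, range a, and units hbar = 2m = 1 are all made-up assumptions). A simple shooting method integrates the radial equation outward and bisects on the energy until the wave function decays at large r:

```python
# Toy radial s-wave problem: u'' = (V(r) - E) u with V(r) = -V0 for r < a,
# 0 otherwise; u(0) = 0. The ground-state energy E0 is where u(R) -> 0.
V0, a, R, dr = 10.0, 1.0, 6.0, 1e-3

def u_at_R(E):
    """Semi-implicit Euler integration of u from r = 0 with u'(0) = 1."""
    u, du, r = 0.0, 1.0, 0.0
    while r < R:
        V = -V0 if r < a else 0.0
        du += (V - E) * u * dr
        u += du * dr
        r += dr
    return u

# u(R) changes sign as E crosses the (single) bound-state energy, so bisect.
lo, hi = -V0 + 1e-6, -1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if u_at_R(lo) * u_at_R(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
```

Feeding the lattice potential Va(r) through the same kind of solver is what yields the bound-state wave functions whose rms radii distinguish the molecule-like ground state from the compact excited state.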
On the future of astrostatistics: statistical foundations and statistical practice
This paper summarizes a presentation for a panel discussion on "The Future of
Astrostatistics" held at the Statistical Challenges in Modern Astronomy V
conference at Pennsylvania State University in June 2011. I argue that the
emerging needs of astrostatistics may both motivate and benefit from
fundamental developments in statistics. I highlight some recent work within
statistics on fundamental topics relevant to astrostatistical practice,
including the Bayesian/frequentist debate (and ideas for a synthesis),
multilevel models, and multiple testing. As an important direction for future
work in statistics, I emphasize that astronomers need a statistical framework
that explicitly supports unfolding chains of discovery, with acquisition,
cataloging, and modeling of data not seen as isolated tasks, but rather as
parts of an ongoing, integrated sequence of analyses, with information and
uncertainty propagating forward and backward through the chain. A prototypical
example is surveying of astronomical populations, where source detection,
demographic modeling, and the design of survey instruments and strategies all
interact.
Comment: 8 pp, 2 figures. To appear in "Statistical Challenges in Modern
Astronomy V" (Lecture Notes in Statistics, Vol. 209), ed. Eric D. Feigelson
and G. Jogesh Babu; publication planned for Sep 2012; see
http://www.springer.com/statistics/book/978-1-4614-3519-
Resampling-based confidence regions and multiple tests for a correlated random vector
We derive non-asymptotic confidence regions for the mean of a random vector
whose coordinates have an unknown dependence structure. The random vector is
supposed to be either Gaussian or to have a symmetric bounded distribution, and
we observe i.i.d. copies of it. The confidence regions are built using a
data-dependent threshold based on a weighted bootstrap procedure. We consider
two approaches, the first based on a concentration approach and the second on
a direct bootstrapped quantile approach. The first allows us to handle a very
large class of resampling weights, while our results for the second are
restricted to Rademacher weights. However, the second method seems more
accurate in practice. Our results are motivated by multiple testing problems,
and we show in simulations that our procedures outperform the Bonferroni
procedure (union bound) as soon as the observed vector has sufficiently
correlated coordinates.
Comment: submitted to COL
Error estimation and reduction with cross correlations
Besides the well-known effect of autocorrelations in time series of Monte
Carlo simulation data resulting from the underlying Markov process, using the
same data pool for computing various estimates entails additional cross
correlations. This effect, if not properly taken into account, leads to
systematically wrong error estimates for combined quantities. Using a
straightforward recipe of data analysis employing the jackknife or similar
resampling techniques, such problems can be avoided. In addition, a covariance
analysis allows for the formulation of optimal estimators with often
significantly reduced variance as compared to more conventional averages.
Comment: 16 pages, RevTEX4, 4 figures, 6 tables, published version
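The jackknife recipe for a combined quantity can be sketched as follows (synthetic data with made-up parameters, not simulation data from the paper): the error of a ratio of two means computed from the same data pool. Because numerator and denominator are strongly cross-correlated, naive error propagation that treats them as independent overestimates the uncertainty, while the jackknife picks up the covariance automatically:

```python
import math
import random

rng = random.Random(3)
n = 200
xs = [rng.gauss(2.0, 0.3) for _ in range(n)]
ys = [x + rng.gauss(0.0, 0.1) for x in xs]   # strongly cross-correlated

def ratio(a, b):
    return (sum(a) / len(a)) / (sum(b) / len(b))

# Leave-one-out jackknife: the spread of the n reduced estimates gives the
# error of the combined quantity, cross correlations included.
jack = [ratio(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]) for i in range(n)]
jmean = sum(jack) / n
jack_err = math.sqrt((n - 1) / n * sum((j - jmean) ** 2 for j in jack))

def rel_sd(a):
    m = sum(a) / len(a)
    return math.sqrt(sum((v - m) ** 2 for v in a) / (len(a) - 1)) / m

# Naive propagation assuming independent numerator and denominator
naive_err = ratio(xs, ys) * math.sqrt(rel_sd(xs) ** 2
                                      + rel_sd(ys) ** 2) / math.sqrt(n)
```

For Markov-chain data the same recipe is applied to suitably blocked samples so that autocorrelations are absorbed as well.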
Analyzing 2D gel images using a two-component empirical bayes model
Background: Two-dimensional polyacrylamide gel electrophoresis (2D gel, 2D PAGE, 2-DE) is a powerful tool for analyzing the proteome of an organism. Differential analysis of 2D gel images aims at finding proteins that change under different conditions, which leads to large-scale hypothesis testing as in microarray data analysis. Two-component empirical Bayes (EB) models have been widely discussed for large-scale hypothesis testing and applied in the context of genomic data, but they have not been implemented for the differential analysis of 2D gel data. In the literature, the mixture and null densities of the test statistics are estimated separately, and the estimation of the mixture density does not take into account assumptions about the null density. Thus, there is no guarantee that the estimated null component will be no greater than the mixture density, as it should be.
Results: We present an implementation of a two-component EB model for the analysis of 2D gel images. In contrast to the published estimation method, we propose to estimate the mixture and null densities simultaneously using a constrained estimation approach, which relies on an iteratively re-weighted least-squares algorithm. The assumption about the null density is naturally taken into account in the estimation of the mixture density. This strategy is illustrated using a set of 2D gel images from a factorial experiment, and the proposed approach is validated using a set of simulated gels.
Conclusions: The two-component EB model is very useful for large-scale hypothesis testing. In proteomic analysis, the theoretical null density is often not appropriate. We demonstrate how to implement a two-component EB model for analyzing a set of 2D gel images and show that it is necessary to estimate the mixture density and the empirical null component simultaneously. The proposed constrained estimation method always yields valid estimates and more stable results, and it can be applied to other contexts where large-scale hypothesis testing occurs.
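The quantity the two-component fit ultimately delivers can be illustrated with known densities (a toy sketch, not the paper's constrained estimator: the null proportion p0 and the alternative component below are made-up values). The model writes the marginal density of a spot's test statistic as f(z) = p0 f0(z) + (1 - p0) f1(z), and the local false discovery rate p0 f0(z) / f(z) is what the constraint p0 f0 <= f keeps valid, i.e. at most 1:

```python
from statistics import NormalDist

p0 = 0.9                          # assumed proportion of unchanged proteins
f0 = NormalDist(0.0, 1.0)         # null component (here taken as known)
f1 = NormalDist(3.0, 1.0)         # "changed protein" component (illustration)

def local_fdr(z):
    """Local false discovery rate p0 * f0(z) / f(z) for the two-component
    mixture f = p0*f0 + (1-p0)*f1; the constraint p0*f0 <= f keeps it <= 1."""
    mix = p0 * f0.pdf(z) + (1 - p0) * f1.pdf(z)
    return p0 * f0.pdf(z) / mix
```

Spots with small statistics get a local fdr near 1 (consistent with the null), while spots far in the tail get a small local fdr and are flagged as differential; the paper's contribution is estimating f0, f, and p0 jointly so these values remain valid by construction.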
Entanglement of Trapped-Ion Clock States
A Mølmer-Sørensen entangling gate is realized for pairs of trapped
Cd ions using magnetic-field insensitive "clock" states and an
implementation offering reduced sensitivity to optical phase drifts. The gate
is used to generate the complete set of four entangled states, which are
reconstructed and evaluated with quantum-state tomography. An average
target-state fidelity of 0.79 is achieved, limited by available laser power and
technical noise. The tomographic reconstruction of entangled states
demonstrates universal quantum control of two ion-qubits, which through
multiplexing can provide a route to scalable architectures for trapped-ion
quantum computing.
Comment: 6 pages, 5 figures
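The fidelity figure of merit quoted above is F = <psi|rho|psi> for the tomographically reconstructed density matrix rho against the target entangled state. A minimal sketch of the computation, using a made-up noisy two-qubit state rather than the experimental data:

```python
import math

dim = 4
# Target Bell state (|00> + |11>) / sqrt(2), real amplitudes
psi = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

# Illustrative reconstructed state: 85% target state mixed with 15% of the
# completely mixed state I/4 (made-up numbers, not the measured rho)
p = 0.85
rho = [[p * psi[i] * psi[j] + (0.15 / dim if i == j else 0.0)
        for j in range(dim)] for i in range(dim)]

# Target-state fidelity F = <psi| rho |psi>
fidelity = sum(psi[i] * rho[i][j] * psi[j]
               for i in range(dim) for j in range(dim))
```

For this mixture the fidelity evaluates to p + 0.15/4 = 0.8875; the experiment's average of 0.79 is the analogous number for the four reconstructed entangled states, limited by laser power and technical noise.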