Constraining the variation of fundamental constants at z ~ 1.3 using 21-cm absorbers
We present high resolution optical spectra obtained with the Ultraviolet and
Visual Echelle Spectrograph (UVES) at the Very Large Telescope (VLT) and 21-cm
absorption spectra obtained with the Giant Metrewave Radio Telescope (GMRT) and
the Green Bank Telescope (GBT) of five quasars along whose lines of sight
21-cm absorption systems at 1.17 < z < 1.56 had previously been detected. We
also present milliarcsec scale radio images of these quasars obtained with the
Very Long Baseline Array (VLBA). We use the data on four of these systems to
constrain the time variation of x = g_p*alpha^2/mu where g_p is the proton
gyromagnetic factor, alpha is the fine structure constant, and mu is the
proton-to-electron mass ratio. We carefully evaluate the systematic
uncertainties in redshift measurements using cross-correlation analysis and
repeated Voigt profile fitting. In two cases we also confirm our results by
analysing optical spectra obtained with the Keck telescope. We find the
weighted and the simple means of Delta_x / x to be respectively -(0.1 +/-
1.3)x10^-6 and (0.0 +/- 1.5)x10^-6 at the mean redshift z = 1.36,
corresponding to a look-back time of ~ 9 Gyr. This is the most stringent
constraint ever obtained on Delta_x / x. If we only use the two systems towards
quasars unresolved at milliarcsec scales, we get the simple mean of Delta_x / x
= + (0.2 +/- 1.6)x10^-6. Assuming constancy of other constants we get
Delta_alpha / alpha = (0.0 +/- 0.8)x10^-6 which is a factor of two better than
the best constraints obtained so far using the Many Multiplet Method. On the
other hand, assuming alpha and g_p have not varied, we derive Delta_mu / mu =
(0.0 +/- 1.5)x10^-6, which is again the best limit ever obtained on the
variation of mu over this redshift range. [Abridged]
Comment: 22 pages, 15 figures, Accepted for publication in MNRA
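The weighted and simple means quoted above combine the per-system Delta_x / x measurements in the standard way (inverse-variance weights for the weighted mean). A minimal sketch of that combination, using four placeholder values that are NOT the paper's data:

```python
# Illustrative sketch of weighted vs. simple means of Delta_x/x measurements.
# The four (value, 1-sigma) pairs below are hypothetical placeholders, not the
# paper's results; only the combination formulas are standard practice.
import math

# (value, 1-sigma uncertainty) for Delta_x/x, in units of 1e-6 (hypothetical)
measurements = [(-1.2, 2.5), (0.8, 3.1), (0.3, 2.2), (-0.4, 4.0)]

def simple_mean(data):
    vals = [v for v, _ in data]
    return sum(vals) / len(vals)

def weighted_mean(data):
    # Inverse-variance weighting: w_i = 1 / sigma_i^2
    weights = [1.0 / s**2 for _, s in data]
    num = sum(w * v for (v, _), w in zip(data, weights))
    return num / sum(weights), 1.0 / math.sqrt(sum(weights))

wm, werr = weighted_mean(measurements)
print(f"simple mean   = {simple_mean(measurements):+.2f} x 1e-6")
print(f"weighted mean = {wm:+.2f} +/- {werr:.2f} x 1e-6")
```

The weighted mean's uncertainty is always smaller than the best individual measurement's, which is why the paper quotes both combinations.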
Scalable simultaneous multi-qubit readout with 99.99% single-shot fidelity
We describe single-shot readout of a trapped-ion multi-qubit register using
space and time-resolved camera detection. For a single qubit we measure
0.9(3)x10^{-4} readout error in 400us exposure time, limited by the qubit's
decay lifetime. For a four-qubit register (a "qunybble") we measure an
additional error of only 0.1(1)x10^{-4} per qubit, despite the presence of 4%
optical cross-talk between neighbouring qubits. A study of the cross-talk
indicates that the method would scale with negligible loss of fidelity to
~10000 qubits at a density <~1 qubit/um^2, with a readout time ~1us/qubit.
Comment: 4 pages, 3 figures; simulations added to fig.3, with some further
text and figure revisions. Main results unchanged
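A back-of-envelope way to see how the quoted per-qubit errors compound across a register: if each qubit's readout error is the single-qubit error (0.9x10^-4) plus the additional multi-qubit error (0.1x10^-4), and errors are assumed independent (our simplifying assumption, not a claim from the paper), the whole-register fidelity is:

```python
# Back-of-envelope compounding of per-qubit readout error across a register,
# using the error figures quoted in the abstract. The independence assumption
# between qubits is ours, made purely for illustration.
eps_single = 0.9e-4   # single-qubit readout error
eps_extra  = 0.1e-4   # additional per-qubit error in a multi-qubit register
eps_total  = eps_single + eps_extra   # per-qubit error, assumed independent

def register_fidelity(n_qubits, eps=eps_total):
    """Probability that all n qubits are read out correctly."""
    return (1.0 - eps) ** n_qubits

for n in (1, 4, 100, 10000):
    print(f"{n:>5} qubits: register fidelity = {register_fidelity(n):.4f}")
```

Under this toy model the per-qubit fidelity stays at 99.99%, while the probability of reading an entire ~10000-qubit register with no error at all decays roughly as e^{-n*eps}.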
Tensor Microwave Anisotropies from a Stochastic Magnetic Field
We derive an expression for the angular power spectrum of cosmic microwave
background anisotropies due to gravity waves generated by a stochastic magnetic
field and compare the result with current observations; we take into account
the non-linear nature of the stress energy tensor of the magnetic field.
For almost scale invariant spectra, the amplitude of the magnetic field at
galactic scales is constrained to be of order 10^{-9} Gauss. If we assume that
the magnetic field is damped below the Alfven damping scale, we find that its
amplitude at 0.1 h^{-1} Mpc, B_\lambda, is constrained to be
B_\lambda < 7.9x10^{-6} e^{3n} Gauss for n < -3/2, where n is the spectral
index of the magnetic field and H_0 = 100h km s^{-1} Mpc^{-1} is the Hubble
constant today.
Comment: 6 pages, 1 figure, accepted for publication in Phys. Rev.
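As a quick consistency check of the quoted bound B_lambda < 7.9x10^{-6} e^{3n} Gauss: evaluating it near the almost scale-invariant index n ~ -3 should reproduce the "order 10^{-9} Gauss" figure stated for galactic scales. A one-liner check (the function name is ours):

```python
# Evaluate the quoted damping-scale constraint B_lambda < 7.9e-6 * exp(3n) G
# at a couple of spectral indices; near n ~ -3 it should land at ~1e-9 Gauss,
# matching the almost scale-invariant bound stated earlier in the abstract.
import math

def b_lambda_bound(n):
    """Upper bound on B_lambda (Gauss) at 0.1 h^-1 Mpc for spectral index n."""
    return 7.9e-6 * math.exp(3.0 * n)

print(f"n = -3.0: B_lambda < {b_lambda_bound(-3.0):.2e} Gauss")
print(f"n = -1.5: B_lambda < {b_lambda_bound(-1.5):.2e} Gauss")
```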
Why do we need higher order fully exclusive Monte Carlo generator for Higgs boson production from heavy quark fusion at LHC?
In this paper we argue that having a higher-order, fully exclusive Monte
Carlo generator available for Higgs boson production from heavy quark fusion
will be mandatory for data analysis at the LHC. The H to tau tau channel, a
key for early discovery of the Higgs boson in the MSSM scenario, is discussed.
Using a simplified example with mH = 120 GeV, we show that, depending on the
choice among presently available approaches used to simulate Higgs boson
production via the b bbar H Yukawa coupling, the final acceptance for signal
events reconstructed inside the mass window may differ by a factor of 3. The
spread is
even larger (up to a factor of 10) for other production mechanisms (promising
for some regions of the MSSM parameter space). The complete analysis, which
will necessarily add stringent requirements for background rejection (such as
b-jet identification or a b-jet veto) and which will require statistical
combination of samples selected with different criteria, can only enlarge
this uncertainty.
Comment: 14 pages, 22 figure
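A purely illustrative toy (not the paper's simulation chain) of the mechanism behind the quoted factor-of-3 spread: the fraction of signal events landing inside a fixed mass window depends strongly on the shape of the reconstructed-mass distribution, which differing generator approximations can change. Here the Gaussian smearing model, the window, and both resolution values are our assumptions:

```python
# Toy illustration: acceptance inside a fixed mass window for two hypothetical
# reconstructed-mass resolutions. Gaussian smearing, the window edges, and the
# sigma values are our assumptions, chosen only to show the effect's size.
import random

random.seed(0)
M_H = 120.0               # Higgs mass in GeV, as in the abstract's example
WINDOW = (100.0, 140.0)   # illustrative mass window (our assumption)

def acceptance(sigma, n_events=100_000):
    """Fraction of Gaussian-smeared reconstructed masses inside the window."""
    inside = sum(WINDOW[0] < random.gauss(M_H, sigma) < WINDOW[1]
                 for _ in range(n_events))
    return inside / n_events

for sigma in (10.0, 30.0):  # two hypothetical resolution scenarios
    print(f"sigma = {sigma:>4} GeV: acceptance = {acceptance(sigma):.3f}")
```

Even this crude model roughly halves the acceptance between the two scenarios; full generator-level differences (spectra, radiation, kinematic migration) can widen the spread further, as the abstract reports.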
GLB: Lifeline-based Global Load Balancing library in X10
We present GLB, a programming model and an associated implementation that can
handle a wide range of irregular parallel programming problems running over
large-scale distributed systems. GLB is applicable both to problems that are
easily load-balanced via static scheduling and to problems that are hard to
statically load balance. GLB hides the intricate synchronizations (e.g.,
inter-node communication, initialization and startup, load balancing,
termination and result collection) from the users. GLB internally uses a
version of the lifeline graph based work-stealing algorithm proposed by
Saraswat et al. Users of GLB are simply required to write several pieces of
sequential code that comply with the GLB interface. GLB then schedules and
orchestrates the parallel execution of the code correctly and efficiently at
scale. We have applied GLB to two representative benchmarks: Betweenness
Centrality (BC) and Unbalanced Tree Search (UTS). Among them, BC can be
statically load-balanced whereas UTS cannot. In either case, GLB scales well
-- achieving nearly linear speedup on different computer architectures (Power,
Blue Gene/Q, and K) -- up to 16K cores.
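The lifeline scheme the abstract refers to can be sketched in a few lines: an idle worker first tries a few random steals, then falls back to a fixed "lifeline" neighbour. This is a minimal single-process Python sketch loosely inspired by the lifeline idea of Saraswat et al.; the worker class, task shape (an integer "remaining depth"), and ring lifeline topology are our simplifications, not GLB's actual X10 API:

```python
# Minimal single-process sketch of lifeline-style work stealing. Tasks are
# integers (remaining spawn depth); each processed task may spawn 0-2 children
# of one-smaller depth, giving irregular, UTS-like work. Simplified, not GLB.
from collections import deque
import random

class Worker:
    def __init__(self, wid, n_workers):
        self.wid = wid
        self.tasks = deque()                   # local task queue
        self.lifeline = (wid + 1) % n_workers  # simple ring lifeline topology
        self.processed = 0

def run(workers, steal_attempts=2):
    """Drive all workers round-robin until every queue is empty."""
    rng = random.Random(42)
    active = True
    while active:
        active = False
        for w in workers:
            if w.tasks:
                depth = w.tasks.popleft()      # process one task
                w.processed += 1
                if depth > 0:                  # irregular: spawn 0-2 children
                    for _ in range(rng.randint(0, 2)):
                        w.tasks.append(depth - 1)
                active = True
            else:
                # a few random steal attempts first, then the lifeline victim
                victims = [workers[rng.randrange(len(workers))]
                           for _ in range(steal_attempts)]
                victims.append(workers[w.lifeline])
                for v in victims:
                    if v is not w and len(v.tasks) > 1:
                        w.tasks.append(v.tasks.pop())  # steal from the tail
                        active = True
                        break
    return sum(w.processed for w in workers)

workers = [Worker(i, n_workers=4) for i in range(4)]
workers[0].tasks.extend([5, 5, 5])   # all work starts on one worker
print("tasks processed:", run(workers))
```

Stealing from the tail while the owner pops from the head is the usual deque discipline in work-stealing schedulers; in real GLB the steals cross places over the network, and termination detection is what the lifeline graph makes cheap.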
An Evaluation of the X10 Programming Language
As predicted by Moore's law, the number of transistors on a chip has doubled approximately every two years. For many years the extra transistors massively benefited the whole computer industry, because they could be used to increase CPU clock speed and thus boost performance. However, due to the heat wall and power constraints, the clock speed cannot be increased limitlessly. Hardware vendors now have to take a different path: using the transistors to increase the number of processor cores on each chip. This hardware structural change presents inevitable challenges to software, since single-thread-targeted software will not benefit from newer chips and may even suffer from lower clock speeds. The two fundamental challenges are: 1. How to deal with the stagnation of single-core clock speed and cache memory. 2. How to utilize the additional power provided by more cores on a chip. Most programming languages nowadays have distributed computing support, such as C and Java [1]. Meanwhile, some new programming languages were invented from scratch to take advantage of more distributed hardware structures. The X10 programming language is one of them. The goal of this project is to evaluate X10 in terms of performance, programmability and tool support