Energy-time entanglement, Elements of Reality, and Local Realism
The Franson interferometer, proposed in 1989 [J. D. Franson, Phys. Rev. Lett.
62:2205-2208 (1989)], beautifully shows the counter-intuitive nature of light.
The quantum description predicts sinusoidal interference for specific outcomes
of the experiment, and these predictions can be verified in experiment. In the
spirit of Einstein, Podolsky, and Rosen it is possible to ask if the
quantum-mechanical description (of this setup) can be considered complete. This
question will be answered in detail in this paper, by delineating the quite
complicated relation between energy-time entanglement experiments and
Einstein-Podolsky-Rosen (EPR) elements of reality. The mentioned sinusoidal
interference pattern is the same as that giving a violation in the usual Bell
experiment. Even so, depending on the precise requirements made on the local
realist model, this can imply a) no violation, b) smaller violation than usual,
or c) full violation of the appropriate statistical bound. Alternatives include
a) using only the measurement outcomes as EPR elements of reality, b) using the
emission time as EPR element of reality, c) using path realism, or d) using a
modified setup. This paper discusses the nature of these alternatives and how
to choose between them. The subtleties of this discussion needs to be taken
into account when designing and setting up experiments intended to test local
realism. Furthermore, these considerations are also important for quantum
communication, for example in Bell-inequality-based quantum cryptography,
especially when aiming for device independence.Comment: 18 pages, 7 figures, v2 rewritten and extende
Necessary and sufficient detection efficiency for the Mermin inequalities
We prove that the threshold detection efficiency for a loophole-free Bell
experiment using an n-qubit Greenberger-Horne-Zeilinger state and the
correlations appearing in the n-partite Mermin inequality is n/(2n-2). If
the detection efficiency is equal to or lower than this value, there are local
hidden variable models that can simulate all the quantum predictions. If the
detection efficiency is above this value, there is no local hidden variable
model that can simulate all the quantum predictions.

Comment: REVTeX4, 5 pages, 1 figure
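Taking the paper's threshold for the n-partite Mermin inequality to be n/(2n-2) (quoted from memory; verify against the published version), its behaviour can be tabulated directly:

```python
from fractions import Fraction

def mermin_threshold(n: int) -> Fraction:
    """Critical detection efficiency for the n-partite Mermin inequality
    with an n-qubit GHZ state (formula quoted from memory: n/(2n-2))."""
    if n < 2:
        raise ValueError("need at least two parties")
    return Fraction(n, 2 * n - 2)

for n in (3, 4, 5, 10):
    print(n, mermin_threshold(n), float(mermin_threshold(n)))
# The threshold decreases with n and approaches 1/2 for many parties.
```

For three parties the threshold is 3/4, and adding parties pushes it down towards 1/2, which is why multipartite GHZ tests are attractive for closing the detection loophole.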
Minimum detection efficiency for a loophole-free atom-photon Bell experiment
In Bell experiments, one problem is to achieve high enough photodetection to
ensure that there is no possibility of describing the results via a local
hidden-variable model. Using the Clauser-Horne inequality and a two-photon
non-maximally entangled state, a photodetection efficiency higher than 0.67 is
necessary. Here we discuss atom-photon Bell experiments. We show that, assuming
perfect detection efficiency of the atom, it is possible to perform a
loophole-free atom-photon Bell experiment whenever the photodetection
efficiency exceeds 0.50.

Comment: REVTeX4, 4 pages, 1 figure
Optimal measurement bases for Bell-tests based on the CH-inequality
The Hardy test of nonlocality can be seen as a particular case of the Bell
tests based on the Clauser-Horne (CH) inequality. Here we stress this
connection when we analyze the relation between the CH-inequality violation,
its threshold detection efficiency, and the measurement settings adopted in the
test. It is well known that the threshold efficiencies decrease when one
considers partially entangled states and that the use of these states,
unfortunately, generates a reduction in the CH violation. Nevertheless, these
quantities are both dependent on the measurement settings considered, and in
this paper we show that there are measurement bases which allow for an optimal
situation in this trade-off relation. These bases are given as a generalization
of the Hardy measurement bases, and they will be relevant for future Bell tests
relying on pairs of entangled qubits.

Comment: 8 pages, 6 figures
Qubits from Number States and Bell Inequalities for Number Measurements
Bell inequalities for number measurements are derived via the observation
that the bits of the number indexing a number state are proper qubits.
Violations of these inequalities are obtained from the output state of the
nondegenerate optical parametric amplifier.

Comment: REVTeX4, 7 pages, v2: results identical but extended presentation,
v3: published version
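The core observation, that the binary digits of a number-state index can serve as qubit values, can be illustrated with a purely classical bit decomposition (the helper below is illustrative only and does not follow the paper's notation):

```python
def number_state_bits(n: int, width: int) -> list[int]:
    """Binary digits of the number-state index n, least significant first.
    Each bit plays the role of one 'qubit' in the construction
    (illustrative decomposition only)."""
    if n < 0 or n >= 2 ** width:
        raise ValueError("index does not fit in the given number of bits")
    return [(n >> k) & 1 for k in range(width)]

# |13> on four 'bit qubits': 13 = 1 + 4 + 8 -> bits [1, 0, 1, 1] (LSB first)
print(number_state_bits(13, 4))
```

Measuring the photon number then amounts to measuring all of these bit qubits at once, which is what allows Bell inequalities to be formulated for number measurements.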
O18O and C18O observations of rho Oph A
Observations of the (N_J=1_1-1_0) ground state transition of O_2 with the
Odin satellite resulted in an approximately 5 sigma detection toward the dense core rho
Oph A. At the frequency of the line, 119 GHz, the Odin telescope has a beam
width of 10', larger than the size of the dense core, so that the precise
nature of the emitting source and its exact location and extent are unknown.
The current investigation is intended to remedy this. Telluric absorption makes
ground based O_2 observations essentially impossible and observations had to be
done from space. Millimetre-wave telescopes on space platforms were necessarily small,
which resulted in large, several arcminutes wide, beam patterns. Although the
Earth's atmosphere is entirely opaque to low-lying O_2 transitions, it allows
ground based observations of the much rarer O18O in favourable conditions and
at much higher angular resolution with larger telescopes. In addition, rho Oph
A exhibits both multiple radial velocity systems and considerable velocity
gradients. Extensive mapping of the region in the proxy C18O (J=3-2) line can
be expected to help identify the O_2 source on the basis of its line shape and
Doppler velocity. Line opacities were determined from observations of optically
thin 13C18O (J=3-2) at selected positions. During several observing periods,
two C18O intensity maxima in rho Oph A were searched for in the 16O18O
(2_1-0_1) line at 234 GHz with the 12m APEX telescope. Our observations
resulted in an upper limit on the integrated O18O intensity of < 0.01 K km/s (3
sigma) into the 26.5" beam. We conclude that the source of observed O_2
emission is most likely confined to the central regions of the rho Oph A cloud.
In this limited area, implied O_2 abundances could thus be higher than
previously reported, by up to two orders of magnitude.

Comment: 7 pages, 6 figures (5 colour), Astronomy & Astrophysics
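The beam widths quoted in the abstract are consistent with a simple diffraction estimate, theta ~ 1.22 lambda/D. The sketch below assumes a 1.1 m dish for Odin (an assumption for illustration; only the 12 m APEX diameter is stated in the text):

```python
from math import pi

C = 2.998e8  # speed of light, m/s

def beam_fwhm_rad(freq_hz: float, dish_m: float) -> float:
    """Diffraction-limited beam width, theta ~ 1.22 * lambda / D."""
    return 1.22 * (C / freq_hz) / dish_m

RAD_TO_ARCSEC = 180 / pi * 3600

# Odin at 119 GHz (assumed 1.1 m dish) -> roughly 10 arcmin, as quoted
odin_arcmin = beam_fwhm_rad(119e9, 1.1) * RAD_TO_ARCSEC / 60
# APEX 12 m at 234 GHz -> roughly 26.5 arcsec, as quoted
apex_arcsec = beam_fwhm_rad(234e9, 12.0) * RAD_TO_ARCSEC
print(f"Odin: {odin_arcmin:.1f} arcmin, APEX: {apex_arcsec:.1f} arcsec")
```

The two-orders-of-magnitude gap between the 10' Odin beam and the 26.5" APEX beam is what makes the ground-based O18O search a useful way to localize the O_2 emission.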
Device-independent quantum key distribution secure against collective attacks
Device-independent quantum key distribution (DIQKD) represents a relaxation
of the security assumptions made in usual quantum key distribution (QKD). As in
usual QKD, the security of DIQKD follows from the laws of quantum physics, but
contrary to usual QKD, it does not rely on any assumptions about the internal
working of the quantum devices used in the protocol. We present here in detail
the security proof for a DIQKD protocol introduced in [Phys. Rev. Lett. 98,
230501 (2007)]. This proof exploits the full structure of quantum theory (as
opposed to other proofs that exploit the no-signalling principle only), but
only holds against collective attacks, where the eavesdropper is assumed to act
on the quantum systems of the honest parties independently and identically at
each round of the protocol (although she can act coherently on her systems at
any time). The security of any DIQKD protocol necessarily relies on the
violation of a Bell inequality. We discuss the issue of loopholes in Bell
experiments in this context.

Comment: 25 pages, 3 figures
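As recalled, the collective-attack bound in this line of work takes the form r >= 1 - h(Q) - h((1 + sqrt((S/2)^2 - 1))/2), where S is the CHSH value, Q the bit error rate, and h the binary entropy; the formula below is quoted from memory and should be treated as an assumption, not a statement of the paper's exact result:

```python
from math import log2, sqrt

def h(p: float) -> float:
    """Binary entropy, clamped against floating-point rounding noise."""
    p = min(max(p, 0.0), 1.0)
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def diqkd_rate(S: float, Q: float) -> float:
    """Collective-attack key-rate lower bound (formula quoted from memory):
    r >= 1 - h(Q) - h((1 + sqrt((S/2)**2 - 1)) / 2).
    S: CHSH value (2 < S <= 2*sqrt(2)), Q: quantum bit error rate."""
    return 1 - h(Q) - h((1 + sqrt((S / 2) ** 2 - 1)) / 2)

# Ideal case: maximal CHSH violation, zero errors -> one secret bit per round
print(diqkd_rate(2 * sqrt(2), 0.0))  # -> 1.0
```

At S = 2 (no Bell violation) the bound drops to zero or below, which is the quantitative version of the statement that DIQKD security necessarily rests on violating a Bell inequality.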