A double main sequence turn-off in the rich star cluster NGC 1846 in the Large Magellanic Cloud
We report on HST/ACS photometry of the rich intermediate-age star cluster NGC
1846 in the Large Magellanic Cloud, which clearly reveals the presence of a
double main sequence turn-off in this object. Despite this, the main sequence,
sub-giant branch, and red giant branch are all narrow and well-defined, and the
red clump is compact. We examine the spatial distribution of turn-off stars and
demonstrate that all belong to NGC 1846 rather than to any field star
population. In addition, the spatial distributions of the two sets of turn-off
stars may exhibit different central concentrations and some asymmetries. By
fitting isochrones, we show that the properties of the colour-magnitude diagram
can be explained if there are two stellar populations of equivalent metal
abundance in NGC 1846, differing in age by approximately 300 Myr. The absolute
ages of the two populations are ~1.9 and ~2.2 Gyr, although there may be a
systematic error of up to +/-0.4 Gyr in these values. The metal abundance
inferred from isochrone fitting is [M/H] ~ -0.40, consistent with spectroscopic
measurements of [Fe/H]. We propose that the observed properties of NGC 1846 can
be explained if this object originated via the tidal capture of two star
clusters formed separately in a star cluster group in a single giant molecular
cloud. This scenario accounts naturally for the age difference and uniform
metallicity of the two member populations, as well as the differences in their
spatial distributions.
Comment: 9 pages, 8 figures, accepted for publication in MNRAS. A version with full resolution figures may be obtained at http://www.roe.ac.uk/~dmy/papers/MN-07-0441-MJ_rv.ps.gz (postscript) or at http://www.roe.ac.uk/~dmy/papers/MN-07-0441-MJ_rv.pdf (PDF).
Algebraic and information-theoretic conditions for operator quantum error-correction
Operator quantum error-correction is a technique for robustly storing quantum
information in the presence of noise. It generalizes the standard theory of
quantum error-correction, and provides a unified framework for topics such as
quantum error-correction, decoherence-free subspaces, and noiseless subsystems.
This paper develops (a) easily applied algebraic and information-theoretic
conditions which characterize when operator quantum error-correction is
feasible; (b) a representation theorem for a class of noise processes which can
be corrected using operator quantum error-correction; and (c) generalizations
of the coherent information and quantum data processing inequality to the
setting of operator quantum error-correction.
Comment: 4 pages
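As a hedged illustration of the kind of algebraic condition involved (the standard subsystem-code criterion, not necessarily the exact statement proved in the paper): suppose the Hilbert space decomposes as $\mathcal{H} = (\mathcal{H}_A \otimes \mathcal{H}_B) \oplus \mathcal{K}$, with $P$ the projector onto $\mathcal{H}_A \otimes \mathcal{H}_B$ and $\{E_i\}$ the Kraus operators of the noise. Information encoded in subsystem $A$ is correctable provided
$$P E_i^\dagger E_j P = I_A \otimes g_{ij}^B \quad \text{for all } i,j,$$
for some operators $g_{ij}^B$ acting only on the gauge subsystem $B$. Taking $\mathcal{H}_B$ one-dimensional recovers the familiar Knill-Laflamme conditions for standard quantum error-correction.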
Time Optimal Unitary Operations
Extending our previous work on time optimal quantum state evolution, we
formulate a variational principle for the time optimal unitary operation, which
has direct relevance to quantum computation. We demonstrate our method with three examples, namely the swap of qubits, the quantum Fourier transform, and the entangler gate, using a two-qubit anisotropic Heisenberg model.
Comment: 4 pages, 1 figure. References added
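A minimal numerical sketch of the kind of model referred to in this abstract (this is not the authors' variational method; the coupling constants Jx, Jy, Jz and the time grid are illustrative assumptions): it builds a two-qubit anisotropic Heisenberg Hamiltonian, evolves it for a range of times, and reports how close the resulting unitary comes to a target gate such as SWAP.

import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-qubit anisotropic Heisenberg Hamiltonian (coupling values are illustrative)
Jx, Jy, Jz = 1.0, 1.0, 0.5
H = Jx * np.kron(X, X) + Jy * np.kron(Y, Y) + Jz * np.kron(Z, Z)

def evolve(H, t):
    """Unitary exp(-i H t) generated by the Hamiltonian (hbar = 1)."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

# Target gate: SWAP of the two qubits
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def gate_fidelity(U, V):
    """Phase-insensitive overlap |Tr(V^dag U)| / d between two unitaries."""
    return abs(np.trace(V.conj().T @ U)) / U.shape[0]

# Scan evolution times and report the closest approach to SWAP
times = np.linspace(0.0, 4.0 * np.pi, 2000)
fids = [gate_fidelity(evolve(H, t), SWAP) for t in times]
best = int(np.argmax(fids))
print(f"best fidelity to SWAP: {fids[best]:.4f} at t = {times[best]:.3f}")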
Optimality of programmable quantum measurements
We prove that for a programmable measurement device that approximates every
POVM with an error $\epsilon$, the dimension of the program space has to grow at least polynomially with $1/\epsilon$. In the case of qubits we can improve the general result by showing a linear growth. This proves the
optimality of the programmable measurement devices recently designed in [G. M.
D'Ariano and P. Perinotti, Phys. Rev. Lett. \textbf{94}, 090401 (2005)].
Quantum information reclaiming after amplitude damping
We investigate the reclaiming of quantum information from the environment after amplitude damping has occurred. In particular, we address the question of the optimal measurement on the environment for performing the best possible correction on two- and three-dimensional quantum systems. Depending on the dimension, we show whether or not the entanglement fidelity (the measure quantifying the correction performance) is the same for all possible measurements, and we uncover the optimal measurement leading to the maximum entanglement fidelity.
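For context, a minimal sketch (not the paper's optimal environment measurement) of the single-qubit amplitude-damping channel and of the entanglement fidelity used as the figure of merit; the damping probability gamma is an illustrative assumption, and the fidelity is computed for the maximally mixed input with no correction applied.

import numpy as np

def amplitude_damping_kraus(gamma):
    """Kraus operators of the single-qubit amplitude-damping channel."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return [K0, K1]

def entanglement_fidelity(kraus):
    """F_e for the maximally mixed input: sum_i |Tr(K_i)|^2 / d^2."""
    d = kraus[0].shape[0]
    return sum(abs(np.trace(K)) ** 2 for K in kraus) / d ** 2

gamma = 0.3  # illustrative damping probability
kraus = amplitude_damping_kraus(gamma)

# Sanity check: the Kraus operators form a valid channel (sum K^dag K = I)
completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, np.eye(2))

print(f"entanglement fidelity without correction: {entanglement_fidelity(kraus):.4f}")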
Fast partial decoherence of a superconducting flux qubit in a spin bath
The superconducting flux qubit has two quantum states with opposite magnetic
flux. An environment of nuclear spins can find out the direction of the magnetic flux after a decoherence time $\tau$ inversely proportional to the magnitude of the flux and to the square root of the number of spins. When the Hamiltonian of the qubit drives fast coherent Rabi oscillations between the states with opposite flux, the flux direction is flipped at a constant rate and the decoherence time becomes much longer than $\tau$. However, on closer inspection decoherence actually takes place on two timescales. The long time is the time of full decoherence, but a part of the quantum coherence is lost already after the short time $\tau$. This fast partial decoherence biases coherent flux oscillations towards the initial flux direction, and it can affect the performance of superconducting devices used as qubits.
Comment: 7 pages
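Stated as a compact formula (a paraphrase of the scaling quoted above, with $\Phi$ denoting the flux magnitude and $N$ the number of bath spins, symbols introduced here for illustration only):
$$\tau \propto \frac{1}{\Phi\sqrt{N}},$$
while under fast Rabi driving full decoherence sets in only on a timescale much longer than $\tau$.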
Overcoming a limitation of deterministic dense coding with a non-maximally entangled initial state
Under two-party deterministic dense-coding, Alice communicates (perfectly
distinguishable) messages to Bob via a qudit from a pair of entangled qudits in
pure state |Psi>. If |Psi> represents a maximally entangled state (i.e., each
of its Schmidt coefficients is sqrt(1/d)), then Alice can convey to Bob one of
d^2 distinct messages. If |Psi> is not maximally entangled, then Ji et al.
[Phys. Rev. A 73, 034307 (2006)] have shown that under the original
deterministic dense-coding protocol, in which messages are encoded by unitary
operations performed on Alice's qudit, it is impossible to encode d^2-1
messages. Encoding d^2-2 messages is possible; see, e.g., the numerical studies by Mozes
et al. [Phys. Rev. A 71, 012311 (2005)]. Answering a question raised by Wu et
al. [Phys. Rev. A 73, 042311 (2006)], we show that when |Psi> is not maximally
entangled, the communications limit of d^2-2 messages persists even when the
requirement that Alice encode by unitary operations on her qudit is weakened to
allow encoding by more general quantum operators. We then describe a
dense-coding protocol that can overcome this limitation with high probability,
assuming the largest Schmidt coefficient of |Psi> is sufficiently close to
sqrt(1/d). In this protocol, d^2-2 of the messages are encoded via unitary
operations on Alice's qudit, and the final (d^2-1)-th message is encoded via a
(non-trace-preserving) quantum operation.
Comment: 18 pages, published version
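As background, a minimal sketch of the standard unitary encoding in the maximally entangled case (this is the textbook protocol, not the authors' modified protocol for non-maximally entangled states; the dimension d below is an illustrative choice): Alice applies a generalized Pauli operator X^a Z^b to her qudit, and the resulting d^2 joint states are mutually orthogonal, so Bob can distinguish all d^2 messages.

import numpy as np

d = 3  # qudit dimension (illustrative)

# Generalized Pauli (Heisenberg-Weyl) operators on a single qudit
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d, dtype=complex), 1, axis=0)  # shift: X|k> = |k+1 mod d>
Z = np.diag(omega ** np.arange(d))                # clock: Z|k> = omega^k |k>

# Maximally entangled state |Psi> = sum_k |k>|k> / sqrt(d)
psi = np.zeros(d * d, dtype=complex)
for k in range(d):
    psi[k * d + k] = 1 / np.sqrt(d)

def encode(a, b):
    """Alice encodes message (a, b) by applying X^a Z^b to her qudit."""
    U = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    return np.kron(U, np.eye(d)) @ psi

states = [encode(a, b) for a in range(d) for b in range(d)]

# The d^2 encoded states are pairwise orthogonal, so Bob can distinguish them
G = np.array([[abs(np.vdot(s, t)) for t in states] for s in states])
assert np.allclose(G, np.eye(d * d), atol=1e-10)
print(f"{d * d} mutually orthogonal messages encoded")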
Entanglement Detection Using Majorization Uncertainty Bounds
Entanglement detection criteria are developed within the framework of the
majorization formulation of uncertainty. The primary results are two theorems
asserting linear and nonlinear separability criteria based on majorization
relations, the violation of which would imply entanglement. Corollaries to
these theorems yield infinite sets of scalar entanglement detection criteria
based on quasi-entropic measures of disorder. Examples are analyzed to probe
the efficacy of the derived criteria in detecting the entanglement of bipartite
Werner states. Characteristics of the majorization relation as a comparator of
disorder uniquely suited to information-theoretical applications are emphasized
throughout.
Comment: 10 pages, 1 figure
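As a related illustration (this is the earlier majorization-based separability criterion of Nielsen and Kempe, not the new criteria derived in the paper): if a bipartite state rho is separable, its spectrum is majorized by the spectrum of either reduced state, so a violation of that relation certifies entanglement. The sketch below tests this on two-qubit Werner states; the mixing parameters are illustrative.

import numpy as np

def majorizes(x, y):
    """True if x majorizes y (vectors padded with zeros to equal length)."""
    n = max(len(x), len(y))
    xs = np.sort(np.concatenate([x, np.zeros(n - len(x))]))[::-1]
    ys = np.sort(np.concatenate([y, np.zeros(n - len(y))]))[::-1]
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys) - 1e-12))

def werner_state(p):
    """Two-qubit Werner state: p |Psi-><Psi-| + (1-p) I/4."""
    psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus.conj()) + (1 - p) * np.eye(4) / 4

def reduced_state(rho):
    """Partial trace over the second qubit."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

for p in (0.2, 0.5, 0.9):
    rho = werner_state(p)
    spec_global = np.linalg.eigvalsh(rho)
    spec_reduced = np.linalg.eigvalsh(reduced_state(rho))
    # Separable states satisfy: reduced spectrum majorizes global spectrum
    compatible_with_separability = majorizes(spec_reduced, spec_global)
    print(f"p = {p}: entanglement detected = {not compatible_with_separability}")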
Dimension minimization of a quantum automaton
A new model of a Quantum Automaton (QA), working with qubits, is proposed. The quantum states of the automaton can be pure or mixed and are represented by density operators. This is the appropriate approach for dealing with measurements and decoherence. The linearity of a QA and of the partial-trace super-operator, combined with the properties of subspaces invariant under unitary transformations, is used to minimize the dimension of the automaton and, consequently, the number of its working qubits. The results developed here are valid whether the state set of the QA is finite or not. There are two main results in this paper: 1) we show that the dimension reduction is possible whenever the unitary transformations associated with each letter of the input alphabet obey a set of conditions; 2) we develop an algorithm to find the equivalent minimal QA and prove that its complexity is polynomial in its dimension and in the size of the input alphabet.
Comment: 26 pages
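A simple illustration of the invariant-subspace idea underlying the minimization (this is not the paper's algorithm, which works with density operators and the partial-trace super-operator; the example unitaries and initial state below are illustrative): starting from the automaton's initial state, repeatedly apply the letter unitaries and orthonormalize until the spanned subspace stops growing; its dimension bounds the number of working qubits actually needed.

import numpy as np

def minimal_invariant_subspace(unitaries, initial_state, tol=1e-10):
    """Orthonormal basis of the smallest subspace containing initial_state
    that is invariant under every unitary in `unitaries`."""
    basis = [initial_state / np.linalg.norm(initial_state)]
    changed = True
    while changed:
        changed = False
        for U in unitaries:
            for v in list(basis):
                w = U @ v
                # Gram-Schmidt: remove components already inside the subspace
                for b in basis:
                    w = w - np.vdot(b, w) * b
                if np.linalg.norm(w) > tol:
                    basis.append(w / np.linalg.norm(w))
                    changed = True
    return np.array(basis)

# Illustrative example: two letter-unitaries on 2 qubits that never take the
# initial state out of the subspace spanned by |00> and |01>
U_a = np.kron(np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex))  # X on qubit 2
U_b = np.kron(np.eye(2), np.diag([1, 1j]))                           # S on qubit 2
initial = np.array([1, 0, 0, 0], dtype=complex)                      # |00>

basis = minimal_invariant_subspace([U_a, U_b], initial)
print(f"original dimension: 4, minimal invariant dimension: {len(basis)}")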
Classification of topologically protected gates for local stabilizer codes
Given a quantum error correcting code, an important task is to find encoded
operations that can be implemented efficiently and fault-tolerantly. In this
Letter we focus on topological stabilizer codes and encoded unitary gates that
can be implemented by a constant-depth quantum circuit. Such gates have a
certain degree of protection since propagation of errors in a constant-depth
circuit is limited by a constant size light cone. For the 2D geometry we show
that constant-depth circuits can only implement a finite group of encoded gates
known as the Clifford group. This implies that topological protection must be
"turned off" for at least some steps in the computation in order to achieve
universality. For the 3D geometry we show that an encoded gate U is
implementable by a constant-depth circuit only if the image of any Pauli
operator under conjugation by U belongs to the Clifford group. This class of
gates includes some non-Clifford gates such as the \pi/8 rotation. Our
classification applies to any stabilizer code with geometrically local
stabilizers and sufficiently large code distance.
Comment: 6 pages, 2 figures
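A small numerical sketch of the 3D condition quoted above, illustrated on a single qubit (an illustration of the stated criterion, not the paper's proof): a gate U passes the test if U P U^dag is a Clifford operator for every Pauli P. The \pi/8 rotation (T gate) is itself non-Clifford but satisfies this condition.

import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

def proportional_to_pauli(M, tol=1e-9):
    """True if M equals some Pauli operator up to a global phase."""
    for P in PAULIS:
        c = np.trace(P.conj().T @ M) / 2  # overlap coefficient
        if abs(abs(c) - 1) < tol and np.allclose(M, c * P, atol=tol):
            return True
    return False

def is_clifford(V, tol=1e-9):
    """True if V maps the Pauli generators X and Z (and hence every Pauli)
    to Pauli operators up to phase under conjugation."""
    return all(proportional_to_pauli(V @ P @ V.conj().T, tol) for P in (X, Z))

def pauli_conjugates_are_clifford(U):
    """The condition quoted above: U P U^dag lies in the Clifford group
    for every Pauli P."""
    return all(is_clifford(U @ P @ U.conj().T) for P in (X, Z))

T = np.diag([1, np.exp(1j * np.pi / 4)])  # the pi/8 rotation
print("T is Clifford:", is_clifford(T))                              # expected: False
print("U P U^dag Clifford for all Pauli P:", pauli_conjugates_are_clifford(T))  # expected: True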