Adaptive Quantum Homodyne Tomography
An adaptive optimization technique to improve the precision of quantum homodyne
tomography is presented. The method is based on the existence of so-called null
functions, which have zero average for an arbitrary state of radiation. Adding
null functions to the tomographic kernels does not affect their mean values,
but changes their statistical errors, which can then be reduced by an optimization
method that "adapts" the kernels to the homodyne data. Applications to tomography of
the density matrix and other relevant field observables are studied in detail.
Comment: Latex (RevTex class + psfig), 9 Figs, Submitted to PR
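The null-function trick is, in classical Monte Carlo terms, a control variate: adding a zero-mean term preserves the estimator's mean while its coefficient can be tuned to minimize the variance. A minimal sketch of that mechanism (the kernel f and null function g below are illustrative stand-ins, not the paper's tomographic kernels):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)           # stand-in for homodyne samples

f = x**2 + x                               # hypothetical kernel, mean 1
g = x                                      # "null function": zero mean for any x-distribution here

# variance-minimizing coefficient, c* = -Cov(f, g) / Var(g)
c_opt = -np.cov(f, g)[0, 1] / np.var(g)
f_adapted = f + c_opt * g                  # same mean, smaller variance

print(np.var(f), np.var(f_adapted))        # variance drops; mean is preserved
```

Here c_opt comes out near -1, cancelling the noisy linear term while leaving the average untouched, which mirrors how the adapted kernels keep their mean values but acquire smaller statistical errors.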
Understanding distributions of chess performances
This paper presents evidence for several features of the population of chess players and the distribution of their performances, measured in terms of Elo ratings and by computer analysis of moves. Evidence that ratings have remained stable since the inception of the Elo system in the 1970s is given in several forms: by showing that the population of strong players fits a simple logistic-curve model without inflation, by plotting players' average error against the FIDE category of tournaments over time, and by showing that the skill parameters of a model that employs computer analysis keep a nearly constant relation to Elo rating across that time. The distribution of the model's Intrinsic Performance Ratings can hence be used to compare populations that have limited interaction, such as
players in a national chess federation and FIDE, and to ascertain relative drift in their respective rating systems.
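The logistic-curve model for the population of strong players can be sketched with a standard least-squares fit; the data and parameters below are synthetic illustrations, not the paper's FIDE data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth model: cumulative count of strong players vs. year.
def logistic(t, L, k, t0):
    return L / (1.0 + np.exp(-k * (t - t0)))

years = np.arange(1971, 2012)
true_params = (1500.0, 0.25, 1995.0)       # assumed capacity, rate, midpoint
rng = np.random.default_rng(1)
counts = logistic(years, *true_params) + rng.normal(0, 10, years.size)

# Fit recovers the generating parameters; no inflation term is needed.
popt, _ = curve_fit(logistic, years, counts, p0=(1000, 0.1, 1990))
print(popt)
```

A good fit of such a curve, with residuals showing no systematic upward trend, is the kind of evidence the paper uses to argue against rating inflation.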
Pairwise entanglement and readout of atomic-ensemble and optical wave-packet modes in traveling-wave Raman interactions
We analyze quantum entanglement of Stokes light and atomic electronic
polarization excited during single-pass, linear-regime, stimulated Raman
scattering in terms of optical wave-packet modes and atomic-ensemble spatial
modes. The output of this process is confirmed to be decomposable into multiple
discrete, bosonic mode pairs, each pair undergoing independent evolution into a
two-mode squeezed state. For this we extend the Bloch-Messiah reduction
theorem, previously known for discrete linear systems (S. L. Braunstein, Phys.
Rev. A, vol. 71, 055801 (2005)). We present typical mode functions in the case
of one-dimensional scattering in an atomic vapor. We find that in the absence
of dispersion, one mode pair dominates the process, leading to a simple
interpretation of entanglement in this continuous-variable system. However,
many mode pairs are excited in the presence of dispersion-induced temporal
walkoff of the Stokes, as witnessed by the photon-count statistics. We also
consider the readout of the stored atomic polarization using the anti-Stokes
scattering process. We prove that the readout process can also be decomposed
into multiple mode pairs, each pair undergoing independent evolution analogous
to a beam-splitter transformation. We show that this process can have unit
efficiency under realistic experimental conditions. The shape of the output
light wave packet can be predicted. In the case of unit readout efficiency, it
contains only excitations originating from a specified atomic excitation mode.
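The claim that photon-count statistics witness the number of excited mode pairs can be illustrated with the standard result g2(0) = 1 + 1/M for M independent mode pairs, since each two-mode-squeezed pair has a thermal single-arm marginal; the mode count and mean photon number below are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def g2(M, mean_per_mode=1.0, samples=200_000):
    """Second-order correlation of total counts from M thermal modes."""
    q = mean_per_mode / (1.0 + mean_per_mode)      # thermal (Bose) parameter
    # thermal counts are geometric on {0, 1, 2, ...}; sum over M mode pairs
    n = (rng.geometric(1.0 - q, size=(samples, M)) - 1).sum(axis=1)
    n = n.astype(float)
    return np.mean(n * (n - 1)) / np.mean(n) ** 2

print(g2(1))    # near 2: one dominant mode pair (dispersion-free case)
print(g2(10))   # near 1.1: many mode pairs (strong temporal walkoff)
```

Measured g2(0) well below 2 thus signals that many mode pairs contribute, which is the dispersive-walkoff regime described above.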
Maximum-likelihood method in quantum estimation
The maximum-likelihood method for quantum estimation is reviewed and applied
to the reconstruction of the density matrix of spin and radiation, as well as to
the determination of several parameters of interest in quantum optics.
Comment: 12 pages, 4 figures
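As a toy instance of the maximum-likelihood method applied to quantum measurement data, one can estimate an interferometric phase from binary detection counts; the model p(theta) = cos^2(theta/2) and all numbers are illustrative assumptions, not examples taken from the review:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
theta_true = 1.0
n_trials = 10_000
# simulated "click" counts with outcome probability cos^2(theta/2)
k = rng.binomial(n_trials, np.cos(theta_true / 2) ** 2)

def neg_log_likelihood(theta):
    p = np.cos(theta / 2) ** 2
    return -(k * np.log(p) + (n_trials - k) * np.log(1 - p))

# maximize the likelihood numerically over the physical range (0, pi)
res = minimize_scalar(neg_log_likelihood, bounds=(1e-3, np.pi - 1e-3),
                      method="bounded")
print(res.x)   # maximum-likelihood estimate, close to theta_true
```

The same principle, maximizing the probability of the observed data over the candidate states or parameters, is what the review applies to full density-matrix reconstruction.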
Quantum state transfer in imperfect artificial spin networks
High-fidelity quantum computation and quantum state transfer are possible in
short spin chains. We exploit a system based on a dispersive qubit-boson
interaction to mimic XY coupling. In this model, the usually assumed
nearest-neighbor coupling no longer holds: all the qubits are mutually
coupled. We analyze the performance of our model for quantum state transfer,
showing how pre-engineered coupling rates allow for nearly optimal state
transfer. We address a setup of superconducting qubits coupled to a microstrip
cavity in which our analysis may be applied.
Comment: 4 pages, 3 figures, RevTeX
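The benefit of pre-engineered coupling rates can be sketched in the single-excitation sector of an XY chain, where Christandl-type couplings J_n proportional to sqrt(n(N-n)) give perfect transfer from site 1 to site N at t = pi/lambda; the chain length and couplings below are illustrative, not the paper's superconducting-circuit parameters:

```python
import numpy as np
from scipy.linalg import expm

N = 8
lam = 1.0
# engineered couplings J_n = (lam/2) * sqrt(n(N-n)), n = 1..N-1
J = [0.5 * lam * np.sqrt(n * (N - n)) for n in range(1, N)]

# hopping Hamiltonian in the single-excitation sector (tridiagonal)
H = np.zeros((N, N))
for n, Jn in enumerate(J):
    H[n, n + 1] = H[n + 1, n] = Jn

U = expm(-1j * H * (np.pi / lam))          # evolve to the transfer time
fidelity = abs(U[N - 1, 0]) ** 2           # |<site N| U |site 1>|^2
print(fidelity)   # essentially 1: perfect transfer with engineered couplings
```

With uniform couplings the same computation gives fidelities well below 1 for long chains, which is why the engineered rates matter.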
Hierarchical build-up of galactic bulges and the merging rate of supermassive binary black holes
The hierarchical build-up of galactic bulges should lead to the build-up of
present-day supermassive black holes by a mixture of gas accretion and merging
of supermassive black holes. The tight relation between black hole mass and
stellar velocity dispersion is thereby a strong argument that the supermassive
black holes in merging galactic bulges do indeed merge. Otherwise the ejection
of supermassive black holes by gravitational slingshot would lead to excessive
scatter in this relation. At high redshift the coalescence of massive black
hole binaries is likely to be driven by the accretion of gas in the major
mergers signposted by optically bright QSO activity. If massive black holes
only form efficiently by direct collapse of gas in deep galactic potential
wells with v_c > 100 km/s, as postulated in the model of Kauffmann & Haehnelt
(2000), LISA is expected to see event rates from the merging of massive binary
black holes of about 0.1-1 yr^{-1}, spread over the redshift range 0 < z < 5. If,
however, the hierarchical build-up of supermassive black holes extends to
pre-galactic structures with significantly shallower potential wells, event
rates may be as high as 10-100 yr^{-1} and will be dominated by events from
redshift z > 5.
Comment: 8 pages, 4 postscript figures. Proceedings of the 4th International
LISA Symposium, Penn State University, 19-24 July 2002, ed. L S Fin