    Quantum improvement of time transfer between remote clocks

    Exchanging light pulses to perform accurate space-time positioning is a paradigmatic issue of physics. It is ultimately limited by the quantum nature of light, which introduces fluctuations in the optical measurements and leads to the so-called Standard Quantum Limit (SQL). We propose a new scheme combining homodyne detection and mode-locked femtosecond lasers that leads to a new SQL in time transfer, potentially reaching the yoctosecond range (10^-21 to 10^-24 s). We prove that no other measurement strategy can lead to better sensitivity with shot-noise-limited light. We then demonstrate that this already very low SQL can be overcome using appropriately multimode squeezed light. Benefiting from the large number of photons used in the experiment and from the optimal choice of both the detection strategy and the quantum resource, the proposed scheme represents a significant potential improvement in space-time positioning.
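
    A rough numerical illustration of the quoted range: the sketch below evaluates a shot-noise timing bound of the form dt_SQL ≈ 1/(2·sqrt(N)·Δω), the scaling characteristic of such pulse-timing schemes, for assumed (not paper-sourced) laser parameters.

```python
import numpy as np

# Illustrative evaluation of a shot-noise-limited timing bound,
# dt_SQL ~ 1 / (2 * sqrt(N) * d_omega), where N is the detected photon
# number and d_omega the RMS spectral width of the pulses.
# All parameter values below are assumptions, not taken from the paper.

h = 6.62607015e-34               # Planck constant [J s]
c = 2.99792458e8                 # speed of light [m/s]

wavelength = 1550e-9             # carrier wavelength [m]
pulse_duration = 100e-15         # femtosecond pulse length [s]
d_omega = 1.0 / pulse_duration   # crude RMS angular-frequency spread [rad/s]

power = 1e-3                     # mean optical power [W]
t_meas = 1.0                     # integration time [s]
N = power * t_meas / (h * c / wavelength)   # detected photon number

dt_sql = 1.0 / (2.0 * np.sqrt(N) * d_omega)
print(f"N ~ {N:.1e} photons -> timing SQL ~ {dt_sql:.1e} s")
```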

    Quantum-limited position measurements of a dark matter-wave soliton

    We show that the position of a dark matter-wave soliton can be determined with a precision that scales with the atomic density as n^{-3/4}. This surpasses the standard shot-noise detection limit for independent particles, without the use of squeezing or entanglement, and it suggests that interactions among particles may present new advantages in high-precision metrology. We also take into account quantum density fluctuations due to phonon and Goldstone modes and show that they, somewhat unexpectedly, actually improve the resolution. This happens because the fluctuations depend on the soliton position and so make a larger amount of information available. Comment: RevTeX4, 5 pages, 1 figure.
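
    To see what the interaction-induced scaling buys, a minimal comparison of the n^{-3/4} exponent against the independent-particle shot-noise exponent n^{-1/2} (arbitrary units, purely illustrative):

```python
import numpy as np

# Compare the soliton position-precision scaling n^(-3/4) with the
# shot-noise scaling n^(-1/2) for independent particles (arbitrary units).
for n in np.logspace(2, 8, 4):
    print(f"n = {n:.0e}:  shot noise ~ {n**-0.5:.2e},  soliton ~ {n**-0.75:.2e}")
```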

    Soft clustering analysis of galaxy morphologies: A worked example with SDSS

    Context: The huge and still rapidly growing number of galaxies in modern sky surveys raises the need for an automated and objective classification method. Unsupervised learning algorithms are of particular interest, since they discover classes automatically. Aims: We briefly discuss the pitfalls of oversimplified classification methods and outline an alternative approach called "clustering analysis". Methods: We categorise different classification methods according to their capabilities. Based on this categorisation, we present a probabilistic classification algorithm that automatically detects the optimal classes preferred by the data. We explore the reliability of this algorithm in systematic tests. Using a small sample of bright galaxies from the SDSS, we demonstrate its performance in practice, and we are able to disentangle the problems of classification and parametrisation of galaxy morphologies in this case. Results: We give physical arguments that a probabilistic classification scheme is necessary. The algorithm we present produces reasonable morphological classes and object-to-class assignments without any prior assumptions. Conclusions: There are sophisticated automated classification algorithms that meet all necessary requirements, but much work is still needed on the interpretation of the results. Comment: 18 pages, 19 figures, 2 tables, submitted to A&A.
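
    The paper's own algorithm is not reproduced here; as a minimal sketch of the same idea (probabilistic soft clustering with the number of classes chosen by the data), one can score Gaussian mixture models with an information criterion such as BIC. The feature matrix X below is a hypothetical stand-in for measured morphological parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Soft-clustering sketch: fit Gaussian mixtures with increasing numbers of
# components and let an information criterion (BIC) pick the class count.
# X is a hypothetical (n_galaxies, n_features) matrix of morphological
# parameters; replace it with real survey measurements.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 3)), rng.normal(4, 1, (500, 3))])

models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(X))
print("preferred number of classes:", best.n_components)

# Soft (probabilistic) object-to-class assignments, not hard labels:
responsibilities = best.predict_proba(X)   # shape (n_galaxies, n_classes)
```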

    Optimal filter approximation by means of a phase-only filter with quantization

    We propose approximate filters, based on a phase-only filter, for reliable recognition of objects. They achieve good light efficiency and a discrimination capability close to that of the optimal filter. Computer simulation results are presented and discussed.
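
    A minimal sketch of one standard construction of such a filter, assuming the usual recipe (keep only the phase of the reference spectrum, quantize it to a few uniform levels, correlate in the Fourier domain); array sizes and names are illustrative, not the paper's.

```python
import numpy as np

def quantized_pof(reference, levels=4):
    """Phase-only filter with the phase quantized to `levels` uniform steps."""
    spectrum = np.fft.fft2(reference)
    phase = np.angle(spectrum)                 # discard the magnitude
    step = 2.0 * np.pi / levels
    phase_q = np.round(phase / step) * step    # quantize the phase
    return np.exp(-1j * phase_q)               # conjugate phase acts as the filter

def correlate(scene, pof):
    """Correlation plane; a sharp peak marks the recognized object."""
    return np.abs(np.fft.ifft2(np.fft.fft2(scene) * pof))

# usage sketch: the reference patch shifted inside a larger scene
ref = np.zeros((64, 64)); ref[24:40, 24:40] = 1.0
scene = np.roll(ref, (10, -5), axis=(0, 1))
peak = np.unravel_index(np.argmax(correlate(scene, quantized_pof(ref))), scene.shape)
print("correlation peak at", peak)
```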

    Analysis of two-point statistics of cosmic shear: II. Optimizing the survey geometry

    We present simulations of a cosmic shear survey and show how the survey geometry influences the accuracy with which cosmological parameters can be determined. We numerically calculate the full covariance matrices Cov of two-point statistics of cosmic shear, based on the expressions derived in the first paper of this series. The individual terms are compared for two survey geometries with large and small cosmic variance. We use maximum-likelihood analyses based on Cov, together with the Fisher information matrix, to derive expected constraints on cosmological parameters. As an illustrative example, we simulate various survey geometries consisting of 300 individual fields of 13'×13' size, placed (semi-)randomly into patches which are assumed to be widely separated on the sky and therefore uncorrelated. Using the aperture mass statistics, the optimum survey consists of 10 patches with 30 images in each patch. If \Omega_m, \sigma_8 and \Gamma are to be extracted from the data, the minimum variance bounds on these three parameters are 0.17, 0.25 and 0.04, respectively. These variances rise slightly when the initial power spectrum index n_s is also to be determined from the data. The cosmological constant is only poorly constrained. Comment: 13 pages, 11 figures. Appeared in A&A, 2004. Typos corrected and minor changes made to match the published version.
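
    The Fisher-matrix step of such a forecast is generic enough to sketch: for a Gaussian likelihood with parameter-independent covariance, the marginalized minimum variance bounds are the square roots of the diagonal of the inverse Fisher matrix. All shapes and inputs below are placeholders, not survey values.

```python
import numpy as np

def fisher_matrix(dmu, cov):
    """F_ab = (d mu/d p_a)^T Cov^{-1} (d mu/d p_b) for a Gaussian likelihood
    with parameter-independent covariance.
    dmu: (n_params, n_data) derivatives of the two-point statistics
    cov: (n_data, n_data) covariance matrix of the data vector."""
    return dmu @ np.linalg.solve(cov, dmu.T)

def minimum_variance_bounds(F):
    """Marginalized 1-sigma errors: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# usage sketch with placeholder shapes (3 parameters, 20 data points):
rng = np.random.default_rng(1)
dmu = rng.normal(size=(3, 20))
cov = np.diag(np.full(20, 0.1))
print(minimum_variance_bounds(fisher_matrix(dmu, cov)))
```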

    Degree of polarization of type-II unpolarized light


    Quantum light depolarization: the phase-space perspective

    Quantum light depolarization is handled through a master equation obtained by dispersively coupling the field to a randomly distributed atomic reservoir. This master equation is solved by transforming it into a quasiprobability distribution in phase space, and the quasiclassical limit is investigated. Comment: 6 pages, no figures. Submitted for publication.
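
    The paper's reservoir model is not reproduced here; as a toy stand-in for a depolarizing master equation, the sketch below integrates single-qubit dephasing in Lindblad form and tracks the decay of the polarization coherence. All parameters are illustrative.

```python
import numpy as np

# Toy stand-in (not the paper's model): single-qubit dephasing in Lindblad
# form, d(rho)/dt = gamma * (sz rho sz - rho), integrated with an Euler step.
# The off-diagonal coherence decays as exp(-2 * gamma * t).
sz = np.diag([1.0, -1.0])
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # fully polarized state
gamma, dt, steps = 0.2, 1e-3, 5000                        # evolve to t = 5
for _ in range(steps):
    rho = rho + dt * gamma * (sz @ rho @ sz - rho)
print("coherence |rho_01| at t=5:", abs(rho[0, 1]))       # ~ 0.5 * exp(-2)
```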

    Measuring the dark side (with weak lensing)

    We introduce a convenient parametrization of dark energy models that is general enough to include several modified gravity models and generalized forms of dark energy. In particular, we take into account the linear perturbation growth factor, the anisotropic stress and the modified Poisson equation. We discuss the sensitivity of large-scale weak lensing surveys, like the proposed DUNE satellite, to these parameters. We find that a large-scale weak-lensing tomographic survey is able to easily distinguish the Dvali-Gabadadze-Porrati model from LCDM and to determine the perturbation growth index to an absolute error of 0.02-0.03. Comment: 19 pages, 11 figures.
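
    The growth-index parametrization such a survey would constrain is easy to sketch: with f = dln(delta)/dln(a) = Omega_m(a)^gamma, gamma ≈ 0.55 corresponds to LCDM and gamma ≈ 0.68 to DGP (standard literature values; the flat background model below is an assumption, not taken from the paper).

```python
import numpy as np

# Growth histories from the growth-index parametrization
# f(a) = dln(delta)/dln(a) = Omega_m(a)^gamma, on an assumed flat,
# Lambda-like background. gamma ~ 0.55 (LCDM) vs ~ 0.68 (DGP); the
# forecast quotes sigma(gamma) ~ 0.02-0.03, enough to separate the two.

def omega_m(a, om0=0.3):
    return om0 / (om0 + (1.0 - om0) * a**3)    # flat universe

def growth(a, gamma, om0=0.3):
    lna = np.log(a)
    f = omega_m(a, om0) ** gamma
    # integrate dln(delta) = f dln(a) with the trapezoid rule
    ln_d = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(lna))))
    return np.exp(ln_d - ln_d[-1])              # normalize delta(a=1) = 1

a = np.linspace(0.05, 1.0, 400)
i = np.argmin(np.abs(a - 0.5))
print("delta(a=0.5):  LCDM", round(growth(a, 0.55)[i], 4),
      " DGP", round(growth(a, 0.68)[i], 4))
```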
    • …