
    Sampling functions for multimode homodyne tomography with a single local oscillator

    We derive various sampling functions for multimode homodyne tomography with a single local oscillator. These functions allow us to sample multimode s-parametrized quasidistributions, density-matrix elements in the Fock basis, and s-ordered moments of arbitrary order directly from the measured quadrature statistics. The inevitable experimental losses can be compensated by a proper modification of the sampling functions. Results of Monte Carlo simulations for a squeezed three-mode state are reported, and the feasibility of reconstructing the three-mode Q-function and s-ordered moments from 10^7 data samples is demonstrated. Comment: 12 pages, 8 figures, REVTeX, submitted to Phys. Rev.
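As an illustrative single-mode analogue of the sampling-function idea above (not the paper's multimode functions), one can estimate an s-ordered moment directly from simulated quadrature data: for the convention x_theta with vacuum variance 1/2, the mean photon number follows from the phase-averaged second quadrature moment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Homodyne quadratures of the vacuum state are Gaussian with variance 1/2
# (convention x_theta = (a e^{-i theta} + a^dag e^{i theta}) / sqrt(2)).
theta = rng.uniform(0.0, np.pi, 100_000)       # random local-oscillator phases
x = rng.normal(0.0, np.sqrt(0.5), theta.size)  # simulated quadrature samples

# Sampling-function estimate of the mean photon number from the data:
# <n> = <x_theta^2> - 1/2, averaged uniformly over the LO phase.
n_est = np.mean(x**2) - 0.5
print(round(n_est, 3))  # close to 0 for the vacuum
```

For a squeezed or thermal input the same estimator picks up the excess quadrature variance; the paper's multimode sampling functions generalise this to joint moments of several modes measured with one local oscillator.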

    Conditional large Fock state preparation and field state reconstruction in Cavity QED

    We propose a scheme for producing large Fock states in cavity QED via the implementation of a highly selective atom-field interaction. It is based on the Raman excitation of a three-level atom by a classical field and a quantized field mode. Selectivity appears when a specific transition inside a chosen atom-field subspace is tuned to resonance while the other transitions remain dispersive, as a consequence of the field-dependent electronic energy shifts. We show that this scheme can also be employed for reconstructing, in a new and efficient way, the Wigner function of the cavity-field state. Comment: 4 REVTeX pages with 3 postscript figures. Submitted for publication.

    Quantum inference of states and processes

    The maximum-likelihood principle unifies the inference of quantum states and processes from noisy experimental data. In particular, a generic quantum process may be estimated simultaneously with unknown quantum probe states, provided that measurements on both the probe states and the transformed probe states are available. Drawbacks of various approximate treatments are considered. Comment: 7 pages, 4 figures.
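A minimal sketch of maximum-likelihood state estimation is the standard iterative R-rho-R fixed-point update; the single-qubit POVM and the relative frequencies below are invented for the example and are not data from the paper.

```python
import numpy as np

# Single-qubit iterative maximum-likelihood state estimation (R*rho*R update).
P0 = np.array([[1, 0], [0, 0]], complex)          # |0><0|
P1 = np.array([[0, 0], [0, 1]], complex)          # |1><1|
Pp = 0.5 * np.array([[1, 1], [1, 1]], complex)    # |+><+|
Pm = 0.5 * np.array([[1, -1], [-1, 1]], complex)  # |-><-|
povm = [0.5 * P for P in (P0, P1, Pp, Pm)]        # elements sum to the identity

freqs = np.array([0.40, 0.10, 0.35, 0.15])        # hypothetical relative counts

rho = np.eye(2, dtype=complex) / 2                # start from the maximally mixed state
for _ in range(300):
    probs = np.array([np.trace(P @ rho).real for P in povm])
    R = sum(f / p * P for f, p, P in zip(freqs, probs, povm))
    rho = R @ rho @ R                             # multiplicative likelihood update
    rho /= np.trace(rho).real                     # keep unit trace

print(np.round(rho.real, 3))
```

At the fixed point the predicted probabilities match the observed frequencies; here the maximum-likelihood estimate is the Bloch vector (x, z) = (0.4, 0.6), i.e. rho_00 = 0.8 and rho_01 = 0.2. The update preserves positivity by construction, which is the main advantage over naive linear inversion.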

    Phase-space formulation of quantum mechanics and quantum state reconstruction for physical systems with Lie-group symmetries

    We present a detailed discussion of a general theory of phase-space distributions, introduced recently by the authors [J. Phys. A {\bf 31}, L9 (1998)]. This theory provides a unified phase-space formulation of quantum mechanics for physical systems possessing Lie-group symmetries. The concept of generalized coherent states and the method of harmonic analysis are used to construct explicitly a family of phase-space functions which are postulated to satisfy the Stratonovich-Weyl correspondence with a generalized traciality condition. The symbol calculus for the phase-space functions is given by means of the generalized twisted product. The phase-space formalism is used to study the problem of the reconstruction of quantum states. In particular, we consider the reconstruction method based on measurements of displaced projectors, which comprises a number of recently proposed quantum-optical schemes and is also related to the standard methods of signal processing. A general group-theoretic description of this method is developed using the technique of harmonic expansions on the phase space. Comment: REVTeX, 18 pages, no figures.
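For the harmonic-oscillator group, the displaced-projector method mentioned above reduces to the well-known displaced-parity form of the Wigner function, W(alpha) = (2/pi) Tr[rho D(alpha) P D(alpha)^dag]. A numerical sketch in a truncated Fock basis (the truncation dimension is an arbitrary choice):

```python
import numpy as np

# Displaced-parity evaluation of the Wigner function in a truncated Fock basis.
N = 40                                    # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
parity = np.diag((-1.0) ** np.arange(N))  # parity operator P = (-1)^n

def displace(alpha):
    """Displacement operator D(alpha) = exp(alpha a^dag - conj(alpha) a)."""
    A = alpha * a.conj().T - np.conj(alpha) * a   # anti-Hermitian generator
    w, U = np.linalg.eigh(1j * A)                 # 1j*A is Hermitian
    return U @ np.diag(np.exp(-1j * w)) @ U.conj().T

def wigner(rho, alpha):
    D = displace(alpha)
    return 2 / np.pi * np.trace(rho @ D @ parity @ D.conj().T).real

vac = np.zeros((N, N)); vac[0, 0] = 1.0   # vacuum state |0><0|
print(round(wigner(vac, 0.0), 4))         # 2/pi ~ 0.6366 at the origin
```

For the vacuum this reproduces the Gaussian W(alpha) = (2/pi) exp(-2|alpha|^2); the group-theoretic construction in the paper generalises the same displace-and-project structure to other symmetry groups.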

    Quantum state reconstruction using atom optics

    We present a novel technique in which the total internal quantum state of an atom may be reconstructed via the measurement of the momentum transferred to the atom following its interaction with a near-resonant travelling-wave laser beam. We present the first such measurement and demonstrate the feasibility of the technique.

    Euclid preparation: XXIII. Derivation of galaxy physical properties with deep machine learning using mock fluxes and H-band images

    Next-generation telescopes, such as Euclid, Rubin/LSST, and Roman, will open new windows on the Universe, allowing us to infer physical properties for tens of millions of galaxies. Machine-learning methods are increasingly becoming the most efficient tools to handle this enormous amount of data, not only because they are faster to apply to data samples than traditional methods, but also because they are often more accurate. Properly understanding their applications and limitations for the exploitation of these data is of utmost importance. In this paper we explore this topic by investigating how well redshifts, stellar masses, and star-formation rates can be measured with deep-learning algorithms for galaxies within data that mimic the Euclid and Rubin/LSST surveys. We find that deep-learning neural networks and convolutional neural networks (CNNs), which are dependent on the parameter space of the sample used for training, perform well in measuring the properties of these galaxies, with an accuracy better than that of traditional methods based on spectral-energy-distribution fitting. CNNs allow the processing of multi-band magnitudes together with H_E-band images. We find that the estimates of stellar masses improve with the use of an image, but those of redshift and star-formation rates do not. Our best machine-learning results are: i) the redshift within a normalised error of less than 0.15 for 99.9% of the galaxies in the sample with S/N > 3 in the H_E band; ii) the stellar mass within a factor of two (~0.3 dex) for 99.5% of the considered galaxies; iii) the star-formation rates within a factor of two (~0.3 dex) for ~70% of the sample. We discuss the implications of our work for application to surveys, mainly but not limited to Euclid and Rubin/LSST, and how measurements of these galaxy parameters can be improved with deep learning.
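The quality criterion quoted above can be stated compactly: a redshift estimate counts as good when |z_pred - z_true| / (1 + z_true) < 0.15. A toy sketch with invented values (not Euclid data):

```python
import numpy as np

# Normalised photo-z error and the fraction of "good" estimates.
# The arrays below are made-up test values, not survey measurements.
z_true = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
z_pred = np.array([0.55, 0.9, 1.6, 2.5, 3.1])

norm_err = np.abs(z_pred - z_true) / (1 + z_true)
good_fraction = np.mean(norm_err < 0.15)
print(good_fraction)  # 0.8: the 2.0 -> 2.5 case has error 0.5/3 ~ 0.167
```

The same per-object normalisation by (1 + z) is what makes the 99.9% figure quoted in the abstract comparable across the full redshift range of the sample.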

    Euclid Preparation. TBD. Impact of magnification on spectroscopic galaxy clustering

    In this paper we investigate the impact of lensing magnification on the analysis of Euclid's spectroscopic survey, using the multipoles of the 2-point correlation function for galaxy clustering. We determine the impact of lensing magnification on cosmological constraints, and the expected shift in the best-fit parameters if magnification is ignored. We consider two cosmological analyses: i) a full-shape analysis based on the ΛCDM model and its extension w0waCDM, and ii) a model-independent analysis that measures the growth rate of structure in each redshift bin. We adopt two complementary approaches in our forecast: the Fisher-matrix formalism and the Markov chain Monte Carlo method. The fiducial values of the local count slope (or magnification bias), which regulates the amplitude of the lensing magnification, have been estimated from the Euclid Flagship simulations. We use linear perturbation theory and model the 2-point correlation function with the public code coffe. For a ΛCDM model, we find that the estimation of cosmological parameters is biased at the level of 0.4-0.7 standard deviations, while for a w0waCDM dynamical dark-energy model, lensing magnification has a somewhat smaller impact, with shifts below 0.5 standard deviations. In a model-independent analysis aiming to measure the growth rate of structure, we find that the estimation of the growth rate is biased by up to 1.2 standard deviations in the highest redshift bin. As a result, lensing magnification cannot be neglected in the spectroscopic survey, especially if we want to determine the growth factor, one of the most promising ways to test general relativity with Euclid. We also find that, by including lensing magnification with a simple template, this shift can be almost entirely eliminated with minimal computational overhead.
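The Fisher-matrix shift quoted above follows a standard linear-response formula: if an unmodelled systematic d_sys is present in the data vector, the best-fit parameters move by delta = F^{-1} J^T C^{-1} d_sys, with F = J^T C^{-1} J. A toy sketch (the Jacobian, covariance, and systematic below are invented numbers, not Euclid values):

```python
import numpy as np

# Generic Fisher forecast of the parameter shift from an ignored systematic.
J = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])           # d(model)/d(parameters): 3 data points, 2 params
C = np.diag([0.1, 0.1, 0.2])         # data covariance (toy numbers)
d_sys = np.array([0.05, 0.02, 0.0])  # unmodelled signal (e.g. magnification)

Cinv = np.linalg.inv(C)
F = J.T @ Cinv @ J                              # Fisher matrix
delta = np.linalg.solve(F, J.T @ Cinv @ d_sys)  # shift of the best fit
sigma = np.sqrt(np.diag(np.linalg.inv(F)))      # forecast 1-sigma errors
print(delta / sigma)                            # shift in units of sigma
```

Reporting delta/sigma is exactly how the 0.4-0.7 and 1.2 standard-deviation figures in the abstract are expressed; the same machinery also shows why adding a magnification template to J removes most of the shift.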

    Euclid preparation. TBD. The effect of baryons on the Halo Mass Function

    The Euclid photometric survey of galaxy clusters stands as a powerful cosmological tool, with the capacity to significantly propel our understanding of the Universe. Despite being sub-dominant to dark matter and dark energy, the baryonic component of our Universe holds substantial influence over the structure and mass of galaxy clusters. This paper presents a novel model to precisely quantify the impact of baryons on galaxy cluster virial halo masses, using the baryon fraction within a cluster as a proxy for their effect. Constructed on the premise of quasi-adiabaticity, the model includes two parameters calibrated using non-radiative cosmological hydrodynamical simulations and a single large-scale simulation from the Magneticum set, which includes the physical processes driving galaxy formation. As a main result of our analysis, we demonstrate that this model delivers a remarkable one percent relative accuracy in determining the virial dark matter-only equivalent mass of galaxy clusters, starting from the corresponding total cluster mass and baryon fraction measured in hydrodynamical simulations. Furthermore, we demonstrate that this result is robust against changes in cosmological parameters and against varying the numerical implementation of the sub-resolution physical processes included in the simulations. Our work substantiates previous claims about the impact of baryons on cluster cosmology studies. In particular, we show how neglecting these effects would lead to biased cosmological constraints for a Euclid-like cluster abundance analysis. Importantly, we demonstrate that uncertainties associated with our model, arising from baryonic corrections to cluster masses, are sub-dominant when compared to the precision with which mass-observable relations will be calibrated using Euclid, as well as our current understanding of the baryon fraction within galaxy clusters.

    Euclid preparation. XXXI. The effect of the variations in photometric passbands on photometric-redshift accuracy

    The technique of photometric redshifts has become essential for the exploitation of multi-band extragalactic surveys. While the requirements on photo-zs for the study of galaxy evolution mostly pertain to their precision and to the fraction of outliers, the most stringent requirement for their use in cosmology is on their accuracy, with a bias that must be kept at the sub-percent level for the Euclid cosmology mission. A separate, and challenging, calibration process is needed to control the bias at this level of accuracy. The bias in photo-zs has several distinct origins that may not always be easily overcome. We identify here one source of bias linked to the spatial or temporal variability of the passbands used to determine the photometric colours of galaxies. We first quantified the effect as observed with several well-known photometric cameras, and found in particular that, owing to the properties of optical filters, the redshifts of off-axis sources are usually overestimated. We show, using simple simulations, that the detailed and complex changes in the passband shape can mostly be ignored, and that it is sufficient to know the mean wavelength of the passband of each photometric observation to correct almost exactly for this bias; the key point is that this mean wavelength is independent of the spectral energy distribution of the source. We use this property to propose a correction that can be efficiently implemented in some photo-z algorithms, in particular template fitting. We verified that our algorithm, implemented in the new photo-z code Phosphoros, can effectively reduce the bias in photo-zs on real data using the CFHTLS T007 survey, with an average measured bias Δz over the redshift range 0.
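The quantity the correction relies on is simply the transmission-weighted mean wavelength of each passband, lambda_bar = sum(lambda * T) / sum(T). A minimal sketch with a made-up top-hat filter curve (real filter curves are tabulated, not analytic):

```python
import numpy as np

# Mean wavelength of a passband: the single number per observation that the
# correction above needs. The top-hat transmission curve is an invented example.
lam = np.linspace(4000.0, 6000.0, 2001)              # wavelength grid [Angstrom]
T = np.where((lam > 4500) & (lam < 5500), 1.0, 0.0)  # toy transmission curve

lam_bar = (lam * T).sum() / T.sum()  # transmission-weighted mean wavelength
print(round(lam_bar, 1))             # 5000.0 for this symmetric passband
```

Because lambda_bar depends only on the filter, not on the source spectrum, one value per (filter, focal-plane position) pair is enough to shift the effective passband in a template-fitting photo-z code.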

    Euclid preparation. XXIX. Water ice in spacecraft, part I: The physics of ice formation and contamination

    Molecular contamination is a well-known problem in space flight. Water is the most common contaminant and alters numerous properties of a cryogenic optical system. Too much ice means that Euclid's calibration requirements and science goals cannot be met. Euclid must then be thermally decontaminated, a long and risky process. We need to understand how iced optics affect the data and when a decontamination is required. This is essential to build adequate calibration and survey plans, yet a comprehensive analysis in the context of an astrophysical space survey has not been done before. In this paper we look at other spacecraft with well-documented outgassing records, and we review the formation of thin ice films. A mix of amorphous and crystalline ices is expected for Euclid. Their surface topography depends on the competing energetic needs of the substrate-water and the water-water interfaces, and is hard to predict with current theories. We illustrate this with scanning-tunnelling and atomic-force microscope images. Industrial tools exist to estimate contamination, and we must understand their uncertainties. We find considerable knowledge errors on the diffusion and sublimation coefficients, limiting the accuracy of these tools. We developed a water transport model to compute contamination rates in Euclid, and find general agreement with industry estimates. Tests of the Euclid flight hardware in space simulators did not pick up contamination signals; our in-flight calibration observations will be much more sensitive. We must understand the link between the amount of ice on the optics and its effect on Euclid's data. Little research is available about this link, possibly because other spacecraft can decontaminate easily, removing the need for a deeper understanding. In our second paper we quantify the various effects of iced optics on spectrophotometric data.
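The sublimation coefficients mentioned above enter through kinetic-theory expressions such as the Hertz-Knudsen flux, J = P_vap(T) / sqrt(2 pi m k_B T). A rough illustrative sketch with a simplified Clausius-Clapeyron vapour pressure; the reference values P0, T0 and the latent heat L are approximate textbook numbers, not the calibrated coefficients discussed in the paper.

```python
import numpy as np

# Hertz-Knudsen sublimation flux for water ice (molecules per m^2 per s),
# with a simplified Clausius-Clapeyron vapour pressure. All constants are
# rough illustrative values, not the paper's calibrated model.
k_B = 1.380649e-23       # Boltzmann constant [J/K]
m = 2.99e-26             # mass of a water molecule [kg]
L = 51000.0 / 6.022e23   # sublimation enthalpy per molecule (~51 kJ/mol) [J]
P0, T0 = 611.0, 273.16   # reference vapour pressure [Pa] at the triple point

def flux(T):
    P = P0 * np.exp(-L / k_B * (1.0 / T - 1.0 / T0))  # vapour pressure [Pa]
    return P / np.sqrt(2.0 * np.pi * m * k_B * T)     # sublimation flux

for T in (150.0, 180.0, 210.0):
    print(T, flux(T))
```

The exponential temperature dependence is why the knowledge errors on these coefficients dominate the accuracy of the industrial contamination tools: a small uncertainty in L or T translates into orders of magnitude in the predicted flux.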