    Sampling functions for multimode homodyne tomography with a single local oscillator

    We derive various sampling functions for multimode homodyne tomography with a single local oscillator. These functions allow us to sample multimode s-parametrized quasidistributions, density matrix elements in the Fock basis, and s-ordered moments of arbitrary order directly from the measured quadrature statistics. The inevitable experimental losses can be compensated by a proper modification of the sampling functions. Results of Monte Carlo simulations for a squeezed three-mode state are reported, and the feasibility of reconstructing the three-mode Q-function and s-ordered moments from 10^7 sampled data points is demonstrated. Comment: 12 pages, 8 figures, REVTeX, submitted to Phys. Rev.
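
    A toy single-mode illustration of the direct-sampling idea (the paper derives the general multimode functions): with the convention x_theta = (a e^{-i theta} + a^dag e^{i theta})/sqrt(2), the mean photon number can be sampled directly from quadrature data with the sampling function f(x, theta) = x^2 - 1/2. The Python sketch below assumes ideal detection and a squeezed-vacuum test state; it is only meant to show the Monte Carlo structure of such estimators.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated homodyne data for single-mode squeezed vacuum (assumed test state):
        # x_theta is Gaussian with variance (e^{-2r} cos^2(theta) + e^{2r} sin^2(theta)) / 2.
        r = 0.8                                      # squeezing parameter
        N = 10**6                                    # number of quadrature samples
        theta = rng.uniform(0.0, np.pi, N)           # uniformly scanned local-oscillator phases
        var = 0.5 * (np.exp(-2 * r) * np.cos(theta) ** 2 + np.exp(2 * r) * np.sin(theta) ** 2)
        x = rng.normal(0.0, np.sqrt(var))            # quadrature outcomes

        # Sampling function for the mean photon number: f(x, theta) = x^2 - 1/2.
        samples = x ** 2 - 0.5
        n_est = samples.mean()
        n_err = samples.std(ddof=1) / np.sqrt(N)     # statistical error of the Monte Carlo average

        print(f"estimated <n> = {n_est:.4f} +/- {n_err:.4f}, exact sinh^2(r) = {np.sinh(r) ** 2:.4f}")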

    Conditional large Fock state preparation and field state reconstruction in Cavity QED

    We propose a scheme for producing large Fock states in cavity QED via the implementation of a highly selective atom-field interaction. It is based on Raman excitation of a three-level atom by a classical field and a quantized field mode. Selectivity appears when one tunes to resonance a specific transition inside a chosen atom-field subspace, while other transitions remain dispersive, as a consequence of the field-dependent electronic energy shifts. We show that this scheme can also be employed for reconstructing, in a new and efficient way, the Wigner function of the cavity field state. Comment: 4 RevTeX pages with 3 postscript figures. Submitted for publication.
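
    One standard route from measured photon statistics to the Wigner function of a cavity field (stated here only as background, not necessarily the exact protocol of this proposal) is the displaced-parity relation W(alpha) = (2/pi) sum_n (-1)^n P_n, where P_n is the photon-number distribution of the field displaced by -alpha. The sketch below is a plain NumPy/SciPy check of that relation in a truncated Fock space with illustrative parameters, not an implementation of the cavity-QED scheme itself.

        import numpy as np
        from scipy.linalg import expm

        dim = 40                                       # Fock-space truncation (assumed sufficient)
        a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator in the Fock basis

        def displace(alpha):
            """Displacement operator D(alpha) = exp(alpha a^dag - alpha* a)."""
            return expm(alpha * a.conj().T - np.conj(alpha) * a)

        def wigner_point(rho, alpha):
            """W(alpha) from the photon statistics of the field displaced by -alpha."""
            D = displace(-alpha)
            p_n = np.real(np.diag(D @ rho @ D.conj().T))   # displaced photon-number distribution
            parity = (-1.0) ** np.arange(dim)
            return (2.0 / np.pi) * np.sum(parity * p_n)

        # Example: one-photon Fock state |1><1|; analytically W(0) = -2/pi.
        rho = np.zeros((dim, dim), dtype=complex)
        rho[1, 1] = 1.0
        print(wigner_point(rho, 0.0), -2.0 / np.pi)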

    Quantum inference of states and processes

    The maximum-likelihood principle unifies the inference of quantum states and processes from noisy experimental data. In particular, a generic quantum process may be estimated simultaneously with unknown quantum probe states, provided that measurements on probe and transformed probe states are available. Drawbacks of various approximate treatments are considered. Comment: 7 pages, 4 figures.
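
    For the state-estimation part, the maximum-likelihood state is commonly found with the iterative R-rho-R fixed-point algorithm. The sketch below applies it to a single qubit measured along the three Pauli axes; the POVM, the simulated frequencies and the number of iterations are illustrative assumptions, not details taken from the paper.

        import numpy as np

        # Six-outcome qubit POVM: projectors onto the +/- eigenstates of X, Y, Z, each weighted by 1/3.
        kets = [np.array(v, dtype=complex) / np.linalg.norm(v) for v in
                ([1, 1], [1, -1], [1, 1j], [1, -1j], [1, 0], [0, 1])]
        povm = [np.outer(k, k.conj()) / 3.0 for k in kets]

        # Illustrative measured relative frequencies (normalised to sum to 1).
        freqs = np.array([0.25, 0.083, 0.2, 0.133, 0.28, 0.054])
        freqs = freqs / freqs.sum()

        rho = np.eye(2, dtype=complex) / 2.0            # start from the maximally mixed state
        for _ in range(500):                            # fixed-point iteration rho <- R rho R / norm
            probs = np.array([np.real(np.trace(rho @ E)) for E in povm])
            R = sum(f / p * E for f, p, E in zip(freqs, probs, povm))
            rho = R @ rho @ R
            rho = rho / np.trace(rho)

        print(np.round(rho, 3))                         # maximum-likelihood estimate of the qubit state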

    Phase-space formulation of quantum mechanics and quantum state reconstruction for physical systems with Lie-group symmetries

    We present a detailed discussion of a general theory of phase-space distributions, introduced recently by the authors [J. Phys. A 31, L9 (1998)]. This theory provides a unified phase-space formulation of quantum mechanics for physical systems possessing Lie-group symmetries. The concept of generalized coherent states and the method of harmonic analysis are used to construct explicitly a family of phase-space functions which are postulated to satisfy the Stratonovich-Weyl correspondence with a generalized traciality condition. The symbol calculus for the phase-space functions is given by means of the generalized twisted product. The phase-space formalism is used to study the problem of the reconstruction of quantum states. In particular, we consider the reconstruction method based on measurements of displaced projectors, which comprises a number of recently proposed quantum-optical schemes and is also related to the standard methods of signal processing. A general group-theoretic description of this method is developed using the technique of harmonic expansions on the phase space. Comment: REVTeX, 18 pages, no figures.
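
    For reference, the Stratonovich-Weyl correspondence referred to above can be summarised schematically as follows (the precise group-theoretic statement, including the invariant measure d\mu(\Omega) and the dual ordering parameter, is given in the paper): the s-parametrized symbol of an operator \hat A is

        F_{\hat A}^{(s)}(\Omega) = \mathrm{Tr}\big[\hat A\,\hat\Delta^{(s)}(\Omega)\big],
        \qquad
        \int_\Omega d\mu(\Omega)\, F_{\hat A}^{(s)}(\Omega) = \mathrm{Tr}\,\hat A ,

    and the kernel is required to be covariant and tracial,

        F_{\hat U(g)\hat A\hat U^\dagger(g)}^{(s)}(\Omega) = F_{\hat A}^{(s)}\big(g^{-1}\cdot\Omega\big),
        \qquad
        \mathrm{Tr}\big[\hat A\hat B\big] = \int_\Omega d\mu(\Omega)\, F_{\hat A}^{(s)}(\Omega)\, F_{\hat B}^{(-s)}(\Omega).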

    Euclid Preparation TBD. Characterization of convolutional neural networks for the identification of galaxy-galaxy strong lensing events

    Forthcoming imaging surveys will increase the number of known galaxy-scale strong lenses by several orders of magnitude. For this to happen, images of billions of galaxies will have to be inspected to identify potential candidates. In this context, deep-learning techniques are particularly suitable for finding patterns in large data sets, and convolutional neural networks (CNNs) in particular can efficiently process large volumes of images. We assess and compare the performance of three network architectures in the classification of strong-lensing systems on the basis of their morphological characteristics. In particular, we implemented a classical CNN architecture, an inception network, and a residual network. We trained and tested our networks on different subsamples of a data set of 40 000 mock images whose characteristics were similar to those expected in the wide survey planned with the ESA mission Euclid, gradually including larger fractions of faint lenses. We also evaluated the importance of adding information about the color difference between the lens and source galaxies by repeating the same training on single- and multiband images. Our models find samples of clear lenses with ≳90% precision and completeness. Nevertheless, when lenses with fainter arcs are included in the training set, the performance of the three models deteriorates, with accuracy values of ~0.87 to ~0.75, depending on the model. Specifically, the classical CNN and the inception network perform similarly in most of our tests, while the residual network generally produces worse results. Our analysis focuses on the application of CNNs to high-resolution space-like images, such as those that the Euclid telescope will deliver. Moreover, we investigated the optimal training strategy for this specific survey to fully exploit the scientific potential of the upcoming observations. We suggest that training the networks separately on lenses with different morphology might be needed to identify the faint arcs. We also tested the relevance of the color information for the detection of these systems, and we find that it does not yield a significant improvement: the accuracy ranges from ~0.89 to ~0.78 for the different models. The reason might be that the resolution of the Euclid telescope in the infrared bands is lower than that of the images in the visual band.
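
    None of the three architectures is reproduced here, but a minimal single-band lens/non-lens classifier of the kind being compared could look like the Keras sketch below; the cutout size, layer widths and metrics are illustrative assumptions, not the configurations used in the paper.

        import tensorflow as tf

        # Minimal single-band CNN classifier for strong-lens candidates (illustrative only).
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(100, 100, 1)),        # assumed cutout size, single band
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dropout(0.3),
            tf.keras.layers.Dense(1, activation="sigmoid"),    # P(lens)
        ])

        model.compile(
            optimizer="adam",
            loss="binary_crossentropy",
            metrics=["accuracy",
                     tf.keras.metrics.Precision(name="precision"),
                     tf.keras.metrics.Recall(name="completeness")],
        )
        # Hypothetical usage: model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)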

    Euclid preparation. XXIV. Calibration of the halo mass function in Λ(ν)CDM cosmologies

    Euclid’s photometric galaxy cluster survey has the potential to be a very competitive cosmological probe. The main cosmological observable for clusters is their number count, for which the halo mass function (HMF) is a key theoretical quantity. We present a new calibration of the analytic HMF, at the level of accuracy and precision required for the uncertainty in this quantity to be subdominant with respect to other sources of uncertainty in recovering cosmological parameters from Euclid cluster counts. Our model is calibrated against a suite of N-body simulations using a Bayesian approach that takes into account systematic errors arising from numerical effects in the simulations. First, we test the convergence of HMF predictions from different N-body codes, using initial conditions generated with different orders of Lagrangian perturbation theory and adopting different simulation box sizes and mass resolutions. Then, we quantify the effect of using different halo finder algorithms, and how the resulting differences propagate to the cosmological constraints. In order to trace the violation of universality in the HMF, we also analyse simulations based on initial conditions characterised by scale-free power spectra with different spectral indices, assuming both Einstein–de Sitter and standard ΛCDM expansion histories. Based on these results, we construct a fitting function for the HMF that we demonstrate to be sub-percent accurate in reproducing results from 9 different variants of the ΛCDM model, including massive-neutrino cosmologies. The calibration systematic uncertainty is largely subdominant with respect to the expected precision of future mass–observable relations, with the only notable exception of the effect due to the halo finder, which could lead to biased cosmological inference.
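
    The calibrated fitting function itself is not reproduced here, but analytic HMF models of this kind are conventionally written in terms of the mass variance sigma(M, z) and a multiplicity function f(sigma); a generic parameterization in the spirit of Tinker et al. (2008), not necessarily the form calibrated in this paper, reads

        \frac{dn}{d\ln M} = f(\sigma)\, \frac{\bar\rho_{\rm m}}{M}\, \left|\frac{d\ln\sigma}{d\ln M}\right| ,
        \qquad
        f(\sigma) = A\left[\left(\frac{\sigma}{b}\right)^{-a} + 1\right] e^{-c/\sigma^{2}} ,

        \sigma^{2}(M, z) = \frac{1}{2\pi^{2}} \int_{0}^{\infty} dk\, k^{2}\, P(k, z)\, \tilde W^{2}(kR),
        \qquad
        M = \frac{4\pi}{3}\,\bar\rho_{\rm m} R^{3} ,

    where (A, a, b, c) are the parameters to be calibrated against simulations and \tilde W is the Fourier transform of the top-hat window.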

    Quantum state reconstruction using atom optics

    We present a novel technique in which the total internal quantum state of an atom may be reconstructed via the measurement of the momentum transferred to the atom following its interaction with a near-resonant travelling-wave laser beam. We report the first such measurement and demonstrate the feasibility of the technique.

    Euclid: modelling massive neutrinos in cosmology - a code comparison

    Material outgassing in a vacuum leads to molecular contamination, a well-known problem in spaceflight. Water is the most common contaminant in cryogenic spacecraft, altering numerous properties of optical systems. Too much ice means that Euclid’s calibration requirements cannot be met anymore. Euclid must then be thermally decontaminated, which is a month-long, risky operation. We need to understand how ice affects our data to build adequate calibration and survey plans. A comprehensive analysis in the context of an astrophysical space survey has not been done before. In this paper we look at other spacecraft with well-documented outgassing records. We then review the formation of thin ice films, and find that for Euclid a mix of amorphous and crystalline ices is expected. Their surface topography – and thus optical properties – depend on the competing energetic needs of the substrate-water and the water-water interfaces, and they are hard to predict with current theories. We illustrate that with scanning-tunnelling and atomic-force microscope images of thin ice films. Sophisticated tools exist to compute contamination rates, and we must understand their underlying physical principles and uncertainties. We find considerable knowledge errors on the diffusion and sublimation coefficients, limiting the accuracy of outgassing estimates. We developed a water transport model to compute contamination rates in Euclid, and find agreement with industry estimates within the uncertainties. Tests of the Euclid flight hardware in space simulators did not pick up significant contamination signals, but they were also not geared towards this purpose; our in-flight calibration observations will be much more sensitive. To derive a calibration and decontamination strategy, we need to understand the link between the amount of ice in the optics and its effect on the data. There is little research about this, possibly because other spacecraft can decontaminate more easily, quenching the need for a deeper understanding. In our second paper, we quantify the impact of iced optics on Euclid’s data.
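
    As a rough illustration of why the sublimation coefficient matters, the free-sublimation mass flux of water ice can be estimated with the Hertz-Knudsen relation, J = alpha * p_vap(T) * sqrt(m / (2 pi k_B T)); the Python sketch below uses an assumed single-exponential vapour-pressure law and a sublimation coefficient of unity, so it only illustrates the steep temperature dependence, not the detailed water transport model developed in the paper.

        import numpy as np

        K_B = 1.380649e-23               # Boltzmann constant [J/K]
        M_H2O = 18.015e-3 / 6.022e23     # mass of one water molecule [kg]

        def p_vap(T):
            """Assumed single-exponential vapour pressure of water ice [Pa] (illustrative fit)."""
            return 3.41e12 * np.exp(-6141.7 / T)

        def sublimation_flux(T, alpha=1.0):
            """Hertz-Knudsen free-sublimation mass flux [kg m^-2 s^-1]."""
            return alpha * p_vap(T) * np.sqrt(M_H2O / (2.0 * np.pi * K_B * T))

        for T in (100.0, 120.0, 140.0, 160.0):   # representative cryogenic temperatures [K]
            print(f"T = {T:5.1f} K  ->  flux ~ {sublimation_flux(T):.3e} kg m^-2 s^-1")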

    Euclid preparation: XXIII. Derivation of galaxy physical properties with deep machine learning using mock fluxes and H-band images

    Next-generation telescopes, such as Euclid, Rubin/LSST, and Roman, will open new windows on the Universe, allowing us to infer physical properties for tens of millions of galaxies. Machine learning methods are increasingly becoming the most efficient tools to handle this enormous amount of data, not only because they are faster to apply to data samples than traditional methods, but also because they are often more accurate. Properly understanding their applications and limitations for the exploitation of these data is of utmost importance. In this paper we present an exploration of this topic by investigating how well redshifts, stellar masses, and star-formation rates can be measured with deep learning algorithms for galaxies within data that mimics the Euclid and Rubin/LSST surveys. We find that deep learning neural networks and convolutional neural networks (CNNs), which are dependent on the parameter space of the sample used for training, perform well in measuring the properties of these galaxies and have an accuracy which is better than traditional methods based on spectral energy distribution fitting. CNNs allow the processing of multi-band magnitudes together with H_E-band images. We find that the estimates of stellar masses improve with the use of an image, but those of redshift and star-formation rates do not. Our best machine learning results are: i) the redshift recovered within a normalised error of less than 0.15 for 99.9% of the galaxies in the sample with S/N > 3 in the H_E-band; ii) the stellar mass within a factor of two (~0.3 dex) for 99.5% of the considered galaxies; iii) the star-formation rates within a factor of two (~0.3 dex) for ~70% of the sample. We discuss the implications of our work for application to surveys, mainly but not limited to Euclid and Rubin/LSST, and how measurements of these galaxy parameters can be improved with deep learning.
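
    The networks used in the paper are not reproduced here, but combining multi-band magnitudes with an H_E-band image naturally maps onto a two-branch architecture like the Keras sketch below; the image size, number of bands and layer widths are illustrative assumptions, and the three outputs stand for redshift, log stellar mass and log SFR.

        import tensorflow as tf

        # Image branch: H_E-band postage stamp (size is an assumption).
        img_in = tf.keras.Input(shape=(64, 64, 1), name="h_band_image")
        x = tf.keras.layers.Conv2D(16, 3, activation="relu")(img_in)
        x = tf.keras.layers.MaxPooling2D()(x)
        x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)

        # Photometry branch: multi-band magnitudes (8 bands assumed).
        mag_in = tf.keras.Input(shape=(8,), name="magnitudes")
        y = tf.keras.layers.Dense(32, activation="relu")(mag_in)

        # Merge the two branches and regress the three physical properties jointly.
        z = tf.keras.layers.Concatenate()([x, y])
        z = tf.keras.layers.Dense(64, activation="relu")(z)
        out = tf.keras.layers.Dense(3, name="z_logMstar_logSFR")(z)

        model = tf.keras.Model(inputs=[img_in, mag_in], outputs=out)
        model.compile(optimizer="adam", loss="mse")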

    Euclid Preparation. TBD. Impact of magnification on spectroscopic galaxy clustering

    In this paper we investigate the impact of lensing magnification on the analysis of Euclid's spectroscopic survey, using the multipoles of the 2-point correlation function for galaxy clustering. We determine the impact of lensing magnification on cosmological constraints, and the expected shift in the best-fit parameters if magnification is ignored. We consider two cosmological analyses: i) a full-shape analysis based on the ΛCDM model and its extension w_0w_aCDM, and ii) a model-independent analysis that measures the growth rate of structure in each redshift bin. We adopt two complementary approaches in our forecast: the Fisher matrix formalism and the Markov chain Monte Carlo method. The fiducial values of the local count slope (or magnification bias), which regulates the amplitude of the lensing magnification, have been estimated from the Euclid Flagship simulations. We use linear perturbation theory and model the 2-point correlation function with the public code coffe. For a ΛCDM model, we find that the estimation of cosmological parameters is biased at the level of 0.4-0.7 standard deviations, while for a w_0w_aCDM dynamical dark energy model, lensing magnification has a somewhat smaller impact, with shifts below 0.5 standard deviations. In a model-independent analysis aiming to measure the growth rate of structure, we find that the estimation of the growth rate is biased by up to 1.2 standard deviations in the highest redshift bin. As a result, lensing magnification cannot be neglected in the spectroscopic survey, especially if we want to determine the growth factor, one of the most promising ways to test general relativity with Euclid. We also find that, by including lensing magnification with a simple template, this shift can be almost entirely eliminated with minimal computational overhead.
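
    The "expected shift in the best-fit parameters" is, in the Fisher-matrix approach, given by the standard linear bias formula: writing xi^mag and xi^nomag for the multipole data vectors with and without magnification and C for their covariance, one has, schematically,

        F_{\alpha\beta} = \sum_{ij} \frac{\partial \xi_i}{\partial \theta_\alpha}\, C^{-1}_{ij}\, \frac{\partial \xi_j}{\partial \theta_\beta},
        \qquad
        B_\alpha = \sum_{ij} \frac{\partial \xi_i}{\partial \theta_\alpha}\, C^{-1}_{ij} \left(\xi_j^{\rm mag} - \xi_j^{\rm nomag}\right),
        \qquad
        \Delta\theta_\alpha = \sum_\beta \left(F^{-1}\right)_{\alpha\beta} B_\beta ,

    with the shifts quoted above expressed in units of the corresponding 1-sigma Fisher errors, (F^{-1})_{\alpha\alpha}^{1/2}.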