
    A framework to identify knowledge actor roles in enterprise social networks

    Enterprise social networks (ESN) are increasingly used by companies to reinforce collaboration and knowledge sharing. While prior research has investigated ESN use practices, little is known about potential user roles emerging on these platforms. Against this backdrop, this paper develops an ESN knowledge actor role framework

    Sampling functions for multimode homodyne tomography with a single local oscillator

    We derive various sampling functions for multimode homodyne tomography with a single local oscillator. These functions allow us to sample multimode s-parametrized quasidistributions, density matrix elements in the Fock basis, and s-ordered moments of arbitrary order directly from the measured quadrature statistics. The inevitable experimental losses can be compensated by a proper modification of the sampling functions. Results of Monte Carlo simulations for a squeezed three-mode state are reported, and the feasibility of reconstructing the three-mode Q-function and s-ordered moments from 10^7 data samples is demonstrated. Comment: 12 pages, 8 figures, REVTeX, submitted to Phys. Rev.
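
    The core trick such sampling (pattern) functions exploit is that an expectation value can be obtained by averaging a known kernel directly over the quadrature record. The sketch below is a single-mode toy illustration of that idea only, not the multimode, single-local-oscillator estimators derived in the paper: with the local-oscillator phase drawn uniformly, averaging x^2 (1 + 2 cos 2θ) over the samples gives an unbiased estimate of the x-quadrature variance, checked here on simulated squeezed-vacuum data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy single-mode homodyne record: N quadrature samples x_theta at uniformly random
    # LO phases theta, drawn from squeezed vacuum (hbar = 1, x = (a + a^+)/2, vacuum
    # variance 1/4). This stands in for the "measured quadrature statistics".
    N, r = 10**6, 0.8
    theta = rng.uniform(0.0, np.pi, N)
    var_theta = 0.25 * (np.exp(-2*r) * np.cos(theta)**2 + np.exp(2*r) * np.sin(theta)**2)
    x = rng.normal(0.0, np.sqrt(var_theta), N)

    # Sampling-function estimate of <x^2>: the kernel (1 + 2 cos 2θ) isolates the
    # x-quadrature second moment from the phase-averaged data.
    est = np.mean(x**2 * (1.0 + 2.0 * np.cos(2.0 * theta)))
    print(f"sampled <x^2> = {est:.5f}   exact = {0.25 * np.exp(-2*r):.5f}")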

    Conditional large Fock state preparation and field state reconstruction in Cavity QED

    We propose a scheme for producing large Fock states in Cavity QED via the implementation of a highly selective atom-field interaction. It is based on Raman excitation of a three-level atom by a classical field and a quantized field mode. Selectivity appears when one tunes to resonance a specific transition inside a chosen atom-field subspace, while other transitions remain dispersive, as a consequence of the field-dependent electronic energy shifts. We show that this scheme can also be employed for reconstructing, in a new and efficient way, the Wigner function of the cavity field state. Comment: 4 RevTeX pages with 3 postscript figures. Submitted for publication.
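
    For context, the relation that underlies Wigner-function reconstruction from photon-number statistics of a displaced cavity field (the paper's selective interaction is one way to access the required populations; the formula itself is the standard displaced-parity identity, not something specific to this scheme) reads
    \[
      W(\alpha) \;=\; \frac{2}{\pi}\,\mathrm{Tr}\!\left[\hat D(-\alpha)\,\hat\rho\,\hat D^{\dagger}(-\alpha)\,(-1)^{\hat a^{\dagger}\hat a}\right]
      \;=\; \frac{2}{\pi}\sum_{n=0}^{\infty}(-1)^{n}\,P_n(\alpha),
    \]
    where P_n(\alpha) is the photon-number distribution of the field displaced by -\alpha.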

    Euclid Preparation. TBD. Characterization of convolutional neural networks for the identification of galaxy-galaxy strong lensing events

    Forthcoming imaging surveys will increase the number of known galaxy-scale strong lenses by several orders of magnitude. For this to happen, images of billions of galaxies will have to be inspected to identify potential candidates. In this context, deep-learning techniques are particularly suitable for finding patterns in large data sets, and convolutional neural networks (CNNs) in particular can efficiently process large volumes of images. We assess and compare the performance of three network architectures in the classification of strong-lensing systems on the basis of their morphological characteristics. In particular, we implemented a classical CNN architecture, an inception network, and a residual network. We trained and tested our networks on different subsamples of a data set of 40 000 mock images whose characteristics were similar to those expected in the wide survey planned with the ESA mission Euclid, gradually including larger fractions of faint lenses. We also evaluated the importance of adding information about the color difference between the lens and source galaxies by repeating the same training on single- and multiband images. Our models find samples of clear lenses with ≳90% precision and completeness. Nevertheless, when lenses with fainter arcs are included in the training set, the performance of the three models deteriorates, with accuracy values ranging from ~0.87 to ~0.75, depending on the model. Specifically, the classical CNN and the inception network perform similarly in most of our tests, while the residual network generally produces worse results. Our analysis focuses on the application of CNNs to high-resolution space-like images, such as those that the Euclid telescope will deliver. Moreover, we investigated the optimal training strategy for this specific survey to fully exploit the scientific potential of the upcoming observations. We suggest that training the networks separately on lenses with different morphology might be needed to identify the faint arcs. We also tested the relevance of the color information for the detection of these systems, and we find that it does not yield a significant improvement. The accuracy ranges from ~0.89 to ~0.78 for the different models. The reason might be that the resolution of the Euclid telescope in the infrared bands is lower than that of the images in the visual band.
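
    As a rough sketch of the kind of "classical CNN" baseline described above (not the networks used in the paper; the layer widths and the 100-pixel cutout size are illustrative assumptions), a binary lens/non-lens classifier in which the single-band versus multiband case only changes the number of input channels could look like this in PyTorch:

    import torch
    import torch.nn as nn

    class LensCNN(nn.Module):
        """Minimal lens/non-lens classifier; in_channels=1 for a single (VIS-like) band,
        >1 when additional (e.g. infrared) bands are stacked as extra channels."""
        def __init__(self, in_channels: int = 1):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))  # one logit: "is a lens"

        def forward(self, x):
            return self.head(self.features(x))

    # Example forward pass on a batch of 8 single-band 100x100 cutouts (dummy data).
    model = LensCNN(in_channels=1)
    logits = model(torch.randn(8, 1, 100, 100))
    probs = torch.sigmoid(logits)                               # lens probability per cutout
    loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(8, 1))    # training would minimise this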

    Phase-space formulation of quantum mechanics and quantum state reconstruction for physical systems with Lie-group symmetries

    We present a detailed discussion of a general theory of phase-space distributions, introduced recently by the authors [J. Phys. A 31, L9 (1998)]. This theory provides a unified phase-space formulation of quantum mechanics for physical systems possessing Lie-group symmetries. The concept of generalized coherent states and the method of harmonic analysis are used to construct explicitly a family of phase-space functions which are postulated to satisfy the Stratonovich-Weyl correspondence with a generalized traciality condition. The symbol calculus for the phase-space functions is given by means of the generalized twisted product. The phase-space formalism is used to study the problem of the reconstruction of quantum states. In particular, we consider the reconstruction method based on measurements of displaced projectors, which comprises a number of recently proposed quantum-optical schemes and is also related to the standard methods of signal processing. A general group-theoretic description of this method is developed using the technique of harmonic expansions on the phase space. Comment: REVTeX, 18 pages, no figures.
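
    For reference, the Stratonovich-Weyl conditions that the family of phase-space functions W_\rho^{(s)}(\Omega) is postulated to satisfy are, in their commonly quoted form (the pairing of dual ordering indices s and -s in the last condition is the "generalized traciality"; the exact index convention may differ in detail from the paper):
    (i) the map \rho \mapsto W_\rho^{(s)} is linear and one-to-one;
    (ii) reality: W_{\rho^\dagger}^{(s)}(\Omega) = [W_\rho^{(s)}(\Omega)]^*;
    (iii) standardization: \int_\Omega W_\rho^{(s)}(\Omega)\, d\mu(\Omega) = \mathrm{Tr}\,\rho;
    (iv) covariance: W_{U(g)\rho U^\dagger(g)}^{(s)}(\Omega) = W_\rho^{(s)}(g^{-1}\cdot\Omega) for every group element g;
    (v) generalized traciality: \int_\Omega W_{\rho_1}^{(s)}(\Omega)\, W_{\rho_2}^{(-s)}(\Omega)\, d\mu(\Omega) = \mathrm{Tr}(\rho_1\rho_2).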

    Euclid Preparation. TBD. Impact of magnification on spectroscopic galaxy clustering

    In this paper we investigate the impact of lensing magnification on the analysis of Euclid's spectroscopic survey, using the multipoles of the 2-point correlation function for galaxy clustering. We determine the impact of lensing magnification on cosmological constraints, and the expected shift in the best-fit parameters if magnification is ignored. We consider two cosmological analyses: i) a full-shape analysis based on the ΛCDM model and its extension w0waCDM and ii) a model-independent analysis that measures the growth rate of structure in each redshift bin. We adopt two complementary approaches in our forecast: the Fisher matrix formalism and the Markov chain Monte Carlo method. The fiducial values of the local count slope (or magnification bias), which regulates the amplitude of the lensing magnification, have been estimated from the Euclid Flagship simulations. We use linear perturbation theory and model the 2-point correlation function with the public code coffe. For a ΛCDM model, we find that the estimation of cosmological parameters is biased at the level of 0.4-0.7 standard deviations, while for a w0waCDM dynamical dark energy model, lensing magnification has a somewhat smaller impact, with shifts below 0.5 standard deviations. In a model-independent analysis aiming to measure the growth rate of structure, we find that the estimation of the growth rate is biased by up to 1.2 standard deviations in the highest redshift bin. As a result, lensing magnification cannot be neglected in the spectroscopic survey, especially if we want to determine the growth factor, one of the most promising ways to test general relativity with Euclid. We also find that, by including lensing magnification with a simple template, this shift can be almost entirely eliminated with minimal computational overhead.
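
    The "expected shift in the best-fit parameters if magnification is ignored" follows the standard Fisher-bias formula Δθ = F^{-1} B, where B projects the unmodelled part of the signal onto the parameter derivatives. A toy numpy sketch of that calculation (the data vector, covariance, and the 2% "magnification-like" residual are illustrative stand-ins, not the coffe multipoles or Euclid specifications):

    import numpy as np

    npts, npar = 50, 2
    x = np.linspace(0.5, 2.0, npts)

    def model(theta):                          # toy observable, standing in for the multipoles
        A, n = theta
        return A * x**n

    theta_fid = np.array([1.0, 0.5])
    cov = np.diag((0.05 * model(theta_fid))**2)          # toy diagonal covariance
    icov = np.linalg.inv(cov)

    # Numerical derivatives of the model with respect to the parameters
    eps = 1e-4
    dm = np.array([(model(theta_fid + eps * np.eye(npar)[i]) -
                    model(theta_fid - eps * np.eye(npar)[i])) / (2 * eps)
                   for i in range(npar)])

    F = dm @ icov @ dm.T                       # Fisher matrix
    delta = 0.02 * model(theta_fid)            # unmodelled systematic (magnification stand-in)
    B = dm @ icov @ delta                      # its projection onto the parameters
    shift = np.linalg.solve(F, B)              # expected best-fit shift if it is ignored
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))
    print("shift in units of the forecast errors:", shift / sigma)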

    Euclid: modelling massive neutrinos in cosmology - a code comparison

    Material outgassing in a vacuum leads to molecular contamination, a well-known problem in spaceflight. Water is the most common contaminant in cryogenic spacecraft, altering numerous properties of optical systems. Too much ice means that Euclid’s calibration requirements cannot be met anymore. Euclid must then be thermally decontaminated, which is a month-long risky operation. We need to understand how ice affects our data to build adequate calibration and survey plans. A comprehensive analysis in the context of an astrophysical space survey has not been done before. In this paper we look at other spacecraft with well-documented outgassing records. We then review the formation of thin ice films, and find that for Euclid a mix of amorphous and crystalline ices is expected. Their surface topography – and thus optical properties – depend on the competing energetic needs of the substrate-water and the water-water interfaces, and they are hard to predict with current theories. We illustrate that with scanning-tunnelling and atomic-force microscope images of thin ice films. Sophisticated tools exist to compute contamination rates, and we must understand their underlying physical principles and uncertainties. We find considerable knowledge errors on the diffusion and sublimation coefficients, limiting the accuracy of outgassing estimates. We developed a water transport model to compute contamination rates in Euclid, and find agreement with industry estimates within the uncertainties. Tests of the Euclid flight hardware in space simulators did not pick up significant contamination signals, but they were also not geared towards this purpose; our in-flight calibration observations will be much more sensitive. To derive a calibration and decontamination strategy, we need to understand the link between the amount of ice in the optics and its effect on the data. There is little research about this, possibly because other spacecraft can decontaminate more easily, quenching the need for a deeper understanding. In our second paper, we quantify the impact of iced optics on Euclid’s data
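
    A central ingredient in any such water-transport estimate is the sublimation (and condensation) flux at an icy surface, usually taken from the Hertz-Knudsen relation. The sketch below shows only that single relation with illustrative numbers; it is not the authors' transport model, and the vapour pressure, temperature, and monolayer density used here are rough assumptions.

    import numpy as np

    K_B = 1.380649e-23                        # Boltzmann constant [J/K]
    M_H2O = 18.015e-3 / 6.02214076e23         # mass of one water molecule [kg]

    def hertz_knudsen_flux(T, p_vap, alpha=1.0):
        """Sublimation flux [molecules m^-2 s^-1] from the Hertz-Knudsen relation.
        T: surface temperature [K]; p_vap: equilibrium vapour pressure of ice at T [Pa];
        alpha: sublimation coefficient (order unity and poorly constrained)."""
        return alpha * p_vap / np.sqrt(2.0 * np.pi * M_H2O * K_B * T)

    # Illustrative numbers only: at ~150 K the vapour pressure of water ice is of order 1e-6 Pa.
    flux = hertz_knudsen_flux(T=150.0, p_vap=1e-6)
    monolayer = 1e19                          # ~1e19 molecules per m^2 in one monolayer (rough)
    print(f"{flux:.2e} molecules m^-2 s^-1  ~  {flux / monolayer:.2e} monolayers per second")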

    Euclid preparation. XXXII. Evaluating the weak lensing cluster mass biases using the Three Hundred Project hydrodynamical simulations

    The photometric catalogue of galaxy clusters extracted from ESA Euclid data is expected to be very competitive for cosmological studies. Using state-of-the-art hydrodynamical simulations, we present systematic analyses simulating the expected weak lensing profiles from clusters in a variety of dynamic states and at a wide range of redshifts. In order to derive cluster masses, we use a model consistent with the implementation within the Euclid Consortium of the dedicated processing function and find that, when jointly modelling mass and the concentration parameter of the Navarro-Frenk-White halo profile, the weak lensing masses tend to be, on average, biased low by 5-10% with respect to the true mass, up to z=0.5. Using a fixed value for the concentration, c200 = 3, the mass bias is diminished below 5%, up to z=0.7, along with its relative uncertainty. Simulating the weak lensing signal by projecting along the directions of the axes of the moment of inertia tensor ellipsoid, we find that orientation matters: when clusters are oriented along the major axis, the lensing signal is boosted, and the recovered weak lensing mass is correspondingly overestimated. Typically, the weak lensing mass bias of individual clusters is modulated by the weak lensing signal-to-noise ratio, related to the redshift evolution of the number of galaxies used for weak lensing measurements: the negative mass bias tends to be larger toward higher redshifts. However, when we use a fixed value of the concentration parameter, the redshift evolution trend is reduced. These results provide a solid basis for the weak-lensing mass calibration required by the cosmological application of future cluster surveys from Euclid and Rubin.
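
    For reference, the quantities in the one-parameter fit are the standard Navarro-Frenk-White definitions (the notation here is the usual one, not copied from the paper); fixing c200 = 3 leaves M200 as the only free parameter of the shear-profile model:
    \[
      \rho_{\rm NFW}(r) \;=\; \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^{2}}, \qquad
      c_{200} \;=\; \frac{r_{200}}{r_s}, \qquad
      M_{200} \;=\; 4\pi\rho_s r_s^{3}\left[\ln(1 + c_{200}) - \frac{c_{200}}{1 + c_{200}}\right],
    \]
    where r_200 is the radius within which the mean density is 200 times the critical density at the cluster redshift.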