
    Revisiting the Glick-Rogoff Current Account Model: An Application to the Current Accounts of BRICS Countries

    Understanding what drives changes in current accounts is one of the most important macroeconomic issues for developing countries. Excessive current-account surpluses can trigger trade wars, while excessive deficits can induce currency crises. The Glick-Rogoff (1995, Journal of Monetary Economics) model, which emphasizes productivity shocks at home and in the world economy, fit the developed economies of the 1970s and 1980s well. However, the Glick-Rogoff model fits poorly when applied to the fast-growing BRICS countries over a period that includes the global financial crisis. We conclude that current accounts are driven by different mechanisms in developed and developing countries.
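The empirical core of the Glick-Rogoff approach is a regression of current-account changes on country-specific and global productivity shocks. A minimal sketch of such a regression on synthetic data (the coefficient values and shock series are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 40   # years of annual observations (synthetic)

# Synthetic country-specific and world productivity shocks
dA_country = rng.normal(0.0, 1.0, T)
dA_world = rng.normal(0.0, 1.0, T)

# Generate current-account changes with assumed illustrative coefficients
beta_c, beta_w = -0.6, 0.3
dCA = beta_c * dA_country + beta_w * dA_world + rng.normal(0.0, 0.2, T)

# OLS: regress current-account changes on the two productivity shocks
X = np.column_stack([dA_country, dA_world])
coef, *_ = np.linalg.lstsq(X, dCA, rcond=None)
print(coef)   # estimates should land near (-0.6, 0.3)
```

With real data, the regressors would be measured productivity growth at home and abroad, and the fit of such a regression across subsamples is what distinguishes developed from developing economies in the paper's conclusion.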

    Planck 2015 results. V. LFI calibration

    We present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering four years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the cosmic microwave background dipole, but we now use the orbital component, rather than adopting the Wilkinson Microwave Anisotropy Probe (WMAP) solar dipole. This allows our 2015 LFI analysis to provide an independent solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3% in amplitude) of the WMAP value. This 0.3% shift in the peak-to-peak dipole temperature relative to WMAP, together with a general overhaul of the iterative calibration code, increases the overall level of the LFI maps by 0.45% (30 GHz), 0.64% (44 GHz), and 0.82% (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is now at the level of 0.20% for the 70 GHz map, 0.26% for the 44 GHz map, and 0.35% for the 30 GHz map. We provide a detailed description of the impact of all the changes implemented in the calibration since the previous data release.
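The orbital-dipole calibrator exploits the first-order Doppler modulation ΔT ≈ T_CMB (v/c) cos θ induced by the spacecraft's motion. A back-of-the-envelope sketch of the signal amplitude (the orbital speed used here is an approximate round number, not the mission value):

```python
# Back-of-the-envelope amplitude of the orbital dipole calibrator
T_CMB = 2.72548       # K, CMB monopole temperature (Fixsen 2009)
c = 299_792.458       # km/s, speed of light
v_orbital = 30.0      # km/s, approximate orbital speed (assumed round number)

# First-order Doppler modulation: Delta T = T_CMB * (v / c)
dT_orbital = T_CMB * v_orbital / c   # roughly 0.27 mK
print(f"orbital dipole amplitude: {dT_orbital * 1e6:.0f} microkelvin")
```

This few-hundred-microkelvin modulation is roughly an order of magnitude below the ~3 mK solar dipole, which is why sub-percent control of calibration systematics is the relevant scale here.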

    Euclid: Constraining ensemble photometric redshift distributions with stacked spectroscopy

    Context. The ESA Euclid mission will produce photometric galaxy samples over 15 000 square degrees of the sky that will be rich for clustering and weak lensing statistics. The accuracy of the cosmological constraints derived from these measurements will depend on the knowledge of the underlying redshift distributions based on photometric redshift calibrations. Aims. A new approach is proposed to use the stacked spectra from Euclid slitless spectroscopy to augment broad-band photometric information to constrain the redshift distribution with spectral energy distribution fitting. The high spectral resolution available in the stacked spectra complements the photometry and helps to break the colour-redshift degeneracy and constrain the redshift distribution of galaxy samples. Methods. We modelled the stacked spectra as a linear mixture of spectral templates. The mixture may be inverted to infer the underlying redshift distribution using constrained regression algorithms. We demonstrate the method on simulated Vera C. Rubin Observatory and Euclid mock survey data sets based on the Euclid Flagship mock galaxy catalogue. We assess the accuracy of the reconstruction by considering the inference of the baryon acoustic scale from angular two-point correlation function measurements. Results. We selected mock photometric galaxy samples at redshift z > 1 using the self-organising map algorithm. Considering the idealised case without dust attenuation, we find that the redshift distributions of these samples can be recovered with 0.5% accuracy on the baryon acoustic scale. The estimates are not significantly degraded by the spectroscopic measurement noise due to the large sample size. However, the error degrades to 2% when the dust attenuation model is left free. We find that the colour degeneracies introduced by attenuation limit the accuracy considering the wavelength coverage of Euclid near-infrared spectroscopy.
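The inversion step described above — modelling the stack as a linear mixture of templates and recovering a non-negative redshift distribution — can be sketched with non-negative least squares (synthetic templates and distribution; the actual pipeline's regression scheme may differ):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_wave, n_z = 200, 10   # wavelength pixels and redshift bins (toy sizes)

# Hypothetical template matrix: column j is the mean spectrum a galaxy
# in redshift bin j would contribute to the stack
A = rng.random((n_wave, n_z))

# True (non-negative) redshift distribution and the observed stack
n_true = np.abs(rng.normal(0.0, 1.0, n_z))
stack = A @ n_true + rng.normal(0.0, 0.01, n_wave)

# Constrained regression: recover n(z) >= 0 from the stacked spectrum
n_est, resid = nnls(A, stack)
print(np.max(np.abs(n_est - n_true)))   # small reconstruction error
```

The non-negativity constraint plays the role of the "constrained regression" in the abstract: it suppresses the unphysical oscillating solutions an unconstrained inversion would allow.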

    Euclid: Modelling massive neutrinos in cosmology -- a code comparison

    The measurement of the absolute neutrino mass scale from cosmological large-scale clustering data is one of the key science goals of the Euclid mission. Such a measurement relies on precise modelling of the impact of neutrinos on structure formation, which can be studied with N-body simulations. Here we present the results from a major code comparison effort to establish the maturity and reliability of numerical methods for treating massive neutrinos. The comparison includes eleven full N-body implementations (not all of them independent), two N-body schemes with approximate time integration, and four additional codes that directly predict or emulate the matter power spectrum. Using a common set of initial data we quantify the relative agreement on the nonlinear power spectrum of cold dark matter and baryons and, for the N-body codes, also the relative agreement on the bispectrum, halo mass function, and halo bias. We find that the different numerical implementations produce fully consistent results. We can therefore be confident that we can model the impact of massive neutrinos at the sub-percent level in the most common summary statistics. We also provide a code validation pipeline for future reference. (43 pages, 17 figures, 2 tables; published on behalf of the Euclid Consortium; data available at https://doi.org/10.5281/zenodo.729797)
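A comparison of this kind ultimately reduces to a simple summary statistic: the fractional deviation of each code's power spectrum from a common reference, here taken as the ensemble mean (toy spectra with a hypothetical 0.2% code-to-code scatter standing in for real numerical differences):

```python
import numpy as np

rng = np.random.default_rng(2)
k = np.logspace(-2, 1, 50)   # wavenumbers (toy grid)

# Hypothetical power spectra from several codes: a common shape plus
# small multiplicative scatter standing in for numerical differences
P_ref = 1e4 * k / (1 + (k / 0.1) ** 2.5)
spectra = [P_ref * (1 + rng.normal(0, 0.002, k.size)) for _ in range(5)]

# Relative agreement: fractional deviation from the ensemble mean
P_mean = np.mean(spectra, axis=0)
max_dev = max(np.max(np.abs(P / P_mean - 1)) for P in spectra)
print(f"max fractional deviation: {max_dev:.3%}")   # sub-percent scatter
```

"Sub-percent agreement" in the abstract corresponds to this kind of deviation staying below 0.01 over the scales of interest.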

    Euclid preparation - VII. Forecast validation for Euclid cosmological probes

    Aims. The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts. Methods. We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimated the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training-set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required. Results. We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice for these types of settings. 
We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
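The Fisher-matrix machinery underlying such forecasts is compact: with Gaussian errors, F_ab = Σ_i (∂d_i/∂θ_a)(∂d_i/∂θ_b)/σ_i², and the marginalized 1σ forecasts are the square roots of the diagonal of F⁻¹. A toy two-parameter sketch (the model and numbers are illustrative, not Euclid's actual probes):

```python
import numpy as np

# Toy model: d_i = theta0 * x_i + theta1 * x_i**2 with Gaussian errors.
# Fisher matrix: F_ab = sum_i (dd_i/dtheta_a)(dd_i/dtheta_b) / sigma_i**2
x = np.linspace(0.1, 1.0, 20)       # hypothetical data points
sigma = 0.05 * np.ones_like(x)      # hypothetical measurement errors

dd = np.column_stack([x, x**2])     # model derivatives, shape (n_data, 2)
F = (dd.T / sigma**2) @ dd          # Fisher matrix

# Marginalized 1-sigma forecasts: sqrt of the diagonal of the inverse
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print(errors)
```

Validating forecasts across implementations, as the paper does, then amounts to checking that independently computed Fisher matrices and the resulting marginalized errors agree to well below the quoted precision.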

    Euclid preparation: XVI. Exploring the ultra-low surface brightness Universe with Euclid/VIS

    Context. While Euclid is an ESA mission specifically designed to investigate the nature of dark energy and dark matter, the planned unprecedented combination of survey area (∼15 000 deg²), spatial resolution, low sky background, and depth also makes Euclid an excellent space observatory for the study of the low surface brightness Universe. Scientific exploitation of extended low surface brightness structures requires dedicated calibration procedures that are yet to be tested. Aims. We investigate the capabilities of Euclid to detect extended low surface brightness structure by identifying and quantifying sky-background sources and stray-light contamination. We test the feasibility of generating sky flat-fields to reduce large-scale residual gradients in order to reveal the extended emission of galaxies observed in the Euclid survey. Methods. We simulated a realistic set of Euclid/VIS observations, taking into account both instrumental and astronomical sources of contamination, including cosmic rays, stray light, zodiacal light, the interstellar medium, and the cosmic infrared background, while simulating the effects of background sources in the field of view. Results. We demonstrate that a combination of calibration lamps, sky flats, and self-calibration would enable recovery of emission at a limiting surface brightness magnitude of μ_lim = 29.5^{+0.08}_{-0.27} mag arcsec⁻² (3σ, 10 × 10 arcsec²) in the Wide Survey, reaching regions 2 mag deeper in the Deep Surveys. Conclusions. Euclid/VIS has the potential to be an excellent low surface brightness observatory. Bridging the gap between pixel-to-pixel calibration-lamp flats and self-calibration observations at large scales, sky flat-fielding will enhance the sensitivity of the VIS detector at scales larger than 1″, up to the size of the field of view, enabling Euclid to detect extended surface brightness structures below μ_lim = 31 mag arcsec⁻² and beyond.
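Sky flat-fields of the kind mentioned here are typically built by normalising and median-combining many dithered, source-masked exposures, so the large-scale sensitivity pattern survives while sky variations and sources average out. A toy sketch of that construction (the image size, noise level, and 5% gradient are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy detector: a hypothetical 5% large-scale sensitivity gradient
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
flat_true = 1 + 0.05 * (xx / nx - 0.5)

# Many dithered exposures: varying sky level, pixel noise, masked sources
exposures = []
for _ in range(30):
    sky = rng.uniform(90, 110)                   # sky level per exposure
    img = flat_true * sky + rng.normal(0, 1, (ny, nx))
    img[rng.random((ny, nx)) < 0.05] = np.nan    # masked source pixels
    exposures.append(img / np.nanmedian(img))    # normalise each exposure

# Median-combine the normalised exposures to recover the sky flat
sky_flat = np.nanmedian(exposures, axis=0)
err = np.nanmax(np.abs(sky_flat - flat_true / np.median(flat_true)))
print(f"max flat-field error: {err:.3f}")
```

The median over normalised exposures is what rejects the masked sources and transient contamination, leaving only the smooth instrumental gradient that the calibration aims to remove.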

    Euclid preparation. XXXI. The effect of the variations in photometric passbands on photometric-redshift accuracy

    The technique of photometric redshifts has become essential for the exploitation of multi-band extragalactic surveys. While the requirements on photo-zs for the study of galaxy evolution mostly pertain to precision and to the fraction of outliers, the most stringent requirement for their use in cosmology is on accuracy, with bias controlled at the sub-percent level for the Euclid cosmology mission. A separate, and challenging, calibration process is needed to control the bias at this level of accuracy. The bias in photo-zs has several distinct origins that may not always be easily overcome. We identify here one source of bias linked to the spatial or temporal variability of the passbands used to determine the photometric colours of galaxies. We first quantified the effect as observed on several well-known photometric cameras, and found in particular that, due to the properties of optical filters, the redshifts of off-axis sources are usually overestimated. We show using simple simulations that the detailed and complex changes in the passband shape can be mostly ignored and that it is sufficient to know the mean wavelength of the passband of each photometric observation to correct almost exactly for this bias; the key point is that this mean wavelength is independent of the spectral energy distribution of the source. We use this property to propose a correction that can be implemented efficiently in some photo-z algorithms, in particular template fitting. We verified that our algorithm, implemented in the new photo-z code Phosphoros, can effectively reduce the bias in photo-zs on real data from the CFHTLS T007 survey, with the average measured bias Δz over the redshift range 0.4 < z < 0.7 decreasing by about 0.02, specifically from Δz ≈ 0.04 to Δz ≈ 0.02 around z = 0.5. Our algorithm is also able to produce corrected photometry for other applications. (19 pages, 13 figures; accepted for publication in A&A)
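The mean wavelength in question is the transmission-weighted average λ̄ = ∫T(λ)λ dλ / ∫T(λ) dλ, which indeed depends only on the filter curve, not on the source spectrum. A toy sketch of how an off-axis (blueshifted) filter curve shifts this quantity (the Gaussian passbands and the 30 Å offset are invented for illustration):

```python
import numpy as np

lam = np.linspace(4000.0, 6000.0, 2001)   # wavelength grid in Angstrom

# Hypothetical Gaussian passbands: on-axis, and off-axis shifted blueward
T_centre = np.exp(-0.5 * ((lam - 5000.0) / 300.0) ** 2)
T_edge = np.exp(-0.5 * ((lam - 4970.0) / 300.0) ** 2)

def mean_wavelength(transmission, lam):
    """Transmission-weighted mean wavelength (uniform grid)."""
    return np.sum(transmission * lam) / np.sum(transmission)

shift = mean_wavelength(T_centre, lam) - mean_wavelength(T_edge, lam)
print(f"mean-wavelength shift: {shift:.1f} Angstrom")
```

A template-fitting photo-z code can absorb such a shift by evaluating each observation's synthetic photometry at the passband's actual mean wavelength instead of the nominal one, which is the essence of the correction proposed above.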

    Euclid: Forecasts from redshift-space distortions and the Alcock-Paczynski test with cosmic voids

    Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have spectroscopic redshifts available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock-Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and D_M H, where f is the linear growth rate of density fluctuations, b the galaxy bias, D_M the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on D_M H in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous, more approximate forecasts.
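The stacking step at the heart of the void analysis — averaging galaxy counts in radial shells around many void centres to estimate the void-galaxy cross-correlation — can be sketched on a toy periodic box (completely empty spherical voids, no redshift-space distortions, all units arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy mock: uniform galaxies in a periodic box, with empty spheres
# ("voids") carved out around known centres (all units arbitrary)
box, r_void = 100.0, 5.0
gal = rng.random((20000, 3)) * box
voids = rng.random((20, 3)) * box

def sep(points, centre):
    """Periodic (minimum-image) distances from a centre to all points."""
    d = (points - centre + box / 2) % box - box / 2
    return np.linalg.norm(d, axis=1)

for c in voids:                      # carve: drop galaxies inside each void
    gal = gal[sep(gal, c) >= r_void]

# Stacked void-galaxy cross-correlation: galaxy density in radial shells
# around the void centres, relative to the box mean
edges = np.linspace(0.0, 15.0, 11)
counts = sum(np.histogram(sep(gal, c), edges)[0] for c in voids)
shell_vol = 4 / 3 * np.pi * np.diff(edges**3) * len(voids)
xi = counts / (shell_vol * len(gal) / box**3) - 1
print(xi)   # -1 inside the empty voids, rising towards 0 at large radii
```

In the real analysis the profile is measured along and across the line of sight; redshift-space distortions and Alcock-Paczynski geometry then show up as anisotropies in this stacked profile, which is what the f/b and D_M H fit exploits.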