322 research outputs found

    Revised planet brightness temperatures using the Planck /LFI 2018 data release

    Aims. We present new estimates of the brightness temperatures of Jupiter, Saturn, Uranus, and Neptune based on measurements carried out in 2009-2013 by Planck/LFI at 30, 44, and 70 GHz and released to the public in 2018. This work extends the results presented in the 2013 and 2015 Planck/LFI Calibration Papers, based on the data acquired in 2009-2011. Methods. Planck observed each planet up to eight times during the nominal mission. We processed time-ordered data from the 22 LFI radiometers to derive planet antenna temperatures for each planet and transit. We accounted for the beam shape, radiometer bandpasses, and several systematic effects. We compared our results with those from the ninth year of WMAP, Planck/HFI observations, and existing data and models for planetary microwave emissivity. Results. For Jupiter we obtain Tb = 144.9, 159.8, 170.5 K (±0.2 K at 1σ, with temperatures expressed on the Rayleigh-Jeans scale) at 30, 44, and 70 GHz, respectively, or equivalently a band-averaged Planck temperature Tb(ba) = 144.7, 160.3, 171.2 K, in good agreement with WMAP and existing models. A slight excess at 30 GHz with respect to models is interpreted as an effect of synchrotron emission. Our measurements for Saturn agree with the WMAP results: for the rings we obtain Tb = 9.2 ± 1.4, 12.6 ± 2.3, 16.2 ± 0.8 K, while for the disc we obtain Tb = 140.0 ± 1.4, 147.2 ± 1.2, 150.2 ± 0.4 K, or equivalently Tb(ba) = 139.7, 147.8, 151.0 K. Our measurements for Uranus (Tb = 152 ± 6, 145 ± 3, 132.0 ± 2 K, or Tb(ba) = 152, 145, 133 K) and Neptune (Tb = 154 ± 11, 148 ± 9, 128 ± 3 K, or Tb(ba) = 154, 149, 128 K) agree closely with WMAP and with previous data in the literature.
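
    The band-averaged temperatures Tb(ba) quoted above fold the planet's spectrum through each radiometer's bandpass. The following minimal Python sketch illustrates that idea only; it is not the LFI pipeline, and the Gaussian bandpass and linear Tb(nu) model for the hypothetical 70 GHz channel are made-up placeholders.

        import numpy as np

        def band_averaged_tb(bandpass, tb_model):
            """Bandpass-weighted mean of a brightness-temperature spectrum.
            Assumes a uniformly spaced frequency grid, so the spacing cancels."""
            return np.sum(bandpass * tb_model) / np.sum(bandpass)

        nu = np.linspace(57.0, 83.0, 261)               # frequency grid, GHz
        tau = np.exp(-0.5 * ((nu - 70.0) / 7.0) ** 2)   # assumed bandpass shape
        tb = 170.5 + 0.3 * (nu - 70.0)                  # assumed Tb(nu) in K
        print(f"Tb(ba) = {band_averaged_tb(tau, tb):.1f} K")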

    Revisiting the Glick-Rogoff Current Account Model: An Application to the Current Accounts of BRICS Countries

    Understanding what drives changes in current accounts is one of the most important macroeconomic issues for developing countries. Excessive current account surpluses can trigger trade wars, while excessive deficits can induce currency crises. The Glick-Rogoff (1995, Journal of Monetary Economics) model, which emphasizes productivity shocks at home and in the world, fit developed economies well in the 1970s and 1980s. However, the model fits poorly when applied to the fast-growing BRICS countries over a period that includes the global financial crisis. We conclude that different mechanisms drive the current accounts of developed and developing countries.
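
    Empirically, the Glick-Rogoff framework is typically tested by regressing changes in the current account on country-specific and global productivity shocks (the original specification also includes terms such as lagged investment, omitted here). The following stylized sketch uses synthetic data and is not the paper's estimation; the theoretical prediction is a negative coefficient on home shocks and a near-zero coefficient on world shocks.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        T = 40                                # years of annual data
        d_prod_home = rng.normal(size=T)      # country-specific productivity shocks
        d_prod_world = rng.normal(size=T)     # global productivity shocks
        # Synthetic series built so only home shocks move the current account.
        d_ca = -0.5 * d_prod_home + 0.05 * rng.normal(size=T)

        X = sm.add_constant(np.column_stack([d_prod_home, d_prod_world]))
        res = sm.OLS(d_ca, X).fit()
        print(res.params)  # expect: const ~ 0, beta_home ~ -0.5, beta_world ~ 0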

    Planck 2015 results. V. LFI calibration

    We present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering four years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the cosmic microwave background dipole, but we now use the orbital component, rather than adopting the Wilkinson Microwave Anisotropy Probe (WMAP) solar dipole. This allows our 2015 LFI analysis to provide an independent solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3% in amplitude) of the WMAP value. This 0.3% shift in the peak-to-peak dipole temperature from WMAP, together with a general overhaul of the iterative calibration code, increases the overall level of the LFI maps by 0.45% (30 GHz), 0.64% (44 GHz), and 0.82% (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is now at the level of 0.20% for the 70 GHz map, 0.26% for the 44 GHz map, and 0.35% for the 30 GHz map. We provide a detailed description of the impact of all the changes implemented in the calibration since the previous data release.
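
    Calibrating on the orbital dipole amounts to fitting the raw radiometer output against the known Doppler modulation T_CMB (v·n)/c produced by the spacecraft's velocity. Here is a toy least-squares version of that fit, not the LFI pipeline; the scan pattern, velocity, gain, and noise level are all invented.

        import numpy as np

        T_CMB = 2.72548      # CMB monopole temperature, K
        C_KMS = 299792.458   # speed of light, km/s

        rng = np.random.default_rng(1)
        n = rng.normal(size=(1000, 3))                 # random pointings as a
        n /= np.linalg.norm(n, axis=1, keepdims=True)  # stand-in for a scan

        v_orb = np.array([0.0, 30.0, 0.0])             # ~30 km/s orbital velocity
        dipole = T_CMB * (n @ v_orb) / C_KMS           # Doppler signal, K

        true_gain = 50.0                               # V/K, made-up
        volts = true_gain * dipole + rng.normal(scale=1e-4, size=dipole.size)

        # Least-squares gain against the known orbital-dipole template.
        gain = np.dot(volts, dipole) / np.dot(dipole, dipole)
        print(f"recovered gain: {gain:.2f} V/K (true {true_gain})")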

    Euclid preparation: VIII. The Complete Calibration of the Colour–Redshift Relation survey: VLT/KMOS observations and data release

    The Complete Calibration of the Colour–Redshift Relation survey (C3R2) is a spectroscopic effort involving ESO and Keck facilities designed specifically to empirically calibrate the galaxy colour–redshift relation, P(z|C), to the Euclid depth (iAB = 24.5), and is intimately linked to the success of upcoming Stage IV dark energy missions based on weak lensing cosmology. The aim is to build a spectroscopic calibration sample that is as representative as possible of the galaxies in the Euclid weak lensing sample. To minimise the number of spectroscopic observations needed to fill the gaps in the current knowledge of P(z|C), self-organising map (SOM) representations of the galaxy colour space have been constructed. Here we present the first results of an ESO@VLT Large Programme approved in the context of C3R2, which makes use of the two VLT optical and near-infrared multi-object spectrographs, FORS2 and KMOS. This data release paper focuses on high-quality spectroscopic redshifts of high-redshift galaxies observed with the KMOS spectrograph in the near-infrared H- and K-bands. A total of 424 highly reliable redshifts are measured in the range 1.3 ≤ z ≤ 2.5, with total success rates of 60.7% in the H-band and 32.8% in the K-band. The newly determined redshifts fill 55% of the high-priority empty SOM grid cells (mainly regions with no spectroscopic measurements) and 35% of the lower-priority ones (regions with low-resolution or low-quality spectroscopic measurements). We measured Hα fluxes in a 1.2″ radius aperture from the spectra of the spectroscopically confirmed galaxies and converted them into star formation rates. In addition, we performed an SED fitting analysis on the same sample in order to derive stellar masses, E(B − V), total magnitudes, and SFRs. We combine the results obtained from the spectra with those derived via SED fitting, and we show that the spectroscopic failures come from either weakly star-forming galaxies (at z < 1.7, i.e. in the H-band) or low signal-to-noise spectra (in the K-band) of z > 2 galaxies.
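
    The Hα-to-SFR conversion mentioned above is, in generic form, a luminosity-distance scaling of the line flux followed by a linear calibration. The sketch below uses the Kennicutt (1998) coefficient and an invented example flux; the paper's exact calibration, aperture correction, and dust treatment are not reproduced here.

        import numpy as np
        import astropy.units as u
        from astropy.cosmology import Planck15

        def sfr_from_halpha(flux_cgs, z):
            """SFR (Msun/yr) from an observed H-alpha flux in erg/s/cm^2,
            using the Kennicutt (1998) calibration for a Salpeter IMF."""
            d_l = Planck15.luminosity_distance(z).to(u.cm).value
            luminosity = 4.0 * np.pi * d_l**2 * flux_cgs   # erg/s
            return 7.9e-42 * luminosity

        print(f"SFR ~ {sfr_from_halpha(3e-17, z=1.8):.1f} Msun/yr")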

    Euclid preparation: VII. Forecast validation for Euclid cosmological probes

    Aims. The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts. Methods. We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and the combination thereof. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required. Results. We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and the remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present results for an optimistic and a pessimistic choice of these settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
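
    As a schematic of the Fisher-matrix machinery being validated (not the Euclid code itself), the forecast reduces to three linear-algebra steps: build F_ij from derivatives of the observables and the data covariance, invert it for marginalised errors, and take the (w0, wa) sub-block for the dark energy figure of merit. All inputs below are random placeholders.

        import numpy as np

        rng = np.random.default_rng(2)
        n_data, n_par = 100, 5                  # data points, parameters
        dmu = rng.normal(size=(n_par, n_data))  # d(observable)/d(parameter)
        cov = np.diag(rng.uniform(0.5, 2.0, n_data))  # toy data covariance

        F = dmu @ np.linalg.inv(cov) @ dmu.T    # Fisher matrix
        Finv = np.linalg.inv(F)
        sigma = np.sqrt(np.diag(Finv))          # 1-sigma marginalised errors

        # FoM = 1/sqrt(det) of the marginalised (w0, wa) covariance; here the
        # dark energy parameters are taken to be indices 3 and 4 of the toy vector.
        block = Finv[np.ix_([3, 4], [3, 4])]
        print(sigma, 1.0 / np.sqrt(np.linalg.det(block)))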

    Euclid preparation: V. Predicted yield of redshift 7<z<9 quasars from the wide survey

    We provide predictions of the yield of 7 < z < 9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ ∝ 10^(k(z−6)), k = −0.72, and a further steeper rate of decline, k = −0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we make use of an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to JAB ∼ 23. Quasars at z > 8 may be selected from Euclid OY JH photometry alone, but selection over the redshift interval 7 < z < 8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z = 6. If the decline of the QLF accelerates beyond z = 6, with k = −0.92, Euclid should nevertheless find over 100 quasars with 7.0 < z < 7.5, and ∼25 quasars beyond the current record of z = 7.5, including ∼8 beyond z = 8.0. The first Euclid quasars at z > 7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7 < z < 8, M1450 < −25, using 8 m class telescopes to confirm candidates, but follow-up with JWST or E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at JAB ∼ 23. The precision with which k can be determined over 7 < z < 8 depends on the value of k, but assuming k = −0.72 it can be measured to a 1σ uncertainty of 0.07.
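
    A yield prediction of this kind integrates the QLF, scaled by the 10^(k(z−6)) density evolution, against the comoving volume per unit solid angle. The sketch below is a toy version: the double power-law parameters, survey area, and magnitude limits are placeholders, not the values adopted in the paper.

        import numpy as np
        import astropy.units as u
        from astropy.cosmology import Planck15

        def qlf(M1450, z, k=-0.72, phi_star=1e-8, M_star=-25.0,
                alpha=-1.7, beta=-3.0):
            """Toy double power-law QLF (Mpc^-3 mag^-1) with density
            evolution Phi proportional to 10^(k (z - 6))."""
            dm = M1450 - M_star
            faint = 10 ** (0.4 * (alpha + 1) * dm)
            bright = 10 ** (0.4 * (beta + 1) * dm)
            return phi_star / (faint + bright) * 10 ** (k * (z - 6.0))

        area_sr = (15000.0 * u.deg**2).to(u.sr).value     # assumed survey area
        zs = np.linspace(7.0, 7.5, 51)
        Ms = np.linspace(-29.0, -24.0, 101)
        dvdz = Planck15.differential_comoving_volume(zs).to(u.Mpc**3 / u.sr).value

        # Rectangle-rule double integral over magnitude and redshift.
        phi = qlf(Ms[None, :], zs[:, None]) * dvdz[:, None]
        dz, dM = zs[1] - zs[0], Ms[1] - Ms[0]
        print(f"toy yield, 7.0 < z < 7.5: {area_sr * phi.sum() * dz * dM:.0f}")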

    Euclid preparation: XII. Optimizing the photometric sample of the Euclid survey for galaxy clustering and galaxy-galaxy lensing analyses

    Photometric redshifts (photo-zs) are one of the main ingredients in the analysis of cosmological probes. Their accuracy particularly affects the results of the analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions such as Euclid will collect precise and accurate photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this article we explore how the tomographic redshift binning and depth of ground-based observations will affect the cosmological constraints expected from the Euclid mission. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine learning photo-z algorithm. We then use the Fisher matrix formalism together with these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with an equal width in redshift provide a higher figure of merit (FoM) than equipopulated bins and that increasing the number of redshift bins from ten to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase in the survey depth provides a higher FoM. However, when we include faint galaxies beyond the limit of the spectroscopic training data, the resulting FoM decreases because of the spurious photo-zs. When combining GCph and GGL, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. Adding galaxies at faint magnitudes and high redshift increases the FoM, even when they are beyond the spectroscopic limit, since the number density increase compensates for the photo-z degradation in this case. We conclude that there is more information that can be extracted beyond the nominal ten tomographic redshift bins of Euclid and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.
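
    The two binning schemes compared above differ only in where the bin edges fall. A minimal sketch of the difference on a toy redshift sample (not the Flagship simulation): equal-width bins use uniform edges, while equipopulated bins place edges at sample quantiles.

        import numpy as np

        rng = np.random.default_rng(3)
        z = rng.gamma(shape=2.0, scale=0.45, size=100_000)  # toy redshifts
        z = z[z < 2.5]
        n_bins = 13

        edges_width = np.linspace(0.0, 2.5, n_bins + 1)     # equal width
        edges_pop = np.quantile(z, np.linspace(0.0, 1.0, n_bins + 1))  # equipopulated

        for name, edges in [("equal-width", edges_width),
                            ("equipopulated", edges_pop)]:
            counts, _ = np.histogram(z, bins=edges)
            print(f"{name}: min/max counts per bin = {counts.min()}/{counts.max()}")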

    Euclid: Modelling massive neutrinos in cosmology – a code comparison

    The measurement of the absolute neutrino mass scale from cosmological large-scale clustering data is one of the key science goals of the Euclid mission. Such a measurement relies on precise modelling of the impact of neutrinos on structure formation, which can be studied with N-body simulations. Here we present the results from a major code comparison effort to establish the maturity and reliability of numerical methods for treating massive neutrinos. The comparison includes eleven full N-body implementations (not all of them independent), two N-body schemes with approximate time integration, and four additional codes that directly predict or emulate the matter power spectrum. Using a common set of initial data we quantify the relative agreement on the nonlinear power spectrum of cold dark matter and baryons and, for the N-body codes, also the relative agreement on the bispectrum, halo mass function, and halo bias. We find that the different numerical implementations produce fully consistent results. We can therefore be confident that we can model the impact of massive neutrinos at the sub-percent level in the most common summary statistics. We also provide a code validation pipeline for future reference. (Comment: 43 pages, 17 figures, 2 tables; published on behalf of the Euclid Consortium; data available at https://doi.org/10.5281/zenodo.729797)
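
    The comparison metric described above boils down to taking each code's power spectrum on a common k-grid, dividing by a reference, and checking that the ratios stay inside a narrow band. A minimal sketch with synthetic spectra standing in for real code outputs:

        import numpy as np

        rng = np.random.default_rng(4)
        k = np.logspace(-2, 1, 60)                  # wavenumbers, h/Mpc
        p_ref = 1e4 * k / (1.0 + (k / 0.1) ** 2.5)  # toy reference spectrum

        # Fake "code outputs": reference plus small code-to-code scatter.
        codes = {f"code_{i}": p_ref * (1 + rng.normal(scale=0.003, size=k.size))
                 for i in range(5)}

        for name, p in codes.items():
            dev = np.abs(p / p_ref - 1.0)
            print(f"{name}: max |dP/P| = {dev.max():.3%}, "
                  f"within 1%: {bool(np.all(dev < 0.01))}")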