Revised planet brightness temperatures using the Planck /LFI 2018 data release
Aims. We present new estimates of the brightness temperatures of Jupiter, Saturn, Uranus, and Neptune based on the measurements carried out in 2009-2013 by Planck/LFI at 30, 44, and 70 GHz and released to the public in 2018. This work extends the results presented in the 2013 and 2015 Planck/LFI Calibration Papers, based on the data acquired in 2009-2011. Methods. Planck observed each planet up to eight times during the nominal mission. We processed time-ordered data from the 22 LFI radiometers to derive planet antenna temperatures for each planet and transit. We accounted for the beam shape, radiometer bandpasses, and several systematic effects. We compared our results with the results from the ninth year of WMAP, Planck/HFI observations, and existing data and models for planetary microwave emissivity. Results. For Jupiter, we obtain Tb = 144.9, 159.8, 170.5 K (± 0.2 K at 1σ, with temperatures expressed using the Rayleigh-Jeans scale) at 30, 44, and 70 GHz, respectively, or equivalently a band-averaged Planck temperature Tb(ba) = 144.7, 160.3, 171.2 K, in good agreement with WMAP and existing models. A slight excess at 30 GHz with respect to models is interpreted as an effect of synchrotron emission. Our measurements for Saturn agree with the results from WMAP: for the rings we obtain Tb = 9.2 ± 1.4, 12.6 ± 2.3, 16.2 ± 0.8 K, while for the disc we obtain Tb = 140.0 ± 1.4, 147.2 ± 1.2, 150.2 ± 0.4 K, or equivalently Tb(ba) = 139.7, 147.8, 151.0 K. Our measurements for Uranus (Tb = 152 ± 6, 145 ± 3, 132.0 ± 2 K, or Tb(ba) = 152, 145, 133 K) and Neptune (Tb = 154 ± 11, 148 ± 9, 128 ± 3 K, or Tb(ba) = 154, 149, 128 K) agree closely with WMAP and previous data in the literature.
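To make the distinction between the Rayleigh-Jeans scale and the band-averaged Planck temperatures quoted above concrete, the following sketch converts a monochromatic Rayleigh-Jeans brightness temperature to the equivalent Planck (thermodynamic) brightness temperature. It is an illustration only: the published Tb(ba) values additionally require integration over the actual LFI bandpasses, which is not attempted here.

```python
import numpy as np

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant [J s]
k = 1.380649e-23     # Boltzmann constant [J / K]

def rj_to_planck(t_rj, freq_ghz):
    """Monochromatic conversion: Rayleigh-Jeans brightness temperature
    -> equivalent Planck brightness temperature at the same frequency."""
    nu = freq_ghz * 1e9
    x = h * nu / k                       # h*nu/k, in kelvin
    return x / np.log(1.0 + x / t_rj)    # invert T_RJ = x / (exp(x/T) - 1)

# Example: Jupiter at the three LFI bands, using the values quoted above.
for f, t in zip([30.0, 44.0, 70.0], [144.9, 159.8, 170.5]):
    print(f"{f:5.1f} GHz: T_RJ = {t:.1f} K -> T_Planck = {rj_to_planck(t, f):.1f} K")
```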
Euclid preparation: XXXIV. The effect of linear redshift-space distortions in photometric galaxy clustering and its cross-correlation with cosmic shear
CONTEXT: The cosmological surveys that are planned for the current decade will provide us with unparalleled observations of the distribution of galaxies on cosmic scales, by means of which we can probe the underlying large-scale structure (LSS) of the Universe. This will allow us to test the concordance cosmological model and its extensions. However, this precision demands high levels of accuracy in the theoretical modelling of the LSS observables, so that no biases are introduced into the estimation of the cosmological parameters. In particular, effects such as redshift-space distortions (RSD) can become relevant in the computation of harmonic-space power spectra even for the clustering of photometrically selected galaxies, as has previously been shown in the literature. AIMS: In this work, we investigate the contribution of linear RSD, as formulated in the Limber approximation by a previous work, in forecast cosmological analyses with the photometric galaxy sample of the Euclid survey. We aim to assess their impact and to quantify the bias on the measurement of cosmological parameters that would be caused if this effect were neglected. METHODS: We performed this task by producing mock power spectra for photometric galaxy clustering and weak lensing, as expected to be obtained from the Euclid survey. We then used a Markov chain Monte Carlo approach to obtain the posterior distributions of cosmological parameters from these simulated observations. RESULTS: When linear RSD is neglected, significant biases arise both when galaxy correlations are used alone and when they are combined with cosmic shear in the so-called 3 × 2 pt approach. These biases can be equivalent to as much as 5σ when an underlying ΛCDM cosmology is assumed. When the cosmological model is extended to include the equation-of-state parameters of dark energy, the extension parameters can be shifted by more than 1σ.
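As a point of reference for the harmonic-space modelling mentioned above, the sketch below evaluates a density-only Limber integral for photometric galaxy clustering with a toy cosmology, a toy power spectrum, and a toy redshift kernel (all assumptions made here for illustration). The linear RSD contribution discussed in the abstract would enter as additional terms in this kernel and is not implemented.

```python
import numpy as np
from scipy.integrate import simpson

c_kms = 299792.458

def hubble(z, h0=67.0, om=0.32):
    """Flat LCDM H(z) in km/s/Mpc (toy cosmology, assumed for illustration)."""
    return h0 * np.sqrt(om * (1 + z)**3 + 1 - om)

def comoving_distance(z, n=512):
    zz = np.linspace(0.0, z, n)
    return simpson(c_kms / hubble(zz), x=zz)          # Mpc

def pk_lin(k):
    """Toy matter power spectrum in Mpc^3 (growth evolution ignored)."""
    return 2.0e4 * (k / 0.05)**0.96 / (1.0 + (k / 0.02)**2.8)

def kernel(z, z0=0.9, sigma=0.3):
    """Toy normalised galaxy redshift distribution n(z)."""
    return np.exp(-0.5 * ((z - z0) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

def cl_limber(ells, zmin=0.01, zmax=2.5, nz=200, bias=1.0):
    """Density-only Limber C_ell for galaxy clustering (no RSD term)."""
    z = np.linspace(zmin, zmax, nz)
    chi = np.array([comoving_distance(zi) for zi in z])
    hz = hubble(z) / c_kms                            # 1/Mpc
    cls = []
    for ell in ells:
        k = (ell + 0.5) / chi                         # Limber wavenumber
        integrand = hz * (bias * kernel(z))**2 / chi**2 * pk_lin(k)
        cls.append(simpson(integrand, x=z))
    return np.array(cls)

print(cl_limber(np.array([10, 100, 1000])))
```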
Euclid preparation: XXXII. Evaluating the weak-lensing cluster mass biases using the Three Hundred Project hydrodynamical simulations
The photometric catalogue of galaxy clusters extracted from ESA Euclid data is expected to be very competitive for cosmological studies. Using dedicated hydrodynamical simulations, we present systematic analyses simulating the expected weak-lensing profiles of clusters in a variety of dynamical states and over a wide range of redshifts. In order to derive cluster masses, we use a model consistent with the implementation within the Euclid Consortium of the dedicated processing function, and we find that when we jointly model the mass and concentration parameter of the Navarro–Frenk–White halo profile, the weak-lensing masses tend to be biased low by 5–10% on average with respect to the true mass, up to z = 0.5. For a fixed value of the concentration, c200 = 3, the mass bias decreases to below 5%, up to z = 0.7, along with the relative uncertainty. Simulating the weak-lensing signal by projecting along the directions of the axes of the moment-of-inertia tensor ellipsoid, we find that orientation matters: when clusters are oriented along the major axis, the lensing signal is boosted and the recovered weak-lensing mass is correspondingly overestimated. Typically, the weak-lensing mass bias of individual clusters is modulated by the weak-lensing signal-to-noise ratio, which is related to the redshift evolution of the number of galaxies used for weak-lensing measurements: the negative mass bias tends to be stronger toward higher redshifts. However, when we use a fixed value of the concentration parameter, this redshift evolution trend is reduced. These results provide a solid basis for the weak-lensing mass calibration required by the cosmological application of future cluster surveys from Euclid and Rubin.
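As a reference for the halo model used in the fits described above, here is a minimal sketch of the Navarro-Frenk-White (NFW) profile parametrised by (M200, c200); the cosmology and numerical values are assumptions for illustration, not the Euclid processing-function implementation. Fixing c200 = 3, as in the text, leaves M200 as the single free parameter of the weak-lensing fit.

```python
import numpy as np

G = 4.300917270e-9  # Newton's constant in Mpc (km/s)^2 / Msun

def rho_crit(z, h0=67.0, om=0.32):
    """Critical density in Msun/Mpc^3 for a toy flat LCDM cosmology."""
    hz2 = h0**2 * (om * (1 + z)**3 + 1 - om)
    return 3.0 * hz2 / (8.0 * np.pi * G)

def nfw_params(m200, c200, z):
    """Characteristic density and scale radius of an NFW halo."""
    r200 = (3 * m200 / (800 * np.pi * rho_crit(z)))**(1.0 / 3.0)   # Mpc
    rs = r200 / c200
    delta_c = (200.0 / 3.0) * c200**3 / (np.log(1 + c200) - c200 / (1 + c200))
    return delta_c * rho_crit(z), rs

def nfw_density(r, m200, c200, z):
    """NFW density rho(r) = rho_s / [(r/rs)(1 + r/rs)^2] in Msun/Mpc^3."""
    rho_s, rs = nfw_params(m200, c200, z)
    x = r / rs
    return rho_s / (x * (1 + x)**2)

# With c200 fixed (e.g. c200 = 3) the profile has one free parameter, M200.
print(nfw_density(0.5, 1.0e15, 3.0, 0.3))
```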
Euclid preparation: XXIV. Calibration of the halo mass function in ΛCDM cosmologies
Euclid's photometric galaxy cluster survey has the potential to be a very competitive cosmological probe. The main cosmological probe with observations of clusters is their number count, within which the halo mass function (HMF) is a key theoretical quantity. We present a new calibration of the analytic HMF, at the level of accuracy and precision required for the uncertainty in this quantity to be subdominant with respect to other sources of uncertainty in recovering cosmological parameters from Euclid cluster counts. Our model is calibrated against a suite of N-body simulations using a Bayesian approach that takes into account systematic errors arising from numerical effects in the simulations. First, we test the convergence of HMF predictions from different N-body codes, using initial conditions generated with different orders of Lagrangian perturbation theory and adopting different simulation box sizes and mass resolutions. Then, we quantify the effect of using different halo-finder algorithms and how the resulting differences propagate to the cosmological constraints. In order to trace the violation of universality in the HMF, we also analyse simulations based on initial conditions characterised by scale-free power spectra with different spectral indices, assuming both Einstein-de Sitter and standard ΛCDM expansion histories. Based on these results, we construct a fitting function for the HMF that we demonstrate to be sub-percent accurate in reproducing results from nine different variants of the ΛCDM model, including massive neutrino cosmologies. The calibration systematic uncertainty is largely subdominant with respect to the expected precision of future mass-observation relations, with the only notable exception being the effect of the halo finder, which could lead to biased cosmological inference.
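For orientation, the sketch below shows how an analytic HMF is assembled from a multiplicity function f(σ); the coefficients used are the classic Sheth & Tormen ones and the input numbers are toy values, not the calibration presented in the paper.

```python
import numpy as np

def f_sigma_st(sigma, A=0.3222, a=0.707, p=0.3, delta_c=1.686):
    """Sheth & Tormen (1999) multiplicity function (illustrative, not the
    new calibration described in the abstract)."""
    nu = delta_c / sigma
    return A * np.sqrt(2 * a / np.pi) * nu * (1 + (a * nu**2)**(-p)) * np.exp(-a * nu**2 / 2)

def dndlnm(m, sigma, dlnsigma_dlnm, rho_m):
    """dn/dlnM = f(sigma) * (rho_m / M) * |dln(sigma)/dlnM|."""
    return f_sigma_st(sigma) * (rho_m / m) * np.abs(dlnsigma_dlnm)

# Toy inputs: a 1e14 Msun halo with sigma(M) ~ 0.9 and slope dln(sigma)/dlnM ~ -0.08,
# in a universe with mean matter density Omega_m * rho_crit (toy values).
rho_m = 0.32 * 1.25e11                      # Msun / Mpc^3
print(dndlnm(1.0e14, 0.9, -0.08, rho_m))    # halos per Mpc^3 per ln(mass)
```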
Euclid: Constraining ensemble photometric redshift distributions with stacked spectroscopy
Context. The ESA Euclid mission will produce photometric galaxy samples over 15 000 square degrees of the sky that will be rich for clustering and weak lensing statistics. The accuracy of the cosmological constraints derived from these measurements will depend on the knowledge of the underlying redshift distributions based on photometric redshift calibrations. Aims. A new approach is proposed to use the stacked spectra from Euclid slitless spectroscopy to augment broad-band photometric information to constrain the redshift distribution with spectral energy distribution fitting. The high spectral resolution available in the stacked spectra complements the photometry and helps to break the colour-redshift degeneracy and constrain the redshift distribution of galaxy samples. Methods. We modelled the stacked spectra as a linear mixture of spectral templates. The mixture may be inverted to infer the underlying redshift distribution using constrained regression algorithms. We demonstrate the method on simulated Vera C. Rubin Observatory and Euclid mock survey data sets based on the Euclid Flagship mock galaxy catalogue. We assess the accuracy of the reconstruction by considering the inference of the baryon acoustic scale from angular two-point correlation function measurements. Results. We selected mock photometric galaxy samples at redshift z > 1 using the self-organising map algorithm. Considering the idealised case without dust attenuation, we find that the redshift distributions of these samples can be recovered with 0.5% accuracy on the baryon acoustic scale. The estimates are not significantly degraded by the spectroscopic measurement noise due to the large sample size. However, the error degrades to 2% when the dust attenuation model is left free. We find that the colour degeneracies introduced by attenuation limit the accuracy considering the wavelength coverage of Euclid near-infrared spectroscopy.
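The linear-mixture inversion described in the Methods can be illustrated with non-negative least squares, one example of the constrained regression algorithms mentioned; everything below (template matrix, redshift bins, noise level) is synthetic and stands in for the real stacked-spectrum model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_wave, n_zbins = 500, 20

# Columns: a template spectrum as it would appear in each redshift bin
# (synthetic stand-in for the real redshifted spectral templates).
A = np.abs(rng.normal(size=(n_wave, n_zbins)))

# "True" redshift distribution and the resulting noisy stacked spectrum.
n_z_true = np.exp(-0.5 * ((np.arange(n_zbins) - 12) / 3.0)**2)
stack = A @ n_z_true + 0.05 * rng.normal(size=n_wave)

# Constrained regression: the mixture weights (proportional to the
# redshift distribution) are required to be non-negative.
n_z_est, _ = nnls(A, stack)
print(np.round(n_z_est / n_z_est.sum(), 3))
```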
Euclid: Identifying the reddest high-redshift galaxies in the Euclid Deep Fields with gradient-boosted trees
Context. ALMA observations show that dusty, distant, massive (M* ≳ 10^11 M⊙) galaxies usually have remarkable star-formation activity, contributing on the order of 25% of the cosmic star-formation rate density at z ∼ 3-5, and up to 30% at z ∼ 7. Nonetheless, they are elusive in classical optical surveys, and current near-IR surveys are able to detect them only over very small sky areas. Since these objects have low space densities, deep and wide surveys are necessary to obtain statistically relevant results about them. Euclid will potentially be capable of delivering the required information but, given the lack of spectroscopic features at these distances within its bands, it is still unclear whether Euclid will be able to identify and characterise these objects. Aims. The goal of this work is to assess the capability of Euclid, together with ancillary optical and near-IR data, to identify these distant, dusty, and massive galaxies based on broadband photometry. Methods. We used a gradient-boosting algorithm to predict both the redshift and spectral type of objects at high z. To perform this analysis, we made use of simulated photometric observations that mimic the Euclid Deep Survey, derived using the state-of-the-art Spectro-Photometric Realizations of Infrared-selected Targets at all-z (SPRITZ) software. Results. The gradient-boosting algorithm was found to be accurate in predicting both the redshift and spectral type of objects within the simulated Euclid Deep Survey catalogue at z > 2, while drastically decreasing the runtime with respect to spectral-energy-distribution-fitting methods. In particular, we studied the analogue of HIEROs (i.e. sources selected on the basis of a red H − [4.5] > 2.25 colour), combining Euclid and Spitzer data at the depth of the Deep Fields. These sources include the bulk of obscured and massive galaxies in a broad redshift range, 3 < z < 7. We find that the dusty population at 3 ≲ z ≲ 7 is well identified, with a redshift root mean squared error and catastrophic outlier fraction of only 0.55 and 8.5% (HE = 26), respectively. Our findings suggest that with Euclid we will obtain meaningful insights into the impact of massive and dusty galaxies on the cosmic star-formation rate over time.
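A minimal sketch of the gradient-boosting step described above is given below, using scikit-learn's histogram-based gradient boosting on synthetic photometry; the real analysis instead trains on SPRITZ-simulated Euclid Deep Survey catalogues and also predicts a spectral type.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_gal, n_bands = 20000, 8                    # toy: a handful of broad bands

# Synthetic photometry with a built-in redshift dependence plus noise.
z_true = rng.uniform(2.0, 7.0, n_gal)
band_slopes = rng.normal(size=n_bands)
photometry = np.outer(z_true, band_slopes) + 0.1 * rng.normal(size=(n_gal, n_bands))

X_tr, X_te, z_tr, z_te = train_test_split(photometry, z_true, test_size=0.2, random_state=0)
model = HistGradientBoostingRegressor(max_iter=300).fit(X_tr, z_tr)

z_pred = model.predict(X_te)
dz = (z_pred - z_te) / (1 + z_te)
print("redshift RMSE:", np.sqrt(np.mean((z_pred - z_te)**2)))
print("catastrophic outlier fraction:", np.mean(np.abs(dz) > 0.15))
```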
Euclid preparation: V. Predicted yield of redshift 7 < z < 9 quasars from the wide survey
We provide predictions of the yield of 7 < z < 9 quasars from the Euclid wide survey, updating the calculation presented in the Euclid Red Book in several ways. We account for revisions to the Euclid near-infrared filter wavelengths; we adopt steeper rates of decline of the quasar luminosity function (QLF; Φ) with redshift, Φ ∝ 10^{k(z−6)}, k = −0.72, and a further steeper rate of decline, k = −0.92; we use better models of the contaminating populations (MLT dwarfs and compact early-type galaxies); and we make use of an improved Bayesian selection method, compared to the colour cuts used for the Red Book calculation, allowing the identification of fainter quasars, down to JAB ∼ 23. Quasars at z > 8 may be selected from Euclid OY JH photometry alone, but selection over the redshift interval 7 < z < 8 is greatly improved by the addition of z-band data from, e.g., Pan-STARRS and LSST. We calculate predicted quasar yields for the assumed values of the rate of decline of the QLF beyond z = 6. If the decline of the QLF accelerates beyond z = 6, with k = −0.92, Euclid should nevertheless find over 100 quasars with 7.0 < z < 7.5, and ∼ 25 quasars beyond the current record of z = 7.5, including ∼ 8 beyond z = 8.0. The first Euclid quasars at z > 7.5 should be found in the DR1 data release, expected in 2024. It will be possible to determine the bright-end slope of the QLF, 7 < z < 8, M1450 < −25, using 8 m class telescopes to confirm candidates, but follow-up with JWST or E-ELT will be required to measure the faint-end slope. Contamination of the candidate lists is predicted to be modest even at JAB ∼ 23. The precision with which k can be determined over 7 < z < 8 depends on the value of k, but assuming k = −0.72 it can be measured to a 1σ uncertainty of 0.07.
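The structure of the yield calculation summarised above can be sketched numerically: a double-power-law QLF at z = 6, scaled by 10^{k(z−6)}, integrated over magnitude and the comoving volume of a wide survey. All QLF parameters, the magnitude limits, and the cosmology below are placeholders chosen for illustration, not the values adopted in the paper.

```python
import numpy as np
from scipy.integrate import simpson

c_kms, h0, om = 299792.458, 67.0, 0.32        # toy flat LCDM cosmology

def hubble(z):
    return h0 * np.sqrt(om * (1 + z)**3 + 1 - om)

def dcom(z, n=512):
    zz = np.linspace(0.0, z, n)
    return simpson(c_kms / hubble(zz), x=zz)   # comoving distance [Mpc]

def qlf(m1450, z, k=-0.72, phi_star=1.1e-8, m_star=-25.1, alpha=-1.5, beta=-2.8):
    """Double power law at z = 6, scaled by 10^{k(z-6)} [Mpc^-3 mag^-1]."""
    dm = m1450 - m_star
    decline = 10**(k * (z - 6.0))
    return phi_star * decline / (10**(0.4*(alpha+1)*dm) + 10**(0.4*(beta+1)*dm))

def predicted_yield(zmin=7.0, zmax=7.5, mbright=-29.0, mfaint=-24.0, area_deg2=15000.0):
    fsky = area_deg2 * (np.pi / 180.0)**2 / (4.0 * np.pi)
    z = np.linspace(zmin, zmax, 40)
    m = np.linspace(mbright, mfaint, 60)
    # dV/dz = 4*pi*fsky * (c/H(z)) * D_C(z)^2
    dvdz = 4*np.pi * fsky * (c_kms / hubble(z)) * np.array([dcom(zi) for zi in z])**2
    density = np.array([simpson(qlf(m, zi), x=m) for zi in z])   # Mpc^-3
    return simpson(density * dvdz, x=z)

print(f"toy predicted count, 7.0 < z < 7.5: {predicted_yield():.0f}")
```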
Euclid: Identification of asteroid streaks in simulated images using deep learning
The material composition of asteroids is an essential piece of knowledge in the quest to understand the formation and evolution of the Solar System. Visual to near-infrared spectra or multiband photometry is required to constrain the material composition of asteroids, but we currently have such data, especially in the near-infrared wavelengths, for only a limited number of asteroids. This is a significant limitation considering the complex orbital structures of the asteroid populations. Up to 150 000 asteroids will be visible in the images of the upcoming ESA Euclid space telescope, and the instruments of Euclid will offer multiband visual to near-infrared photometry and slitless near-infrared spectra of these objects. Most of the asteroids will appear as streaks in the images. Due to the large number of images and asteroids, automated detection methods are needed. A non-machine-learning approach based on the StreakDet software was previously tested, but the results were not optimal for short and/or faint streaks. We set out to improve the capability to detect asteroid streaks in Euclid images by using deep learning. We built, trained, and tested a three-step machine-learning pipeline with simulated Euclid images. First, a convolutional neural network (CNN) detected streaks and their coordinates in full images, aiming to maximize the completeness (recall) of detections. Then, a recurrent neural network (RNN) merged snippets of long streaks detected in several parts by the CNN. Lastly, gradient-boosted trees (XGBoost) linked detected streaks between different Euclid exposures to reduce the number of false positives and improve the purity (precision) of the sample. The deep-learning pipeline surpasses the completeness and reaches a similar level of purity of a non-machine-learning pipeline based on the StreakDet software. Additionally, the deep-learning pipeline can detect asteroids 0.25-0.5 magnitudes fainter than StreakDet. The deep-learning pipeline could result in a 50% increase in the number of detected asteroids compared to the StreakDet software. There is still scope for further refinement, particularly in improving the accuracy of streak coordinates and enhancing the completeness of the final stage of the pipeline, which involves linking detections across multiple exposures
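To make the first stage of such a pipeline concrete, here is a minimal convolutional-network sketch that classifies image cutouts as containing a streak or not; the architecture, cutout size, and random inputs are assumptions for illustration and do not reproduce the network described in the text (which also localises streaks and is followed by the RNN and XGBoost stages).

```python
import torch
import torch.nn as nn

class StreakCNN(nn.Module):
    """Tiny binary classifier: does a cutout contain an asteroid streak?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)    # single logit: streak present?

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = StreakCNN()
cutouts = torch.randn(8, 1, 64, 64)           # batch of synthetic 64x64 cutouts
probs = torch.sigmoid(model(cutouts)).squeeze(1)
print(probs)                                   # per-cutout streak probability
```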
Euclid preparation: VII. Forecast validation for Euclid cosmological probes
Aims. The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. The estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on various individual methodologies and numerical implementations, which were developed for different observational probes and for the combination thereof. In this paper we present validated forecasts, which combine both theoretical and observational ingredients for different cosmological probes. This work is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts. Methods. We describe in detail the methods adopted for Fisher matrix forecasts, which were applied to galaxy clustering, weak lensing, and their combination. We estimated the required accuracy for Euclid forecasts and outlined a methodology for their development. We then compared and improved different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations, some of which are made publicly available, in different programming languages, together with a reference training set of input and output matrices for a set of specific models. These can be used by the reader to validate their own implementations if required. Results. We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and on the remaining freedom in each setting, for example flat or non-flat spatial cosmologies, or different cuts at non-linear scales. The numerical implementations are now reliable for these settings. We present the results for an optimistic and a pessimistic choice of these settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy figure of merit by at least a factor of three.
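A bare-bones version of the Fisher-matrix machinery validated in this paper is sketched below: numerical derivatives of a model data vector with respect to the parameters are contracted with the inverse data covariance, and marginalised 1σ forecasts follow from the inverse Fisher matrix. The observable, parameters, and covariance are toy placeholders, not the Euclid galaxy-clustering or weak-lensing recipes.

```python
import numpy as np

def observable(theta):
    """Toy data vector depending on two parameters (stand-ins for e.g.
    Omega_m and sigma_8); not a real clustering or lensing observable."""
    om, s8 = theta
    x = np.linspace(0.1, 1.0, 50)
    return s8**2 * np.sqrt(om * x) / (1.0 + x)

def fisher_matrix(theta0, cov, step=1e-4):
    """F_ij = d(model)/d(theta_i) . Cov^-1 . d(model)/d(theta_j),
    with derivatives taken by central finite differences."""
    inv_cov = np.linalg.inv(cov)
    derivs = []
    for i in range(len(theta0)):
        up, dn = np.array(theta0, float), np.array(theta0, float)
        up[i] += step
        dn[i] -= step
        derivs.append((observable(up) - observable(dn)) / (2.0 * step))
    return np.array([[di @ inv_cov @ dj for dj in derivs] for di in derivs])

theta0 = [0.32, 0.83]                        # fiducial parameter values (toy)
cov = np.diag(np.full(50, 1.0e-4))           # toy diagonal data covariance
F = fisher_matrix(theta0, cov)
print("marginalised 1-sigma forecasts:", np.sqrt(np.diag(np.linalg.inv(F))))
```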