The Physics of the Accelerating Universe Survey: narrow-band image photometry
PAUCam is an innovative optical narrow-band imager mounted at the William Herschel Telescope, built for the Physics of the Accelerating Universe Survey (PAUS). Its set of 40 filters results in images that are complex to calibrate, with specific instrumental signatures that cannot be processed with traditional data reduction techniques. In this paper, we present two pipelines developed by the PAUS data management team with the objective of producing science-ready catalogues from the uncalibrated raw images. The Nightly pipeline takes care of all the image processing, with bespoke algorithms for photometric calibration and scattered-light correction. The Multi-Epoch and Multi-Band Analysis (MEMBA) pipeline performs forced photometry over a reference catalogue to optimize the photometric redshift (photo-z) performance. We verify against spectroscopic observations that the current approach delivers an inter-band photometric calibration of 0.8 per cent across the 40 narrow-band set. The large volume of data produced every night and the rapid survey strategy feedback constraints require operating both pipelines in the Port d'Informació Científica data centre with intense parallelization. While alternative algorithms for further improvements in photo-z performance are under investigation, the image calibration and photometry presented in this work already enable state-of-the-art photo-zs down to iAB = 23.0.
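The forced-photometry idea behind MEMBA, measuring fluxes at positions fixed by a reference catalogue rather than re-detecting sources in each exposure, can be illustrated with a minimal numpy-only sketch (hypothetical code, not the PAUS pipeline; the function name and toy image are invented for illustration):

```python
import numpy as np

def forced_aperture_flux(image, x, y, radius):
    """Sum pixel values inside a circular aperture centred on a
    reference-catalogue position (x, y), without re-detecting the source.
    A crude local background, the median outside the aperture, is removed."""
    yy, xx = np.indices(image.shape)
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    sky = np.median(image[~mask])
    return float(np.sum(image[mask] - sky))

# toy image: flat background of 10 counts with a bright pixel at (x=20, y=30)
img = np.full((64, 64), 10.0)
img[30, 20] += 500.0
flux = forced_aperture_flux(img, x=20, y=30, radius=3)
```

In a multi-band, multi-epoch setting the same catalogue positions would be measured in every exposure, which is what makes the per-band photometry consistent enough for photo-z work.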
Euclid preparation: XII. Optimizing the photometric sample of the Euclid survey for galaxy clustering and galaxy-galaxy lensing analyses
Photometric redshifts (photo-zs) are one of the main ingredients in the analysis of cosmological probes. Their accuracy particularly affects the results of the analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions such as Euclid will collect precise and accurate photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this article we explore how the tomographic redshift binning and depth of ground-based observations will affect the cosmological constraints expected from the Euclid mission. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine learning photo-z algorithm. We then use the Fisher matrix formalism together with these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with an equal width in redshift provide a higher figure of merit (FoM) than equipopulated bins and that increasing the number of redshift bins from ten to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase in the survey depth provides a higher FoM. However, when we include faint galaxies beyond the limit of the spectroscopic training data, the resulting FoM decreases because of the spurious photo-zs. When combining GCph and GGL, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. Adding galaxies at faint magnitudes and high redshift increases the FoM, even when they are beyond the spectroscopic limit, since the number density increase compensates for the photo-z degradation in this case. 
We conclude that there is more information that can be extracted beyond the nominal ten tomographic redshift bins of Euclid, and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.
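The two tomographic binning schemes compared here, together with a Fisher-matrix figure of merit, can be sketched as follows (illustrative Python with made-up numbers; the FoM used is the common 1/sqrt(det) of the marginalised 2x2 sub-covariance, which may differ in normalisation from the paper's definition):

```python
import numpy as np

def figure_of_merit(fisher, i=0, j=1):
    """Marginalised FoM for parameters i and j: invert the full Fisher
    matrix, keep the 2x2 sub-covariance, and take 1/sqrt(det)."""
    cov = np.linalg.inv(fisher)
    sub = cov[np.ix_([i, j], [i, j])]
    return 1.0 / np.sqrt(np.linalg.det(sub))

# toy redshift sample and the two binning schemes compared in the text
z = np.random.default_rng(0).gamma(2.0, 0.4, size=10000)
nbins = 10
equal_width = np.linspace(z.min(), z.max(), nbins + 1)       # equal-width edges
equipopulated = np.quantile(z, np.linspace(0.0, 1.0, nbins + 1))  # equal-count edges

# hypothetical 3-parameter Fisher matrix (invented numbers)
F = np.array([[4.0, 1.0, 0.5],
              [1.0, 9.0, 0.2],
              [0.5, 0.2, 16.0]])
fom = figure_of_merit(F, 0, 1)
```

Equipopulated edges follow the quantiles of the sample, so outer bins become very wide where galaxies are scarce, which is one intuition for why equal-width bins can retain more radial information.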
Euclid preparation: VIII. The Complete Calibration of the Colour–Redshift Relation survey: VLT/KMOS observations and data release
The Complete Calibration of the Colour–Redshift Relation survey (C3R2) is a spectroscopic effort involving ESO and Keck facilities, designed specifically to empirically calibrate the galaxy colour–redshift relation, P(z|C), to the Euclid depth (iAB = 24.5), and is intimately linked to the success of upcoming Stage IV dark energy missions based on weak lensing cosmology. The aim is to build a spectroscopic calibration sample that is as representative as possible of the galaxies of the Euclid weak lensing sample. In order to minimise the number of spectroscopic observations necessary to fill the gaps in current knowledge of P(z|C), self-organising map (SOM) representations of the galaxy colour space have been constructed. Here we present the first results of an ESO@VLT Large Programme approved in the context of C3R2, which makes use of the two VLT optical and near-infrared multi-object spectrographs, FORS2 and KMOS. This data release paper focuses on high-quality spectroscopic redshifts of high-redshift galaxies observed with the KMOS spectrograph in the near-infrared H- and K-bands. A total of 424 highly reliable redshifts are measured in the 1.3 ≤ z ≤ 2.5 range, with total success rates of 60.7% in the H-band and 32.8% in the K-band. The newly determined redshifts fill 55% of the high-priority (mainly regions with no spectroscopic measurements) and 35% of the lower-priority (regions with low-resolution/low-quality spectroscopic measurements) empty SOM grid cells. We measured Hα fluxes in a 1.″2-radius aperture from the spectra of the spectroscopically confirmed galaxies and converted them into star formation rates. In addition, we performed an SED fitting analysis on the same sample in order to derive stellar masses, E(B − V), total magnitudes, and SFRs. We combine the results obtained from the spectra with those derived via SED fitting, and we show that the spectroscopic failures come from either weakly star-forming galaxies (at z 2 galaxies
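A self-organising map of galaxy colours, of the kind used above to identify empty grid cells that still need spectroscopy, can be sketched in a few lines (an illustrative toy SOM trained on random "colours", not the C3R2 implementation; grid size, decay schedules, and the data are all invented):

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, seed=0):
    """Minimal self-organising map: each grid cell holds a weight vector in
    colour space; the best-matching cell and its neighbours are pulled toward
    each training sample with a decaying learning rate and neighbourhood."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    gy, gx = np.divmod(np.arange(h * w), w)        # cell coordinates on the grid
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        lr = 0.5 * (1 - t / iters)                 # decaying learning rate
        sigma = max(1.0, 3.0 * (1 - t / iters))    # decaying neighbourhood radius
        d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
        nb = np.exp(-d2 / (2 * sigma ** 2))[:, None]
        weights += lr * nb * (x - weights)
    return weights.reshape(h, w, -1)

# toy "colour" vectors for 500 galaxies (hypothetical, 3 colours each)
colours = np.random.default_rng(1).normal(size=(500, 3))
som = train_som(colours)

# occupancy per cell: cells with zero counts are the spectroscopic targets
flat = som.reshape(-1, colours.shape[1])
cells = np.array([np.argmin(((flat - c) ** 2).sum(axis=1)) for c in colours])
occupancy = np.bincount(cells, minlength=flat.shape[0])
```

Mapping a spectroscopic sample onto the same grid then shows directly which regions of colour space remain uncalibrated.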
Euclid preparation XLVIII. The pre-launch Science Ground Segment simulation framework
Context. The European Space Agency’s Euclid mission is one of a raft of forthcoming large-scale cosmology surveys that will map the large-scale structure in the Universe with unprecedented precision. The mission will collect a vast amount of data that will be processed and analysed by Euclid’s Science Ground Segment (SGS). The development and validation of the SGS pipeline requires state-of-the-art simulations with a high level of complexity and accuracy that include subtle instrumental features not accounted for previously as well as faster algorithms for the large-scale production of the expected Euclid data products.
Aims. In this paper, we present the Euclid SGS simulation framework as it is applied in a large-scale end-to-end simulation exercise named Science Challenge 8. Our simulation pipeline enables the swift production of detailed image simulations for the construction and validation of the Euclid mission during its qualification phase and will serve as a reference throughout operations.
Methods. Our end-to-end simulation framework started with the production of a large cosmological N-body simulation that we used to construct a realistic galaxy mock catalogue. We performed a selection of galaxies down to IE = 26 and 28 mag, respectively, for a Euclid Wide Survey spanning 165 deg² and a 1 deg² Euclid Deep Survey. We built realistic stellar density catalogues containing Milky Way-like stars down to H < 26 from a combination of a stellar population synthesis model of the Galaxy and real bright stars. Using the latest instrumental models for both the Euclid instruments and spacecraft, as well as Euclid-like observing sequences, we emulated with high fidelity Euclid satellite imaging throughout the mission's lifetime.
Results. We present the SC8 dataset, consisting of overlapping visible and near-infrared Euclid Wide Survey and Euclid Deep Survey imaging and low-resolution spectroscopy, along with ground-based data in five optical bands. This extensive dataset enables end-to-end testing of the entire ground segment data reduction and science analysis pipeline as well as the Euclid mission infrastructure, paving the way for future scientific and technical developments and enhancements.
Euclid preparation: XVI. Exploring the ultra-low surface brightness Universe with Euclid /VIS
Context. While Euclid is an ESA mission specifically designed to investigate the nature of dark energy and dark matter, the planned unprecedented combination of survey area (∼15 000 deg²), spatial resolution, low sky-background, and depth also make Euclid an excellent space observatory for the study of the low surface brightness Universe. Scientific exploitation of the extended low surface brightness structures requires dedicated calibration procedures that are yet to be tested. Aims. We investigate the capabilities of Euclid to detect extended low surface brightness structure by identifying and quantifying sky-background sources and stray-light contamination. We test the feasibility of generating sky flat-fields to reduce large-scale residual gradients in order to reveal the extended emission of galaxies observed in the Euclid survey. Methods. We simulated a realistic set of Euclid/VIS observations, taking into account both instrumental and astronomical sources of contamination, including cosmic rays, stray-light, zodiacal light, interstellar medium, and the cosmic infrared background, while simulating the effects of background sources in the field of view. Results. We demonstrate that a combination of calibration lamps, sky flats, and self-calibration would enable recovery of emission at a limiting surface brightness magnitude of μlim = 29.5 (+0.08/−0.27) mag arcsec⁻² (3σ, 10 × 10 arcsec²) in the Wide Survey, and it would reach regions deeper by 2 mag in the Deep Surveys. Conclusions. Euclid/VIS has the potential to be an excellent low surface brightness observatory. Covering the gap between pixel-to-pixel calibration lamp flats and self-calibration observations for large scales, the application of sky flat-fielding will enhance the sensitivity of the VIS detector at scales larger than 1″, up to the size of the field of view, enabling Euclid to detect extended surface brightness structures below μlim = 31 mag arcsec⁻² and beyond.
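The sky flat-field idea, building a large-scale illumination correction from the science exposures themselves, can be sketched as a normalised, sigma-clipped median stack (a toy illustration with invented frames and thresholds, not the Euclid/VIS procedure):

```python
import numpy as np

def sky_flat(frames, clip=3.0):
    """Build a sky flat from many dithered exposures: normalise each frame
    by its median, then take a sigma-clipped median through the stack so
    that astronomical sources (outliers at any given pixel) are rejected."""
    stack = np.array([f / np.median(f) for f in frames])
    med = np.median(stack, axis=0)
    std = np.std(stack, axis=0) + 1e-12
    clipped = np.where(np.abs(stack - med) > clip * std, np.nan, stack)
    return np.nanmedian(clipped, axis=0)

# 20 toy sky frames at ~100 counts with 1% noise; one frame has a "source"
rng = np.random.default_rng(2)
frames = [100.0 * (1 + 0.01 * rng.normal(size=(32, 32))) for _ in range(20)]
frames[0][10, 10] += 5000.0
flat = sky_flat(frames)
```

Because each source lands on different pixels in different dithers, the clipped median recovers the smooth background response even at pixels a source contaminated in one frame.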
Euclid preparation TBD. The effect of baryons on the Halo Mass Function
The Euclid photometric survey of galaxy clusters stands as a powerful
cosmological tool, with the capacity to significantly propel our understanding
of the Universe. Despite being sub-dominant to dark matter and dark energy, the
baryonic component in our Universe holds substantial influence over the
structure and mass of galaxy clusters. This paper presents a novel model to
precisely quantify the impact of baryons on galaxy cluster virial halo masses,
using the baryon fraction within a cluster as a proxy for their effect.
Constructed on the premise of quasi-adiabaticity, the model includes two
parameters calibrated using non-radiative cosmological hydrodynamical
simulations and a single large-scale simulation from the Magneticum set, which
includes the physical processes driving galaxy formation. As a main result of
our analysis, we demonstrate that this model delivers a remarkable one percent
relative accuracy in determining the virial dark matter-only equivalent mass of
galaxy clusters, starting from the corresponding total cluster mass and baryon
fraction measured in hydrodynamical simulations. Furthermore, we demonstrate
that this result is robust against changes in cosmological parameters and
against varying the numerical implementation of the sub-resolution physical
processes included in the simulations. Our work substantiates previous claims
about the impact of baryons on cluster cosmology studies. In particular, we
show how neglecting these effects would lead to biased cosmological constraints
for a Euclid-like cluster abundance analysis. Importantly, we demonstrate that
uncertainties associated with our model, arising from baryonic corrections to
cluster masses, are sub-dominant when compared to the precision with which
mass-observable relations will be calibrated using Euclid, as well as our
current understanding of the baryon fraction within galaxy clusters.
Comment: 18 pages, 10 figures, 4 tables, 1 appendix, abstract abridged for arXiv submission.
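As a rough illustration of the kind of correction involved (not the paper's calibrated quasi-adiabatic model, whose two parameters are fitted to hydrodynamical simulations), a hypothetical first-order rescaling from a cluster's total mass and measured baryon fraction to a dark-matter-only equivalent mass might look like:

```python
def dmo_equivalent_mass(m_tot, f_b, f_b_cosmic=0.157):
    """Hypothetical first-order rescaling: remove the measured baryon mass
    and add back the cosmic baryon share that a dark-matter-only run would
    carry as collisionless matter. Illustrative only; the actual model and
    the value of the cosmic baryon fraction are assumptions here."""
    return m_tot * (1.0 - f_b) / (1.0 - f_b_cosmic)

# a 1e14 Msun cluster with a measured baryon fraction of 0.12
m = dmo_equivalent_mass(1.0e14, f_b=0.12)
```

A percent-level correction of this kind, if neglected, shifts the halo mass function at fixed mass and hence biases cluster-abundance cosmology, which is the effect the abstract quantifies.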
Euclid preparation: XXXV. Covariance model validation for the two-point correlation function of galaxy clusters
Aims. We validate a semi-analytical model for the covariance of the real-space two-point correlation function of galaxy clusters.
Methods. Using 1000 PINOCCHIO light cones mimicking the expected Euclid sample of galaxy clusters, we calibrated a simple model to accurately describe the clustering covariance. Then, we used this model to quantify the likelihood-analysis response to variations in the covariance, and we investigated the impact of a cosmology-dependent matrix at the level of statistics expected for the Euclid survey of galaxy clusters.
Results. We find that a Gaussian model with Poissonian shot-noise does not correctly predict the covariance of the two-point correlation function of galaxy clusters. By introducing a few additional parameters fitted from simulations, the proposed model reproduces the numerical covariance with an accuracy of 10%, with differences of about 5% on the figure of merit of the cosmological parameters Ωm and σ8. We also find that the covariance contains additional valuable information that is not present in the mean value, and the constraining power of cluster clustering can improve significantly when its cosmology dependence is accounted for. Finally, we find that the cosmological figure of merit can be further improved when mass binning is taken into account. Our results have significant implications for the derivation of cosmological constraints from the two-point clustering statistics of the Euclid survey of galaxy clusters.
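The likelihood-analysis response to variations in the covariance can be sketched with a one-parameter Fisher toy model (hypothetical numbers throughout): the forecast parameter variance scales directly with any overall mis-estimate of the covariance.

```python
import numpy as np

def param_variance(cov, deriv):
    """One-parameter Fisher forecast: sigma^2 = 1 / (d^T C^{-1} d), so an
    over- or under-estimated covariance C propagates directly into the
    inferred parameter error."""
    cinv = np.linalg.inv(cov)
    return 1.0 / float(deriv @ cinv @ deriv)

deriv = np.ones(5)                        # hypothetical d(xi)/d(theta) per bin
cov = np.diag(np.full(5, 0.04))           # toy diagonal (Gaussian-like) covariance
v_true = param_variance(cov, deriv)
v_infl = param_variance(1.1 * cov, deriv) # covariance mis-estimated by 10%
```

Here a 10% inflated covariance inflates the parameter variance by exactly 10%, which is the sense in which percent-level covariance accuracy translates into percent-level figure-of-merit shifts.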
