
    CosmoHub: Interactive exploration and distribution of astronomical data on Hadoop

    We present CosmoHub (https://cosmohub.pic.es), a web application based on Hadoop for the interactive exploration and distribution of massive cosmological datasets. Modern cosmology seeks to unveil the nature of both dark matter and dark energy by mapping the large-scale structure of the Universe through the analysis of massive amounts of astronomical data, a volume that has grown steadily over recent decades (and will keep growing) with the digitization and automation of experimental techniques. CosmoHub, hosted and developed at the Port d'Informació Científica (PIC), supports a worldwide community of scientists without requiring the end user to know any Structured Query Language (SQL). It serves data from several large international collaborations, such as the Euclid space mission, the Dark Energy Survey (DES), the Physics of the Accelerating Universe Survey (PAUS) and the Marenostrum Institut de Ciències de l'Espai (MICE) numerical simulations. While originally developed as a web frontend to a PostgreSQL relational database, this work describes the current version of CosmoHub, built on top of Apache Hive, which facilitates scalable reading, writing and managing of huge datasets. As CosmoHub's datasets are seldom modified, Hive is a better fit. Over 60 TiB of cataloged information and 50 × 10^9 astronomical objects can be interactively explored using an integrated visualization tool which includes 1D histogram and 2D heatmap plots. In our current implementation, online exploration of datasets of 10^9 objects can be done on a timescale of tens of seconds. Users can also download customized subsets of data in standard formats, generated in a few minutes. CosmoHub has been partially funded through projects of the Spanish national program “Programa Estatal de I + D + i” of the Spanish government. The support of the ERDF fund is gratefully acknowledged.
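
    CosmoHub's interactive plots work because the aggregation happens inside Hive rather than on the client: only binned counts cross the network. The sketch below illustrates that pattern with a hypothetical schema (host, database, table, and column names are invented, and this is not CosmoHub's actual code); it assumes the PyHive package and a reachable HiveServer2.

    ```python
    # Sketch: server-side 1D histogram via HiveQL, in the spirit of CosmoHub's
    # integrated plots. Schema names are hypothetical; requires PyHive.
    from pyhive import hive

    BIN = 0.1  # histogram bin width in magnitudes

    conn = hive.Connection(host="hive.example.org", port=10000, database="cosmo")
    cur = conn.cursor()

    # Bucket each object into a magnitude bin and count per bin in Hive itself,
    # so only ~hundreds of (bin, count) rows are transferred, not 10^9 objects.
    cur.execute(f"""
        SELECT floor(mag_i / {BIN}) * {BIN} AS bin_left,
               count(*)                     AS n
        FROM   galaxy_catalog
        WHERE  mag_i BETWEEN 18 AND 24
        GROUP  BY floor(mag_i / {BIN}) * {BIN}
        ORDER  BY bin_left
    """)
    for bin_left, n in cur.fetchall():
        print(f"{bin_left:5.1f}  {n}")
    ```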

    The Physics of the Accelerating Universe Survey: narrow-band image photometry

    PAUCam is an innovative optical narrow-band imager mounted at the William Herschel Telescope, built for the Physics of the Accelerating Universe Survey (PAUS). Its set of 40 filters results in images that are complex to calibrate, with specific instrumental signatures that cannot be processed with traditional data reduction techniques. In this paper, we present two pipelines developed by the PAUS data management team with the objective of producing science-ready catalogues from the uncalibrated raw images. The NIGHTLY pipeline takes care of all image processing, with bespoke algorithms for photometric calibration and scatter-light correction. The Multi-Epoch and Multi-Band Analysis (MEMBA) pipeline performs forced photometry over a reference catalogue to optimize the photometric redshift (photo-z) performance. We verify against spectroscopic observations that the current approach delivers an inter-band photometric calibration of 0.8 per cent across the 40 narrow-band set. The large volume of data produced every night and the rapid survey strategy feedback constraints require operating both pipelines in the Port d'Informació Científica data centre with intense parallelization. While alternative algorithms for further improvements in photo-z performance are under investigation, the image calibration and photometry presented in this work already enable state-of-the-art photo-zs down to iAB = 23.0.
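
    Forced photometry of the MEMBA kind can be pictured as flux measurement in apertures fixed at reference-catalogue positions, rather than re-detecting sources in each image. A minimal numpy sketch of that idea follows; it is an illustration, not the PAUS pipeline's algorithm, and the circular aperture and median-annulus background are simplifying assumptions.

    ```python
    import numpy as np

    def forced_aperture_flux(image, x0, y0, r_aper, r_bg_in, r_bg_out):
        """Sum flux in a circular aperture at a fixed (forced) position,
        subtracting a local background estimated in an annulus."""
        yy, xx = np.indices(image.shape)
        r = np.hypot(xx - x0, yy - y0)
        aperture = r <= r_aper
        annulus = (r >= r_bg_in) & (r <= r_bg_out)
        bg_per_pix = np.median(image[annulus])      # robust sky estimate
        return image[aperture].sum() - bg_per_pix * aperture.sum()

    # Toy usage: a flat image with one bright pixel at the forced position.
    img = np.ones((64, 64))
    img[32, 32] += 100.0
    print(forced_aperture_flux(img, 32, 32, r_aper=5, r_bg_in=10, r_bg_out=15))
    # ~100: the background of 1 count/pixel is removed, source flux recovered
    ```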

    Euclid preparation: XII. Optimizing the photometric sample of the Euclid survey for galaxy clustering and galaxy-galaxy lensing analyses

    Photometric redshifts (photo-zs) are one of the main ingredients in the analysis of cosmological probes. Their accuracy particularly affects the results of the analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions such as Euclid will collect precise and accurate photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this article we explore how the tomographic redshift binning and depth of ground-based observations will affect the cosmological constraints expected from the Euclid mission. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine learning photo-z algorithm. We then use the Fisher matrix formalism together with these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with an equal width in redshift provide a higher figure of merit (FoM) than equipopulated bins and that increasing the number of redshift bins from ten to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase in the survey depth provides a higher FoM. However, when we include faint galaxies beyond the limit of the spectroscopic training data, the resulting FoM decreases because of the spurious photo-zs. When combining GCph and GGL, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. Adding galaxies at faint magnitudes and high redshift increases the FoM, even when they are beyond the spectroscopic limit, since the number density increase compensates for the photo-z degradation in this case. We conclude that there is more information that can be extracted beyond the nominal ten tomographic redshift bins of Euclid and that we should be cautious when adding faint galaxies into our sample since they can degrade the cosmological constraints.
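
    The figure of merit used throughout this abstract follows the standard Fisher recipe: invert the Fisher matrix to obtain the marginalized covariance, extract the 2×2 block for the two parameters of interest, and take FoM = 1/sqrt(det C). A generic numpy illustration, with an invented three-parameter Fisher matrix (not Euclid's):

    ```python
    import numpy as np

    def figure_of_merit(fisher, i, j):
        """DETF-style FoM for parameters i, j: marginalise over the rest by
        inverting the full Fisher matrix, then FoM = 1/sqrt(det C_ij)."""
        cov = np.linalg.inv(fisher)            # marginalised covariance
        block = cov[np.ix_([i, j], [i, j])]    # 2x2 sub-covariance
        return 1.0 / np.sqrt(np.linalg.det(block))

    # Invented 3-parameter Fisher matrix (Om, w0, wa) purely for illustration.
    F = np.array([[ 4000., -600., -150.],
                  [ -600.,  900.,  200.],
                  [ -150.,  200.,   80.]])
    print(f"FoM(w0, wa) = {figure_of_merit(F, 1, 2):.1f}")
    ```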

    Euclid preparation: VIII. The Complete Calibration of the Colour–Redshift Relation survey: VLT/KMOS observations and data release

    The Complete Calibration of the Colour–Redshift Relation survey (C3R2) is a spectroscopic effort involving ESO and Keck facilities designed specifically to empirically calibrate the galaxy colour–redshift relation, P(z|C), to the Euclid depth (iAB = 24.5), and is intimately linked to the success of upcoming Stage IV dark energy missions based on weak lensing cosmology. The aim is to build a spectroscopic calibration sample that is as representative as possible of the galaxies in the Euclid weak lensing sample. In order to minimise the number of spectroscopic observations necessary to fill the gaps in current knowledge of P(z|C), self-organising map (SOM) representations of the galaxy colour space have been constructed. Here we present the first results of an ESO@VLT Large Programme approved in the context of C3R2, which makes use of the two VLT optical and near-infrared multi-object spectrographs, FORS2 and KMOS. This data release paper focuses on high-quality spectroscopic redshifts of high-redshift galaxies observed with the KMOS spectrograph in the near-infrared H- and K-bands. A total of 424 highly reliable redshifts are measured in the 1.3 ≤ z ≤ 2.5 range, with total success rates of 60.7% in the H-band and 32.8% in the K-band. The newly determined redshifts fill 55% of the high-priority empty SOM grid cells (mainly regions with no spectroscopic measurements) and 35% of the lower-priority ones (regions with only low-resolution or low-quality spectroscopic measurements). We measured Hα fluxes in a 1.2″ radius aperture from the spectra of the spectroscopically confirmed galaxies and converted them into star formation rates. In addition, we performed an SED fitting analysis on the same sample in order to derive stellar masses, E(B − V), total magnitudes, and SFRs. Combining the results obtained from the spectra with those derived via SED fitting, we show that the spectroscopic failures come from either weakly star-forming galaxies (at z < 1.7, observed in the H-band) or low signal-to-noise spectra of z > 2 galaxies.
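
    The SOM targeting strategy compresses the high-dimensional colour space onto a 2D grid and then asks which cells still lack spectroscopy. A sketch of that logic using the third-party minisom package on mock data (grid size, colours, and flags are all invented; this is not the C3R2 SOM itself):

    ```python
    import numpy as np
    from minisom import MiniSom   # pip install minisom

    rng = np.random.default_rng(0)
    colours = rng.normal(size=(5000, 8))      # mock 8-dimensional galaxy colours
    has_spec = rng.random(5000) < 0.3         # mock flags: 30% have spectroscopy

    som = MiniSom(20, 20, input_len=8, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(colours, num_iteration=10000)

    # Map every galaxy to its best-matching cell, then find cells whose
    # galaxies all lack spectroscopy: these are the calibration gaps to target.
    occupied, with_spec = set(), set()
    for c, s in zip(colours, has_spec):
        cell = som.winner(c)
        occupied.add(cell)
        if s:
            with_spec.add(cell)
    print(f"{len(occupied - with_spec)} occupied cells still lack spectroscopy")
    ```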

    Euclid preparation: XVI. Exploring the ultra-low surface brightness Universe with Euclid/VIS

    Context. While Euclid is an ESA mission specifically designed to investigate the nature of dark energy and dark matter, the planned unprecedented combination of survey area (∼15 000 deg²), spatial resolution, low sky background, and depth also makes Euclid an excellent space observatory for the study of the low surface brightness Universe. Scientific exploitation of the extended low surface brightness structures requires dedicated calibration procedures that are yet to be tested. Aims. We investigate the capabilities of Euclid to detect extended low surface brightness structure by identifying and quantifying sky-background sources and stray-light contamination. We test the feasibility of generating sky flat-fields to reduce large-scale residual gradients in order to reveal the extended emission of galaxies observed in the Euclid survey. Methods. We simulated a realistic set of Euclid/VIS observations, taking into account both instrumental and astronomical sources of contamination, including cosmic rays, stray light, zodiacal light, the interstellar medium, and the cosmic infrared background, while simulating the effects of background sources in the field of view. Results. We demonstrate that a combination of calibration lamps, sky flats, and self-calibration would enable recovery of emission at a limiting surface brightness magnitude of μlim = 29.5 (+0.08, −0.27) mag arcsec⁻² (3σ, 10 × 10 arcsec²) in the Wide Survey, and it would reach regions 2 mag deeper in the Deep Surveys. Conclusions. Euclid/VIS has the potential to be an excellent low surface brightness observatory. Covering the gap between pixel-to-pixel calibration lamp flats and self-calibration observations on large scales, the application of sky flat-fielding will enhance the sensitivity of the VIS detector at scales larger than 1″, up to the size of the field of view, enabling Euclid to detect extended surface brightness structures below μlim = 31 mag arcsec⁻² and beyond.
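
    A sky flat of the kind tested here is conceptually a robust stack of many dithered science exposures with sources masked, leaving only the smooth illumination pattern. Below is a numpy/astropy sketch under simplifying assumptions: a crude per-frame threshold stands in for proper source masking, and the stacking choices are not the paper's.

    ```python
    import numpy as np
    from astropy.stats import sigma_clip

    def sky_flat(exposures, source_sigma=3.0):
        """Build a normalised sky flat from dithered exposures: mask bright
        (source) pixels per frame, normalise each frame to its sky level,
        then median-combine with sigma clipping across the stack."""
        stack = []
        for img in exposures:
            med, std = np.median(img), np.std(img)
            masked = np.ma.masked_greater(img, med + source_sigma * std)
            stack.append(masked / np.ma.median(masked))
        clipped = sigma_clip(np.ma.stack(stack), sigma=3, axis=0)
        flat = np.ma.median(clipped, axis=0).filled(1.0)
        return flat / np.median(flat)

    # Toy usage: 20 frames with a 2% illumination gradient plus noise.
    rng = np.random.default_rng(1)
    gradient = 1.0 + 0.02 * np.linspace(0, 1, 64)[None, :]
    frames = [gradient * (100 + rng.normal(0, 1, (64, 64))) for _ in range(20)]
    print(sky_flat(frames).std())  # ~0.006: the 2% gradient ends up in the flat
    ```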

    Euclid: Forecasts from redshift-space distortions and the Alcock-Paczynski test with cosmic voids

    Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock-Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and DM H, where f is the linear growth rate of density fluctuations, b the galaxy bias, DM the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on DM H in each redshift bin. Better modeling or calibration of the nuisance parameters may further improve this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous, more approximate forecasts.
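
    The stacked void-galaxy cross-correlation in each bin is, at heart, a histogram of galaxy counts in radial shells around void centres, normalised by the mean density. Here is a toy real-space estimator in numpy for a periodic box with invented data; the paper's actual measurement is done in redshift space on the Flagship mock.

    ```python
    import numpy as np

    def stacked_void_profile(voids, galaxies, box, r_edges):
        """Average galaxy density contrast delta(r) in shells around void
        centres in a periodic box: the 1D void-galaxy correlation."""
        nbar = len(galaxies) / box**3
        shell_vol = 4 / 3 * np.pi * np.diff(r_edges**3)
        counts = np.zeros(len(r_edges) - 1)
        for c in voids:
            d = galaxies - c
            d -= box * np.round(d / box)          # periodic wrapping
            r = np.linalg.norm(d, axis=1)
            counts += np.histogram(r, bins=r_edges)[0]
        density = counts / (len(voids) * shell_vol)
        return density / nbar - 1.0               # delta(r) = xi_vg(r)

    rng = np.random.default_rng(2)
    gals = rng.random((20000, 3)) * 500.0         # mock galaxies, 500 Mpc/h box
    voids = rng.random((50, 3)) * 500.0           # mock void centres
    print(stacked_void_profile(voids, gals, 500.0, np.linspace(1, 50, 11)))
    # ~0 everywhere for random points; real voids give delta -> -1 at the centre
    ```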

    Euclid preparation TBD. The effect of baryons on the Halo Mass Function

    The Euclid photometric survey of galaxy clusters stands as a powerful cosmological tool, with the capacity to significantly propel our understanding of the Universe. Despite being sub-dominant to dark matter and dark energy, the baryonic component in our Universe holds substantial influence over the structure and mass of galaxy clusters. This paper presents a novel model to precisely quantify the impact of baryons on galaxy cluster virial halo masses, using the baryon fraction within a cluster as a proxy for their effect. Constructed on the premise of quasi-adiabaticity, the model includes two parameters calibrated using non-radiative cosmological hydrodynamical simulations and a single large-scale simulation from the Magneticum set, which includes the physical processes driving galaxy formation. As a main result of our analysis, we demonstrate that this model delivers a remarkable one percent relative accuracy in determining the virial dark matter-only equivalent mass of galaxy clusters, starting from the corresponding total cluster mass and baryon fraction measured in hydrodynamical simulations. Furthermore, we demonstrate that this result is robust against changes in cosmological parameters and against varying the numerical implementation of the sub-resolution physical processes included in the simulations. Our work substantiates previous claims about the impact of baryons on cluster cosmology studies. In particular, we show how neglecting these effects would lead to biased cosmological constraints for a Euclid-like cluster abundance analysis. Importantly, we demonstrate that uncertainties associated with our model, arising from baryonic corrections to cluster masses, are sub-dominant when compared to the precision with which mass-observable relations will be calibrated using Euclid, as well as our current understanding of the baryon fraction within galaxy clusters. Comment: 18 pages, 10 figures, 4 tables, 1 appendix, abstract abridged for arXiv submission.
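
    Calibrating such a model on simulations amounts to fitting a low-dimensional functional form for the mass shift as a function of baryon fraction. The sketch below fits an invented linear form with scipy's curve_fit; both the functional form and all numbers are placeholders, not the paper's quasi-adiabatic model or its calibrated parameters.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mass_shift(fb_ratio, a, b):
        """Placeholder two-parameter model for the relative difference between
        the dark matter-only equivalent mass and the hydro cluster mass, as a
        function of the cluster baryon fraction in units of the cosmic one."""
        return a + b * fb_ratio

    # Mock calibration set standing in for simulation measurements.
    rng = np.random.default_rng(3)
    fb = rng.uniform(0.6, 1.0, 100)                        # f_b / (Omega_b/Omega_m)
    shift = 0.02 - 0.035 * fb + rng.normal(0, 0.002, 100)  # invented ground truth

    (a, b), _ = curve_fit(mass_shift, fb, shift)

    # Apply the calibrated correction to one example cluster.
    m_hydro, fb_cl = 1.0e15, 0.8                  # Msun/h, baryon-fraction ratio
    m_dmo = m_hydro * (1.0 + mass_shift(fb_cl, a, b))
    print(f"a = {a:.4f}, b = {b:.4f}, M_dmo/M_hydro = {m_dmo / m_hydro:.4f}")
    ```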

    Euclid Preparation. XXVIII. Forecasts for ten different higher-order weak lensing statistics

    Recent cosmic shear studies have shown that higher-order statistics (HOS) developed by independent teams now outperform standard two-point estimators in terms of statistical precision thanks to their sensitivity to the non-Gaussian features of large-scale structure. The aim of the Higher-Order Weak Lensing Statistics (HOWLS) project is to assess, compare, and combine the constraining power of ten different HOS on a common set of Euclid-like mocks, derived from N-body simulations. In this first paper of the HOWLS series, we computed the nontomographic (Ωm, σ8) Fisher information for the one-point probability distribution function, peak counts, Minkowski functionals, Betti numbers, persistent homology Betti numbers and heatmap, and scattering transform coefficients, and we compare them to the shear and convergence two-point correlation functions in the absence of any systematic bias. We also include forecasts for three implementations of higher-order moments, but these cannot be robustly interpreted as the Gaussian likelihood assumption breaks down for these statistics. Taken individually, we find that each HOS outperforms the two-point statistics by a factor of around two in the precision of the forecasts, with some variations across statistics and cosmological parameters. When combining all the HOS, this increases to a 4.5 times improvement, highlighting the immense potential of HOS for cosmic shear cosmological analyses with Euclid. The data used in this analysis are publicly released with the paper. Comment: 33 pages, 24 figures, main results in Fig. 19 & Table 5, version published in A&A.
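
    Among the ten statistics, peak counts are the easiest to picture: local maxima of the convergence map, binned by height. A compact numpy/scipy sketch on a mock Gaussian map follows (real HOWLS measurements use noisy, masked Euclid-like mocks, not this toy field):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    rng = np.random.default_rng(4)
    kappa = gaussian_filter(rng.normal(0.0, 0.02, (512, 512)), sigma=2)  # mock map

    # A pixel is a peak if it equals the maximum of its 3x3 neighbourhood.
    peaks = kappa == maximum_filter(kappa, size=3)

    # The peak-count statistic: histogram of peak heights in S/N units.
    nu = kappa / kappa.std()
    counts, edges = np.histogram(nu[peaks], bins=np.linspace(-2, 5, 15))
    for lo, hi, n in zip(edges[:-1], edges[1:], counts):
        print(f"{lo:5.1f} < nu < {hi:4.1f}: {n:5d} peaks")
    ```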