16 research outputs found

    Large-Scale Structure with 21cm Intensity Mapping

    We are witnessing exciting times in the field of cosmology. Current and future experiments and surveys will provide us with tight constraints on the key cosmological parameters. A new and promising technique for mapping the Large-Scale Structure (LSS) of our Universe is 21cm Intensity Mapping (IM), in which one uses the emission of neutral hydrogen as a tracer of the underlying matter field. In principle this technique can map huge portions of our Universe and deliver 3D structure measurements, providing information complementary to that extracted from Cosmic Microwave Background (CMB) experiments. However, the field of 21cm IM cosmology is still in its infancy and is severely limited by foreground contamination. In this thesis we consider several aspects of using 21cm IM as an LSS probe in order to better constrain the cosmological parameters. First, we present and analyse a Baryon Acoustic Oscillation (BAO) reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than that of the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21cm IM observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real and redshift space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21cm IM experiments. Second, we investigate the possibility of performing cosmological studies in the redshift range 2.5 < z < 5 through suitable extensions of existing and upcoming radio telescopes like CHIME, HIRAX and FAST.
We use the Fisher matrix technique to forecast the bounds that those instruments can place on the growth rate, the BAO distance scale parameters, the sum of the neutrino masses and the number of relativistic degrees of freedom at decoupling, Neff. We point out that quantities that depend on the amplitude of the 21cm power spectrum, like fσ8, are completely degenerate with ΩHI and bHI. We then propose several strategies to constrain them independently through cross-correlations with other probes. We study in detail the dependence of our results on the instrument, the amplitude of the HI bias, the foreground wedge coverage, the nonlinear scale used in the analysis, uncertainties in the theoretical modeling and the priors on bHI and ΩHI. We conclude that 21cm IM surveys operating in this redshift range can provide extremely competitive constraints on key cosmological parameters. Third, we use TNG100, a large state-of-the-art magneto-hydrodynamic simulation of a 75 h⁻¹ Mpc box, which is part of the IllustrisTNG project, to study the neutral hydrogen density profiles of dark matter halos. We find that while the HI density profiles exhibit a large halo-to-halo scatter, the mean profiles are universal across mass and redshift. Finally, we combine information from the clustering of HI galaxies in the 100% data release of the Arecibo Legacy Fast ALFA survey (ALFALFA), and from the HI content of optically-selected galaxy groups found in the Sloan Digital Sky Survey (SDSS), to constrain the relation between halo mass Mh and its average total HI mass content MHI. We model the abundance and clustering of neutral hydrogen through a halo-model-based approach, parametrizing the MHI(Mh) relation as a power law with an exponential mass cutoff. To break the degeneracy between the amplitude and low-mass cutoff of the MHI(Mh) relation, we also include a recent measurement of the cosmic HI abundance from the 100% ALFALFA sample.
We find that all datasets are consistent with a power-law index α = 0.44 ± 0.08 and a cutoff halo mass log10 Mmin/(h⁻¹ M⊙) = 11.27^{+0.24}_{-0.30}. We compare these results with predictions from state-of-the-art magneto-hydrodynamical simulations, and find both to be in good qualitative agreement, although the data favours a significantly larger cutoff mass that is consistent with the higher cosmic HI abundance found in simulations. Both data and simulations seem to predict a similar value for the HI bias (bHI = 0.875 ± 0.022) and shot-noise power (PSN = 92^{+20}_{-18} [h⁻¹ Mpc]³) at redshift z = 0.
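The pixel-displacement BAO reconstruction described above can be sketched in a few lines: estimate the Zel'dovich displacement field from the Gaussian-smoothed density and move grid pixels (rather than galaxies) by its negative. This is a minimal illustration, not the thesis pipeline: the grid size, smoothing scale, and nearest-grid-point mass re-deposit are simplifying assumptions.

```python
import numpy as np

def reconstruct_pixels(delta, box, smooth=10.0, bias=1.0):
    """BAO reconstruction by displacing grid pixels (Zel'dovich sketch).

    delta : 3D overdensity field on a periodic grid
    box   : box size in h^-1 Mpc; smooth : Gaussian smoothing in h^-1 Mpc
    """
    n = delta.shape[0]
    kf = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf[:n // 2 + 1], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                  # avoid division by zero
    dk = np.fft.rfftn(delta) / bias * np.exp(-0.5 * k2 * smooth**2)
    # Zel'dovich displacement in Fourier space: psi(k) = i k delta(k) / k^2
    psi = np.stack([np.fft.irfftn(1j * ki * dk / k2, s=delta.shape)
                    for ki in (kx, ky, kz)])
    # move each pixel by -psi and re-deposit its mass (nearest grid point)
    cell = box / n
    pos = (np.indices(delta.shape) + 0.5) * cell - psi
    ijk = np.floor(pos / cell).astype(int) % n          # periodic wrap
    recon = np.zeros_like(delta)
    np.add.at(recon, tuple(ijk.reshape(3, -1)), (1.0 + delta).ravel())
    return recon - 1.0

rng = np.random.default_rng(0)
delta = 0.1 * rng.standard_normal((16, 16, 16))
delta -= delta.mean()
recon = reconstruct_pixels(delta, box=100.0)   # total mass is conserved exactly
```

In practice the reconstructed field would then be compared to the initial conditions through its BAO peak; here the toy field only demonstrates the displacement-and-redeposit mechanics.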

    The HI content of dark matter halos at z ≈ 0 from ALFALFA

    We combine information from the clustering of HI galaxies in the 100% data release of the Arecibo Legacy Fast ALFA survey (ALFALFA), and from the HI content of optically-selected galaxy groups found in the Sloan Digital Sky Survey (SDSS) to constrain the relation between halo mass Mh and its average total HI mass content MHI. We model the abundance and clustering of neutral hydrogen through a halo-model-based approach, parametrizing the MHI(Mh) relation as a power law with an exponential mass cutoff. To break the degeneracy between the amplitude and low-mass cutoff of the MHI(Mh) relation, we also include a recent measurement of the cosmic HI abundance from the α.100 sample. We find that all datasets are consistent with a power-law index α = 0.44 ± 0.08 and a cutoff halo mass log10 Mmin/(h⁻¹ M⊙) = 11.27^{+0.24}_{-0.30}. We compare these results with predictions from state-of-the-art magneto-hydrodynamical simulations, and find both to be in good qualitative agreement, although the data favours a significantly larger cutoff mass that is consistent with the higher cosmic HI abundance found in simulations. Both data and simulations seem to predict a similar value for the HI bias (bHI = 0.875 ± 0.022) and shot-noise power (PSN = 92^{+20}_{-18} [h⁻¹ Mpc]³) at redshift z = 0. Comment: 17 pages, 11 figures. Comments welcome.
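The MHI(Mh) relation can be sketched with the quoted best-fit numbers. The exact cutoff form and the normalisation used in the paper may differ; treat this as an illustrative power law with an exponential low-mass suppression.

```python
import numpy as np

def m_hi(m_h, alpha=0.44, log10_m_min=11.27, m_0=1.0e10):
    """Illustrative M_HI(M_h) = M_0 (M_h/M_min)^alpha * exp(-M_min/M_h).

    alpha and log10(M_min) take the best-fit values quoted in the abstract;
    the cutoff's exact functional form and the normalisation M_0 are
    placeholders (M_0 is degenerate with the cosmic HI abundance).
    Masses in h^-1 Msun.
    """
    m_min = 10.0 ** log10_m_min
    return m_0 * (m_h / m_min) ** alpha * np.exp(-m_min / m_h)

# above the cutoff the relation is a shallow power law; below it,
# halos are exponentially HI-poor
ratio_high = m_hi(1e13) / m_hi(1e12)
ratio_low = m_hi(1e10) / m_hi(1e12)
```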

    Packed Ultra-wideband Mapping Array (PUMA): A Radio Telescope for Cosmology and Transients

    PUMA is a proposal for an ultra-wideband, low-resolution and transit interferometric radio telescope operating at 200-1100 MHz. Its design is driven by six science goals which span three science themes: the physics of dark energy (measuring the expansion history and growth of the universe up to z = 6), the physics of inflation (constraining primordial non-Gaussianity and primordial features) and the transient radio sky (detecting one million fast radio bursts and following up SKA-discovered pulsars). We propose two array configurations composed of hexagonally close-packed 6m dish arrangements with 50% fill factor. The initial 5,000-element 'petite array' is scientifically compelling, and can act as a demonstrator and a stepping stone to the full 32,000-element 'full array'. Viewed as a 21cm intensity mapping telescope, the program has the noise equivalent of a traditional spectroscopic galaxy survey comprising 0.6 and 2.5 billion galaxies at a comoving wavenumber of k = 0.5 h Mpc⁻¹ spanning the redshift range z = 0.3-6 for the petite and full configurations, respectively. At redshifts beyond z = 2, the 21cm technique is a uniquely powerful way of mapping the universe, while the low-redshift range will allow for numerous cross-correlations with existing and upcoming surveys. This program is enabled by the development of ultra-wideband radio feeds, cost-effective dish construction methods, commodity radio-frequency electronics driven by the telecommunication industry and the emergence of sufficient computing power to facilitate real-time signal processing that exploits the full potential of massive radio arrays. The project has an estimated construction cost of 55 and 330 million FY19 USD for the petite and full array configurations.
Including R&D, design, operations and science analysis, the cost rises to 125 and 600 million FY19 USD, respectively. Comment: 10 pages + references, 3 figures, 3 tables; project white paper submitted to the Astro2020 decadal survey; further details in updated arXiv:1810.0957
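The quoted redshift coverage follows directly from the observing band: the 21cm line is emitted at 1420.406 MHz, so an observed frequency maps to a redshift via ν_obs = ν_rest/(1 + z). A quick check for the 200-1100 MHz band:

```python
# Map the 21cm observing band to the redshift interval it covers.
NU_REST = 1420.405751  # MHz, rest-frame frequency of the HI hyperfine line

def z_of_nu(nu_mhz):
    """Redshift at which the 21cm line is observed at frequency nu_mhz."""
    return NU_REST / nu_mhz - 1.0

# PUMA's 200-1100 MHz band: low frequency = high redshift and vice versa
z_max, z_min = z_of_nu(200.0), z_of_nu(1100.0)
print(f"z = {z_min:.2f} to {z_max:.2f}")   # roughly z = 0.29 to 6.10
```

This matches the survey's stated range z = 0.3-6.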

    Packed Ultra-wideband Mapping Array (PUMA): Astro2020 RFI Response

    The Packed Ultra-wideband Mapping Array (PUMA) is a proposed low-resolution transit interferometric radio telescope operating over the frequency range 200-1100 MHz. Its rich science portfolio will include measuring structure in the universe from redshift z = 0.3 to 6 using 21cm intensity mapping, detecting one million fast radio bursts, and monitoring thousands of pulsars. This portfolio will allow PUMA to advance science in three different areas of physics (the physics of dark energy, the physics of cosmic inflation and time-domain astrophysics). This document is a response to a request for information (RFI) by the Panel on Radio, Millimeter, and Submillimeter Observations from the Ground (RMS) of the Decadal Survey on Astronomy and Astrophysics 2020. We present the science case of PUMA, the development path and major risks to the project. Comment: 46 pages, 16 figures, 7 tables; response to the request for information (RFI) by the Panel on Radio, Millimeter, and Submillimeter Observations from the Ground (RMS) of the Astro2020 Decadal Survey regarding PUMA APC submission (arXiv:1907.12559); v2: updated with correct bbl file

    High-redshift post-reionization cosmology with 21cm intensity mapping

    We investigate the possibility of performing cosmological studies in the redshift range 2.5 < z < 5 through suitable extensions of existing and upcoming radio telescopes like CHIME, HIRAX and FAST. We use the Fisher matrix technique to forecast the bounds that those instruments can place on the growth rate, the BAO distance scale parameters, the sum of the neutrino masses and the number of relativistic degrees of freedom at decoupling, Neff. We point out that quantities that depend on the amplitude of the 21cm power spectrum, like fσ8, are completely degenerate with ΩHI and bHI, and propose several strategies to independently constrain them through cross-correlations with other probes. Assuming 5% priors on ΩHI and bHI, kmax = 0.2 h Mpc⁻¹ and the primary beam wedge, we find that a HIRAX extension can constrain, within bins of Δz = 0.1: 1) the value of fσ8 at 4%, 2) the value of DA and H at 1%. In combination with data from Euclid-like galaxy surveys and CMB S4, the sum of the neutrino masses can be constrained with an error equal to 23 meV (1σ), while Neff can be constrained within 0.02 (1σ). We derive similar constraints for the extensions of the other instruments. We study in detail the dependence of our results on the instrument, the amplitude of the HI bias, the foreground wedge coverage, the nonlinear scale used in the analysis, uncertainties in the theoretical modeling and the priors on bHI and ΩHI. We conclude that 21cm intensity mapping surveys operating in this redshift range can provide extremely competitive constraints on key cosmological parameters.
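The fσ8-ΩHI-bHI degeneracy can be made concrete with a toy Fisher calculation. For an amplitude-only Kaiser-like model P(μ) ∝ ΩHI² (bHI + f μ²)², only the products ΩHI·bHI and ΩHI·f are measurable, so the 3-parameter Fisher matrix is exactly singular until an external prior on ΩHI (e.g. from cross-correlations) is added. The fiducial values and the 5% per-bin errors below are invented for illustration, not taken from the paper.

```python
import numpy as np

def model(theta, mu):
    """Toy 21cm amplitude at fixed k: P(mu) = Omega_HI^2 (b_HI + f mu^2)^2."""
    om_hi, b_hi, f = theta
    return om_hi**2 * (b_hi + f * mu**2)**2

theta0 = np.array([1e-3, 2.0, 0.9])      # illustrative (Omega_HI, b_HI, f)
mu = np.linspace(0.0, 1.0, 50)
sigma = 0.05 * model(theta0, mu)         # assume 5% errors in each mu bin

# central-difference derivatives dP/dtheta_i
derivs = []
for i in range(3):
    step = np.zeros(3)
    step[i] = 1e-6 * theta0[i]
    derivs.append((model(theta0 + step, mu) - model(theta0 - step, mu))
                  / (2 * step[i]))
derivs = np.array(derivs)

F = (derivs / sigma) @ (derivs / sigma).T    # Fisher matrix F_ij
w = np.linalg.eigvalsh(F)                    # ascending eigenvalues
# w[0]/w[-1] is ~machine precision: F is singular along the direction
# (Omega_HI, -b_HI, -f), i.e. the overall amplitude is unconstrained.

# An external prior on Omega_HI breaks the degeneracy and makes F invertible:
F_prior = F + np.diag([1.0 / (0.05 * theta0[0])**2, 0.0, 0.0])
cov = np.linalg.inv(F_prior)
```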

    The Quijote simulations

    The Quijote simulations are a set of 44,100 full N-body simulations spanning more than 7,000 cosmological models in cosmological parameter space. At a single redshift the simulations contain more than 8.5 trillion particles over a combined volume of 44,100 (h⁻¹ Gpc)³; each simulation follows the evolution of 256³, 512³, or 1024³ particles in a box of 1 h⁻¹ Gpc on a side. Billions of dark matter halos and cosmic voids have been identified in the simulations, whose runs required more than 35 million core-hours. The Quijote simulations have been designed for two main purposes: (1) to quantify the information content of cosmological observables and (2) to provide enough data to train machine-learning algorithms. In this paper we describe the simulations and show a few of their applications. We also release the petabyte of data generated, comprising hundreds of thousands of simulation snapshots at multiple redshifts; halo and void catalogs; and millions of summary statistics, such as power spectra, bispectra, correlation functions, marked power spectra, and estimated probability density functions.
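Quantifying information content (purpose 1) typically requires derivatives of summary statistics with respect to cosmological parameters, estimated as central differences between simulations run at θ ± dθ and averaged over many realizations to beat down sample variance. A toy sketch with a synthetic "power spectrum" (the signal model, noise level, and parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.05, 0.5, 20)   # wavenumber bins

def mock_pk(theta, n_modes=200):
    """Hypothetical stand-in for one simulated power spectrum measurement:
    a smooth theta-dependent signal plus per-realization noise."""
    signal = theta * k**-1.5
    return signal * (1.0 + rng.normal(0.0, 1.0 / np.sqrt(n_modes), k.size))

# central difference between paired simulations at theta +/- dtheta,
# averaged over many realizations (the Quijote derivative strategy)
theta0, dtheta, n_real = 0.83, 0.05, 500
dP = np.mean([(mock_pk(theta0 + dtheta) - mock_pk(theta0 - dtheta))
              / (2.0 * dtheta) for _ in range(n_real)], axis=0)
# for this toy model the exact derivative is k**-1.5; averaging over
# realizations suppresses the noise by ~1/sqrt(n_real)
```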

    Modeling HI at the field level

    We use an analytical forward model based on perturbation theory to predict the neutral hydrogen (HI) overdensity maps at low redshifts. We investigate its performance by comparing it directly at the field level to the simulated HI from the IllustrisTNG magneto-hydrodynamical simulation TNG300-1 (L = 205 h⁻¹ Mpc), in both real and redshift space. We demonstrate that HI is a biased tracer of the underlying matter field and find that the cubic bias model describes the simulated HI power spectrum to within 1% up to k = 0.4 (0.3) h Mpc⁻¹ in real (redshift) space at redshifts z = 0, 1. Looking at counts in cells, we find an excellent agreement between the theory and simulations for cells as small as 5 h⁻¹ Mpc. These results are in line with expectations from perturbation theory, and they imply that a perturbative description of the HI field is sufficiently accurate given the characteristics of upcoming 21cm intensity mapping surveys. Additionally, we study the statistical properties of the model error, i.e. the difference between the truth and the model. We show that on large scales this error is nearly Gaussian and that it has a flat power spectrum, with amplitude significantly lower than the standard noise inferred from the HI power spectrum. We explain the origin of this discrepancy, discuss its implications for HI power spectrum Fisher matrix forecasts, and argue that it motivates HI field-level cosmological inference. On small scales in redshift space, we use the difference between the model and the truth as a proxy for the Fingers-of-God effect. This allows us to estimate the nonlinear velocity dispersion of HI and show that it is smaller than for typical spectroscopic galaxy samples at the same redshift.
Finally, we provide a simple prescription based on the perturbative forward model which can be used to efficiently generate accurate HI mock data, in real and redshift space.
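The flat error power spectrum described above can be illustrated on synthetic fields: if the "truth" is built as linear bias times matter plus a white stochastic term, the residual between truth and a bias-only model has a flat (white) spectrum. The estimator and fields below are illustrative stand-ins, not the paper's cubic-bias setup.

```python
import numpy as np

def pk(field, box, nbins=8):
    """Toy isotropically-binned power spectrum estimator for a 3D field."""
    n = field.shape[0]
    p3d = np.abs(np.fft.rfftn(field))**2 * box**3 / n**6
    kf = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf[:n // 2 + 1], indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), edges) - 1
    power = np.bincount(idx, weights=p3d.ravel(), minlength=nbins + 1)[:nbins]
    counts = np.bincount(idx, minlength=nbins + 1)[:nbins]
    return 0.5 * (edges[1:] + edges[:-1]), power / np.maximum(counts, 1)

rng = np.random.default_rng(1)
n, box = 32, 200.0
matter = rng.normal(size=(n, n, n))                      # stand-in matter field
hi_true = 1.5 * matter + 0.1 * rng.normal(size=(n, n, n))  # bias + stochastic term
hi_model = 1.5 * matter                                  # bias-only "forward model"
k, p_err = pk(hi_true - hi_model, box)
# the residual is white noise here, so P_err is flat at sigma^2 V / N ~ 2.44
```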
