Going beyond the Kaiser redshift-space distortion formula: a full general relativistic account of the effects and their detectability in galaxy clustering
The Kaiser redshift-space distortion formula describes the clustering of
galaxies in redshift surveys well on small scales, but numerous additional
terms arise on large scales. Some of these terms can be described using
Newtonian dynamics and have been discussed in the literature, while others
require a proper general relativistic description that was only recently
developed. Accounting for these terms in galaxy clustering is the first step
toward tests of general relativity on horizon scales. The effects can be
classified as two terms that represent the velocity and the gravitational
potential contributions. Their amplitude is determined by effects such as the
volume and luminosity distance fluctuation effects and the time evolution of
galaxy number density and of the Hubble parameter. We compare the Newtonian
approximation often used in the redshift-space distortion literature to the
fully general relativistic equation, and show that the Newtonian approximation
accounts for most of the terms contributing to the velocity effect. We perform
a Fisher matrix analysis of the detectability of these terms and show that in a
single-tracer survey they are completely undetectable. To detect these terms
one must resort to recently developed methods that reduce sampling variance
and shot noise. We show that in an all-sky galaxy redshift survey at low
redshift the velocity term can be measured at a few sigma if one can utilize
halos of mass M>10^12 Msun (this can increase to 10-sigma or more in some more
optimistic scenarios), while the gravitational potential term itself can only
be marginally detected. We also demonstrate that the general relativistic
effect is not degenerate with the primordial non-Gaussian signature in galaxy
bias, and the ability to detect primordial non-Gaussianity is little
compromised. Comment: 13 pages, 5 figures, published in PR
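The single-parameter Fisher forecast described above can be sketched in a few lines. The model below is purely illustrative, not the paper's actual relativistic terms: the power-spectrum amplitude, the parameter `eps` (standing in for a 1/k^2 large-scale correction), and the mode counts are all invented. It propagates sample-variance errors on the power spectrum into a 1-sigma error on the extra term.

```python
import numpy as np

# Toy Fisher forecast for an extra large-scale clustering term.
# P(k) = P0 * (1 + eps / k^2): "eps" stands in for a relativistic
# correction that only matters on the largest scales (values invented).
k = np.logspace(-3, -1, 50)               # wavenumbers, h/Mpc
n_modes = 10.0 * (k / k[0]) ** 3          # mode counts grow as k^3 (toy volume)

def power(k, eps):
    return 1.0e4 * (1.0 + eps / k**2)

eps_fid = 0.0                             # fiducial model: no extra term
h = 1.0e-8
dP = (power(k, eps_fid + h) - power(k, eps_fid - h)) / (2.0 * h)  # dP/deps
var = 2.0 * power(k, eps_fid) ** 2 / n_modes   # sample-variance errors per bin
fisher = np.sum(dP**2 / var)                   # 1-parameter Fisher information
sigma_eps = 1.0 / np.sqrt(fisher)
print(sigma_eps)
```

The forecast error is dominated by the largest scales, where the 1/k^2 term is strongest but the number of available modes is smallest — the sample-variance limitation the abstract refers to.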
Euclid : Forecasts from redshift-space distortions and the Alcock-Paczynski test with cosmic voids
Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock-Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and D_M H, where f is the linear growth rate of density fluctuations, b the galaxy bias, D_M the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on D_M H in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous more approximate forecasts. Peer reviewed
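A common linear-theory form for the void-galaxy quadrupole is xi_2(s) = (2*beta/3) * (xibar(s) - xi(s)) with beta = f/b, where xibar is the volume average of the monopole within radius s. The sketch below shows how beta can be recovered from stacked data by linear least squares; the void profile, noise level, and true beta are all invented for illustration and are not the Flagship measurements.

```python
import numpy as np

# Linear-theory quadrupole of the void-galaxy correlation (simplified):
#   xi_2(s) = (2*beta/3) * (xibar(s) - xi(s)),   beta = f / b.
# Toy monopole profile, noise level, and beta_true are invented.
rng = np.random.default_rng(1)
s = np.linspace(5.0, 60.0, 40)          # separation, Mpc/h
xi = -np.exp(-((s / 25.0) ** 2))        # toy void-galaxy monopole

def volume_average(s, xi):
    """xibar(s) = (3 / s^3) * integral_0^s xi(t) t^2 dt (trapezoid rule)."""
    out = np.empty_like(s)
    for i, si in enumerate(s):
        t = np.linspace(1e-3, si, 200)
        y = np.interp(t, s, xi) * t**2   # constant extrapolation below s[0]
        out[i] = 3.0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)) / si**3
    return out

beta_true = 0.45
template = (2.0 / 3.0) * (volume_average(s, xi) - xi)
xi2_data = beta_true * template + rng.normal(0.0, 0.002, s.size)

# One-parameter linear least squares for beta = f/b:
beta_fit = np.sum(template * xi2_data) / np.sum(template ** 2)
print(round(beta_fit, 3))
```

Because the model is linear in beta, the fit reduces to a single projection onto the template; the real analysis additionally marginalizes over nuisance parameters and fits the Alcock-Paczynski distortion.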
Minimizing the stochasticity of halos in large-scale structure surveys
In recent work (Seljak, Hamaus and Desjacques 2009) it was found that
weighting central halo galaxies by halo mass can significantly suppress their
stochasticity relative to the dark matter, well below the Poisson model
expectation. In this paper we extend this study with the goal of finding the
optimal mass-dependent halo weighting, and use N-body simulations to perform a
general analysis of halo stochasticity and its dependence on halo mass. We
investigate the stochasticity matrix, defined as C_ij = <(δ_i − b_i δ_m)(δ_j − b_j δ_m)>,
where δ_m is the dark matter overdensity in Fourier space, δ_i the halo
overdensity of the i-th halo mass bin, and b_i the halo bias. In contrast to the Poisson model
predictions we detect nonvanishing correlations between different mass bins. We
also find the diagonal terms to be sub-Poissonian for the highest-mass halos.
The diagonalization of this matrix results in one large and one low eigenvalue,
with the remaining eigenvalues close to the Poisson prediction 1/n̄,
where n̄ is the mean halo number density. The eigenmode with the lowest
eigenvalue contains most of the information and the corresponding eigenvector
provides an optimal weighting function to minimize the stochasticity between
halos and dark matter. We find this optimal weighting function to match linear
mass weighting at high masses, while at the low-mass end the weights approach a
constant whose value depends on the low-mass cut in the halo mass function.
Finally, we employ the halo model to derive the stochasticity matrix and the
scale-dependent bias from an analytical perspective. It is remarkably
successful in reproducing our numerical results and predicts that the
stochasticity between halos and the dark matter can be reduced further when
going to halo masses lower than we can resolve in current simulations. Comment: 17 pages, 14 figures, matched the published version in Phys. Rev. D,
including one new figure
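The construction above — estimating the stochasticity matrix from residuals and taking the eigenvector of its smallest eigenvalue as the optimal weighting — can be reproduced on mock data. Everything below (the per-bin biases, noise amplitudes, and the shared stochastic component that mimics super-Poissonian cross-correlations) is a toy stand-in for the simulation measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_bins = 20000, 4

# Toy dark-matter Fourier modes (real-valued stand-ins).
delta_m = rng.normal(size=n_modes)

# Hypothetical per-bin biases and shot-noise levels; a shared component
# mimics the nonvanishing correlations between mass bins.
bias = np.array([1.0, 1.2, 1.6, 2.4])
noise_amp = np.array([0.9, 0.7, 0.5, 0.3])
common = rng.normal(size=n_modes)
delta_h = (bias[:, None] * delta_m
           + noise_amp[:, None] * rng.normal(size=(n_bins, n_modes))
           + 0.2 * common)

# Stochasticity matrix C_ij = <(delta_i - b_i delta_m)(delta_j - b_j delta_m)>.
resid = delta_h - bias[:, None] * delta_m
C = resid @ resid.T / n_modes

# Diagonalize; eigh returns eigenvalues in ascending order, so the first
# eigenvector is the weighting that minimizes halo-matter stochasticity.
eigvals, eigvecs = np.linalg.eigh(C)
w_opt = eigvecs[:, 0]
print(np.round(eigvals, 3))
```

With real halo catalogs the weights come out close to linear mass weighting at the high-mass end, as the abstract describes; here they simply reflect the invented noise amplitudes.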
Optimal Constraints on Local Primordial Non-Gaussianity from the Two-Point Statistics of Large-Scale Structure
One of the main signatures of primordial non-Gaussianity of the local type is
a scale-dependent correction to the bias of large-scale structure tracers such
as galaxies or clusters, whose amplitude depends on the bias of the tracers
itself. The dominant source of noise in the power spectrum of the tracers is
caused by sampling variance on large scales (where the non-Gaussian signal is
strongest) and shot noise arising from their discrete nature. Recent work has
argued that one can avoid sampling variance by comparing multiple tracers of
different bias, and suppress shot noise by optimally weighting halos of
different mass. Here we combine these ideas and investigate how well the
signatures of non-Gaussian fluctuations in the primordial potential can be
extracted from the two-point correlations of halos and dark matter. On the
basis of large N-body simulations with local non-Gaussian initial conditions
and their halo catalogs, we perform a Fisher matrix analysis of the two-point
statistics. Compared to the standard analysis, optimal-weighting and
multiple-tracer techniques applied to halos can yield up to one order of
magnitude improvement in f_NL constraints, even if the underlying dark
matter density field is not known. We compare our numerical results to the halo
model and find satisfactory agreement. Forecasting the optimal
f_NL constraints that can be achieved with our methods when applied to
existing and future survey data, we find that a survey of sufficient
volume resolving all halos down to 10^11 h^-1 Msun
will be able to obtain σ_fNL ~ 1 (68% CL), a substantial
improvement over the current limits. Decreasing the minimum mass of
resolved halos, increasing the survey volume, or obtaining the dark matter maps
can further improve these limits, potentially reaching the level of
σ_fNL ~ 0.1. (abridged) Comment: V1: 23 pages, 12 figures, submitted to PRD. V2: 24 pages, added
appendix and citations, matched to PRD published version
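The scale-dependent bias correction from local non-Gaussianity has the standard form Δb(k) = 3 f_NL (b − 1) δ_c Ω_m H0² / (c² k² T(k) D(z)). The helper below evaluates it with simplifying assumptions stated in the docstring (transfer function and growth factor set to unity, which is only reasonable on the largest scales); the cosmological parameter values are illustrative defaults, not the paper's.

```python
import numpy as np

def delta_b(k, f_nl, b=2.0, delta_c=1.686, omega_m=0.3,
            h=0.7, growth=1.0, transfer=1.0):
    """Scale-dependent bias from local-type non-Gaussianity,
    Delta_b(k) = 3 f_NL (b - 1) delta_c Omega_m H0^2 / (c^2 k^2 T(k) D(z)).
    k is in h/Mpc; T(k) = 1 and D = 1 are simplifying assumptions valid
    only on the largest scales with suitable normalization."""
    c = 299792.458          # speed of light, km/s
    H0 = 100.0 * h          # Hubble constant, km/s/Mpc
    k_phys = k * h          # convert h/Mpc -> 1/Mpc
    return (3.0 * f_nl * (b - 1.0) * delta_c * omega_m * H0**2
            / (c**2 * k_phys**2 * transfer * growth))

# The correction grows as 1/k^2 toward large scales:
for k in (0.001, 0.01, 0.1):
    print(k, delta_b(k, f_nl=10.0))
```

The 1/k^2 growth is why the non-Gaussian signal is strongest exactly where sampling variance is worst, motivating the multi-tracer and optimal-weighting techniques of the abstract.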
The GIGANTES dataset: precision cosmology from voids in the machine learning era
We present GIGANTES, the most extensive and realistic void catalog suite ever
released -- containing over 1 billion cosmic voids covering a volume larger
than the observable Universe, more than 20 TB of data, and created by running
the void finder VIDE on QUIJOTE's halo simulations. The expansive and detailed
GIGANTES suite, spanning thousands of cosmological models, opens up the study
of voids, answering compelling questions: Do voids carry unique cosmological
information? How is this information correlated with galaxy information?
Leveraging the large number of voids in the GIGANTES suite, our Fisher
constraints demonstrate that voids contain additional information, critically
tightening constraints on cosmological parameters. We use traditional void
summary statistics (the void size function, the void density profile) and the void
auto-correlation function, which independently yields a competitive error
on the summed neutrino mass for a single 1 (Gpc/h)^3
simulation, without CMB priors. Combining halos and voids, we forecast a
tighter error still from the same volume. Extrapolating to next-generation
multi-Gpc surveys such as DESI, Euclid, SPHEREx, and the Roman Space
Telescope, we expect voids to yield an independent determination of the
neutrino mass. Crucially, GIGANTES is the first void catalog suite expressly
built for intensive machine learning exploration. We illustrate this by
training a neural network to perform likelihood-free inference on the void size
function. Cosmology problems provide an impetus to develop novel deep learning
techniques, leveraging the symmetries embedded throughout the universe from
physical laws, interpreting models, and accurately predicting errors. With
GIGANTES, machine learning gains an impressive dataset, offering unique
problems that will stimulate new techniques. Comment: references added, typos corrected, version submitted to Ap
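The "combining halos and voids" step has a simple mechanical core: for independent probes, Fisher matrices add, and the marginalized error on any parameter can only shrink. The 2x2 matrices below are invented numbers (the two parameters could be, say, the neutrino mass sum and sigma_8), shown only to illustrate the combination.

```python
import numpy as np

# Independent probes: Fisher matrices add, so combining halos and voids
# tightens the marginalized parameter error. All entries are invented.
F_halos = np.array([[40.0, 5.0],
                    [5.0, 10.0]])
F_voids = np.array([[15.0, -3.0],
                    [-3.0, 8.0]])

def marg_sigma(F, i=0):
    """Marginalized 1-sigma error on parameter i: sqrt of (F^-1)_ii."""
    return np.sqrt(np.linalg.inv(F)[i, i])

print(marg_sigma(F_halos), marg_sigma(F_voids),
      marg_sigma(F_halos + F_voids))
```

Note that the gain can exceed naive inverse-variance addition when the two probes have differently oriented degeneracy directions, which is precisely why voids add information beyond halos alone.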
Dark Energy Survey year 1 results: the relationship between mass and light around cosmic voids
What are the mass and galaxy profiles of cosmic voids? In this paper, we use two methods to extract voids in the Dark Energy Survey (DES) Year 1 redMaGiC galaxy sample to address this question. We use either 2D slices in projection, or the 3D distribution of galaxies based on photometric redshifts to identify voids. For the mass profile, we measure the tangential shear profiles of background galaxies to infer the excess surface mass density. The signal-to-noise ratio for our lensing measurement ranges between 10.7 and 14.0 for the two void samples. We infer their 3D density profiles by fitting models based on N-body simulations and find good agreement for void radii in the range 15-85 Mpc. Comparison with their galaxy profiles then allows us to test the relation between mass and light at the 10 per cent level, the most stringent test to date. We find very similar shapes for the two profiles, consistent with a linear relationship between mass and light both within and outside the void radius. We validate our analysis with the help of simulated mock catalogues and estimate the impact of photometric redshift uncertainties on the measurement. Our methodology can be used for cosmological applications, including tests of gravity with voids. This is especially promising when the lensing profiles are combined with spectroscopic measurements of void dynamics via redshift-space distortions.
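The lensing step above relies on the standard relation DeltaSigma(R) = Sigma_crit * gamma_t(R), with Sigma_crit = c^2 / (4 pi G) * D_s / (D_l * D_ls). The sketch below evaluates it for invented lens/source distances and a toy (negative) void shear profile; none of the numbers are DES measurements.

```python
import numpy as np

# Excess surface mass density from tangential shear (toy numbers):
#   DeltaSigma(R) = Sigma_crit * gamma_t(R),
#   Sigma_crit    = c^2 / (4 pi G) * D_s / (D_l * D_ls).
G = 4.301e-9          # gravitational constant, Mpc (km/s)^2 / Msun
c = 299792.458        # speed of light, km/s

def sigma_crit(D_l, D_s, D_ls):
    """Critical surface density in Msun / Mpc^2 for a lens/source pair
    (angular diameter distances in Mpc)."""
    return c**2 / (4.0 * np.pi * G) * D_s / (D_l * D_ls)

R = np.array([10.0, 20.0, 40.0])        # projected radius, Mpc
gamma_t = -1e-3 * np.exp(-R / 30.0)     # toy (negative) void shear profile
dsigma = sigma_crit(1200.0, 2000.0, 900.0) * gamma_t  # Msun / Mpc^2

# Convert to the conventional Msun / pc^2 (1 Mpc^2 = 1e12 pc^2):
print(dsigma / 1e12)
```

The negative sign of gamma_t for voids is the key signature: underdense lenses produce a radially outward shear, i.e. a negative excess surface mass density.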
GRAVITY: getting to the event horizon of Sgr A*
We present the second-generation VLTI instrument GRAVITY, which currently is
in the preliminary design phase. GRAVITY is specifically designed to observe
highly relativistic motions of matter close to the event horizon of Sgr A*, the
massive black hole at the center of the Milky Way. We have identified the key
design features needed to achieve this goal and present the resulting
instrument concept. It includes an integrated-optics, 4-telescope, dual-feed
beam combiner operated in a cryogenic vessel; near-infrared wavefront-sensing
adaptive optics; fringe tracking on secondary sources within the field of view
of the VLTI and a novel metrology concept. Simulations show that the planned
design matches the scientific needs; in particular that 10 microarcsecond
astrometry is feasible for a source with a magnitude of K=15 like Sgr A*, given
the availability of suitable phase reference sources. Comment: 13 pages, 11 figures, to appear in the conference proceedings of SPIE
Astronomical Instrumentation, 23-28 June 2008, Marseille, France