
    Finding Evidence for Massive Neutrinos using 3D Weak Lensing

    In this paper we investigate the potential of 3D cosmic shear to constrain massive neutrino parameters. We find that if the total mass is substantial (near the upper limits from LSS, but setting aside the Ly alpha limit for now), then 3D cosmic shear + Planck is very sensitive to neutrino mass and one may expect that a next generation photometric redshift survey could constrain the number of neutrinos N_nu and the sum of their masses m_nu to an accuracy of dN_nu ~ 0.08 and dm_nu ~ 0.03 eV respectively. If in fact the masses are close to zero, then the errors weaken to dN_nu ~ 0.10 and dm_nu ~ 0.07 eV. In either case there is a factor 4 improvement over Planck alone. We use a Bayesian evidence method to predict joint expected evidence for N_nu and m_nu. We find that 3D cosmic shear combined with a Planck prior could provide `substantial' evidence for massive neutrinos and be able to distinguish `decisively' between many competing massive neutrino models. This technique should `decisively' distinguish between models in which there are no massive neutrinos and models in which there are massive neutrinos with |N_nu-3| > 0.35 and m_nu > 0.25 eV. We introduce the notion of marginalised and conditional evidence when considering evidence for individual parameter values within a multi-parameter model.
    Comment: 9 pages, 2 Figures, 2 Tables, submitted to Physical Review

    On model selection forecasting, Dark Energy and modified gravity

    The Fisher matrix approach (Fisher 1935) allows one to calculate in advance how well a given experiment will be able to estimate model parameters, and has been an invaluable tool in experimental design. In the same spirit, we present here a method to predict how well a given experiment can distinguish between different models, regardless of their parameters. From a Bayesian viewpoint, this involves computation of the Bayesian evidence. In this paper, we generalise the Fisher matrix approach from the context of parameter fitting to that of model testing, and show how the expected evidence can be computed under the same simplifying assumption of a Gaussian likelihood as the Fisher matrix approach for parameter estimation. With this `Laplace approximation' all that is needed to compute the expected evidence is the Fisher matrix itself. We illustrate the method with a study of how well upcoming and planned experiments should perform at distinguishing between Dark Energy models and modified gravity theories. In particular we consider the combination of 3D weak lensing, for which planned and proposed wide-field multi-band imaging surveys will provide suitable data, and probes of the expansion history of the Universe, such as proposed supernova and baryonic acoustic oscillations surveys. We find that proposed large-scale weak lensing surveys from space should readily distinguish General Relativity from modified gravity models.
    Comment: 6 pages, 2 figure
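    The Laplace approximation described above reduces the evidence integral to a closed form: with a Gaussian likelihood and flat priors, ln E ~ ln L_max + (n/2) ln 2pi - (1/2) ln det F - ln V_prior, so only the Fisher matrix F and the prior volume are needed. A minimal sketch follows; the Fisher matrices and prior ranges are illustrative placeholders, not values from the paper.

```python
# Laplace-approximation sketch of the Bayesian evidence from a Fisher matrix,
# assuming a Gaussian likelihood and flat priors wide enough to contain the peak.
# F2, F1 and the prior ranges below are hypothetical, for illustration only.
import numpy as np

def laplace_log_evidence(fisher, prior_ranges, log_like_max=0.0):
    """ln E ~ ln L_max + (n/2) ln(2 pi) - (1/2) ln det F - sum of ln(prior widths)."""
    fisher = np.atleast_2d(np.asarray(fisher, dtype=float))
    n = fisher.shape[0]
    _, logdet = np.linalg.slogdet(fisher)      # stable log-determinant
    log_prior_volume = np.sum(np.log(prior_ranges))
    return log_like_max + 0.5 * n * np.log(2 * np.pi) - 0.5 * logdet - log_prior_volume

# Toy comparison: a 2-parameter model against a nested 1-parameter model
F2 = np.array([[400.0, 50.0], [50.0, 900.0]])  # hypothetical Fisher matrix
F1 = np.array([[400.0]])
lnE2 = laplace_log_evidence(F2, prior_ranges=[1.0, 1.0])
lnE1 = laplace_log_evidence(F1, prior_ranges=[1.0])
print("ln Bayes factor (2-param vs 1-param):", lnE2 - lnE1)
```

    With these toy numbers the extra parameter is penalised (negative log Bayes factor), illustrating the built-in Occam factor of the evidence.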

    Observational signatures of Jordan-Brans-Dicke theories of gravity

    We analyze the Jordan-Brans-Dicke model (JBD) of gravity, where deviations from General Relativity (GR) are described by a scalar field non-minimally coupled to gravity. The theory is characterized by a constant coupling parameter, \omega_{\rm JBD}; GR is recovered in the limit \omega_{\rm JBD} \to \infty. In such theories, gravity modifications manifest at early times, so that one cannot rely on the usual approach of looking for inconsistencies in the expansion history and perturbations growth in order to discriminate between JBD and GR. However, we show that a similar technique can be successfully applied to early and late times observables instead. Cosmological parameters inferred extrapolating early-time observations to the present will match those recovered from direct late-time observations only if the correct gravity theory is used. We use the primary CMB, as will be seen by the Planck satellite, as the early-time observable; and forthcoming and planned Supernovae, Baryonic Acoustic Oscillations and Weak Lensing experiments as late-time observables. We find that detection of values of \omega_{\rm JBD} as large as 500 and 1000 is within reach of the upcoming (2010) and next-generation (2020) experiments, respectively.
    Comment: minor revision, references added, matching version published in JCA

    Measuring the dark side (with weak lensing)

    We introduce a convenient parametrization of dark energy models that is general enough to include several modified gravity models and generalized forms of dark energy. In particular we take into account the linear perturbation growth factor, the anisotropic stress and the modified Poisson equation. We discuss the sensitivity of large scale weak lensing surveys like the proposed DUNE satellite to these parameters. We find that a large-scale weak-lensing tomographic survey is able to easily distinguish the Dvali-Gabadadze-Porrati model from LCDM and to determine the perturbation growth index to an absolute error of 0.02-0.03.
    Comment: 19 pages, 11 figure
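    The discriminating power quoted above rests on the growth index gamma, defined through the growth rate f = dln(delta_m)/dln(a) ~ Omega_m(a)^gamma, with gamma ~ 0.55 for LCDM versus ~ 0.68 for DGP, so an error of 0.02-0.03 separates the two. A minimal numerical sketch of how gamma is obtained, assuming a flat LCDM background rather than the paper's full tomographic machinery:

```python
# Sketch: integrate the linear growth-rate equation
#   df/dlna = (3/2) Omega_m(a) - f^2 - (2 - (3/2) Omega_m(a)) f
# for an assumed flat LCDM background, then read off the growth index
# gamma = ln f(a=1) / ln Omega_m(a=1). Simple Euler stepping, illustration only.
import numpy as np

def growth_index_lcdm(omega_m0=0.3, n_steps=20000):
    ln_a = np.linspace(np.log(1e-3), 0.0, n_steps)
    dx = ln_a[1] - ln_a[0]
    f = 1.0                              # deep in matter domination f -> 1
    for x in ln_a[:-1]:
        a = np.exp(x)
        e2 = omega_m0 * a**-3 + (1.0 - omega_m0)   # H^2 / H0^2
        om = omega_m0 * a**-3 / e2                 # Omega_m(a)
        f += (1.5 * om - f**2 - (2.0 - 1.5 * om) * f) * dx
    # in flat LCDM, Omega_m(a=1) = omega_m0
    return np.log(f) / np.log(omega_m0)

print(growth_index_lcdm())  # close to the canonical LCDM value ~0.55
```

    The same integration with a modified Omega_m(a) or an extra friction term would produce the DGP-like value, which is the separation the survey forecast targets.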

    Comparison of Standard Ruler and Standard Candle constraints on Dark Energy Models

    We compare the dark energy model constraints obtained by using recent standard ruler data (Baryon Acoustic Oscillations (BAO) at z=0.2 and z=0.35 and Cosmic Microwave Background (CMB) shift parameters R and l_a) with the corresponding constraints obtained by using recent Type Ia Supernovae (SnIa) standard candle data (ESSENCE+SNLS+HST from Davis et al.). We find that, even though both classes of data are consistent with LCDM at the 2\sigma level, there is a systematic difference between the two classes of data. In particular, we find that for practically all values of the parameters (\Omega_0m,\Omega_b) in the 2\sigma range of the 3-year WMAP data (WMAP3) best fit, LCDM is significantly more consistent with the SnIa data than with the CMB+BAO data. For example for (\Omega_0m,\Omega_b)=(0.24,0.042) corresponding to the best fit values of WMAP3, the dark energy equation of state parametrization w(z)=w_0 + w_1 (z/(1+z)) best fit is at a 0.5\sigma distance from LCDM (w_0=-1,w_1=0) using the SnIa data and 1.7\sigma away from LCDM using the CMB+BAO data. There is a similar trend in the earlier data (SNLS vs CMB+BAO at z=0.35). This trend is such that the standard ruler CMB+BAO data show a mild preference for crossing of the phantom divide line w=-1, while the recent SnIa data favor LCDM. Despite this mild difference in trends, we find no statistically significant evidence for violation of the cosmic distance duality relation \eta \equiv d_L(z)/(d_A(z) (1+z)^2)=1. For example, using a prior of \Omega_0m=0.24, we find \eta=0.95 \pm 0.025 in the redshift range 0<z<2, which is consistent with distance duality at the 2\sigma level.
    Comment: References added. 9 pages, 7 figures. The Mathematica files with the numerical analysis of the paper can be found at http://leandros.physics.uoi.gr/rulcand/rulcand.ht
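    The distance-duality relation tested above, \eta = d_L/(d_A (1+z)^2) = 1, holds exactly in any metric theory with photon-number conservation, which is why a measured departure from unity would signal new physics rather than a choice of background. A short sketch, assuming a flat LCDM background with the paper's prior \Omega_0m=0.24, showing that the relation is an identity by construction (here d_L and d_A share one comoving distance; simple trapezoidal integration):

```python
# Numerical check of the Etherington distance-duality relation
# eta = d_L / (d_A * (1+z)^2), in an assumed flat LCDM background.
# Both distances are built from the same comoving distance, so eta = 1 exactly;
# observationally, d_L (SnIa) and d_A (BAO/CMB) come from independent probes.
import numpy as np

def comoving_distance(z, omega_m0=0.24, n=10000):
    """Dimensionless comoving distance (units of c/H0), trapezoidal rule."""
    zs = np.linspace(0.0, z, n)
    integrand = 1.0 / np.sqrt(omega_m0 * (1 + zs)**3 + (1.0 - omega_m0))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))

def eta(z, omega_m0=0.24):
    dc = comoving_distance(z, omega_m0)
    d_l = (1 + z) * dc          # luminosity distance (flat universe)
    d_a = dc / (1 + z)          # angular diameter distance
    return d_l / (d_a * (1 + z)**2)

print(eta(1.0))  # 1.0 identically; a fitted eta != 1 from data would be the signal
```

    In the paper the two distances come from independent data sets, so eta is a fitted quantity; the sketch only makes the geometric identity explicit.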

    The growth of matter perturbations in some scalar-tensor DE models

    We consider asymptotically stable scalar-tensor dark energy (DE) models for which the equation of state parameter w_{DE} tends to zero in the past. The viable models are of the phantom type today, however this phantomness is milder than in General Relativity if we take into account the varying gravitational constant when dealing with the SNIa data. We study further the growth of matter perturbations and we find a scaling behaviour at large redshifts which could provide an important constraint. In particular the growth of matter perturbations at large redshifts in our scalar-tensor models is close to the standard behaviour \delta_m \propto a, while it is substantially different for the best-fit model in General Relativity for the same parametrization of the background expansion. As for the growth of matter perturbations at small redshifts, we show that in these models the parameter \gamma'_0 \equiv \gamma'(z=0) can take absolute values much larger than in models inside General Relativity. Assuming a constant \gamma when \gamma'_0 is large would lead to a poor fit of the growth function f. This provides another characteristic discriminative signature for these models.
    Comment: 13 pages, 7 figures, matches version published in JCA

    Planck 2015 results. XIV. Dark energy and modified gravity

    We study the implications of Planck data for models of dark energy (DE) and modified gravity (MG), beyond the cosmological constant scenario. We start with cases where the DE only directly affects the background evolution, considering Taylor expansions of the equation of state, principal component analysis and parameterizations related to the potential of a minimally coupled DE scalar field. When estimating the density of DE at early times, we significantly improve present constraints. We then move to general parameterizations of the DE or MG perturbations that encompass both effective field theories and the phenomenology of gravitational potentials in MG models. Lastly, we test a range of specific models, such as k-essence, f(R) theories and coupled DE. In addition to the latest Planck data, for our main analyses we use baryonic acoustic oscillations, type-Ia supernovae and local measurements of the Hubble constant. We further show the impact of measurements of the cosmological perturbations, such as redshift-space distortions and weak gravitational lensing. These additional probes are important tools for testing MG models and for breaking degeneracies that are still present in the combination of Planck and background data sets. All results that include only background parameterizations are in agreement with LCDM. When testing models that also change perturbations (even when the background is fixed to LCDM), some tensions appear in a few scenarios: the maximum one found is \sim 2 sigma for Planck TT+lowP when parameterizing observables related to the gravitational potentials with a chosen time dependence; the tension increases to at most 3 sigma when external data sets are included. It however disappears when including CMB lensing

    Cosmology and fundamental physics with the Euclid satellite

    Euclid is a European Space Agency medium class mission selected for launch in 2019 within the Cosmic Vision 2015-2025 programme. The main goal of Euclid is to understand the origin of the accelerated expansion of the Universe. Euclid will explore the expansion history of the Universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies as well as the distribution of clusters of galaxies over a large fraction of the sky. Although the main driver for Euclid is the nature of dark energy, Euclid science covers a vast range of topics, from cosmology to galaxy evolution to planetary research. In this review we focus on cosmology and fundamental physics, with a strong emphasis on science beyond the current standard models. We discuss five broad topics: dark energy and modified gravity, dark matter, initial conditions, basic assumptions and questions of methodology in the data analysis. This review has been planned and carried out within Euclid's Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the Euclid mission