    Probing dark energy with future surveys

    I review the observational prospects for constraining the equation-of-state parameter of dark energy, and I discuss the potential of future imaging and redshift surveys. Bayesian model selection is used to address the question of what accuracy on the equation-of-state parameter is required before explanations alternative to a cosmological constant become very implausible. I discuss results in the prediction space of dark energy models. If no significant departure from w = -1 is detected, a precision on w of order 1% will translate into strong evidence against fluid-like dark energy, while decisive evidence will require a precision of order 10^-3.
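
    The quoted thresholds can be illustrated with a back-of-the-envelope Savage-Dickey calculation. Below is a minimal Python sketch, assuming a flat prior on w of unit width and a Gaussian posterior centred on w = -1; both assumptions are illustrative placeholders, not the paper's full survey analysis.

```python
import numpy as np

def sddr_gaussian(sigma_w, prior_width=1.0):
    """Savage-Dickey estimate of the Bayes factor B_01 in favour of the
    nested model w = -1, assuming a flat prior of width `prior_width` on w
    and a Gaussian posterior of standard deviation `sigma_w` centred on
    w = -1 (i.e. no departure from a cosmological constant is detected)."""
    posterior_at_w0 = 1.0 / (np.sqrt(2.0 * np.pi) * sigma_w)  # Gaussian peak density
    prior_at_w0 = 1.0 / prior_width                           # flat prior density
    return posterior_at_w0 / prior_at_w0

for sigma_w in (0.1, 0.01, 1e-3):
    print(f"sigma_w = {sigma_w:g}: ln B_01 = {np.log(sddr_gaussian(sigma_w)):.1f}")
```

    Under these toy assumptions, sigma_w = 0.01 gives ln B_01 of roughly 3.7 and sigma_w = 10^-3 gives roughly 6.0, in qualitative agreement with the precisions quoted above for strong and decisive evidence.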

    The cosmological constant and the paradigm of adiabaticity

    We discuss the value of the cosmological constant as recovered from CMB and LSS data, and the robustness of the result when general isocurvature initial conditions are allowed for, as opposed to purely adiabatic perturbations. The Bayesian and frequentist statistical approaches are compared. It is shown that pre-WMAP CMB and LSS data tend to be incompatible with a vanishing cosmological constant, regardless of the type of initial conditions and of the statistical approach. The non-adiabatic contribution is constrained to be < 40% (2σ c.l.). Comment: 9 pages, 5 figures, to appear in New Astronomy Reviews, Proceedings of the 2nd CMBNET Meeting, 20-21 February 2003, Oxford, UK.

    Applications of Bayesian model selection to cosmological parameters

    Bayesian model selection is a tool to decide whether the introduction of a new parameter is warranted by the data. I argue that the usual sampling-statistics significance tests for a null hypothesis can be misleading, since they do not take into account the information gained through the data when updating the prior distribution to the posterior. By contrast, Bayesian model selection offers a quantitative implementation of Occam's razor. I introduce the Savage-Dickey density ratio, a computationally quick method to determine the Bayes factor of two nested models and hence perform model selection. As an illustration, I consider three key parameters for our understanding of the cosmological concordance model. Using WMAP 3-year data complemented by other cosmological measurements, I show that a non-scale-invariant spectral index of perturbations is favoured for any sensible choice of prior. It is also found that a flat Universe is favoured with odds of 29:1 over non-flat models, and that there is strong evidence against a CDM isocurvature component to the initial conditions which is totally (anti)correlated with the adiabatic mode (odds of about 2000:1), although this is strongly dependent on the prior adopted. These results are contrasted with the analysis of WMAP 1-year data, which were not informative enough to allow a conclusion as to the status of the spectral index. In a companion paper, a new technique to forecast the Bayes factor of a future observation is presented. Comment: v2 to v3: minor changes, matches version accepted by MNRAS. v1 to v2: major revision. New results using WMAP 3-yr data; a scale-invariant spectrum is now disfavoured with moderate evidence. New benchmark test for the accuracy of the method. The Bayes factor forecast methodology (PPOD, formerly called ExPO) is expanded and now presented in a companion paper (astro-ph/0703063).
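
    The Savage-Dickey density ratio lends itself to a very short implementation: for nested models with separable priors, the Bayes factor is the marginal posterior density at the nested value divided by the prior density there. A minimal sketch, where the chain, the prior, and all numbers are hypothetical placeholders rather than the paper's actual analysis:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def savage_dickey(posterior_samples, theta_0, prior_density_at_theta0):
    """Estimate the Bayes factor B_01 between the nested model
    (theta = theta_0) and the extended model as the ratio of the
    marginal posterior density to the prior density at theta_0."""
    kde = gaussian_kde(posterior_samples)  # smooth the MCMC samples
    return kde(theta_0)[0] / prior_density_at_theta0

# Hypothetical example: a chain for a spectral-index-like parameter,
# with the nested model at theta_0 = 1 (scale invariance).
rng = np.random.default_rng(0)
chain = rng.normal(loc=0.96, scale=0.015, size=20_000)
prior = norm(loc=1.0, scale=0.1)
print(savage_dickey(chain, 1.0, prior.pdf(1.0)))  # B_01 < 1 favours theta != 1
```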

    Bayesian Calibrated Significance Levels Applied to the Spectral Tilt and Hemispherical Asymmetry

    Bayesian model selection provides a formal method of determining the level of support for new parameters in a model. However, if there is not a specific enough underlying physical motivation for the new parameters, it can be hard to assign them meaningful priors, an essential ingredient of Bayesian model selection. Here we look at methods that maximise over the prior, so as to work out the maximum support the data could give the new parameters. If the maximum support is not high enough, one can confidently conclude that the new parameters are unnecessary, without needing to worry that some other prior may make them significant. We discuss a computationally efficient means of doing this, which involves mapping p-values onto upper bounds on the Bayes factor (or odds) for the new parameters. A p-value of 0.05 (1.96σ) corresponds to odds of at most 5:2, which falls below even the `weak' support threshold. A p-value of 0.0003 (3.6σ) corresponds to odds of at most 150:1, which at best reaches the `strong' support threshold. Applying this method, we find that the odds on the scalar spectral index differing from one are 49:1 at best. We also find that the odds that there is primordial hemispherical asymmetry in the cosmic microwave background are 9:1 at best. Comment: 5 pages. V2: clarifying comments added in response to referee report. Matches version to appear in MNRAS.
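
    The p-value-to-odds mapping can be written down directly. A short sketch of the standard upper bound B <= -1/(e p ln p), valid for p < 1/e (the Sellke-Bayarri-Berger style calibration, which is presumably the one behind the 5:2 and 150:1 figures quoted above):

```python
import numpy as np

def max_bayes_factor(p_value):
    """Upper bound on the Bayes factor (odds) in favour of the new
    parameters implied by a p-value: B <= -1 / (e * p * ln p)."""
    if not 0.0 < p_value < 1.0 / np.e:
        raise ValueError("the calibration requires 0 < p < 1/e")
    return -1.0 / (np.e * p_value * np.log(p_value))

for p in (0.05, 0.0003):
    print(f"p = {p:g}: odds at most {max_bayes_factor(p):.1f}:1")
# p = 0.05 gives about 2.5:1 (i.e. 5:2); p = 0.0003 gives about 151:1.
```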

    Testing the paradigm of adiabaticity

    We introduce the concepts of adiabatic (curvature) and isocurvature (entropy) cosmological perturbations and present their relevance for parameter estimation from cosmic microwave background anisotropy data. We emphasize that, while present-day data are in excellent agreement with pure adiabaticity, subdominant isocurvature contributions cannot be ruled out. We discuss model-independent constraints on the isocurvature contribution. Finally, we argue that the Planck satellite will be able to do precision cosmology even if the assumption of adiabaticity is relaxed. Comment: Proceedings of the 10th Marcel Grossmann Meeting, Rio de Janeiro, July 2003, 5 pages, 2 figures.

    Statistical Challenges of Global SUSY Fits

    We present recent results aimed at assessing the coverage properties of Bayesian and frequentist inference methods, as applied to the reconstruction of supersymmetric parameters from simulated LHC data. We discuss the statistical challenges of the reconstruction procedure, and highlight the algorithmic difficulties of obtaining accurate profile-likelihood estimates.
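
    For readers unfamiliar with the term, a coverage test asks whether an interval with nominal probability content contains the true parameter value the advertised fraction of the time. A deliberately simple toy illustration of the idea (not the SUSY reconstruction pipeline itself):

```python
import numpy as np

def interval_coverage(mu_true=1.0, sigma=1.0, n_obs=20, n_trials=10_000):
    """Fraction of simulated experiments in which the 1-sigma interval
    around the sample mean contains the true value (nominal 68%)."""
    rng = np.random.default_rng(42)
    hits = 0
    for _ in range(n_trials):
        data = rng.normal(mu_true, sigma, n_obs)
        error = sigma / np.sqrt(n_obs)        # known-sigma standard error
        if abs(data.mean() - mu_true) <= error:
            hits += 1
    return hits / n_trials

print(interval_coverage())  # close to 0.68 if coverage is exact
```

    In a realistic global fit the intervals come from profile likelihoods or marginal posteriors rather than a sample mean, which is exactly where the coverage and algorithmic difficulties discussed above arise.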

    Constraining the helium abundance with CMB data

    We consider for the first time the ability of present-day cosmic microwave background (CMB) anisotropy data to determine the primordial helium mass fraction, Y_p. We find that CMB data alone give the confidence interval 0.160 < Y_p < 0.501 (at 68% c.l.). We analyse the impact on the baryon abundance as measured by the CMB and discuss the implications for big bang nucleosynthesis. We identify and discuss correlations between the helium mass fraction and both the redshift of reionization and the spectral index. We forecast the precision of future CMB observations, and find that Planck alone will measure Y_p with error bars of 5%. We point out that the uncertainty in the determination of the helium fraction will have to be taken into account in order to correctly estimate the baryon density from Planck-quality CMB data.
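
    Forecasts such as the quoted 5% Planck error bar are commonly obtained from a Fisher-matrix calculation. A one-parameter sketch follows; `cl_model` stands in for a Boltzmann-code call, and the toy spectrum, noise level, and sky fraction are placeholders rather than the paper's actual setup:

```python
import numpy as np

def fisher_sigma(cl_model, theta, noise_cl, f_sky=0.8, step=1e-3):
    """One-parameter Fisher forecast: sigma(theta) = F**-0.5, with
    F = sum_l f_sky * (2l + 1)/2 * (dC_l/dtheta)**2 / (C_l + N_l)**2."""
    ells = np.arange(2, 2001)
    cl = cl_model(theta, ells)
    # Central finite difference for the derivative of the spectrum.
    dcl = (cl_model(theta + step, ells) - cl_model(theta - step, ells)) / (2 * step)
    fisher = np.sum(f_sky * (2 * ells + 1) / 2.0 * dcl**2 / (cl + noise_cl)**2)
    return 1.0 / np.sqrt(fisher)

# Toy stand-in for a Boltzmann code: an amplitude-like parameter theta.
def toy_cl(theta, ells):
    return theta * 1e3 / (ells * (ells + 1.0))

print(fisher_sigma(toy_cl, theta=1.0, noise_cl=1e-4))
```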

    Why anthropic reasoning cannot predict Lambda

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined, and that in the absence of a fundamental motivation for selecting one weighting scheme over another, the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameter. Comment: 4 pages, 1 figure. Discussion slightly expanded, refs added, conclusions unchanged. Matches published version.

    The Virtues of Frugality - Why cosmological observers should release their data slowly

    Cosmologists will soon be in a unique position. Observational noise will gradually be replaced by cosmic variance as the dominant source of uncertainty in an increasing number of observations. We reflect on the ramifications for the discovery and verification of new models. If there are features in the full data set that call for a new model, there will be no subsequent observations to test that model's predictions. We give specific examples of the problem by discussing the pitfalls of model discovery by prior adjustment in the context of dark energy models and inflationary theories. We show how the gradual release of data can mitigate this difficulty, allowing anomalies to be identified and new models to be proposed and tested. We advocate that observers plan for the frugal release of data from future cosmic-variance-limited observations. Comment: 5 pages, expanded discussion of Lambda and of blind analysis, added refs. Matches version to appear in MNRAS Letters.