
    Why Bayesian “evidence for H1” in one condition and Bayesian “evidence for H0” in another condition does not mean good-enough Bayesian evidence for a difference between the conditions

    Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other, deeming this sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions, and it demonstrates that the principle that any assertion about the existence of an interaction necessitates a direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need for a well-powered design.
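    The pitfall is easy to reproduce numerically. Below is a minimal, illustrative Python sketch (not the authors' R script or Shiny app) using the BIC approximation to the Bayes factor, BF10 ≈ exp((BIC_H0 − BIC_H1)/2); all effect sizes and sample sizes are made up. It shows how "evidence for H1" in one condition and "evidence for H0" in the other can coexist with a much less decisive Bayes factor for the interaction itself.

```python
# Toy 2x2 simulation: per-condition Bayes factors vs. the interaction test.
# Uses the BIC approximation BF10 ~ exp((BIC_H0 - BIC_H1) / 2); illustrative
# only, not the authors' analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                                   # per cell (made up)
a0, a1 = rng.normal(0.0, 1, n), rng.normal(0.6, 1, n)    # condition A: clear effect
b0, b1 = rng.normal(0.0, 1, n), rng.normal(0.1, 1, n)    # condition B: ~no effect

def bic(y, X):
    """BIC of an ordinary least-squares fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))

def bf10(x, y):
    """Approximate BF10 for a two-group mean difference via BIC."""
    d = np.concatenate([x, y])
    grp = np.r_[np.zeros(len(x)), np.ones(len(y))]
    X0 = np.ones((len(d), 1))                     # H0: one common mean
    X1 = np.column_stack([X0, grp])               # H1: group effect
    return np.exp((bic(d, X0) - bic(d, X1)) / 2)

print("BF10 in condition A:", bf10(a0, a1))       # tends to favour H1
print("BF10 in condition B:", bf10(b0, b1))       # tends to favour H0

# The question actually asked requires the interaction: full 2x2 model
# with and without the product term.
y = np.concatenate([a0, a1, b0, b1])
iv = np.r_[np.zeros(n), np.ones(n), np.zeros(n), np.ones(n)]
cond = np.r_[np.zeros(2 * n), np.ones(2 * n)]
X_add = np.column_stack([np.ones_like(y), iv, cond])
X_int = np.column_stack([X_add, iv * cond])
print("BF10 for the interaction:", np.exp((bic(y, X_add) - bic(y, X_int)) / 2))
# The interaction BF is typically far less decisive than the two BFs above,
# which is exactly the inferential gap the Tutorial warns about.
```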

    Bayesian Analysis of Inflation II: Model Selection and Constraints on Reheating

    We discuss the model selection problem for inflationary cosmology. We couple ModeCode, a publicly available numerical solver for the primordial perturbation spectra, to the nested sampler MultiNest in order to efficiently compute Bayesian evidence. Particular attention is paid to the specification of physically realistic priors, including the parametrization of the post-inflationary expansion and the associated thermalization scale. It is confirmed that while present-day data tightly constrain the properties of the power spectrum, they cannot usefully distinguish between the members of a large class of simple inflationary models. We also compute evidence using a simulated Planck likelihood, showing that while Planck will have more power than WMAP to discriminate between inflationary models, it will not definitively address the inflationary model selection problem on its own. However, Planck will place very tight constraints on any model with more than one observationally distinct inflationary regime -- e.g., the large- and small-field limits of the hilltop inflation model -- and put useful limits on different reheating scenarios for a given model.
    Comment: ModeCode package available from http://zuserver2.star.ucl.ac.uk/~hiranya/ModeCode/ModeCode (requires CosmoMC and MultiNest); to be published in PRD. Typos fixed.
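    For readers unfamiliar with how nested sampling turns likelihood evaluations into an evidence estimate, here is a self-contained toy in Python. It is a deliberately naive stand-in for MultiNest (one parameter, uniform prior on [0, 1], short constrained random walks to replace the worst live point); every number in it is illustrative.

```python
# Minimal nested sampling (Skilling-style) for the Bayesian evidence
# Z = integral of L(theta) * pi(theta) d(theta).  Toy only -- not MultiNest.
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):
    # Narrow Gaussian likelihood centred at 0.5; it integrates to ~1 on
    # [0, 1], so ln Z should come out near 0 for this toy.
    return -0.5 * ((theta - 0.5) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2 * np.pi))

nlive, niter = 100, 1000
live = rng.uniform(0, 1, nlive)               # live points drawn from the prior
logL = loglike(live)
logZ, logX = -np.inf, 0.0                     # running evidence, log prior volume

for i in range(niter):
    worst = int(np.argmin(logL))
    logX_new = -(i + 1) / nlive               # mean log-volume shrinkage per step
    logw = np.log(np.exp(logX) - np.exp(logX_new))   # width of the shell
    logZ = np.logaddexp(logZ, logw + logL[worst])
    Lmin, logX = logL[worst], logX_new
    # Replace the worst point: constrained random walk from another live point.
    src = worst
    while src == worst:
        src = int(rng.integers(nlive))
    t = live[src]
    step = np.exp(logX)                       # shrink steps with the volume
    for _ in range(20):
        prop = t + step * rng.normal()
        if 0.0 <= prop <= 1.0 and loglike(prop) > Lmin:
            t = prop
    live[worst], logL[worst] = t, loglike(t)

# Add the contribution of the remaining live points.
logZ = np.logaddexp(
    logZ,
    logX - np.log(nlive) + np.max(logL) + np.log(np.sum(np.exp(logL - np.max(logL)))),
)
print("ln Z =", logZ, "(expected: close to 0 for this toy)")
```

    Real samplers such as MultiNest replace the crude random walk with clustered ellipsoidal draws, which is what makes the evidence computation efficient for multimodal inflationary posteriors.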

    Constructing smooth potentials of mean force, radial distribution functions and probability densities from sampled data

    In this paper a method of obtaining smooth analytical estimates of probability densities, radial distribution functions, and potentials of mean force from sampled data in a statistically controlled fashion is presented. The approach is general and can be applied to any density of a single random variable. The method outlined here avoids the use of histograms, which require the specification of a physical parameter (the bin size) and tend to give noisy results. The technique is an extension of the Berg-Harris method [B. A. Berg and R. C. Harris, Comput. Phys. Commun. 179, 443 (2008)], which is typically inaccurate for radial distribution functions and potentials of mean force due to a non-uniform Jacobian factor. In addition, the standard method often requires a large number of Fourier modes to represent radial distribution functions, which tends to lead to oscillatory fits. It is shown that the issue of poor sampling due to the Jacobian factor can be resolved using a biased resampling scheme, while the requirement of a large number of Fourier modes is mitigated through an automated piecewise construction approach. The method is demonstrated by analyzing the radial distribution functions in an energy-discretized water model. In addition, the fitting procedure is illustrated on three more applications for which the original Berg-Harris method is not suitable: a random variable with a discontinuous probability density, a density with long tails, and the distribution of the first arrival times of a diffusing particle to a sphere, which has both long tails and short-time structure. In all cases, the resampled, piecewise analytical fit outperforms the histogram and the original Berg-Harris method.
    Comment: 14 pages, 15 figures. To appear in J. Chem. Phys.
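    To see concretely what the "non-uniform Jacobian factor" does to a radial density, consider the following Python toy. It is not the authors' resampling scheme (and, for brevity, it uses the histograms the paper improves upon): for an ideal gas the radial distribution is flat, yet sampled distances pile up as r², and reweighting each sample by 1/r² undoes that distortion.

```python
# Toy illustration of the radial Jacobian issue: for an ideal gas g(r) is
# flat, but sampled distances are distributed as r^2, so a naive density
# estimate of r is dominated by the Jacobian.  Reweighting by 1/r^2
# recovers the flat g(r).  Sketch only, not the Berg-Harris extension.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(200_000, 3))   # uniform points in a unit box
r = np.linalg.norm(pts, axis=1)
r = r[r < 0.5]                                    # keep the inscribed sphere

edges = np.linspace(0.05, 0.5, 30)
raw, _ = np.histogram(r, bins=edges, density=True)            # ~ r^2 shape
flat, _ = np.histogram(r, bins=edges, weights=1.0 / r**2,     # Jacobian removed
                       density=True)

print("naive p(r), last/first bin:", raw[-1] / raw[0])    # ~ (r_max/r_min)^2
print("reweighted, last/first bin:", flat[-1] / flat[0])  # ~ 1 (flat)
```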

    Determining the Neutrino Mass Hierarchy with Cosmology

    The combination of current large-scale-structure and cosmic microwave background (CMB) anisotropy data can place strong constraints on the sum of the neutrino masses. Here we show that future cosmic shear experiments, in combination with CMB constraints, can provide the statistical accuracy required to answer questions about differences in the masses of individual neutrino species. Allowing for the possibility that the masses are non-degenerate, we combine Fisher matrix forecasts for a weak lensing survey like Euclid with those for the forthcoming Planck experiment. Under the assumption that the neutrino mass splitting is described by a normal hierarchy, we find that the combination of Planck and Euclid may reach enough sensitivity to put a constraint on the mass of a single species. Using a Bayesian evidence calculation, we find that such future experiments could provide strong evidence for either a normal or an inverted neutrino hierarchy. Finally, we show that if a particular neutrino hierarchy is assumed, this could bias cosmological parameter constraints; for example, the dark energy equation-of-state parameter could be biased by more than 1σ, and the sum of the masses by 2.3σ.
    Comment: 9 pages, 6 figures, 3 tables.
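    The mechanics of "combining Fisher matrix forecasts" are simple enough to show in a few lines: for independent experiments the Fisher matrices add, and the marginalised 1σ error on parameter i is sqrt((F⁻¹)ᵢᵢ). The 2×2 matrices below are invented for illustration; they are not the Planck/Euclid forecasts from the paper.

```python
# Combining Fisher forecasts: F_joint = F_A + F_B for independent data,
# and sigma_i = sqrt((F^-1)_{ii}) after marginalisation.  Numbers invented.
import numpy as np

# Hypothetical 2-parameter block: (sum of neutrino masses [eV], w).
F_cmb = np.array([[ 400.0, -150.0],
                  [-150.0,  300.0]])
F_lensing = np.array([[2000.0,  500.0],
                      [ 500.0,  900.0]])

def marg_errors(F):
    """Marginalised 1-sigma errors from a Fisher matrix."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

for name, F in [("CMB alone", F_cmb),
                ("lensing alone", F_lensing),
                ("combined", F_cmb + F_lensing)]:
    print(name, "-> sigma(m_nu), sigma(w) =", marg_errors(F))
# Complementary degeneracy directions make the joint errors much tighter
# than either experiment alone -- the effect the forecast relies on.
```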

    Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy

    The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of deciding between the data being consistent with instrument noise alone, or with instrument noise and a gravitational wave signal. The analysis of data from ground-based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criterion are used for model selection. Future space-based detectors, such as the Laser Interferometer Space Antenna (LISA), are expected to produce rich data streams containing the signals of many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source, poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources are the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz Bayesian Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches.
    Comment: 11 pages, 6 figures.
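    Of the methods listed, the Savage-Dickey density ratio is the easiest to demonstrate in a few lines: for nested models, BF01 equals the posterior density divided by the prior density, both evaluated at the nested parameter value (here "fdot = 0", i.e. no measurable frequency evolution). The conjugate-normal toy below is entirely made up and is not a LISA analysis.

```python
# Savage-Dickey density ratio for a nested model comparison:
# BF_01 = p(fdot = 0 | data) / p(fdot = 0), valid when H0 is nested in H1.
# Conjugate-normal toy with invented numbers; not a LISA analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

true_fdot, sigma, n_obs = 0.5, 1.0, 200           # hypothetical measurements
data = rng.normal(true_fdot, sigma, n_obs)

tau = 2.0                                         # prior: fdot ~ N(0, tau^2)
post_var = 1.0 / (1.0 / tau**2 + n_obs / sigma**2)
post_mean = post_var * data.sum() / sigma**2      # conjugate-normal posterior

bf01 = stats.norm.pdf(0.0, post_mean, np.sqrt(post_var)) / stats.norm.pdf(0.0, 0.0, tau)
print("BF_01 =", bf01)    # << 1 here: the data favour frequency evolution
```

    In an MCMC setting the numerator would instead be estimated from the sampled posterior (e.g. with a kernel density estimate at fdot = 0) rather than analytically, which is how the ratio is typically used in practice.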

    Gravitational oscillations in multidimensional anisotropic model with cosmological constant and their contributions into the energy of vacuum

    We study classical oscillations of the background metric in a multidimensional anisotropic Kasner model during the de Sitter stage. We obtain the dependence of the fluctuations on the dimension of the space-time undergoing infinite expansion. Stability of the model is achieved when the number of space-like dimensions is four or more. We calculate the contributions to the "vacuum energy" density provided by proper oscillations of the background metric and compare them with the contribution from the cosmological creation of particles due to the expansion. It turns out that the contribution of gravitational oscillations of the metric to the "vacuum energy" density should play a significant role in the de Sitter stage.

    Thermodynamic Properties of Generalized Exclusion Statistics

    We analytically calculate some thermodynamic quantities of an ideal g-on gas obeying generalized exclusion statistics. We show that the specific heat of a g-on gas (g ≠ 0) vanishes linearly in any dimension as T → 0 when the particle number is conserved, and exhibits an interesting dual symmetry that relates the particle statistics at g to the hole statistics at 1/g at low temperatures. We derive the complete solution for the cluster coefficients b_l(g) as a function of Haldane's statistical interaction g in D dimensions. We also find that the cluster coefficients b_l(g) and the virial coefficients a_l(g) are exactly mirror symmetric (l odd) or antisymmetric (l even) about g = 1/2. In two dimensions, we completely determine the closed forms of the cluster and virial coefficients of the generalized exclusion statistics, which exactly agree with the virial coefficients of an anyon gas with linear energies. We show that the g-on gas with zero chemical potential exhibits thermodynamic properties similar to photon statistics. We discuss some physical implications of our results.
    Comment: 24 pages, RevTeX. Corrected typos.
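    For context (standard background on exclusion statistics, not a result stated in the abstract above): Wu's occupation-number distribution gives a quick way to see a g-on interpolating between bosons and fermions. The sketch below assumes the usual form n(ε) = 1/(w + g) with w^g (1 + w)^(1−g) = e^((ε−μ)/T), which reduces to Bose-Einstein at g = 0 and Fermi-Dirac at g = 1.

```python
# Occupation number for Haldane/Wu fractional exclusion statistics
# (standard result, Y.-S. Wu, Phys. Rev. Lett. 73, 922 (1994)):
#   n(eps) = 1 / (w + g),  with  w^g (1 + w)^(1 - g) = exp((eps - mu) / T).
import numpy as np
from scipy.optimize import brentq

def occupation(eps, mu=0.0, T=1.0, g=0.5):
    zeta = np.exp((eps - mu) / T)
    if g == 0.0:
        return 1.0 / (zeta - 1.0)          # Bose-Einstein limit
    if g == 1.0:
        return 1.0 / (zeta + 1.0)          # Fermi-Dirac limit
    f = lambda w: g * np.log(w) + (1 - g) * np.log1p(w) - np.log(zeta)
    w = brentq(f, 1e-12, 1e12)             # solve Wu's transcendental equation
    return 1.0 / (w + g)

for g in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"g = {g:4.2f}: n(eps=1) = {occupation(1.0, g=g):.4f}")
# The occupation interpolates smoothly between the Bose and Fermi values;
# the g <-> 1/g particle-hole duality highlighted in the abstract lives in
# the thermodynamics built on this distribution.
```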

    Prediction and explanation in the multiverse

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the "ideal" reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses which learn different information from the outcome of that experiment. The probabilities of the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated with the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.
    Comment: Minor clarifications added.

    Finding Evidence for Massive Neutrinos using 3D Weak Lensing

    In this paper we investigate the potential of 3D cosmic shear to constrain massive neutrino parameters. We find that if the total mass is substantial (near the upper limits from large-scale structure, but setting aside the Lyman-alpha limit for now), then 3D cosmic shear + Planck is very sensitive to neutrino mass, and one may expect that a next-generation photometric redshift survey could constrain the number of neutrinos N_nu and the sum of their masses m_nu to accuracies of dN_nu ~ 0.08 and dm_nu ~ 0.03 eV, respectively. If in fact the masses are close to zero, then the errors weaken to dN_nu ~ 0.10 and dm_nu ~ 0.07 eV. In either case there is a factor of 4 improvement over Planck alone. We use a Bayesian evidence method to predict the joint expected evidence for N_nu and m_nu. We find that 3D cosmic shear combined with a Planck prior could provide 'substantial' evidence for massive neutrinos and be able to distinguish 'decisively' between many competing massive neutrino models. This technique should 'decisively' distinguish between models in which there are no massive neutrinos and models in which there are massive neutrinos with |N_nu - 3| > 0.35 and m_nu > 0.25 eV. We introduce the notion of marginalised and conditional evidence when considering evidence for individual parameter values within a multi-parameter model.
    Comment: 9 pages, 2 figures, 2 tables, submitted to Physical Review.
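    The quoted 'substantial' and 'decisive' labels are descriptors on a Jeffreys-type scale for the log-evidence ratio. The banding below follows a convention common in the cosmology literature (|ln B| thresholds of 1, 2.5 and 5); the paper's exact cut-offs may differ, so treat this as a qualitative guide only.

```python
# Approximate ln-unit banding of Jeffreys' evidence scale, as commonly used
# in cosmological model selection.  Thresholds are conventional, not the
# paper's.
def jeffreys_label(ln_bayes_factor: float) -> str:
    b = abs(ln_bayes_factor)
    if b < 1.0:
        return "inconclusive (barely worth mentioning)"
    if b < 2.5:
        return "substantial"
    if b < 5.0:
        return "strong"
    return "decisive"

if __name__ == "__main__":
    for lnB in (0.3, 1.8, 3.2, 6.0):
        print(f"|ln B| = {lnB}: {jeffreys_label(lnB)}")
```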