
    Why Bayesian “evidence for H1” in one condition and Bayesian “evidence for H0” in another condition does not mean good-enough Bayesian evidence for a difference between the conditions

    Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
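    The inferential point of this abstract can be illustrated numerically. The sketch below uses hypothetical summary statistics and frequentist z-tests rather than the paper's Bayes factors, but the logic is the same: a clear effect in condition A plus a null result in condition B need not yield a significant test of the A-vs-B interaction, because the interaction estimate carries a larger standard error.

```python
# Hypothetical summary statistics: significant effect in A, null in B,
# yet the direct interaction test (difference of effects) is not significant.
import math
from scipy.stats import norm

def two_sided_p(effect, se):
    """Two-sided p-value for a z-test of effect / se."""
    return 2 * norm.sf(abs(effect) / se)

d_A, se_A = 0.60, 0.22   # mean difference and standard error in condition A
d_B, se_B = 0.15, 0.22   # mean difference and standard error in condition B

p_A = two_sided_p(d_A, se_A)   # below .05: "evidence" in condition A
p_B = two_sided_p(d_B, se_B)   # above .05: no evidence in condition B

# The interaction is the difference of the two effects, and its standard
# error combines both condition-level errors, so it is larger:
se_int = math.sqrt(se_A**2 + se_B**2)
p_int = two_sided_p(d_A - d_B, se_int)   # above .05: interaction not shown
```

    With these illustrative numbers the per-condition pattern looks compelling, yet the direct comparison does not reach significance, which is exactly the mistake the Tutorial warns against.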

    Cosmic microwave background constraints on cosmological models with large-scale isotropy breaking

    Several anomalies appear to be present in the large-angle cosmic microwave background (CMB) anisotropy maps of WMAP, including the alignment of large-scale multipoles. Models in which isotropy is spontaneously broken (e.g., by a scalar field) have been proposed as explanations for these anomalies, as have models in which a preferred direction is imposed during inflation. We examine models inspired by these, in which isotropy is broken by a multiplicative factor with dipole and/or quadrupole terms. We evaluate the evidence provided by the multipole alignment using a Bayesian framework, finding that the evidence in favor of the model is generally weak. We also compute approximate changes in estimated cosmological parameters in the broken-isotropy models. Only the overall normalization of the power spectrum is modified significantly. Comment: Accepted for publication in Phys. Rev.

    Determining the Neutrino Mass Hierarchy with Cosmology

    The combination of current large scale structure and cosmic microwave background (CMB) anisotropies data can place strong constraints on the sum of the neutrino masses. Here we show that future cosmic shear experiments, in combination with CMB constraints, can provide the statistical accuracy required to answer questions about differences in the mass of individual neutrino species. Allowing for the possibility that masses are non-degenerate we combine Fisher matrix forecasts for a weak lensing survey like Euclid with those for the forthcoming Planck experiment. Under the assumption that neutrino mass splitting is described by a normal hierarchy we find that the combination of Planck and Euclid will possibly reach enough sensitivity to put a constraint on the mass of a single species. Using a Bayesian evidence calculation we find that such future experiments could provide strong evidence for either a normal or an inverted neutrino hierarchy. Finally we show that if a particular neutrino hierarchy is assumed then this could bias cosmological parameter constraints, for example the dark energy equation of state parameter, by > 1σ, and the sum of masses by 2.3σ. Comment: 9 pages, 6 figures, 3 tables
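    The Fisher-matrix combination described here follows a standard recipe: for independent experiments, Fisher information matrices add, and the marginalized 1σ forecast error on parameter i is the square root of the (i, i) element of the inverse. The sketch below uses toy 2×2 matrices, not the paper's actual Planck/Euclid forecasts.

```python
import numpy as np

# Toy 2x2 Fisher matrices over the same two parameters (illustrative
# numbers only, not the paper's Planck/Euclid forecasts).
F_cmb  = np.array([[40.0,  8.0], [ 8.0,  5.0]])
F_lens = np.array([[10.0, -6.0], [-6.0, 20.0]])

def marg_error(F, i):
    """Marginalized 1-sigma forecast error on parameter i: sqrt((F^-1)_ii)."""
    return np.sqrt(np.linalg.inv(F)[i, i])

# Independent experiments: Fisher information simply adds.
F_joint = F_cmb + F_lens
sigma_cmb   = marg_error(F_cmb, 0)
sigma_lens  = marg_error(F_lens, 0)
sigma_joint = marg_error(F_joint, 0)
```

    The joint error on each parameter is smaller than either experiment alone, which is how "Planck + Euclid" forecasts gain the sensitivity the abstract describes.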

    Might EPR particles communicate through a wormhole?

    We consider the two-particle wave function of an Einstein-Podolsky-Rosen system, given by a two dimensional relativistic scalar field model. The Bohm-de Broglie interpretation is applied and the quantum potential is viewed as modifying the Minkowski geometry. In this way an effective metric, which is analogous to a black hole metric in some limited region, is obtained in one case and a particular metric with singularities appears in the other case, opening the possibility, following Holland, of interpreting the EPR correlations as originating from an effective wormhole geometry, through which the physical signals can propagate. Comment: Corrected version, to appear in EP

    How to construct spin chains with perfect state transfer

    It is shown how to systematically construct the XX quantum spin chains with nearest-neighbor interactions that allow perfect state transfer (PST). Sets of orthogonal polynomials (OPs) are in correspondence with such systems. The key observation is that for any admissible one-excitation energy spectrum, the weight function of the associated OPs is uniquely prescribed. This entails the complete characterization of these PST models with the mirror symmetry property arising as a corollary. A simple and efficient algorithm to obtain the corresponding Hamiltonians is presented. A new model connected to a special case of the symmetric q-Racah polynomials is offered. It is also explained how additional models with PST can be derived from a parent system by removing energy levels from the one-excitation spectrum of the latter. This is achieved through Christoffel transformations and is also completely constructive in regards to the Hamiltonians. Comment: 7 pages
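    The best-known instance of the chains this paper characterizes is the Krawtchouk chain, whose one-excitation Hamiltonian is tridiagonal with couplings J_n = sqrt(n(N − n))/2; its integer-spaced spectrum gives perfect transfer from site 1 to site N at time t = π. The sketch below verifies this numerically (a standard example, not the paper's new q-Racah model).

```python
import numpy as np
from scipy.linalg import expm

# One-excitation Hamiltonian of the Krawtchouk (XX) chain: tridiagonal
# with couplings J_n = sqrt(n * (N - n)) / 2, the classic PST example.
N = 8  # number of sites
J = np.array([np.sqrt(n * (N - n)) / 2 for n in range(1, N)])
H = np.diag(J, 1) + np.diag(J, -1)

# The spectrum is integer-spaced, so at t = pi an excitation placed on
# site 1 arrives at the mirror site N with unit probability (up to phase):
U = expm(-1j * np.pi * H)
fidelity = abs(U[-1, 0])   # |<N| exp(-i H pi) |1>|, equal to 1 for PST
```

    Checking that this fidelity equals 1 is exactly the mirror-symmetry property that the paper derives as a corollary of its OP construction.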

    Application of Bayesian model averaging to measurements of the primordial power spectrum

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940 < n_s < 1.000, where n_s is specified at a pivot scale 0.015 Mpc^{-1}. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions. Comment: 7 pages with 7 figures included
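    Mechanically, Bayesian model averaging mixes each model's posterior for a shared parameter with weight proportional to that model's evidence (times its prior probability). The sketch below uses made-up Gaussian posteriors and evidences for a parameter like n_s, purely to show the bookkeeping; none of the numbers are the paper's results.

```python
import numpy as np
from scipy.stats import norm

# Illustrative evidences for two models (equal model priors assumed),
# and Gaussian stand-ins for each model's posterior p(n_s | D, M_i).
Z = np.array([1.0, 0.4])          # hypothetical evidences Z_1, Z_2
w = Z / Z.sum()                   # posterior model probabilities
posteriors = [norm(0.96, 0.012), norm(0.99, 0.010)]

# Model-averaged posterior: p(n_s | D) = sum_i w_i * p(n_s | D, M_i)
grid = np.linspace(0.90, 1.05, 2001)
p_avg = sum(wi * pi.pdf(grid) for wi, pi in zip(w, posteriors))

# Mean of the averaged posterior (discrete normalization over the grid):
mean_avg = (grid * p_avg).sum() / p_avg.sum()
```

    Credible intervals computed from the mixture automatically inherit both within-model and between-model uncertainty, which is why the averaged interval can differ from any single model's.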

    Non-equilibrium Thermodynamics and Fluctuations

    In the last ten years, a number of “Conventional Fluctuation Theorems” have been derived for systems with deterministic or stochastic dynamics, in a transient or in a non-equilibrium stationary state. These theorems gave explicit expressions for the ratio of the probability to find the system with a certain value of entropy (or heat) production to that of finding the opposite value. A similar theorem for the fluctuations of the work done on a system has recently been demonstrated experimentally for a simple system in a transient state, consisting of a Brownian particle in water, confined by a moving harmonic potential. In this paper we show that because of the interaction between the stochastic motion of the particle in water and its deterministic motion in the potential, new heat theorems are found that differ greatly from the conventional case. One of the consequences of these new heat Fluctuation Theorems is that the ratio of the probability for the Brownian particle to absorb heat from rather than supply heat to the water is much larger than in the Conventional Fluctuation Theorem. This could be of relevance for micro/nano-technology. Comment: 10 pages, 6 figures. Some corrections in the text were made. Submitted to Physica

    Extended Heat-Fluctuation Theorems for a System with Deterministic and Stochastic Forces

    Heat fluctuations over a time \tau in a non-equilibrium stationary state and in a transient state are studied for a simple system with deterministic and stochastic components: a Brownian particle dragged through a fluid by a harmonic potential which is moved with constant velocity. Using a Langevin equation, we find the exact Fourier transform of the distribution of these fluctuations for all \tau. By a saddle-point method we obtain analytical results for the inverse Fourier transform, which, for not too small \tau, agree very well with numerical results from a sampling method as well as from the fast Fourier transform algorithm. Due to the interaction of the deterministic part of the motion of the particle in the mechanical potential with the stochastic part of the motion caused by the fluid, the conventional heat fluctuation theorem is, for infinite and for finite \tau, replaced by an extended fluctuation theorem that differs noticeably and measurably from it. In particular, for large fluctuations, the ratio of the probability for absorption of heat (by the particle from the fluid) to the probability of supplying heat (by the particle to the fluid) is much larger here than in the conventional fluctuation theorem. Comment: 23 pages, 6 figures. Figures are now in color, Eq. (67) was corrected and a footnote was added on the d-dimensional case
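    The conventional fluctuation theorem that these two papers extend has a simple Gaussian instance that can be checked in a few lines: if the work W (in units of kT) is Gaussian with variance equal to twice its mean, then ln[P(W = w)/P(W = −w)] = w for every w. The sketch below verifies only this baseline identity; the papers' point is that the heat distribution deviates from it at large fluctuations.

```python
import numpy as np

# Conventional (Gaussian) fluctuation theorem check: for W ~ N(mu, 2*mu)
# in units of kT, ln P(w) - ln P(-w) reduces exactly to w.
mu = 1.5          # illustrative mean work
var = 2.0 * mu    # variance required by the conventional theorem

def log_ratio(w):
    # ln P(w) - ln P(-w) for N(mu, var); the quadratic terms cancel,
    # leaving 2*mu*w / var, which equals w when var = 2*mu.
    return (-(w - mu) ** 2 + (-w - mu) ** 2) / (2 * var)

ws = np.linspace(-3.0, 3.0, 7)
holds = np.allclose(log_ratio(ws), ws)   # True: identity holds for all w
```

    For the extended heat theorems of the paper, this linear-in-w behavior survives only for small fluctuations, while the large-fluctuation tails behave differently.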