180 research outputs found

    bridgesampling: An R Package for Estimating Normalizing Constants

    Statistical procedures such as Bayes factor model selection and Bayesian model averaging require the computation of normalizing constants (e.g., marginal likelihoods). These normalizing constants are notoriously difficult to obtain, as they usually involve high-dimensional integrals that cannot be solved analytically. Here we introduce an R package that uses bridge sampling (Meng & Wong, 1996; Meng & Schilling, 2002) to estimate normalizing constants in a generic and easy-to-use fashion. For models implemented in Stan, the estimation procedure is automatic. We illustrate the functionality of the package with three examples.
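    The package itself is in R; as a language-agnostic illustration of the underlying technique, below is a minimal Python sketch of the iterative bridge sampling estimator of Meng and Wong (1996), applied to a toy one-dimensional target whose normalizing constant is known analytically. The proposal scale (1.2) and sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalized target: q(x) = exp(-x^2/2); true Z = sqrt(2*pi) ~ 2.5066
q = lambda x: np.exp(-0.5 * x**2)

# Normalized proposal density: Normal(0, 1.2^2)
sig = 1.2
g = lambda x: np.exp(-0.5 * (x / sig)**2) / (sig * np.sqrt(2 * np.pi))

n1 = n2 = 50_000
post = rng.standard_normal(n1)      # draws from the (here known) target
prop = rng.normal(0.0, sig, n2)     # draws from the proposal
s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)

# Iterate the fixed-point update implied by the optimal bridge function
z = 1.0                              # initial guess for Z
for _ in range(100):
    num = np.mean(q(prop) / (s1 * q(prop) + s2 * z * g(prop)))
    den = np.mean(g(post) / (s1 * q(post) + s2 * z * g(post)))
    z = num / den

print(z)  # converges to a value close to sqrt(2*pi)
```

    In real applications the target draws come from an MCMC run (e.g., a Stan fit) rather than being sampled directly, and the proposal is matched to the posterior automatically.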

    Informed Bayesian t-Tests

    Across the empirical sciences, few statistical procedures rival the popularity of the frequentist t-test. In contrast, the Bayesian versions of the t-test have languished in obscurity. In recent years, however, the theoretical and practical advantages of the Bayesian t-test have become increasingly apparent and various Bayesian t-tests have been proposed, both objective ones (based on general desiderata) and subjective ones (based on expert knowledge). Here we propose a flexible t-prior for standardized effect size that allows computation of the Bayes factor by evaluating a single numerical integral. This specification contains previous objective and subjective t-test Bayes factors as special cases. Furthermore, we propose two measures for informed prior distributions that quantify the departure from the objective Bayes factor desiderata of predictive matching and information consistency. We illustrate the use of informed prior distributions based on an expert prior elicitation effort.

    Informed Bayesian Inference for the A/B Test

    Booming in business and a staple analysis in medical trials, the A/B test assesses the effect of an intervention or treatment by comparing its success rate with that of a control condition. Across many practical applications, it is desirable that (1) evidence can be obtained in favor of the null hypothesis that the treatment is ineffective; (2) evidence can be monitored as the data accumulate; (3) expert prior knowledge can be taken into account. Most existing approaches do not fulfill these desiderata. Here we describe a Bayesian A/B procedure based on Kass and Vaidyanathan (1992) that allows one to monitor the evidence for the hypotheses that the treatment has either a positive effect, a negative effect, or, crucially, no effect. Furthermore, this approach enables one to incorporate expert knowledge about the relative prior plausibility of the rival hypotheses and about the expected size of the effect, given that it is non-zero. To facilitate the wider adoption of this Bayesian procedure we developed the abtest package in R. We illustrate the package options and the associated statistical results with a fictitious business example and a real-data medical example.
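    The abtest package uses the Kass and Vaidyanathan (1992) logistic parametrization; as a deliberately simplified stand-in, the conjugate beta-binomial sketch below contrasts a common-success-rate null against independent rates, which conveys how evidence *for* the null can be quantified. The Beta(1, 1) default priors are an illustrative assumption.

```python
from math import exp, lgamma

def betaln(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf01_ab(y1, n1, y2, n2, a=1.0, b=1.0):
    """BF01 for two binomial samples (y successes out of n trials):
    H0 assumes one common success rate, H1 assumes independent rates;
    both use Beta(a, b) priors. Binomial coefficients are identical
    under both hypotheses and cancel in the Bayes factor."""
    log_m0 = betaln(a + y1 + y2, b + (n1 - y1) + (n2 - y2)) - betaln(a, b)
    log_m1 = (betaln(a + y1, b + n1 - y1) - betaln(a, b)) + \
             (betaln(a + y2, b + n2 - y2) - betaln(a, b))
    return exp(log_m0 - log_m1)
```

    With equal observed rates, e.g. `bf01_ab(50, 100, 50, 100)`, BF01 exceeds 1, i.e. the data support "no effect"; this is exactly the kind of conclusion a frequentist nonsignificant result cannot license.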

    Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation

    In the traditional statistical framework, nonsignificant results leave researchers in a state of suspended disbelief. In this study, we examined, empirically, the treatment and evidential impact of nonsignificant results. Our specific goals were twofold: to explore how psychologists interpret and communicate nonsignificant results and to assess how much these results constitute evidence in favor of the null hypothesis. First, we examined all nonsignificant findings mentioned in the abstracts of the 2015 volumes of Psychonomic Bulletin & Review, Journal of Experimental Psychology: General, and Psychological Science (N = 137). In 72% of these cases, nonsignificant results were misinterpreted, in that the authors inferred that the effect was absent. Second, a Bayes factor reanalysis revealed that fewer than 5% of the nonsignificant findings provided strong evidence (i.e., BF01 > 10) in favor of the null hypothesis over the alternative hypothesis. We recommend that researchers expand their statistical tool kit in order to correctly interpret nonsignificant results and to be able to evaluate the evidence for and against the null hypothesis.

    Bayesian model-averaged meta-analysis in medicine

    We outline a Bayesian model-averaged (BMA) meta-analysis for standardized mean differences in order to quantify evidence for both treatment effectiveness and across-study heterogeneity. We construct four competing models by orthogonally combining two present-absent assumptions, one for the treatment effect and one for across-study heterogeneity. To inform the choice of prior distributions for the model parameters, we used 50% of the Cochrane Database of Systematic Reviews to specify rival prior distributions for the effect size and heterogeneity parameters. The relative predictive performance of the competing models and rival prior distributions was assessed using the remaining 50% of the Cochrane Database. On average, the model that assumes the presence of a treatment effect as well as across-study heterogeneity outpredicted the other models, but not by a large margin. Within that model, predictive adequacy was relatively constant across the rival prior distributions. We propose specific empirical prior distributions, both for the field in general and for each of 46 specific medical subdisciplines. An example from oral health demonstrates how the proposed prior distributions can be used to conduct a BMA meta-analysis in the open-source software R and JASP. The preregistered analysis plan is available at https://osf.io/zs3df/
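    Once (log) marginal likelihoods are available for the four models, the model-averaging arithmetic itself is elementary. The sketch below uses made-up log marginal likelihoods and uniform prior model probabilities purely for illustration; in practice these quantities come from an estimator such as bridge sampling.

```python
import numpy as np

# Hypothetical log marginal likelihoods for the four meta-analytic models
# (fabricated numbers for illustration only)
models = {
    "H0_fixed":  -45.2,   # no effect, no heterogeneity
    "H0_random": -44.1,   # no effect, heterogeneity present
    "H1_fixed":  -43.8,   # effect present, no heterogeneity
    "H1_random": -42.5,   # effect present, heterogeneity present
}
logm = np.array(list(models.values()))
prior = np.full(4, 0.25)             # uniform prior model probabilities

# Posterior model probabilities (subtract the max for stability)
w = np.exp(logm - logm.max()) * prior
post = w / w.sum()

# Inclusion Bayes factor for the treatment effect: posterior odds of the
# effect-present models over effect-absent models, divided by prior odds
names = list(models)
h1 = np.array([name.startswith("H1") for name in names])
incl_bf = (post[h1].sum() / post[~h1].sum()) / \
          (prior[h1].sum() / prior[~h1].sum())
```

    The same two-line odds-ratio computation gives the inclusion Bayes factor for heterogeneity by grouping the models on the fixed/random split instead.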

    Inclusion Bayes factors for mixed hierarchical diffusion decision models

    Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the nonlinearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work, we show how these challenges can be addressed through a combination of Bayesian hierarchical modeling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study.
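    For readers unfamiliar with the diffusion decision model, a minimal Euler-Maruyama simulator conveys the generative process: noisy evidence accumulates from a starting point until it hits one of two response boundaries, yielding a choice and a response time. Parameter names follow common DDM conventions (v drift rate, a boundary separation, z relative starting point, t0 non-decision time); this is an intuition-building sketch, not the samplers used for actual estimation.

```python
import numpy as np

def simulate_ddm(v, a, z=0.5, t0=0.3, dt=1e-3, sigma=1.0, rng=None):
    """Simulate one diffusion-decision-model trial: evidence starts at
    z*a, drifts at rate v with Gaussian noise of scale sigma, and a
    response occurs when it crosses 0 (lower) or a (upper). Returns
    (boundary, response time), with t0 added as non-decision time."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("upper" if x >= a else "lower"), t0 + t

rng = np.random.default_rng(0)
trials = [simulate_ddm(v=1.5, a=1.0, rng=rng) for _ in range(200)]
p_upper = np.mean([choice == "upper" for choice, _ in trials])
# Positive drift pushes most responses to the upper boundary
```

    Hierarchical applications place group-level distributions over (v, a, z, t0) per participant and condition, which is where the model-averaging machinery from the abstract comes in.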