952 research outputs found
What to do with all these Bayes factors: How to make Bayesian reports in deception research more informative
Bayes factors quantify the evidence in support of the null (absence of an effect) or the alternative hypothesis (presence of an effect). Based on commonly used cut-offs, Bayes factors between 1/3 and 3 are interpreted as evidentially weak, and one typically concludes there is an absence of evidence. In this commentary on Warmelink, Subramanian, Tkacheva, and McLatchie (Legal Criminol Psychol 24, 2019, 258), we discuss how a Bayesian report can be made more informative. Firstly, this implies a departure from the labels provided by commonly used cut-offs when reporting Bayes factors. Instead, we encourage researchers to report the value of the Bayes factors, or to
Replication Bayes factors from evidence updating
We describe a general method that allows experimenters to quantify the evidence from the data of a direct replication attempt given data already acquired from an original study. These so-called replication Bayes factors are a reconceptualization of the ones introduced by Verhagen and Wagenmakers (Journal of Experimental Psychology: General, 143(4), 1457–1475, 2014) for the common t test. This reconceptualization is computationally simpler and generalizes easily to most common experimental designs for which Bayes factors are available
Informed Bayesian t-Tests
Across the empirical sciences, few statistical procedures rival the popularity of the frequentist t-test. In contrast, the Bayesian versions of the t-test have languished in obscurity. In recent years, however, the theoretical and practical advantages of the Bayesian t-test have become increasingly apparent, and various Bayesian t-tests have been proposed, both objective ones (based on general desiderata) and subjective ones (based on expert knowledge). Here, we propose a flexible t-prior for standardized effect size that allows computation of the Bayes factor by evaluating a single numerical integral. This specification contains previous objective and subjective t-test Bayes factors as special cases. Furthermore, we propose two measures for informed prior distributions that quantify the departure from the objective Bayes factor desiderata of predictive matching and information consistency. We illustrate the use of informed prior distributions based on an expert prior elicitation effort. Supplementary materials for this article are available online
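The "single numerical integral" mentioned in the abstract can be sketched concretely for the objective (JZS) special case. A minimal sketch, assuming the inverse-gamma representation of the Cauchy effect-size prior from Rouder et al. (2009); the function name and defaults are ours, not the paper's:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """One-sample JZS Bayes factor BF10 from a single numerical integral.

    t: observed t statistic, n: sample size, r: Cauchy prior scale.
    Uses the g-representation of the Cauchy effect-size prior
    (g ~ InverseGamma(1/2, r^2/2)), as in Rouder et al. (2009).
    """
    nu = n - 1
    # Marginal likelihood under H0 (common factors cancel in the ratio).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    def integrand(g):
        # Density of g ~ InverseGamma(1/2, r^2 / 2).
        prior = r / np.sqrt(2 * np.pi) * g ** (-1.5) * np.exp(-r**2 / (2 * g))
        # Marginal likelihood under H1 given g (same common factors dropped).
        lik = (1 + n * g) ** (-0.5) \
            * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        return lik * prior

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0
```

Values above 1 favor H1. Informed priors of the kind the paper proposes would amount to swapping out the prior density inside the integrand.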
History and nature of the Jeffreys–Lindley paradox
The Jeffreys–Lindley paradox exposes a rift between Bayesian and frequentist hypothesis testing that strikes at the heart of statistical inference. Contrary to what most current literature suggests, the paradox was central to the Bayesian testing methodology developed by Sir Harold Jeffreys in the late 1930s. Jeffreys showed that the evidence for a point-null hypothesis H0 scales with √n and repeatedly argued that it would, therefore, be mistaken to set a threshold for rejecting H0 at a constant multiple of the standard error. Here, we summarize Jeffreys's early work on the paradox and clarify his reasons for including the √n term. The prior distribution is seen to play a crucial role; by implicitly correcting for selection, small parameter values are identified as relatively surprising under H1. We highlight the general nature of the paradox by presenting both a fully frequentist and a fully Bayesian version. We also demonstrate that the paradox does not depend on assigning prior mass to a point hypothesis, as is commonly believed
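The √n behaviour can be made explicit in the simplest setting: a normal mean with known σ, a point null, and a N(0, τ²) prior under H1 (our notation, a standard textbook sketch rather than Jeffreys's original derivation):

```latex
% H0: mu = 0 vs H1: mu ~ N(0, tau^2), with xbar ~ N(mu, sigma^2/n)
\[
\mathrm{BF}_{01}
  = \frac{\phi(\bar{x};\,0,\,\sigma^{2}/n)}
         {\phi(\bar{x};\,0,\,\tau^{2}+\sigma^{2}/n)}
  = \sqrt{1+\frac{n\tau^{2}}{\sigma^{2}}}\,
    \exp\!\left(-\frac{z^{2}}{2}\cdot\frac{n\tau^{2}}{n\tau^{2}+\sigma^{2}}\right),
\qquad z=\frac{\sqrt{n}\,\bar{x}}{\sigma}.
\]
\[
\text{For fixed } z:\quad
\mathrm{BF}_{01}\;\sim\;\frac{\tau\sqrt{n}}{\sigma}\,e^{-z^{2}/2}
\;\longrightarrow\;\infty \quad (n\to\infty).
\]
```

Holding z (and hence the p-value) fixed while n grows, a just-significant result eventually constitutes overwhelming evidence for H0, which is why a rejection threshold at a constant multiple of the standard error cannot be right.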
Generic E-Variables for Exact Sequential k-Sample Tests that allow for Optional Stopping
We develop E-variables for testing whether two or more data streams come from the same source or not, and more generally, whether the difference between the sources is larger than some minimal effect size. These E-variables lead to exact, nonasymptotic tests that remain safe, i.e. keep their type-I error guarantees, under flexible sampling scenarios such as optional stopping and continuation. In special cases our E-variables also have an optimal 'growth' property under the alternative. While the construction is generic, we illustrate it through the special case of k x 2 contingency tables, where we also allow for the incorporation of different restrictions on a composite alternative. Comparison to p-value analysis in simulations and a real-world example show that E-variables, through their flexibility, often allow for early stopping of data collection, thereby retaining similar power as classical methods, while also retaining the option of extending or combining data afterwards
Two-sample tests that are safe under optional stopping
We develop E variables for testing whether two data streams come from the same source or not, and more generally, whether the difference between the sources is larger than some minimal effect size. These E variables lead to tests that remain safe, i.e. keep their Type-I error guarantees, under flexible sampling scenarios such as optional stopping and continuation. In special cases our E variables also have an optimal 'growth' property under the alternative. We illustrate the generic construction through the special case of 2x2 contingency tables, where we also allow for the incorporation of different restrictions on a composite alternative. Comparison to p-value analysis in simulations and a real-world example show that E variables, through their flexibility, often allow for early stopping of data collection, thereby retaining similar power as classical methods
Fluctuation-Facilitated Charge Migration along DNA
We propose a model Hamiltonian for charge transfer along the DNA double helix, with temperature-driven fluctuations in the base pair positions acting as the rate-limiting factor for charge transfer between neighboring base pairs. We compare the predictions of the model with the recent work of J.K. Barton and A.H. Zewail (Proc. Natl. Acad. Sci. USA, 96, 6014 (1999)) on the unusual two-stage charge transfer of DNA. Comment: 4 pages, 2 figures
Psychedelics Promote Structural and Functional Neural Plasticity.
Atrophy of neurons in the prefrontal cortex (PFC) plays a key role in the pathophysiology of depression and related disorders. The ability to promote both structural and functional plasticity in the PFC has been hypothesized to underlie the fast-acting antidepressant properties of the dissociative anesthetic ketamine. Here, we report that, like ketamine, serotonergic psychedelics are capable of robustly increasing neuritogenesis and/or spinogenesis both in vitro and in vivo. These changes in neuronal structure are accompanied by increased synapse number and function, as measured by fluorescence microscopy and electrophysiology. The structural changes induced by psychedelics appear to result from stimulation of the TrkB, mTOR, and 5-HT2A signaling pathways and could possibly explain the clinical effectiveness of these compounds. Our results underscore the therapeutic potential of psychedelics and, importantly, identify several lead scaffolds for medicinal chemistry efforts focused on developing plasticity-promoting compounds as safe, effective, and fast-acting treatments for depression and related disorders
The Bayes factor and its implementation in JASP: A practical primer
Statistical inference plays a critical role in modern scientific research; however, the dominant method for statistical inference in science, null hypothesis significance testing (NHST), is often misunderstood and misused, which leads to unreproducible findings. To address this issue, researchers propose to adopt the Bayes factor as an alternative to NHST. The Bayes factor is a principled Bayesian tool for model selection and hypothesis testing, and can be interpreted as the relative strength of evidence that the current data provide for the null hypothesis H0 versus the alternative hypothesis H1. Compared to NHST, the Bayes factor has the following advantages: it quantifies the evidence that the data provide for both H0 and H1, it is not "violently biased" against H0, it allows one to monitor the evidence as the data accumulate, and it does not depend on sampling plans. Importantly, the recently developed open software JASP makes the calculation of Bayes factors accessible to most researchers in psychology, as we demonstrate for the t-test. Given these advantages, adopting the Bayes factor will improve psychological researchers' statistical inferences. Nevertheless, to make analyses more reproducible, researchers should keep their data analysis transparent and open
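The "monitor the evidence as the data accumulate" property can be illustrated with a conjugate Beta–binomial sketch (our example, not the JASP t-test itself; function name and defaults are ours). Conjugacy gives the marginal likelihood under H1 in closed form, so the Bayes factor can be recomputed after every observation:

```python
import math

def beta_binomial_bf01(k, n, a=1.0, b=1.0, p0=0.5):
    """Bayes factor BF01 for H0: rate = p0 versus H1: rate ~ Beta(a, b),
    after observing k successes in n trials.

    Closed form via conjugacy:
        BF01 = p0^k (1 - p0)^(n - k) * B(a, b) / B(a + k, b + n - k),
    computed on the log scale for numerical stability.
    """
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

    log_m0 = k * math.log(p0) + (n - k) * math.log(1 - p0)  # H0 marginal
    log_m1 = log_beta(a + k, b + n - k) - log_beta(a, b)    # H1 marginal
    return math.exp(log_m0 - log_m1)
```

Because the Bayes factor depends on the data only through (k, n), recomputing it after every trial and stopping when it crosses a threshold does not change the reported value; this is the "does not depend on sampling plans" property the abstract mentions.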
Emission-Line Galaxies from the Hubble Space Telescope Probing Evolution and Reionization Spectroscopically (PEARS) Grism Survey. II: The Complete Sample
We present a full analysis of the Probing Evolution And Reionization Spectroscopically (PEARS) slitless grism spectroscopic data obtained with the Advanced Camera for Surveys on HST. PEARS covers fields within both the Great Observatories Origins Deep Survey (GOODS) North and South fields, making it ideal as a random survey of galaxies while also providing a wide variety of ancillary observations to support the spectroscopic results. Using the PEARS data we are able to identify star-forming galaxies within the redshift volume 0 < z < 1.5. Star-forming regions in the PEARS survey are pinpointed independently of the host galaxy. This method allows us to detect the presence of multiple emission-line regions (ELRs) within a single galaxy. 1162 Hα, [OIII] and/or [OII] emission lines have been identified in the PEARS sample of ~906 galaxies down to a limiting flux of ~1e-18 erg/s/cm^2. The ELRs have also been compared to the properties of the host galaxy, including morphology, luminosity, and mass. From this analysis we find three key results: 1) the computed line luminosities show evidence of a flattening in the luminosity function with increasing redshift; 2) the star-forming systems show evidence of disturbed morphologies, with star formation occurring predominantly within one effective (half-light) radius, although the morphologies show no correlation with host stellar mass; and 3) the number density of star-forming galaxies with M_* > 1e9 M_sun decreases by an order of magnitude at z < 0.5 relative to the number at 0.5 < z < 0.9, in support of the argument for galaxy downsizing. Comment: Submitted. 48 pages. 19 figures. Accepted to Ap
- …