
    A Tutorial on Fisher Information

    In many statistical applications that concern mathematical psychologists, the concept of Fisher information plays an important role. In this tutorial we clarify the concept of Fisher information as it manifests itself across three different statistical paradigms. First, in the frequentist paradigm, Fisher information is used to construct hypothesis tests and confidence intervals using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher information is used to define a default prior; lastly, in the minimum description length paradigm, Fisher information is used to measure model complexity.
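The three roles can be sketched for one concrete model. A minimal illustration, assuming a Bernoulli model with n trials (the variable names are ours, not the tutorial's):

```python
import math

# Expected Fisher information for n Bernoulli(theta) trials:
# I(theta) = n / (theta * (1 - theta)).
def fisher_info_bernoulli(theta, n):
    return n / (theta * (1.0 - theta))

# Frequentist role: a Wald 95% confidence interval around the MLE,
# with standard error 1 / sqrt(I(theta_hat)).
successes, n = 60, 100
theta_hat = successes / n
se = 1.0 / math.sqrt(fisher_info_bernoulli(theta_hat, n))
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)

# Bayesian role: the Jeffreys default prior is proportional to sqrt(I(theta));
# for the Bernoulli model this is the Beta(1/2, 1/2) density up to a constant.
def jeffreys_prior_unnormalised(theta):
    return math.sqrt(fisher_info_bernoulli(theta, 1))
```

The same quantity drives both uses: its inverse square root gives the frequentist standard error, and its square root gives the shape of the default prior.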

    Evidential Calibration of Confidence Intervals

    We present a novel and easy-to-use method for calibrating error-rate based confidence intervals to evidence-based support intervals. Support intervals are obtained from inverting Bayes factors based on a parameter estimate and its standard error. A k support interval can be interpreted as "the observed data are at least k times more likely under the included parameter values than under a specified alternative". Support intervals depend on the specification of prior distributions for the parameter under the alternative, and we present several types that allow different forms of external knowledge to be encoded. We also show how prior specification can to some extent be avoided by considering a class of prior distributions and then computing so-called minimum support intervals which, for a given class of priors, have a one-to-one mapping with confidence intervals. We also illustrate how the sample size of a future study can be determined based on the concept of support. Finally, we show how the bound for the type I error rate of Bayes factors leads to a bound for the coverage of support intervals. An application to data from a clinical trial illustrates how support intervals can lead to inferences that are both intuitive and informative.
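As a rough sketch of the construction, assuming a normal approximation to the estimator and a normal prior under the alternative (the function and variable names are illustrative, not from the paper):

```python
import math

def normal_pdf(x, mean, sd):
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def in_k_support_interval(theta0, est, se, prior_mean, prior_sd, k):
    # Likelihood of the estimate if the true parameter were theta0
    # (normal approximation to the sampling distribution).
    lik_at_theta0 = normal_pdf(est, theta0, se)
    # Marginal likelihood under the alternative with a N(prior_mean, prior_sd^2)
    # prior: the estimate is then N(prior_mean, se^2 + prior_sd^2).
    lik_alternative = normal_pdf(est, prior_mean, math.sqrt(se**2 + prior_sd**2))
    # theta0 belongs to the k support interval if the data are at least
    # k times more likely under theta0 than under the alternative.
    return lik_at_theta0 / lik_alternative >= k
```

Parameter values near the estimate pass the likelihood-ratio test against the alternative and fall inside the interval; values far from it do not.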


    Density of dispersal sources affects to what extent restored habitat is used: A case study on a red-listed wood-dependent beetle

    When restoring habitats, an important question is whether the spatial distribution of habitat affects its contribution to biodiversity conservation. In Sweden, high-cut stumps are routinely created at forestry operations. By counting the number of exit holes of a red-listed beetle, Peltis grossa, we assessed occurrence, colonisations and extinctions per high-cut stump and beetle density per clear-cut. We found a threshold, at which the form of the relationship between density of the beetle and density of high-cut stumps per clear-cut changes abruptly. The beetle density was considerably higher where the density of high-cut stumps exceeded 4.5 per hectare. Such thresholds can be explained by colonisation-extinction processes. Observed colonisation-extinction dynamics were consistent with metapopulation theory. For instance, there was a positive relationship between colonisation rate and a connectivity measure that considered beetle abundance and distance for each high-cut stump in the surrounding area. However, the relationship disappeared when using a connectivity measure solely based on the distance of the high-cut stumps. The observed threshold implies that P. grossa benefits from aggregating the same total number of created high-cut stumps into fewer clear-cuts. This is because the total area with a density of high-cut stumps exceeding the threshold increases, and this expands the number and size of dispersal sources. Therefore, P. grossa and other species that reveal thresholds in their distribution patterns are favoured when conservation measures are more spatially aggregated than what results from current Swedish policy.

    Informed Bayesian t-Tests

    Across the empirical sciences, few statistical procedures rival the popularity of the frequentist t-test. In contrast, the Bayesian versions of the t-test have languished in obscurity. In recent years, however, the theoretical and practical advantages of the Bayesian t-test have become increasingly apparent and various Bayesian t-tests have been proposed, both objective ones (based on general desiderata) and subjective ones (based on expert knowledge). Here, we propose a flexible t-prior for standardized effect size that allows computation of the Bayes factor by evaluating a single numerical integral. This specification contains previous objective and subjective t-test Bayes factors as special cases. Furthermore, we propose two measures for informed prior distributions that quantify the departure from the objective Bayes factor desiderata of predictive matching and information consistency. We illustrate the use of informed prior distributions based on an expert prior elicitation effort. Supplementary materials for this article are available online.
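The single-integral idea can be illustrated with a simplified stand-in: a large-sample normal approximation to the t likelihood and a Cauchy prior on the standardized effect size, integrated on a grid. This is a sketch of the computation, not the paper's exact prior specification:

```python
import math

def normal_pdf(x, mean, sd=1.0):
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def cauchy_pdf(x, scale):
    return 1.0 / (math.pi * scale * (1.0 + (x / scale) ** 2))

def bf10_one_sample(t, n, prior_scale=0.707, grid=4001, width=10.0):
    # Marginal likelihood under H1: average the (approximate, large-n normal)
    # sampling density of t given effect size delta over the Cauchy prior,
    # via trapezoidal integration on [-width, width].
    h = 2.0 * width / (grid - 1)
    total = 0.0
    for i in range(grid):
        delta = -width + i * h
        w = 0.5 if i in (0, grid - 1) else 1.0  # trapezoid endpoint weights
        total += w * normal_pdf(t, delta * math.sqrt(n)) * cauchy_pdf(delta, prior_scale)
    marginal_h1 = total * h
    lik_h0 = normal_pdf(t, 0.0)  # likelihood under H0 (delta = 0)
    return marginal_h1 / lik_h0
```

A t statistic near zero yields a Bayes factor below 1 (evidence for the null), while a large t yields a Bayes factor above 1; a production implementation would use the exact noncentral t density instead of the normal approximation.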

    Replication Bayes factors from evidence updating

    We describe a general method that allows experimenters to quantify the evidence from the data of a direct replication attempt given data already acquired from an original study. These so-called replication Bayes factors are a reconceptualization of the ones introduced by Verhagen and Wagenmakers (Journal of Experimental Psychology: General, 143(4), 1457–1475, 2014) for the common t-test. This reconceptualization is computationally simpler and generalizes easily to most common experimental designs for which Bayes factors are available.
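The evidence-updating idea admits a compact sketch in a conjugate normal setting, assuming normal likelihoods and a flat initial prior for the original study (all names and numbers are illustrative):

```python
import math

def normal_pdf(x, mean, sd):
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def replication_bf10(rep_est, rep_se, orig_est, orig_se):
    # Evidence updating: the posterior from the original study, here
    # approximated as N(orig_est, orig_se^2) under a flat initial prior,
    # becomes the prior for the replication data under H1.
    marginal_h1 = normal_pdf(rep_est, orig_est, math.sqrt(rep_se**2 + orig_se**2))
    lik_h0 = normal_pdf(rep_est, 0.0, rep_se)  # point null: no effect
    return marginal_h1 / lik_h0

# A replication estimate close to the original favours H1;
# an estimate near zero favours H0.
```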

    History and nature of the Jeffreys–Lindley paradox

    The Jeffreys–Lindley paradox exposes a rift between Bayesian and frequentist hypothesis testing that strikes at the heart of statistical inference. Contrary to what most current literature suggests, the paradox was central to the Bayesian testing methodology developed by Sir Harold Jeffreys in the late 1930s. Jeffreys showed that the evidence for a point-null hypothesis H0 scales with √n and repeatedly argued that it would, therefore, be mistaken to set a threshold for rejecting H0 at a constant multiple of the standard error. Here, we summarize Jeffreys’s early work on the paradox and clarify his reasons for including the √n term. The prior distribution is seen to play a crucial role; by implicitly correcting for selection, small parameter values are identified as relatively surprising under H1. We highlight the general nature of the paradox by presenting both a fully frequentist and a fully Bayesian version. We also demonstrate that the paradox does not depend on assigning prior mass to a point hypothesis, as is commonly believed.
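The √n scaling is easy to reproduce numerically. A sketch, assuming a normal model with known unit variance and a N(0, 1) prior on the mean under H1 (the specific numbers are ours, not the paper's):

```python
import math

def normal_pdf(x, mean, sd):
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def bf01_fixed_z(z, n, tau=1.0):
    # Pin the sample mean at the same "just significant" z-score for every n.
    xbar = z / math.sqrt(n)
    sd0 = 1.0 / math.sqrt(n)            # sampling sd of xbar under H0
    sd1 = math.sqrt(tau**2 + 1.0 / n)   # marginal sd under H1 with N(0, tau^2) prior
    return normal_pdf(xbar, 0.0, sd0) / normal_pdf(xbar, 0.0, sd1)

# For a fixed p-value (z = 1.96), the Bayes factor in favour of H0
# grows roughly like sqrt(n) as the sample size increases: the paradox.
```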