A Tutorial on Fisher Information
In many statistical applications that concern mathematical psychologists, the
concept of Fisher information plays an important role. In this tutorial we
clarify the concept of Fisher information as it manifests itself across three
different statistical paradigms. First, in the frequentist paradigm, Fisher
information is used to construct hypothesis tests and confidence intervals
using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher
information is used to define a default prior; lastly, in the minimum
description length paradigm, Fisher information is used to measure model
complexity.
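The quantity at the heart of the tutorial can be illustrated numerically. The sketch below (my own illustration, not taken from the tutorial) recovers the Fisher information of a Bernoulli parameter as minus the expected second derivative of the log-likelihood, approximated by finite differences; the analytic answer is 1/(theta * (1 - theta)):

```python
import math

def fisher_information_bernoulli(theta, eps=1e-5):
    """Fisher information I(theta) for one Bernoulli observation, computed as
    minus the expectation of the second derivative of the log-likelihood."""
    def loglik(p, x):
        return x * math.log(p) + (1 - x) * math.log(1 - p)

    def d2(p, x):
        # central finite difference for the second derivative in p
        return (loglik(p + eps, x) - 2 * loglik(p, x) + loglik(p - eps, x)) / eps**2

    # expectation over X ~ Bernoulli(theta)
    return -(theta * d2(theta, 1) + (1 - theta) * d2(theta, 0))

# analytic value is 1 / (theta * (1 - theta)) = 1 / 0.21
print(round(fisher_information_bernoulli(0.3), 2))  # prints 4.76
```

In the frequentist use the tutorial describes, this quantity sets the width of a Wald interval, roughly estimate ± 1.96 / sqrt(n * I(theta_hat)).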
Evidential Calibration of Confidence Intervals
We present a novel and easy-to-use method for calibrating error-rate based
confidence intervals to evidence-based support intervals. Support intervals are
obtained from inverting Bayes factors based on a parameter estimate and its
standard error. A support interval can be interpreted as "the observed data
are at least k times more likely under the included parameter values than
under a specified alternative". Support intervals depend on the specification
of prior distributions for the parameter under the alternative, and we present
several types that allow different forms of external knowledge to be encoded.
We also show how prior specification can to some extent be avoided by
considering a class of prior distributions and then computing so-called minimum
support intervals which, for a given class of priors, have a one-to-one mapping
with confidence intervals. We also illustrate how the sample size of a future
study can be determined based on the concept of support. Finally, we show how
the bound for the type I error rate of Bayes factors leads to a bound for the
coverage of support intervals. An application to data from a clinical trial
illustrates how support intervals can lead to inferences that are both
intuitive and informative.
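As a rough illustration of the basic construction (a minimal sketch assuming a normal likelihood for the estimate and a normal prior under the alternative, with made-up numbers; the paper covers several prior choices): the k-support interval collects the parameter values under which the observed estimate is at least k times more likely than under the prior-averaged alternative.

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def support_interval(estimate, se, prior_mean, prior_sd, k=1.0, grid=2000):
    """k-support interval for H0: theta = theta0, assuming estimate ~ N(theta0, se^2)
    and a N(prior_mean, prior_sd^2) prior on theta under the alternative."""
    # marginal likelihood of the estimate under the alternative
    marginal = normal_pdf(estimate, prior_mean, se**2 + prior_sd**2)
    lo, hi = estimate - 6 * se, estimate + 6 * se
    candidates = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    supported = [t for t in candidates
                 if normal_pdf(estimate, t, se**2) / marginal >= k]
    return (min(supported), max(supported)) if supported else None

# illustrative numbers: estimate 0.5 with standard error 0.2, standard-normal prior
low, high = support_interval(0.5, 0.2, 0.0, 1.0, k=1.0)
print(round(low, 2), round(high, 2))  # prints 0.13 0.87
```

Raising k shrinks the interval, since stronger support is demanded of every included parameter value.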
Density of dispersal sources affects to what extent restored habitat is used: A case study on a red-listed wood-dependent beetle
When restoring habitats, an important question is whether the spatial distribution of habitat affects its contribution to biodiversity conservation. In Sweden, high-cut stumps are routinely created during forestry operations. By counting the number of exit holes of a red-listed beetle, Peltis grossa, we assessed occurrence, colonisations and extinctions per high-cut stump and beetle density per clear-cut. We found a threshold at which the form of the relationship between beetle density and the density of high-cut stumps per clear-cut changes abruptly. Beetle density was considerably higher where the density of high-cut stumps exceeded 4.5 per hectare. Such thresholds can be explained by colonisation-extinction processes, and the observed colonisation-extinction dynamics were consistent with metapopulation theory. For instance, there was a positive relationship between colonisation rate and a connectivity measure that considered beetle abundance and distance for each high-cut stump in the surrounding area. However, the relationship disappeared when using a connectivity measure based solely on the distances of the high-cut stumps. The observed threshold implies that P. grossa benefits from aggregating the same total number of created high-cut stumps into fewer clear-cuts: the total area with a density of high-cut stumps above the threshold increases, which expands the number and size of dispersal sources. Therefore, P. grossa and other species that show thresholds in their distribution patterns are favoured when conservation measures are more spatially aggregated than what results from current Swedish policy.
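The abundance-and-distance connectivity measure referred to above is typically of the Hanski incidence-function form; the following is a generic sketch of that idea (the paper's exact specification may differ, and alpha is an assumed dispersal parameter):

```python
import math

def connectivity(i, stumps, alpha=1.0):
    """Connectivity of high-cut stump i: beetle abundance at every other stump,
    discounted by an exponential decay in distance (Hanski-style measure).
    stumps: list of (x, y, abundance); alpha: inverse of mean dispersal distance."""
    xi, yi, _ = stumps[i]
    total = 0.0
    for j, (xj, yj, abundance) in enumerate(stumps):
        if j != i:
            distance = math.hypot(xi - xj, yi - yj)
            total += abundance * math.exp(-alpha * distance)
    return total

# three stumps on a line, distances in metres, alpha = 0.01 per metre
stumps = [(0.0, 0.0, 0), (100.0, 0.0, 12), (250.0, 0.0, 3)]
print(connectivity(0, stumps, alpha=0.01))
```

Replacing the abundance weight with a constant gives the distance-only variant for which the abstract reports no relationship with colonisation rate.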
Informed Bayesian t-Tests
Across the empirical sciences, few statistical procedures rival the popularity of the frequentist t-test. In contrast, the Bayesian versions of the t-test have languished in obscurity. In recent years, however, the theoretical and practical advantages of the Bayesian t-test have become increasingly apparent, and various Bayesian t-tests have been proposed, both objective ones (based on general desiderata) and subjective ones (based on expert knowledge). Here, we propose a flexible t-prior for standardized effect size that allows computation of the Bayes factor by evaluating a single numerical integral. This specification contains previous objective and subjective t-test Bayes factors as special cases. Furthermore, we propose two measures for informed prior distributions that quantify the departure from the objective Bayes factor desiderata of predictive matching and information consistency. We illustrate the use of informed prior distributions based on an expert prior elicitation effort. Supplementary materials for this article are available online.
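To make the "single numerical integral" concrete, here is a sketch of one special case the abstract alludes to: a default (JZS-type) one-sample Bayes factor with a Cauchy prior on the standardized effect size. The quadrature scheme and the scale r = 1 are my own illustrative choices, not the paper's.

```python
import math

def jzs_bf10(t, n, r=1.0, steps=20000):
    """Default one-sample t-test Bayes factor BF10 with a Cauchy(0, r) prior on
    the standardized effect size, written as a single integral over the
    auxiliary variance g (whose prior is inverse-gamma(1/2, r^2/2)).
    Midpoint rule on the substitution g = u / (1 - u)."""
    nu = n - 1
    null = (1 + t**2 / nu) ** (-(nu + 1) / 2)   # t-likelihood under H0 (shared constants cancel)
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) / steps
        g = u / (1 - u)
        jacobian = 1 / (1 - u) ** 2             # dg/du
        prior = r / math.sqrt(2 * math.pi) * g ** -1.5 * math.exp(-r**2 / (2 * g))
        like = (1 + n * g) ** -0.5 * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        total += like * prior * jacobian / steps
    return total / null

# t = 0 favours the null (BF10 < 1); a large t favours the alternative
print(jzs_bf10(0.0, 50) < 1, jzs_bf10(4.0, 50) > 10)  # prints True True
```

The informed t-prior of the paper generalizes this setup (location, scale and degrees of freedom of the effect-size prior all become free), but the Bayes factor keeps this one-dimensional integral form.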
Replication Bayes factors from evidence updating
We describe a general method that allows experimenters to quantify the evidence from the data of a direct replication attempt given data already acquired from an original study. These so-called replication Bayes factors are a reconceptualization of the ones introduced by Verhagen and Wagenmakers (Journal of Experimental Psychology: General, 143(4), 1457–1475, 2014) for the common t test. This reconceptualization is computationally simpler and generalizes easily to most common experimental designs for which Bayes factors are available.
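The evidence-updating idea can be sketched in a toy normal model (my own simplification, not the paper's general method): the "proponent" prior for the replication is the posterior from the original study, and the replication Bayes factor compares it against the point null.

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def replication_bf01(orig_est, orig_se, rep_est, rep_se):
    """Replication Bayes factor for H0: theta = 0 via evidence updating,
    assuming normal likelihoods. The H1 prior for the replication is the
    posterior from the original study, here N(orig_est, orig_se^2)
    (i.e. starting from a flat initial prior)."""
    null = normal_pdf(rep_est, 0.0, rep_se**2)
    # marginal likelihood under H1: rep_est ~ N(orig_est, orig_se^2 + rep_se^2)
    alt = normal_pdf(rep_est, orig_est, orig_se**2 + rep_se**2)
    return null / alt

# a replication close to the original effect favours H1 (BF01 << 1);
# a null replication result favours H0 (BF01 >> 1)
print(replication_bf01(0.5, 0.2, 0.5, 0.1) < 1,
      replication_bf01(0.5, 0.2, 0.0, 0.1) > 1)  # prints True True
```

The same ratio can equivalently be computed as the Bayes factor for the combined data divided by the Bayes factor for the original data alone, which is what makes the approach generalize across designs.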
History and nature of the Jeffreys–Lindley paradox
The Jeffreys–Lindley paradox exposes a rift between Bayesian and frequentist hypothesis testing that strikes at the heart of statistical inference. Contrary to what most current literature suggests, the paradox was central to the Bayesian testing methodology developed by Sir Harold Jeffreys in the late 1930s. Jeffreys showed that the evidence for a point-null hypothesis H0 scales with √n and repeatedly argued that it would, therefore, be mistaken to set a threshold for rejecting H0 at a constant multiple of the standard error. Here, we summarize Jeffreys’s early work on the paradox and clarify his reasons for including the √n term. The prior distribution is seen to play a crucial role; by implicitly correcting for selection, small parameter values are identified as relatively surprising under H1. We highlight the general nature of the paradox by presenting both a fully frequentist and a fully Bayesian version. We also demonstrate that the paradox does not depend on assigning prior mass to a point hypothesis, as is commonly believed.
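The √n behaviour is easy to reproduce in a toy normal setting (my own illustration, not taken from the paper): fix the test statistic at the two-sided 5% boundary, z = 1.96, and let n grow; the Bayes factor in favour of the point null then diverges in proportion to √n, which is the paradox.

```python
import math

def bf01_point_null(z, n, tau=1.0):
    """Bayes factor for H0: theta = 0 versus H1: theta ~ N(0, tau^2),
    assuming the sample mean is N(theta, 1/n), so that z = sqrt(n) * mean."""
    shrink = n * tau**2 / (1 + n * tau**2)
    return math.sqrt(1 + n * tau**2) * math.exp(-z**2 * shrink / 2)

# a "just significant" result: the evidence flips towards H0 as n grows
for n in (10, 1000, 100000):
    print(n, round(bf01_point_null(1.96, n), 2))
```

The exponential factor is bounded (it tends to exp(-z²/2)), while the √(1 + n·τ²) factor grows without limit, so a fixed-z result that a frequentist always "rejects at 5%" becomes arbitrarily strong Bayesian evidence for the null.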
NKT Cells Stimulated by Long Fatty Acyl Chain Sulfatides Significantly Reduce the Incidence of Type 1 Diabetes in Nonobese Diabetic Mice
Sulfatide-reactive type II NKT cells have been shown to regulate autoimmunity and anti-tumor immunity. Although two major isoforms of sulfatide, C16:0 and C24:0, are enriched in the pancreas, their relative role in autoimmune diabetes is not known. Here, we report that sulfatide/CD1d-tetramer+ cells accumulate in the draining pancreatic lymph nodes, and that treatment of NOD mice with sulfatide or C24:0 was more efficient than C16:0 in stimulating the NKT cell-mediated transfer of a delay in the onset of T1D into NOD.Scid recipients. Using NOD.CD1d-deficient mice, we show that this delay of T1D is CD1d-dependent. Interestingly, the latter delay or protection from T1D is associated with the enhanced secretion of IL-10 rather than IFN-γ by C24:0-treated CD4+ T cells and the deviation of the islet-reactive diabetogenic T cell response. Both the C16:0 and C24:0 sulfatide isoforms are unable to activate and expand type I iNKT cells. Collectively, these data suggest that C24:0-stimulated type II NKT cells may regulate protection from T1D by activating DCs to secrete IL-10 and suppress the activation and expansion of type I iNKT cells and diabetogenic T cells. Our results raise the possibility that C24:0 may be used therapeutically to delay the onset of and protect from T1D in humans.