
    A Tutorial on Fisher Information

    In many statistical applications that concern mathematical psychologists, the concept of Fisher information plays an important role. In this tutorial we clarify the concept of Fisher information as it manifests itself across three different statistical paradigms. First, in the frequentist paradigm, Fisher information is used to construct hypothesis tests and confidence intervals using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher information is used to define a default prior; lastly, in the minimum description length paradigm, Fisher information is used to measure model complexity.
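
    A minimal sketch, not taken from the tutorial itself: assuming a Bernoulli(θ) model as an illustrative example, its Fisher information has the closed form I(θ) = 1/(θ(1−θ)) and can be checked against a numerical second derivative of the expected log-likelihood.

```python
import numpy as np

def fisher_info_bernoulli(theta: float) -> float:
    """Closed-form Fisher information for a single Bernoulli observation."""
    return 1.0 / (theta * (1.0 - theta))

def fisher_info_numeric(theta: float, h: float = 1e-5) -> float:
    """Numerical check: I(theta) = -E[d^2/dtheta^2 log p(x | theta)]."""
    def expected_loglik(t):
        # Expectation over x in {0, 1}, taken under the true value `theta`.
        return theta * np.log(t) + (1.0 - theta) * np.log(1.0 - t)
    # Central second difference of the expected log-likelihood, evaluated at theta.
    d2 = (expected_loglik(theta + h) - 2.0 * expected_loglik(theta)
          + expected_loglik(theta - h)) / h**2
    return -d2

print(fisher_info_bernoulli(0.3))  # 4.7619...
print(fisher_info_numeric(0.3))    # close to 4.7619
```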

    Evidential Calibration of Confidence Intervals

    We present a novel and easy-to-use method for calibrating error-rate based confidence intervals to evidence-based support intervals. Support intervals are obtained from inverting Bayes factors based on a parameter estimate and its standard error. A k support interval can be interpreted as "the observed data are at least k times more likely under the included parameter values than under a specified alternative". Support intervals depend on the specification of prior distributions for the parameter under the alternative, and we present several types that allow different forms of external knowledge to be encoded. We also show how prior specification can to some extent be avoided by considering a class of prior distributions and then computing so-called minimum support intervals, which, for a given class of priors, have a one-to-one mapping with confidence intervals. We also illustrate how the sample size of a future study can be determined based on the concept of support. Finally, we show how the bound for the type I error rate of Bayes factors leads to a bound for the coverage of support intervals. An application to data from a clinical trial illustrates how support intervals can lead to inferences that are both intuitive and informative.
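
    A hedged sketch of the kind of calculation described above, under illustrative assumptions (a normal approximation to the likelihood and a normal prior under the alternative; the function name support_interval is hypothetical, not from the paper): the k support interval collects all parameter values whose likelihood exceeds the marginal likelihood under the alternative by at least a factor k.

```python
import numpy as np

def support_interval(theta_hat, se, k, mu=0.0, tau=1.0):
    """Return the k support interval (lower, upper), or None if it is empty."""
    # Likelihood of theta_hat given theta: N(theta, se^2).
    # Marginal likelihood of theta_hat under the alternative: N(mu, se^2 + tau^2).
    # Solving  log N(theta_hat; theta, se^2) - log N(theta_hat; mu, se^2 + tau^2) >= log k
    # for theta gives the quadratic condition (theta_hat - theta)^2 <= bound.
    var_alt = se**2 + tau**2
    bound = 2.0 * se**2 * (-np.log(k)
                           + 0.5 * np.log(var_alt / se**2)
                           + (theta_hat - mu)**2 / (2.0 * var_alt))
    if bound < 0:
        return None  # no parameter value is supported by a factor k over the alternative
    half_width = np.sqrt(bound)
    return theta_hat - half_width, theta_hat + half_width

# Example: estimate 0.5 with standard error 0.1, k = 10 -> roughly (0.45, 0.55).
print(support_interval(0.5, 0.1, k=10))
```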

    Chatter--a conversational telephone agent

    Thesis (M.S.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1993. By Eric Thich Vi Ly. Includes bibliographical references (leaves 126-130).

    Informed Bayesian t-Tests

    Across the empirical sciences, few statistical procedures rival the popularity of the frequentist t-test. In contrast, the Bayesian versions of the t-test have languished in obscurity. In recent years, however, the theoretical and practical advantages of the Bayesian t-test have become increasingly apparent and various Bayesian t-tests have been proposed, both objective ones (based on general desiderata) and subjective ones (based on expert knowledge). Here, we propose a flexible t-prior for standardized effect size that allows computation of the Bayes factor by evaluating a single numerical integral. This specification contains previous objective and subjective t-test Bayes factors as special cases. Furthermore, we propose two measures for informed prior distributions that quantify the departure from the objective Bayes factor desiderata of predictive matching and information consistency. We illustrate the use of informed prior distributions based on an expert prior elicitation effort. Supplementary materials for this article are available online.
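
    A sketch of the single-integral computation, under illustrative assumptions (a one-sample design; the prior location, scale, and degrees of freedom below are placeholders rather than values elicited in the paper): the marginal likelihood under H1 integrates a noncentral-t likelihood against a shifted and scaled t prior on the standardized effect size δ.

```python
import numpy as np
from scipy import stats, integrate

def bf10_informed_ttest(t, n, mu_delta=0.35, r=0.10, kappa=3):
    """One-sample t-test Bayes factor with a t(mu_delta, r, kappa) prior on delta."""
    df = n - 1
    def integrand(delta):
        # Likelihood of the observed t statistic given effect size delta
        # (noncentral t with noncentrality delta * sqrt(n)), weighted by the prior.
        return (stats.nct.pdf(t, df, delta * np.sqrt(n))
                * stats.t.pdf(delta, kappa, loc=mu_delta, scale=r))
    # The prior places essentially all of its mass well inside (-4, 4);
    # `points` forces the quadrature to resolve the narrow prior peak.
    marginal_h1, _ = integrate.quad(integrand, -4, 4, points=[mu_delta])
    marginal_h0 = stats.t.pdf(t, df)  # delta = 0 under the null hypothesis
    return marginal_h1 / marginal_h0

# Example: t = 2.5 observed in a one-sample design with n = 30.
print(bf10_informed_ttest(2.5, 30))
```

    With location zero and one degree of freedom, the prior reduces to a Cauchy distribution, the form typically used in default (objective) t-test Bayes factors.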

    Replication Bayes factors from evidence updating

    We describe a general method that allows experimenters to quantify the evidence from the data of a direct replication attempt given data already acquired from an original study. These so-called replication Bayes factors are a reconceptualization of the ones introduced by Verhagen and Wagenmakers (Journal of Experimental Psychology: General, 143(4), 1457–1475, 2014) for the common t-test. This reconceptualization is computationally simpler and generalizes easily to most common experimental designs for which Bayes factors are available.
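
    A minimal sketch of the evidence-updating idea, using an illustrative binomial example (the data and the Beta(1, 1) prior are assumptions, not taken from the paper): the replication Bayes factor equals the Bayes factor for the combined data divided by the Bayes factor for the original data alone.

```python
import numpy as np
from scipy import stats

def bf10_binomial(successes, trials, theta0=0.5, a=1, b=1):
    """BF10 for H1: theta ~ Beta(a, b) against H0: theta = theta0 (closed form)."""
    # Marginal likelihood under H1 is beta-binomial; under H0 it is a plain binomial.
    log_m1 = stats.betabinom.logpmf(successes, trials, a, b)
    log_m0 = stats.binom.logpmf(successes, trials, theta0)
    return float(np.exp(log_m1 - log_m0))

# Original study: 28 successes in 40 trials; replication attempt: 24 in 40 trials.
bf_original = bf10_binomial(28, 40)
bf_combined = bf10_binomial(28 + 24, 40 + 40)
bf_replication = bf_combined / bf_original  # evidence contributed by the replication alone
print(bf_original, bf_combined, bf_replication)
```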

    No evidence for a putative involvement of platelet-activating factor in systemic lupus erythematosus without active nephritis.

    BACKGROUND: Platelet-activating factor (PAF) seems to be implicated in systemic lupus erythematosus (SLE) patients with associated renal diseases. AIMS: In this study, we assessed the role of PAF in SLE patients without renal complications. METHODS: Blood PAF and acetylhydrolase activity, plasma soluble phospholipase A(2), and the presence of antibodies against PAF were investigated in 17 SLE patients without active nephritis and in 17 healthy controls. RESULTS: Blood PAF levels were not different (p=0.45) between SLE patients (6.7±2.8 pg/ml) and healthy subjects (9.6±3.1 pg/ml). Plasma acetylhydrolase activity (the PAF-degrading enzyme) was significantly (p=0.03) elevated in SLE patients (57.8±6.4 nmol/min/ml) as compared with controls (37.9±2.6 nmol/min/ml). Plasma soluble phospholipase A(2) (the key enzyme for PAF formation) was not different (p=0.6) between SLE patients (59.1±5.1 U/ml) and controls (54.7±2.4 U/ml). Antibodies against PAF were detected in only 3/17 SLE patients. Flow cytometry analysis did not detect PAF receptors on circulating leukocytes of SLE patients. CONCLUSION: This clinical study provides no evidence for the putative important role of PAF in SLE patients without active nephritis.

    History and nature of the Jeffreys–Lindley paradox

    The Jeffreys–Lindley paradox exposes a rift between Bayesian and frequentist hypothesis testing that strikes at the heart of statistical inference. Contrary to what most current literature suggests, the paradox was central to the Bayesian testing methodology developed by Sir Harold Jeffreys in the late 1930s. Jeffreys showed that the evidence for a point-null hypothesis H0 scales with √n and repeatedly argued that it would, therefore, be mistaken to set a threshold for rejecting H0 at a constant multiple of the standard error. Here, we summarize Jeffreys’s early work on the paradox and clarify his reasons for including the √n term. The prior distribution is seen to play a crucial role; by implicitly correcting for selection, small parameter values are identified as relatively surprising under H1. We highlight the general nature of the paradox by presenting both a fully frequentist and a fully Bayesian version. We also demonstrate that the paradox does not depend on assigning prior mass to a point hypothesis, as is commonly believed.
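
    A small numerical sketch of the √n behaviour described above (assuming a normal model with known unit variance and a standard-normal prior on the mean under H1, an illustrative choice rather than Jeffreys's own setup): holding the test statistic fixed at z = 1.96, so the p-value stays near 0.05, the Bayes factor in favour of the point null keeps growing as n increases.

```python
import numpy as np

def bf01_fixed_z(n, z=1.96):
    """BF01 for H0: mu = 0 vs H1: mu ~ N(0, 1), with sample mean z / sqrt(n)."""
    x_bar = z / np.sqrt(n)
    var0 = 1.0 / n        # sampling variance of the mean under H0
    var1 = 1.0 / n + 1.0  # marginal variance of the mean under H1
    log_bf01 = 0.5 * np.log(var1 / var0) - 0.5 * x_bar**2 * (1.0 / var0 - 1.0 / var1)
    return np.exp(log_bf01)

# The p-value is ~0.05 in every row, yet BF01 grows roughly like sqrt(n).
for n in [10, 100, 1000, 10000]:
    print(n, round(float(bf01_fixed_z(n)), 2))
```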