8 research outputs found

    REMI and ROUSE: Quantitative Models for Long-Term and Short-Term Priming in Perceptual Identification

    The REM model originally developed for recognition memory (Shiffrin & Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. We discuss two REM models based on Bayesian principles: a model for long-term priming (REMI; Schooler, Shiffrin, & Raaijmakers, 1999) and a model for short-term priming (ROUSE; Huber, Shiffrin, Lyle, & Ruys, in press). Although the identification task is the same, the basis for priming differs in the two models. In both paradigms we ask whether prior study merely induces a bias to interpret ambiguous information in a certain manner or instead leads to more efficient encoding. The observation of a ‘both-primed benefit’ in two-alternative forced-choice paradigms appears to show that both processes are present. However, the REMI model illustrates that the both-primed benefit is not necessarily indicative of an increase in perceptual sensitivity but might instead be generated by a criterion bias. The ROUSE model demonstrates how the amount of attention paid to the prime, and its consequent effect on decision making, can produce the reversal of the normal short-term priming effect observed in certain conditions.
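    To see why the both-primed condition is diagnostic, the following sketch simulates the four priming conditions of a two-alternative forced-choice identification task under a simple additive choice-bias account. This is an illustrative toy model, not REMI itself: the Gaussian evidence assumption and the values of d_prime, bias, and n_trials are made up for the example.

    ```python
    # Toy 2AFC simulation of a simple choice-bias account of priming.
    # Not the REMI model: evidence is Gaussian rather than REM-style Bayesian
    # feature matching, and all parameter values are arbitrary assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    d_prime, bias, n_trials = 1.0, 0.5, 200_000

    def accuracy(target_primed: int, foil_primed: int) -> float:
        """Proportion correct when primed alternatives get a fixed evidence boost."""
        target = rng.normal(d_prime, 1.0, n_trials) + bias * target_primed
        foil = rng.normal(0.0, 1.0, n_trials) + bias * foil_primed
        return float(np.mean(target > foil))

    for cond, (tp, fp) in {"neither": (0, 0), "target primed": (1, 0),
                           "foil primed": (0, 1), "both primed": (1, 1)}.items():
        print(f"{cond:>14}: {accuracy(tp, fp):.3f}")
    # A purely additive bias helps when only the target is primed, hurts when only
    # the foil is primed, and cancels when both are primed, leaving accuracy at
    # baseline -- so an observed both-primed benefit has to come from elsewhere,
    # which REMI attributes to criterion placement rather than greater sensitivity.
    ```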

    Weekly reports for R.V. Polarstern expedition PS103 (2016-12-16 - 2017-02-03, Cape Town - Punta Arenas), German and English version

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of two preregistered replication attempts of an experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally and were then tested with a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.
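    For readers unfamiliar with this scale of evidence, a default (JZS) Bayes factor can be computed from a t statistic and the group sizes, following Rouder et al. (2009). The sketch below is a generic illustration, not the preregistered analysis the authors ran; the t value and sample split are hypothetical.

    ```python
    # Default (JZS) Bayes factor for a two-sample t-test (Rouder et al., 2009).
    # Illustrative sketch only: the replication above used its own preregistered
    # analysis, and the t value plugged in below is made up.
    import numpy as np
    from scipy.integrate import quad

    def jzs_bf01(t: float, n1: int, n2: int, r: float = 0.707) -> float:
        """Bayes factor in favour of the null (BF01) for a two-sample t statistic."""
        nu = n1 + n2 - 2                 # degrees of freedom
        n_eff = n1 * n2 / (n1 + n2)      # effective sample size

        def integrand(g):
            a = 1.0 + n_eff * r**2 * g
            return (a ** -0.5
                    * (1.0 + t**2 / (a * nu)) ** (-(nu + 1) / 2)
                    * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1.0 / (2 * g)))

        bf10 = quad(integrand, 0, np.inf)[0] / (1.0 + t**2 / nu) ** (-(nu + 1) / 2)
        return 1.0 / bf10

    # Hypothetical example: a small group difference in a sample of 908.
    print(round(jzs_bf01(t=0.9, n1=454, n2=454), 2))
    # BF01 > 1 means the data favour the null; values around 10 are conventionally
    # read as strong evidence against an effect of the manipulation.
    ```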

    Data from a pre-publication independent replication initiative examining ten moral judgement effects

    We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory's research pipeline of unpublished findings. The 10 effects were investigated using online and laboratory surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes data not only from the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.

    The pipeline project: Pre-publication independent replications of a single laboratory's research pipeline

    This crowdsourced project introduces a collaborative approach to improving the reproducibility of scientific research, in which findings are replicated in qualified independent laboratories before (rather than after) they are published. Our goal is to establish a non-adversarial replication process with highly informative final results. To illustrate the Pre-Publication Independent Replication (PPIR) approach, 25 research groups conducted replications of all ten moral judgment effects that the last author and his collaborators had “in the pipeline” as of August 2014. Six findings replicated according to all replication criteria, one finding replicated but with a significantly smaller effect size than the original, one finding replicated consistently in the original culture but not outside of it, and two findings failed to replicate. In total, 40% of the original findings failed at least one major replication criterion. Potential ways to implement and incentivize pre-publication independent replication on a large scale are discussed.
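    One common way to operationalize a "significantly smaller effect size" criterion is a z-test on the difference between two independent standardized mean differences. The sketch below uses hypothetical numbers and should not be read as the PPIR project's own criterion, which is defined in the paper itself.

    ```python
    # z-test on the difference between two independent Cohen's d values.
    # Sketch with hypothetical numbers; illustrates the general logic only.
    import math
    from scipy.stats import norm

    def d_variance(d: float, n1: int, n2: int) -> float:
        """Approximate sampling variance of Cohen's d for two independent groups."""
        return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

    def compare_effects(d_orig, n_orig, d_rep, n_rep):
        """Test whether the replication effect is smaller than the original."""
        var = d_variance(d_orig, *n_orig) + d_variance(d_rep, *n_rep)
        z = (d_orig - d_rep) / math.sqrt(var)
        return z, norm.sf(z)  # one-sided p: original larger than replication

    z, p = compare_effects(d_orig=0.60, n_orig=(50, 50), d_rep=0.25, n_rep=(200, 200))
    print(f"z = {z:.2f}, one-sided p = {p:.3f}")
    ```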

    Bayesian estimation of multinomial processing tree models with heterogeneity in participants and items

    Multinomial processing tree (MPT) models are theoretically motivated stochastic models for the analysis of categorical data. Here we focus on a crossed-random-effects extension of the Bayesian latent-trait pair-clustering MPT model. Our approach assumes that participant and item effects combine additively on the probit scale and postulates (multivariate) normal distributions for the random effects. We provide a WinBUGS implementation of the crossed-random-effects pair-clustering model and an application to novel experimental data. The present approach may be adapted to handle other MPT models.
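    A minimal generative sketch of that structure is given below. It assumes independent (diagonal-covariance) random effects and arbitrary hyperparameter values, and it reproduces only the additive probit-scale combination and the standard pair-clustering category probabilities; it is not the paper's WinBUGS implementation.

    ```python
    # Generative sketch of a crossed-random-effects pair-clustering MPT:
    # participant and item effects add on the probit scale and are mapped through
    # the normal CDF to per-trial parameters c (cluster storage), r (cluster
    # retrieval) and u (single-word recall). Hyperparameters are assumptions.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n_subj, n_item = 30, 20
    mu = np.array([0.3, 0.0, -0.2])                   # probit-scale means of (c, r, u)
    subj_sd, item_sd = np.array([0.8, 0.6, 0.5]), np.array([0.4, 0.3, 0.3])

    # Normal random effects; independent components used here for simplicity.
    subj_fx = rng.normal(0.0, subj_sd, size=(n_subj, 3))
    item_fx = rng.normal(0.0, item_sd, size=(n_item, 3))

    # Additive combination on the probit scale, then probit link to (0, 1).
    theta = norm.cdf(mu + subj_fx[:, None, :] + item_fx[None, :, :])
    c, r, u = theta[..., 0], theta[..., 1], theta[..., 2]

    # Pair-clustering category probabilities for each participant-item pair.
    p_cat = np.stack([
        c * r,                               # both words recalled adjacently
        (1 - c) * u**2,                      # both recalled, not adjacently
        2 * (1 - c) * u * (1 - u),           # exactly one word recalled
        c * (1 - r) + (1 - c) * (1 - u)**2,  # neither word recalled
    ], axis=-1)
    assert np.allclose(p_cat.sum(axis=-1), 1.0)
    print(p_cat.mean(axis=(0, 1)).round(3))  # average category probabilities
    ```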

    Data from: Estimating the reproducibility of psychological science

    This record contains the underlying research data for the publication "Estimating the reproducibility of psychological science"; the full text is available from https://ink.library.smu.edu.sg/lkcsb_research/5257. Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
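    As an illustration of the confidence-interval criterion mentioned in the abstract, the sketch below checks whether an original correlation falls inside the replication's 95% confidence interval, computed via the Fisher z transform. The effect sizes and sample size are made up; the project's released analysis scripts remain the authoritative source.

    ```python
    # Is the original effect size inside the replication's 95% CI?
    # Shown for Pearson correlations via the Fisher z transform; numbers are
    # hypothetical and do not come from the reproducibility dataset.
    import math
    from scipy.stats import norm

    def r_confidence_interval(r: float, n: int, level: float = 0.95):
        """Confidence interval for a correlation via the Fisher z transform."""
        z = math.atanh(r)                   # Fisher z
        se = 1.0 / math.sqrt(n - 3)
        crit = norm.ppf(0.5 + level / 2)
        return math.tanh(z - crit * se), math.tanh(z + crit * se)

    r_original, r_replication, n_replication = 0.45, 0.20, 120  # hypothetical
    lo, hi = r_confidence_interval(r_replication, n_replication)
    print(f"replication 95% CI: [{lo:.2f}, {hi:.2f}]; "
          f"original inside: {lo <= r_original <= hi}")
    ```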