21 research outputs found

    Assessing Theoretical Conclusions With Blinded Inference to Investigate a Potential Inference Crisis

    Get PDF
    Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts' judgments on the very same data, hinting at a possible inference crisis.

    The influences of valence and arousal on judgments of learning and on recall

    Get PDF
    Much is known about how the emotional content of words affects memory for those words, but only recently have researchers begun to investigate whether emotional content influences metamemory, that is, learners' assessments of what is or is not memorable. The present study replicated recent work demonstrating that judgments of learning (JOLs) do indeed reflect the superior memorability of words with emotional content. We further contrasted two hypotheses regarding this effect: a physiological account in which emotional words are judged to be more memorable because of their arousing properties, versus a cognitive account in which emotional words are judged to be more memorable because of their cognitive distinctiveness. Two results supported the latter account. First, both normed arousal (Exp. 1) and normed valence (Exp. 2) independently influenced JOLs, even though only an effect of arousal would be expected under a physiological account. Second, emotional content no longer influenced JOLs in a design (Exp. 3) that reduced the primary distinctiveness of emotional words by using a single list of words in which normed valence and arousal were varied continuously. These results suggest that the metamnemonic benefit of emotional words likely stems from cognitive factors.

    Familiar Strategies Feel Fluent: The Role of Study Strategy Familiarity in the Misinterpreted-Effort Model of Self-Regulated Learning

    No full text
    Why do learners not choose ideal study strategies when learning? Past research suggests that learners frequently misinterpret the effort associated with efficient strategies as a sign of poor learning. Expanding on these findings, we explored how study habits can be integrated into this model. We conducted two experiments in which learners experienced two contrasting strategies, blocked and interleaved schedules, to learn to discriminate between images of bird families. After experiencing each strategy, learners rated it for perceived effort, learning, and familiarity. Next, learners were asked to choose which strategy they would use in the future. Mediation analyses revealed, in both experiments, that the more mentally effortful interleaving felt, the less learners felt they had learned, and the less likely they were to choose it for future learning. Further, in this study, strategy familiarity predicted strategy choice, an effect also mediated by learners' perceived learning. Additionally, Study 2 confirmed that, in contrast to learners' judgments, the less familiar interleaved schedule produced better learning. Consequently, learners make ineffective judgments of their learning based on perceptions of effort and familiarity and therefore fail to adopt optimal study strategies in self-regulated learning decisions.
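    The mediation pattern described above (perceived effort predicting perceived learning, which in turn predicts strategy choice) can be illustrated with a simple product-of-coefficients sketch. The Python code below is a minimal illustration on synthetic data; the variable names, effect sizes, and model are hypothetical stand-ins, not the authors' actual analysis, which is not reproduced in this listing.

    # Hypothetical mediation sketch: effort -> perceived learning -> strategy choice.
    # Synthetic data only; not the authors' analysis.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 200
    effort = rng.normal(0, 1, n)                    # X: perceived effort of interleaving
    learning = -0.5 * effort + rng.normal(0, 1, n)  # M: perceived learning
    choice = 0.6 * learning + rng.normal(0, 1, n)   # Y: inclination to choose interleaving
    df = pd.DataFrame({"effort": effort, "learning": learning, "choice": choice})

    a = smf.ols("learning ~ effort", data=df).fit().params["effort"]    # path a
    full = smf.ols("choice ~ learning + effort", data=df).fit()
    b, c_prime = full.params["learning"], full.params["effort"]         # path b, direct effect
    c = smf.ols("choice ~ effort", data=df).fit().params["effort"]      # total effect

    print(f"indirect effect (a*b): {a * b:.3f}")
    print(f"direct effect (c'):    {c_prime:.3f}")
    print(f"total effect (c):      {c:.3f}")

    In a real analysis the indirect effect would be tested with bootstrapped confidence intervals rather than judged from point estimates alone.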

    S1 Data

    No full text
    (CSV)

    Linear mixed-effects regression results for model of correctly retyped items (N = 65, obs = 4854).

    No full text
    Linear mixed-effects regression results for model of correctly retyped items (N = 65, obs = 4854).
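    The table title above names the model but not its predictors, which are not included in this listing. As a rough sketch of what a linear mixed-effects regression with this structure might look like (about 65 participants each contributing many trials, with a random intercept per participant), the following Python code fits such a model on synthetic data; the predictors 'condition' and 'trial' are hypothetical.

    # Hypothetical linear mixed-effects sketch: random intercept per participant.
    # Synthetic data only; the original model's predictors are unknown here.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj, n_per = 65, 75                       # roughly N = 65, obs ~ 4854
    subject = np.repeat(np.arange(n_subj), n_per)
    condition = rng.integers(0, 2, n_subj * n_per)
    trial = np.tile(np.arange(n_per), n_subj)
    subj_intercept = rng.normal(0, 0.4, n_subj)  # between-participant variability
    correct = (0.7 + 0.10 * condition - 0.001 * trial
               + subj_intercept[subject] + rng.normal(0, 0.3, len(subject)))
    df = pd.DataFrame({"subject": subject, "condition": condition,
                       "trial": trial, "correct": correct})

    # Fixed effects for condition and trial; random intercept grouped by subject.
    model = smf.mixedlm("correct ~ condition + trial", data=df, groups=df["subject"])
    print(model.fit().summary())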

    Mean grammaticality ratings (model-estimated) as a function of item type and speaker type.

    No full text
    Error bars represent the standard error of the model-estimated marginal mean. Note that because the means are model-estimated, values in the Unprimed condition are nearly, but not quite, identical across the two analyses (despite the underlying data being the same).