
    Controlled and automatic processes in Pavlovian-instrumental transfer

    The current research aimed to extend knowledge of the psychological processes that underpin human outcome-selective Pavlovian-instrumental transfer (PIT) effects. PIT reflects the capacity of a Pavlovian stimulus to selectively potentiate an instrumental response that predicts a common rewarding outcome. PIT effects are often suggested to reflect a relatively automatic S-O-R mechanism, in which the stimulus activates the sensory properties of the outcome, which then automatically triggers associated instrumental responses. The current research tested this S-O-R account of PIT against a propositional expected utility theory, which suggests that PIT effects reflect verbalizable inferences about the probability and value of each outcome. Chapter 1 reviews the relevant literature. Chapters 2-4 then report 11 experiments that set the S-O-R and propositional theories against one another. In Chapter 2, two experiments demonstrated that PIT is sensitive to a reversal instruction (Experiment 2) but robust to time pressure (Experiment 1) and concurrent load (Experiment 2) manipulations. Chapter 3 details the development of a novel outcome devaluation procedure and reports four experiments that examined the effects of both outcome devaluation and verbal instructions on PIT. These experiments demonstrated that a typical PIT procedure produces PIT effects that are insensitive to a very strong devaluation manipulation. Furthermore, PIT effects were observed for a devalued outcome even when an S-O-R mechanism was unlikely to control behaviour. Chapter 4 reports five experiments showing that PIT is highly sensitive to outcome devaluation when multiple outcomes and responses are cued on every transfer test trial. Chapter 5 therefore concludes that, on balance, the results provide converging support for the propositional expected utility theory of PIT.
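
    The propositional account summarised above lends itself to a small worked illustration. The following is a minimal sketch (my own, with purely illustrative probabilities and values rather than anything taken from the thesis) of how an expected-utility inference over outcome probability and value would produce both the basic transfer effect and sensitivity to outcome devaluation.

```python
# Minimal sketch of the propositional expected-utility account of outcome-selective
# PIT. All probabilities and values below are illustrative assumptions, not data
# from the reported experiments.

def expected_utility(p_outcome, value):
    """Expected utility of a response: judged outcome probability x outcome value."""
    return p_outcome * value

# Two instrumental responses (R1, R2), each earning a distinct outcome (O1, O2).
response_outcomes = {"R1": "O1", "R2": "O2"}

# The Pavlovian stimulus S1 signals that O1 is the more probable outcome.
p_given_s1 = {"O1": 0.8, "O2": 0.2}
values = {"O1": 1.0, "O2": 1.0}  # both outcomes currently valued

def preferred_response():
    eu = {r: expected_utility(p_given_s1[o], values[o])
          for r, o in response_outcomes.items()}
    return max(eu, key=eu.get)

print(preferred_response())  # R1: the basic outcome-selective transfer effect

values["O1"] = 0.0           # devalue O1
print(preferred_response())  # R2: the inference now favours the still-valued outcome
```

    Setting values["O1"] to zero here simply stands in for the devaluation manipulations described in Chapters 3 and 4.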

    Pre-testing effects are target-specific and are not driven by a generalised state of curiosity

    Guessing an answer to an unfamiliar question prior to seeing the answer leads to better memory than studying alone (the pre-testing effect), which some theories attribute to increased curiosity. A similar effect occurs in general knowledge learning: people are more likely to recall information that they were initially curious to learn. Gruber and Ranganath [(2019). How curiosity enhances hippocampus-dependent memory: The prediction, appraisal, curiosity, and exploration (PACE) framework. Trends in Cognitive Sciences, 23(12), 1014–1025] argued that unanswered questions can cause a state of curiosity during which encoding is enhanced for the missing answer, but also for incidental information presented at the time. If pre-testing similarly induces curiosity, then it too should produce better memory for incidental information. We tested this idea in three experiments that varied the order, nature and timing of the incidental material presented within a pre-testing context. All three experiments demonstrated a reliable pre-testing effect for the targets, but no benefit for the incidental material presented before the target. This pattern suggests that the pre-testing effect is highly specific and is not consistent with a generalised state of curiosity.

    ply125

    Experiment 1 in Seabrooke, T., Wills, A. J., Hogarth, L., & Mitchell, C. J. (2019). Automaticity and cognitive control: Effects of cognitive load on cue-controlled reward choice. Quarterly Journal of Experimental Psychology, 72, 1507–1521. https://doi.org/10.1177/174702181879705

    ply126

    Experiment 2 in Seabrooke, T., Wills, A. J., Hogarth, L., & Mitchell, C. J. (2019). Automaticity and cognitive control: Effects of cognitive load on cue-controlled reward choice. Quarterly Journal of Experimental Psychology, 72, 1507–1521. https://doi.org/10.1177/174702181879705

    Dataset in support of the Southampton doctoral thesis 'Systematic review and investigation of Judgment of Learning (JoL) reactivity'

    This dataset includes the following:
    - The .csv file of the participant's anonymous data following experiment completion
    - The .csv file of the word stimuli used in the experiment
    - The four HTML programmes used in the experiment (requires a programme to run HTML)
    - The R script used to analyse the data (requires RStudio to run the script)
    - The participant information sheet and consent form

    Pretesting boosts item but not source memory. [Exp012, Exp022].

    Experimental programs, data and R analysis scripts for "Pretesting boosts item but not source memory". Accepted at Memory (September 2021)

    Effects of Inductive Learning and Gamification on News Veracity Discernment

    This pre-registered study tests a novel psychological intervention to improve news veracity discernment. The main intervention involved inductive learning (IL) training (i.e., practice discriminating between multiple true and fake news exemplars with feedback), with or without gamification. Participants (N = 282 Prolific users) were randomly assigned to a gamified IL intervention, a non-gamified version of the same IL intervention, a no-treatment control group, or the Bad News intervention, a notable web-based game designed to tackle online misinformation. Following the intervention (if applicable), all participants rated the veracity of a novel set of news headlines. We hypothesized that the gamified intervention would be the most effective at improving news veracity discernment, followed by its non-gamified equivalent, then Bad News, and finally the control group. The results were analyzed with receiver operating characteristic curve analyses, which had not previously been applied to news veracity discernment. These analyses revealed no significant differences between conditions, and the Bayes factor indicated very strong evidence for the null. This finding raises questions about the effectiveness of current psychological interventions and contradicts prior research that has supported the efficacy of Bad News. Age, gender, and political leaning all predicted news veracity discernment.

    Mean rating difference scores are poor measures of discernment: the role of response criteria

    Many interventions aim to protect people from misinformation. Here, we review common measures used to assess their efficacy. Some measures only assess the target behavior (e.g., the ability to spot misinformation) and therefore cannot determine whether interventions have overly general effects (e.g., erroneously identifying accurate information as misinformation). Better measures assess discernment, the ability to discriminate target from non-target content. Assessing discernment can reveal whether interventions are overly general, but discernment is often measured by comparing the mean ratings given to target and non-target content. We show how this measure is confounded by the configuration of response criteria, which can lead researchers to incorrectly conclude that an intervention improves discernment. We recommend using measures from signal detection theory, such as the area under the receiver operating characteristic curve, to assess discernment.
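
    To make the recommended measure concrete, here is a minimal sketch (my own illustration, using hypothetical ratings rather than any data from the paper) that computes both a mean rating difference score and a non-parametric estimate of the area under the ROC curve, i.e. the probability that a randomly chosen target item is rated higher than a randomly chosen non-target item.

```python
import numpy as np

# Hypothetical 1-6 veracity ratings from a single participant.
true_ratings = np.array([5, 6, 4, 6, 5, 3])  # ratings given to true (target) headlines
fake_ratings = np.array([2, 4, 3, 1, 4, 2])  # ratings given to fake (non-target) headlines

# Mean rating difference score: depends on where the ratings sit on the bounded
# scale, and therefore on the participant's response criteria.
mean_diff = true_ratings.mean() - fake_ratings.mean()

# Non-parametric AUC: the probability that a random true headline is rated higher
# than a random fake one, with ties counting as 0.5. Being rank-based, it reflects
# the ordering of the ratings rather than their absolute position on the scale.
pair_diffs = true_ratings[:, None] - fake_ratings[None, :]
auc = (pair_diffs > 0).mean() + 0.5 * (pair_diffs == 0).mean()

print(f"mean difference = {mean_diff:.2f}, AUC = {auc:.2f}")
```

    This rank-based calculation is one common way to estimate the AUC; alternatives fit signal detection models to the rating distributions.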
