3 research outputs found

    Recall Me Maybe: The Effects of Music-Evoked Mood on Recognition Memory

    The current study aims to further explore the relationship between musically evoked emotional states and recognition capabilities. Previous research has demonstrated emotional congruency between musical stimuli and subsequent task performance (Mitterschiffthaler et al., 2007). The emotional valence of background music offers insight into how music guides the perception of events and how music-evoked emotions can shape memory (Scherer & Zentner, 2001; Hanser et al., 2015). For instance, happy people more readily remember positive experiences than negative, sadly valenced ones, while sad people better remember negative experiences than positive, happily valenced ones (Mayer et al., 1995). The current study consisted of 46 participants recruited from Belmont Introductory Psychology courses. We hypothesized that participants induced into a positive emotional state would rate images more positively, while those in negative states would rate them more negatively. Additionally, we hypothesized that recognition accuracy for positively valenced images would be higher in the positive-mood group than in the negative-mood group, and vice versa. This study furthers our understanding of the interplay between factors such as emotional states and cognitive functioning. Results and discussion are forthcoming.
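
    As a rough illustration of the mood-congruency comparison this abstract describes, the sketch below computes per-participant recognition accuracy by image valence and contrasts the two mood groups. The column names, simulated data, and choice of t-test are assumptions for illustration only, not the study's materials, analyses, or results.

    # Hypothetical sketch of the recognition-accuracy comparison described above.
    # Column names (group, image_valence, correct) and the simulated data are
    # illustrative assumptions, not the study's actual materials.
    import numpy as np
    import pandas as pd
    from scipy import stats

    # Simulated stand-in for trial-level data from 46 participants, 40 trials each.
    rng = np.random.default_rng(0)
    trials = pd.DataFrame({
        "participant": np.repeat(np.arange(46), 40),
        "group": np.repeat(rng.choice(["positive", "negative"], 46), 40),
        "image_valence": rng.choice(["positive", "negative"], 46 * 40),
        "correct": rng.integers(0, 2, 46 * 40),
    })

    # Mean recognition accuracy per participant within each image-valence condition.
    acc = (trials.groupby(["participant", "group", "image_valence"])["correct"]
                 .mean().reset_index())

    # Mood-congruency prediction: the positive-mood group should be more accurate
    # than the negative-mood group on positively valenced images.
    pos_images = acc[acc["image_valence"] == "positive"]
    t, p = stats.ttest_ind(
        pos_images.loc[pos_images["group"] == "positive", "correct"],
        pos_images.loc[pos_images["group"] == "negative", "correct"],
    )
    print(f"positive images, positive vs. negative mood group: t={t:.2f}, p={p:.3f}")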

    Picture That! EEG views imagery of your mind's eye

    Mental imagery involves perceptual reproduction without environmental input, providing new views of objects in space. While competence varies across the population, mental rotation predominantly engages the parietal lobes. However, one cohort with imagery deficits, individuals with aphantasia, shows increased frontal lobe involvement; these individuals exhibit slower mental rotation reaction times (RT) and difficulty with facial recognition. Previous studies have relied predominantly on self-reports of deficits, leaving a void in concrete assessments. Forty-nine undergraduates completed novel forced-choice judgments of dyad image orientations (same/mirrored) while ERPs, RT, and accuracy were measured. In a block design, mental rotation ability was probed with three image types (blocks, animals, letters) and four Thatcher dyads (normal and manipulated images). Visualization skill was assessed with the VVIQ. Task validations mirror the literature. Overall, participants were slower on trials where one of the images was mirrored rather than the same, p<0.01. For mental rotation RT, blocks (4375 ms) required longer reaction times than letters (2176 ms) and animals (2407 ms). Accuracy was similar across conditions (p>0.05), indicating a speed-accuracy trade-off. In the Thatcher task, RT was longer and accuracy lower when both images were manipulated, compared to the other dyads (p<0.05). Right-skewed VVIQ scores prompted non-parametric analyses; Spearman correlations revealed no associations between imagery vividness and task or EEG performance (p>0.05). Prolonged frontal and parietal N200 latencies reflect slower access to the visual properties of a stimulus, yielding longer RT and decreased accuracy (p<0.05). Higher frontal P300 amplitudes suggest engagement of additional cortical networks associated with task engagement and decision making, leading to increased accuracy (p<0.05). Parietal and frontal regions associated with mental rotation ability showed prolonged P300 latencies, which may reflect impairments in mental rotation processing, as evidenced by longer RT and decreased accuracy (p<0.05).
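
    As a minimal sketch of the non-parametric step reported above, the snippet below runs a Spearman rank correlation (appropriate for the right-skewed VVIQ distribution) between visualization-vividness scores and mental-rotation RT. The data are simulated stand-ins with no built-in association, mirroring the null result the abstract reports; variable names are illustrative only.

    # Minimal sketch, assuming per-participant VVIQ totals and mean rotation RT.
    # Spearman's rho is used instead of Pearson because VVIQ scores were
    # right-skewed; the simulated data here are independent by construction.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 49                                        # 49 undergraduates
    vviq = rng.gamma(2.0, 10.0, size=n)           # right-skewed stand-in scores
    rotation_rt = rng.normal(3000, 600, size=n)   # mean rotation RT (ms), unrelated

    rho, p = stats.spearmanr(vviq, rotation_rt)
    print(f"VVIQ vs. mental-rotation RT: rho={rho:.2f}, p={p:.3f}")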

    Picture That! EEG views imagery of your mind's eye

    Mental imagery is a reproduction of perception without environmental input (Kosslyn, 2001). Mental rotations provide newly oriented views of objects in space, with varying skill competence in the general population (Peters, 2002), and these functions are localized to parietal regions (Kong et al., 2018). One condition displaying mental imagery deficits is aphantasia, marked by slower reaction times for mental rotation (Pounder, 2018) as well as difficulty with facial recognition (Milton, 2021). In these individuals, mental rotation tasks mostly engage frontal neural regions (Zeman et al., 2010). Most studies have relied on self-reports, leaving a void in systematic assessments of skill that the current study attempts to address. During an EEG scanning session, participants (N=X; mean age [SD] = X [X]) completed forced-choice assessments judging whether the two images in a dyad pair were the same or mirrored, with the right image rotated along the XY plane. Three stimulus types comprised the mental rotation task (blocks, letters, animals) and two facial stimulus types comprised the Thatcher task (normal, manipulated). Outcome variables included behavioral markers of reaction time and accuracy as well as whole-brain ERP measures of the N200 and P300 waves. A median split will divide the sample into high and low visualization cohorts (assessed by the VVIQ questionnaire), and statistical analyses will examine differences between these groups on the above outcome variables. Data collection is ongoing, and results will be presented at the conference.
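
    A hedged sketch of the planned median-split analysis follows: participants are divided into high and low visualization cohorts at the median VVIQ score and compared on one outcome variable. The Mann-Whitney test and all names here are illustrative assumptions; the abstract does not specify which tests will be used.

    # Sketch of a median split on VVIQ followed by a between-cohort comparison.
    # Data, column names, and the Mann-Whitney U test are assumptions for
    # illustration, not the study's pre-registered analysis plan.
    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "vviq": rng.integers(16, 81, size=49),      # VVIQ totals (range 16-80)
        "rt_ms": rng.normal(3000, 600, size=49),    # mean task RT per participant
    })

    # Median split: at or above the median -> "high" visualizers, below -> "low".
    df["cohort"] = np.where(df["vviq"] >= df["vviq"].median(), "high", "low")

    u, p = stats.mannwhitneyu(df.loc[df["cohort"] == "high", "rt_ms"],
                              df.loc[df["cohort"] == "low", "rt_ms"])
    print(f"high vs. low visualizers, RT: U={u:.0f}, p={p:.3f}")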