
    The role of contexts in face processing: Behavioral and ERP studies


    An investigation into the emotion-cognition interaction and sub-clinical anxiety

    This thesis combines behavioural and electrophysiological approaches to study the emotion-cognition interaction and sub-clinical anxiety. The research questions addressed in this thesis concern, specifically: the impact of emotion on attention; the interplay between attention and emotion in anxiety; and the cognitive construct of affect. Chapter 1 introduces emotion research and cognitive models of anxiety, and motivates the thesis. Chapter 2 investigates whether affective processing is automatic; more specifically, whether the facilitated processing of threat in anxiety, evidenced by emotion-related ERP modulations, requires attentional resources. It was previously reported that emotional expression effects on ERP waveforms were completely eliminated when attention was directed away from emotional faces to other task-relevant locations (Eimer et al., 2003). However, Bishop et al. (2004) reported that threat-related stimuli can evoke amygdala activity without attentional engagement or conscious awareness in high-anxious but not low-anxious participants. Spatial attention was manipulated using a paradigm similar to that of Vuilleumier et al. (2001) and Holmes et al. (2003), to investigate the mechanism underlying the threat-related processing bias in anxiety by examining the influence of spatial attention and trait anxiety on established ERP modulations by emotional stimuli. Participants were instructed to match two peripheral faces or two peripheral Landolt squares. The Landolt squares task was selected because it is attentionally demanding and would likely consume most, if not all, attentional resources. The ERP data did not support the claim that affective stimuli are processed under unattended conditions in high-anxious but not low-anxious participants. Rather, they call into question whether a preattentive processing bias for emotional faces is specific to heightened anxiety. This is based on the finding of an enhanced LPP response for threat/happy versus neutral faces and an enhanced slow wave for threat versus neutral faces, neither of which was modulated by the focus of attention in either the high or the low anxiety group. Chapter 3 investigated the delayed disengagement hypothesis proposed by Fox and colleagues (2001) as the mechanism underlying the threat-related attentional bias in anxiety, by measuring N2pc and LRP latencies while participants performed an adapted version of the spatial cueing task. Stimuli consisted of a central affective image (either a face or an IAPS picture, depending on condition) flanked to the left and right by a letter/number pair. Participants had to direct their attention to the left or right of the central affective image to make an orientation judgement about the letter stimulus. It was hypothesised that if threat-related stimuli prolong attentional processing, N2pc onset should be delayed relative to the neutral condition. However, N2pc latency was not modulated by the emotional valence of the central image for either the high or the low anxiety group. Thus, this finding does not support locating the threat-related bias in the disengage component of attention. Chapter 4 further investigated the pattern of attentional deployment in the threat-related bias in anxiety, by measuring task-switching ability between neutral and emotional tasks using an adapted version of Johnson's (in press) attentional control capacity for emotional representations (ACCE) task. Participants performed either an emotional judgement or a neutral judgement task on a compound stimulus consisting of an affective image (happy versus fearful faces in the faces condition, or positive versus negative IAPS pictures in the IAPS condition) with a word located centrally across the image (a real word versus a pseudo-word). Participants scoring higher in trait anxiety were faster to switch from a neutral to a threatening mental set. This improved ability to switch attention to the emotional judgement task when threatening faces were presented accords with a hypervigilance theory of anxiety. However, this processing bias for threat in anxiety was apparent only for emotional faces and not for affective scenes, even though the pictures depicted aversive threat scenes (e.g., violence, mutilation). This is discussed in more detail with respect to the social significance of salient stimuli. Chapter 5, in a pair of experiments, investigated how affect is mentally represented, specifically asking whether affect is represented on the basis of a conceptual metaphor linking direction and affect. The data suggest that the vertical position metaphor underlies our understanding of the relatively abstract concept of affect and is implicitly active, with positive equating to 'upwards' and negative to 'downwards'. Metaphor-compatible directional movements facilitated response latencies: participants were faster to make upward responses to positively-evaluated words and downward responses to negatively-evaluated words than to respond under metaphor-incompatible stimulus-response mappings. This finding suggests that the popular use of linguistic metaphors depicting a spatial representation of affect may reflect our underlying cognitive construct of the abstract concept of valence. Chapter 6 summarises the research in the thesis and discusses the implications of the present results, in particular in relation to cognitive models of anxiety. Areas of possible future research are suggested.
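    The Chapter 5 congruency effect lends itself to a simple quantification: compare mean response latencies on metaphor-compatible trials (positive/upward, negative/downward) with those on metaphor-incompatible trials. A minimal sketch in Python with entirely hypothetical trial data (none of these values come from the thesis):

```python
import numpy as np

# Hypothetical trials: (word valence, response direction, RT in ms).
trials = [
    ("positive", "up", 512), ("positive", "down", 561),
    ("negative", "down", 498), ("negative", "up", 547),
    ("positive", "up", 505), ("negative", "down", 510),
]

def is_compatible(valence, direction):
    """Vertical-metaphor mapping: positive equates with 'up', negative with 'down'."""
    return (valence == "positive") == (direction == "up")

compatible = [rt for v, d, rt in trials if is_compatible(v, d)]
incompatible = [rt for v, d, rt in trials if not is_compatible(v, d)]

# A positive difference indicates metaphor-compatible facilitation.
print(f"compatible mean RT:   {np.mean(compatible):.0f} ms")
print(f"incompatible mean RT: {np.mean(incompatible):.0f} ms")
print(f"facilitation effect:  {np.mean(incompatible) - np.mean(compatible):.0f} ms")
```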

    Reading others' emotions: Evidence from event-related potentials

    This Thesis aimed at investigating, using the event-related potential (ERP) technique, some relevant aspects of the human ability to read others' emotions and to empathize with others' affective states. Social and affective neuroscience has studied faces and facial expressions extensively, since they are key pieces of information guiding individuals during interaction. Their importance stems from the fact that they provide unique information about identity, gender, age, trustworthiness, and attractiveness, while also conveying emotions. In Chapter 1, I have introduced the reader to the contents of this Thesis, in particular the ability to "read" others' facial expressions and to empathize with others' affective states. In Chapter 2, I have offered an overview of current knowledge on how humans process faces in general and facial expressions in particular, proposing a theoretical excursus from Bruce and Young's (1986) cognitive model to a recent simulative model of the recognition of emotional facial expressions by Wood and colleagues (2016), which considers facial mimicry helpful in discriminating between subtle emotions. In Chapters 3 and 4, I have presented two closely related studies (Experiments 1 and 2, respectively), since both aimed at testing a functional link between the visual system and facial mimicry/sensorimotor simulation during the processing of facial expressions of emotions. In both studies, ERPs, by virtue of their high temporal resolution, made it possible to track the time-course of the hypothesized influence of mimicry/simulation on the stages of visual analysis of facial expressions. The aim of Experiment 1 was to explore the potential connection between facial mimicry and the early stage of the construction of visual percepts of facial expressions, while Experiment 2 investigated whether and how facial mimicry interacts with later stages of visual processing, focusing on the construction of visual working memory representations of facial expressions of emotions and also monitoring whether this process depends on the degree of the observers' empathy. For both studies, the results strongly suggest that mimicry may influence early and later stages of visual processing of faces and facial expressions. In the second part of my Thesis, I introduced the reader to the construct of empathy, dealing with its multifaceted nature and the role of different factors in the modulation of an empathic response, especially to others' pain (Chapter 5). In Chapters 6 and 7, I have discussed two ERP studies (Experiments 3 and 4a), with one behavioral study included as a control (Experiment 4b), investigating the empathic reaction to others' pain as a function of different variables that could play a role in daily life. Experiment 3 investigated the role of prosodic information in neural empathic responses to others' pain. Results from this study demonstrated that prosodic information can enhance the human ability to share others' pain by acting on both main components of empathy, experience sharing and mentalizing. The aim of Experiment 4a was to study whether the physical distance between an observer and an individual in a particular affective state, induced by a painful stimulation, is a critical factor in modulating the magnitude of the empathic neural reaction in the observer. By manipulating the perceived physical distance of face stimuli, I observed a moderating effect of perceived distance on empathic ERP reactions. Results of Experiment 4b clarified that the critical factor triggering differential empathic reactions in the two groups of Experiment 4a was not the likelihood of identifying faces of the two sizes but their perceived physical distance. Finally, in Chapter 8, a general discussion highlights the main findings presented in this Thesis and offers suggestions for future research to extend the topics debated in the previous chapters.

    What you see is what you feel: Top-down emotional effects in face detection

    Face detection, an initial step in many social interactions, involves a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. In four studies and a computational model, we investigated how emotions affect mental representations of faces and how facial representations could be used to transmit and communicate people's emotional states. To this end, we used an adapted reverse correlation technique suggested by Gill et al. (2019), based on the earlier idea of the 'Superstitious Approach' (Gosselin & Schyns, 2003). In Experiment 1 we measured how naturally occurring anxiety and depression, caused by external factors, affected people's mental representations of faces. In two sessions, on separate days, participants (coders) were presented with 'colourful' visual noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments the coders identified as faces, we reconstructed the pictorial mental representation each participant used in the identification process. Across coders, we found significant correlations between changes in the size of the mental representation of faces and changes in their level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants' mental representations can reflect their emotional state, we conducted a validation study (Experiment 2) with a group of naïve participants (verifiers) who were asked to classify the reconstructed mental representations of faces by emotion. We thereby assessed whether the mental representations communicate coders' emotional states to others. The analysis showed no significant correlation between coders' emotional states, as depicted in their mental representations of faces, and verifiers' evaluation scores. In Experiment 3, we investigated how different induced moods, negative and positive, affected the mental representation of faces. Coders underwent two different mood induction conditions during two separate sessions. They were presented with the same 'colourful' noise stimuli used in Experiment 1 and asked to detect faces, and we again reconstructed pictorial mental representations of faces from the identified fragments. The analysis showed a significant negative correlation between changes in coders' mood along the dimension of arousal and changes in the size of their mental representation of faces. As in Experiment 2, we conducted a validation study (Experiment 4) to investigate whether coders' mood could be communicated to others through their mental representations of faces; again we found no correlation between coders' mood, as depicted in their mental representations of faces, and verifiers' evaluation of the intensity of the transmitted emotional expression. Lastly, we tested a preliminary computational model (Experiment 5) to classify and predict coders' emotional states from their reconstructed mental representations of faces. In spite of the small number of training examples and the high dimensionality of the input, the model performed just above chance level. Future studies should look at improving the computational model by using a larger training set and testing other classifiers. Overall, the present work confirmed the presence of facial templates used during face detection. It provides an adapted version of a reverse correlation technique that can be used to access mental representations of faces with a significantly reduced number of trials. Lastly, it provides evidence of how emotions can influence the size of mental representations of faces.
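    The reverse correlation logic used here can be expressed compactly: average the noise fields on which a coder reported a face, subtract the average of the rejected fields, and the difference image approximates the coder's internal face template. A minimal sketch under simplified assumptions (grayscale rather than 'colourful' noise, and a simulated coder with a known template; nothing here is the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
n_trials = 2000

# Random noise stimuli shown to the coder.
noise = rng.normal(size=(n_trials, H, W))

# A hidden Gaussian blob stands in for the coder's mental face template.
yy, xx = np.mgrid[:H, :W]
template = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))

# Simulated coder: reports "face" when the noise matches the template well.
scores = (noise * template).sum(axis=(1, 2))
detected = scores > np.quantile(scores, 0.75)

# Classification image: selected minus rejected noise; its structure
# approximates the template that drove the detections.
ci = noise[detected].mean(axis=0) - noise[~detected].mean(axis=0)

# Correlation with the true template confirms the reconstruction works.
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
print(f"correlation between classification image and template: {r:.2f}")
```

    The spatial extent of the recovered template could then be measured from the classification image, which is the kind of size estimate the experiments above relate to depression and mood scores.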

    Interaction between visual attention and the processing of visual emotional stimuli in humans: eye-tracking, behavioural and event-related potential experiments

    Past research has shown that the processing of emotional visual stimuli and visual attention are tightly linked. In particular, emotional stimulus processing can modulate attention and, reciprocally, can be facilitated or inhibited by attentional processes. However, our understanding of these interactions is still limited, and much work remains to be done to characterise this reciprocal interaction and the different mechanisms at play. This thesis presents a series of experiments using eye-tracking, behavioural and event-related potential (ERP) methods to better understand these interactions from a cognitive and neuroscientific point of view. First, the influence of emotional stimuli on eye movements, reflecting overt attention, was investigated. While it is known that the emotional gist of images attracts the eye (Calvo and Lang, 2004), little is known about the influence of emotional content on eye movements in more complex visual environments. Using eye-tracking methods, and by adapting a paradigm originally used to study the influence of semantic inconsistencies in scenes (Loftus and Mackworth, 1978), we found that participants spend more time fixating emotional than neutral targets embedded in visual scenes, but do not fixate them earlier. Emotional targets in scenes were therefore found to hold, but not to attract, the eye. This suggests that, due to the complexity of the scenes and the limited processing resources available, emotional information projected extra-foveally is not processed in such a way that it drives eye movements. Next, to better characterise the exogenous deployment of covert attention toward emotional stimuli, a sample of sub-clinically anxious individuals was studied. Anxiety is characterised by a reflexive attentional bias toward threatening stimuli. A dot-probe task (MacLeod et al., 1986) was designed to replicate and extend past findings of this attentional bias; in particular, the experiment tested whether the bias was caused by faster reaction times to fear-congruent probes or slower reaction times to neutral-congruent probes. No attentional bias could be measured. A further analysis of the literature suggests that subliminal cue presentation, as used in our case, may not generate reliable attentional biases, unlike longer cue presentations. This would suggest that while emotional stimuli can be processed without awareness, further processing may be necessary to trigger reflexive attentional shifts in anxiety. Then the time-course of emotional stimulus processing and its modulation by attention was investigated. Modulations of the very early visual ERP C1 component by emotional stimuli (e.g. Pourtois et al., 2004; Stolarova et al., 2006), but also by visual attention (Kelly et al., 2008), have been reported in the literature. A series of three experiments investigated the interactions of endogenous covert spatial attention and object-based attention with emotional stimulus processing in the C1 time window (50–100 ms). It was found that emotional stimuli modulated the C1 only when they were spatially attended and task-irrelevant. This suggests that whilst spatial attention gates emotional face processing from the earliest stages, only incidental processing triggers a specific response before 100 ms. Additionally, the results suggest a very early modulation by feature-based attention that is independent of spatial attention. Finally, simulated and actual electroencephalographic data were used to show that modulations of early ERP and event-related field (ERF) components are highly dependent on the high-pass filter used at the pre-processing stage. A survey of the literature found that a large proportion of ERP/ERF reports (about 40%) use high-pass filters that may bias the results, and that a large proportion of papers reporting very early modulations use such filters. Consequently, a substantial part of the literature may need to be re-assessed. The work described in this thesis contributes to a better understanding of the links between emotional stimulus processing and attention at different levels. Using various experimental paradigms, it confirms that emotional stimulus processing is not 'automatic', but highly dependent on the focus of attention, even at the earliest stages of visual processing. Furthermore, uncovering the potential bias generated by filtering will help to improve the reliability and precision of research in the ERP/ERF field, particularly in studies looking at early effects.
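    The filtering caveat from the final study is easy to reproduce on synthetic data. The sketch below (illustrative cutoffs and waveform, not the thesis code) simulates an ERP whose only genuine activity is a late slow positivity, then shows that an aggressive zero-phase high-pass filter smears that activity backwards into the early time window, mimicking a spurious 'early effect':

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                  # sampling rate in Hz
t = np.arange(-0.2, 0.8, 1 / fs)            # epoch from -200 to 800 ms

# Synthetic ERP: a late slow positivity, zero before 300 ms by construction.
erp = np.where(t > 0.3, np.exp(-((t - 0.5) / 0.15) ** 2), 0.0)

def highpass(x, cutoff_hz):
    # Zero-phase (filtfilt) filtering is acausal, so a late component
    # can leak into the pre-onset and early post-stimulus window.
    b, a = butter(2, cutoff_hz, btype="highpass", fs=fs)
    return filtfilt(b, a, x)

for cutoff in (0.1, 1.0):
    filtered = highpass(erp, cutoff)
    # Any amplitude in the 50-100 ms window is pure filter artifact here.
    early = np.abs(filtered[(t >= 0.05) & (t <= 0.1)]).max()
    print(f"{cutoff:.1f} Hz high-pass: max |amplitude| at 50-100 ms = {early:.3f}")
```

    With the higher cutoff the artifactual early deflection grows substantially, which is why very early ERP/ERF modulations obtained with strong high-pass filters warrant re-assessment.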

    Research Topic: Typical and Atypical Processing of Gaze


    The Role of Experience in the Organization and Refinement of Face Space

    Adults code faces in reference to category-specific norms that represent the different face categories encountered in the environment (e.g., race, age). Reliance on such norm-based coding appears to aid recognition, but few studies have examined the development of separable prototypes and the way in which experience influences the refinement of the coding dimensions associated with different face categories. The present dissertation was thus designed to investigate the organization and refinement of face space and the role of experience in shaping sensitivity to its underlying dimensions. In Study 1, I demonstrated that face space is organized with regard to norms that reflect face categories that are both visually and socially distinct. These results provide an indication of the types of category-specific prototypes that can conceivably exist in face space. Study 2 was designed to investigate whether children rely on category-specific prototypes and the extent to which experience facilitates the development of separable norms. I demonstrated that, unlike adults and older children, 5-year-olds rely on a relatively undifferentiated face space, even for categories with which they receive ample experience. These results suggest that the dimensions of face space undergo significant refinement throughout childhood; 5 years of experience with a face category is not sufficient to facilitate the development of separable norms. In Studies 3 through 5, I examined how early and continuous exposure to young adult faces may optimize the face processing system for the dimensions of young relative to older adult faces. In Study 3, I found evidence for a young adult bias in attentional allocation among young and older adults. However, whereas young adults showed an own-age recognition advantage, older adults exhibited comparable recognition for young and older faces. These results suggest that despite the significant experience that older adults have with older faces, the early and continuous exposure they received with young faces continues to influence their recognition, perhaps because face space is optimized for young faces. In Studies 4 and 5, I examined whether sensitivity to deviations from the norm is superior for young relative to older adult faces. I used normality/attractiveness judgments as a measure of this sensitivity; to examine whether biases were specific to norm-based coding, I asked participants to discriminate between the same faces. Both young and older adults were more accurate when tested with young relative to older faces, but only when judging normality. Like adults, 3- and 7-year-olds were more accurate in judging the attractiveness of young faces; however, unlike adults, this bias extended to the discrimination task. Thus by 3 years of age children are more sensitive to differences among young relative to older faces, suggesting that young children's perceptual system is more finely tuned for young than for older adult faces. Collectively, the results of this dissertation help elucidate the development of category-specific norms and clarify the role of experience in shaping sensitivity to the dimensions of face space.
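    Norm-based coding, the construct at the centre of this dissertation, has a natural computational reading: each face is a point in a multidimensional face space, and what is stored is its deviation from the relevant category norm (the average of experienced faces of that category). A toy sketch of the idea in Python (a two-dimensional space with invented values, not a model from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy face space: each face is a 2-D point; two visually distinct
# categories (e.g., young vs. older adult faces) occupy different regions.
young = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
older = rng.normal(loc=[3.0, 1.0], scale=0.5, size=(50, 2))

# Category-specific norms: averages of experienced faces per category.
norms = {"young": young.mean(axis=0), "older": older.mean(axis=0)}

def norm_based_code(face, category):
    """Code a face as its deviation from the category norm."""
    return face - norms[category]

# Distance from the norm is a simple stand-in for distinctiveness; a young
# child's undifferentiated space would instead use one norm for all faces.
probe = np.array([0.8, -0.4])
dev = norm_based_code(probe, "young")
print("deviation:", dev, "distinctiveness:", np.linalg.norm(dev))
```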

    The effect of familiarity on face adaptation

    Face adaptation techniques have been used extensively to investigate how faces are processed. It has even been suggested that face adaptation is functional in calibrating the visual system to the diet of faces to which an observer is exposed. Yet most adaptation studies to date have used unfamiliar faces: few have used faces with real-world familiarity. Familiar faces have more abstractive representations than unfamiliar faces. The experiments in this thesis therefore examined face adaptation for familiar faces. Chapters 2 and 3 explored the role of explicit recognition of familiar faces in producing face identity after-effects (FIAEs). Chapter 2 used composite faces (the top half of a celebrity's face paired with the bottom half of an unfamiliar face) as adaptors and showed that only recognised composites produced significant adaptation. In Chapter 3 the adaptors were cryptic faces (unfamiliar faces subtly transformed towards a celebrity's face) and faces of celebrities' siblings. Unrecognised cryptic and sibling faces produced FIAEs for their related celebrity, but only when adapting and testing on the same viewpoint; adaptation transferred across viewpoint only when a face was explicitly recognised. Chapter 4 demonstrated that face adaptation can occur for ecologically valid, personally familiar stimuli, a necessary prerequisite if adaptation is functional in calibrating face processing mechanisms: a video of a lecturer's face produced FIAEs equivalent to those produced by static images. Chapters 5 and 6 used a different type of after-effect, the face distortion after-effect (FDAE), to explore the stability of our representations of personally familiar faces, and showed that even representations of highly familiar faces can be affected by exposure to distorted faces. The work presented here shows that it is important to take facial familiarity into account when investigating face adaptation effects, as well as increasing our understanding of how familiarity affects the representations of faces.
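    A common computational reading of such after-effects, offered here only as an illustrative sketch and not as the model tested in this thesis, is norm recalibration: prolonged exposure to an adaptor pulls the norm toward it, so a subsequently viewed average face is coded as lying opposite the adaptor, yielding an identity after-effect:

```python
import numpy as np

# Toy 2-D face space; the norm starts at the average face.
norm = np.zeros(2)
adaptor = np.array([1.0, 0.5])       # e.g., a familiar celebrity's face

def perceive(face, norm):
    """Perceived identity as the face's deviation from the current norm."""
    return face - norm

# Adaptation: the norm drifts a fraction of the way toward the adaptor.
adaptation_rate = 0.3                # illustrative value
adapted_norm = norm + adaptation_rate * (adaptor - norm)

test = np.zeros(2)                   # a previously average-looking face
print("before adaptation:", perceive(test, norm))           # coded as average
print("after adaptation: ", perceive(test, adapted_norm))   # pushed opposite the adaptor
```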