
    The Effects of Parental Behavior on Infants' Neural Processing of Emotion Expressions

    Infants become sensitive to emotion expressions early in the first year of life, and this sensitivity is likely crucial for social development and adaptation. Social interactions with primary caregivers may play a key role in the development of this complex ability. This study investigated how variations in parenting behavior affect infants' neural responses to emotional faces. Event-related potentials (ERPs) to emotional faces were recorded from 40 healthy 7-month-old infants (24 males). Parental behavior was assessed and coded using the Emotional Availability Scales during free-play interaction. Sensitive parenting was associated with increased amplitudes of the face-sensitive negative central (Nc) ERP component in response to positive facial expressions. Findings are discussed in relation to the interactive mechanisms influencing how infants neurally encode positive emotions.
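
    As a rough illustration of the kind of analysis behind such findings, the sketch below extracts a mean Nc amplitude from a hypothetical trial-by-channel-by-time epochs array; the 300-600 ms window, sampling rate, and channel indices are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

# Hypothetical illustration: `epochs` has shape (n_trials, n_channels, n_times),
# sampled at fs Hz, with each epoch starting at t0 seconds relative to face onset.
def nc_mean_amplitude(epochs, fs=250.0, t0=-0.2, window=(0.3, 0.6), channels=(0, 1, 2)):
    """Mean Nc amplitude: average over trials, fronto-central channels,
    and a post-stimulus time window (300-600 ms here, an assumed window)."""
    times = t0 + np.arange(epochs.shape[-1]) / fs          # seconds per sample
    mask = (times >= window[0]) & (times <= window[1])     # Nc analysis window
    erp = epochs.mean(axis=0)                              # trial-averaged ERP
    return erp[list(channels)][:, mask].mean()             # µV, over channels & time

# Usage: compare Nc amplitude for positive vs. negative face trials, e.g.
# nc_mean_amplitude(epochs_happy) vs. nc_mean_amplitude(epochs_fearful)
```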

    Interpreting EEG and MEG signal modulation in response to facial features: the influence of top-down task demands on visual processing strategies

    The visual processing of faces is a fast and efficient feat that our visual system accomplishes many times a day. The N170 (an event-related potential) and the M170 (an event-related magnetic field) are thought to be prominent markers of the face perception process in the ventral stream of visual processing, occurring ~170 ms after stimulus onset. Whether face processing in the time window of the N170 and M170 is driven automatically by bottom-up visual processing only, or is also modulated by top-down control, is still debated in the literature. However, it is known from research on general visual processing that top-down control can be exerted much earlier along the visual processing stream than the time at which the N170 and M170 occur. I conducted two studies, each consisting of two face categorization tasks. To examine the influence of top-down control on the processing of faces, I changed the task demands from one task to the next while presenting the same set of face stimuli. In the first study, I recorded participants' EEG signal in response to faces while they performed both a Gender task and an Expression task on a set of expressive face stimuli. Analyses using Bubbles (Gosselin & Schyns, 2001) and Classification Image techniques revealed significant task modulations of the N170 ERPs (peaks and amplitudes) and of the peak latency of maximum information sensitivity to key facial features. However, task demands did not change the information processing during the N170 with respect to behaviourally diagnostic information; rather, the N170 seemed to integrate gender and expression diagnostic information equally in both tasks. In the second study, participants completed the same behavioural tasks as in the first study (Gender and Expression), but this time their MEG signal was recorded to allow for precise source localisation. After determining the active sources during the M170 time window, a Mutual Information analysis in connection with Bubbles was used to examine voxel sensitivity to both the task-relevant and the task-irrelevant face category. When a face category was relevant for the task, sensitivity to it was usually higher and peaked in different voxels than sensitivity to the task-irrelevant face category. In addition, voxels predictive of categorization accuracy were shown to be sensitive only to task-relevant, behaviourally diagnostic facial features. I conclude that facial feature integration during both the N170 and the M170 is subject to top-down control. The results are discussed against the background of known face processing models and current research findings on visual processing.
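
    For readers unfamiliar with the Bubbles technique referenced above, the following minimal sketch generates a single-scale Bubbles mask that reveals a stimulus only through randomly placed Gaussian apertures; the studies themselves used the multi-scale variant of Gosselin & Schyns (2001), so this single-scale version is an assumed simplification.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=10, sigma=8.0, rng=None):
    """A minimal single-scale Bubbles mask: the stimulus is revealed only
    through n_bubbles randomly placed Gaussian apertures of width sigma."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.integers(0, h, n_bubbles), rng.integers(0, w, n_bubbles)):
        bubble = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, bubble)                # overlapping apertures merge
    return mask                                        # in [0, 1]; multiply with image

# Classification images are then built by relating, pixel by pixel, the masks
# shown on each trial to the behavioural outcome (e.g., correct vs. incorrect).
```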

    EEG frequency-tagging demonstrates increased left hemispheric involvement and crossmodal plasticity for face processing in congenitally deaf signers

    In humans, face processing relies on a network of brain regions predominantly in the right occipito-temporal cortex. We tested congenitally deaf (CD) signers and matched hearing controls (HC) to investigate the experience dependence of the cortical organization of face processing. Specifically, we used EEG frequency-tagging to evaluate: (1) Face-Object Categorization, (2) Emotional Facial-Expression Discrimination and (3) Individual Face Discrimination. The EEG was recorded while visual stimuli were presented at a rate of 6 Hz, with oddball stimuli at a rate of 1.2 Hz. In all three experiments and in both groups, significant face-discriminative responses were found. Face-Object Categorization was associated with a relatively increased involvement of the left hemisphere in CD individuals compared to HC individuals. A similar trend was observed for Emotional Facial-Expression Discrimination but not for Individual Face Discrimination. Source reconstruction suggested a greater activation of the auditory cortices in the CD group for Individual Face Discrimination. These findings suggest that the experience dependence of the relative contribution of the two hemispheres, as well as crossmodal plasticity, varies with different aspects of face processing.
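
    The frequency-tagging logic (6 Hz base rate, 1.2 Hz oddball) is typically quantified as the signal-to-noise ratio at the tagged frequency in the EEG spectrum. The sketch below shows one common way to compute such an SNR; the number of neighbouring bins and the skipped adjacent bins are assumed analysis choices, not necessarily the paper's exact settings.

```python
import numpy as np

def tag_snr(signal, fs, f_target, n_neighbors=10, skip=1):
    """Amplitude SNR at a tagged frequency: amplitude in the target FFT bin
    divided by the mean amplitude of surrounding bins (a common FPVS metric)."""
    amp = np.abs(np.fft.rfft(signal)) / len(signal)       # amplitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_target))               # bin closest to target
    lo = amp[k - skip - n_neighbors : k - skip]           # neighbours below target
    hi = amp[k + skip + 1 : k + skip + n_neighbors + 1]   # neighbours above target
    return amp[k] / np.concatenate([lo, hi]).mean()

# e.g., SNR at the 1.2 Hz face-oddball frequency for one channel:
# tag_snr(eeg_channel, fs=512, f_target=1.2)
```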

    Normalization Among Heterogeneous Population Confers Stimulus Discriminability on the Macaque Face Patch Neurons

    Primates are capable of recognizing faces even in highly cluttered natural scenes. To understand how the primate brain achieves face recognition despite this clutter, it is crucial to study the representation of multiple faces in face-selective cortex. However, contrary to the essence of natural scenes, most experiments in the face recognition literature use only a few faces at a time on a homogeneous background to study neural response properties. It thus remains unclear how face-selective neurons respond to multiple stimuli, some of which might be encompassed by their receptive fields (RFs) and others not. How is the neural representation of a face affected by the concurrent presence of other stimuli? Two lines of evidence lead to opposite predictions. First, given the importance of MAX-like operations for achieving selectivity and invariance, as suggested by feedforward circuitry for object recognition, face representations may not be compromised in the presence of clutter. On the other hand, the psychophysical crowding effect, i.e., the reduced discriminability (but not detectability) of an object in clutter, suggests that an object representation may be impaired by additional stimuli. To address this question, we conducted electrophysiological recordings in the macaque temporal lobe, where bilateral face-selective areas are tightly interconnected to form a hierarchical face processing stream. Assisted by functional MRI, these face patches could be targeted for single-cell recordings. For each neuron, the most preferred face stimulus was determined and then presented at the center of the neuron's RF. In addition, multiple stimuli (preferred or non-preferred) were presented in different numbers (0, 1, 2, 4 or 8), from different categories (face or non-face object), or at different proximities (adjacent to or separated from the center stimulus). We found that the majority of neurons reduced their mean firing rates more (1) with increasing numbers of distractors, (2) with face distractors rather than with non-face object distractors, and (3) at closer distractor proximity; additionally, (4) the response to multiple preferred faces depended on RF size. Although these findings in single neurons could indicate reduced discriminability, we found that each stimulus condition was well separated and decodable in a high-dimensional space spanned by the neural population. We showed that this was because the neuronal population was quite heterogeneous, yet changed its responses systematically as stimulus parameters changed; few neurons showed MAX-like behavior. These findings were explained by a divisive normalization model, highlighting the importance of the modular structure of the primate temporal lobe. Taken together, these data and modeling results indicate that neurons in the face patches acquire stimulus discriminability by virtue of the modularity of cortical organization, the heterogeneity within the population, and the systematicity of the neural response.
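
    The divisive normalization account mentioned above can be captured in a few lines: each neuron's driving input is divided by a term pooled over all concurrent inputs, in the style of the canonical Carandini-Heeger model. The toy sketch below, with assumed drive values, exponent and semi-saturation constant (not the paper's fitted model), illustrates how a distractor suppresses the response to a preferred face.

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0):
    """Canonical divisive normalization:
    R_i = d_i**n / (sigma**n + sum_j d_j**n)."""
    d = np.asarray(drive, dtype=float) ** n
    return d / (sigma ** n + d.sum())

# Preferred face alone vs. preferred face plus a distractor in the RF:
print(divisive_normalization([5.0, 0.0]))   # ~[0.96, 0.00]
print(divisive_normalization([5.0, 3.0]))   # ~[0.71, 0.26]: suppressed, not abolished
```

    Note how the single-neuron response to the preferred face drops in clutter, while the population pattern across both entries still distinguishes the two stimulus conditions, consistent with the decodability result reported above.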

    Event-related potentials reveal preserved attention allocation but impaired emotion regulation in patients with epilepsy and comorbid negative affect

    Patients with epilepsy have a high prevalence of comorbid mood disorders. This study aims to evaluate whether negative affect in epilepsy is associated with dysfunction of emotion regulation. Event-related potentials (ERPs) are used in order to unravel the exact electrophysiological time course and to investigate whether a possible dysfunction arises during early (attention) and/or late (regulation) stages of emotion control. Fifty patients with epilepsy, with (n = 25) versus without (n = 25) comorbid negative affect, plus twenty-five matched controls were recruited. ERPs were recorded while subjects performed a face- or house-matching task in which fearful, sad or neutral faces were presented at either attended or unattended spatial locations. Two ERP components were analyzed: the early vertex positive potential (VPP), which is normally enhanced for faces, and the late positive potential (LPP), which is typically larger for emotional stimuli. All participants showed a larger amplitude of the early face-sensitive VPP for attended faces compared to houses, regardless of their emotional content. By contrast, in patients with negative affect only, the amplitude of the LPP was significantly increased for unattended negative emotional expressions. These VPP results indicate that epilepsy, with or without negative affect, does not interfere with the early structural encoding and attentional selection of faces. However, the LPP results suggest abnormal regulation processes during the processing of unattended emotional faces in patients with epilepsy and comorbid negative affect. In conclusion, this ERP study reveals that early object-based attention processes are not compromised by epilepsy but that, when combined with negative affect, this neurological disease is associated with dysfunction during the later stages of emotion regulation. As such, these new neurophysiological findings shed light on the complex interplay of epilepsy with negative affect during the processing of emotional visual stimuli and in turn might help to better understand the etiology and maintenance of mood disorders in epilepsy.
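
    As a schematic of how such group differences in ERP amplitudes are commonly tested, the sketch below runs an independent-samples t-test on per-subject mean LPP amplitudes; the example values, the 400-800 ms window and the electrode selection mentioned in the comments are illustrative assumptions, not the study's reported analysis.

```python
import numpy as np
from scipy import stats

def lpp_group_contrast(amp_group_a, amp_group_b):
    """Independent-samples t-test on per-subject mean LPP amplitudes (µV),
    e.g., patients with vs. without comorbid negative affect."""
    # Each array holds one mean amplitude per subject, extracted from an
    # assumed 400-800 ms LPP window over centro-parietal electrodes.
    return stats.ttest_ind(amp_group_a, amp_group_b)

# Usage with made-up amplitudes for two small illustrative groups:
result = lpp_group_contrast(np.array([4.1, 5.3, 6.0, 4.8]),
                            np.array([2.9, 3.4, 3.1, 3.8]))
print(result.statistic, result.pvalue)
```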

    Recognising Complex Mental States from Naturalistic Human-Computer Interactions

    New advances in computer vision techniques will revolutionize the way we interact with computers, as they, together with other improvements, will help us build machines that understand us better. The face is the main non-verbal channel for human-human communication and contains valuable information about emotion, mood, and mental state. Affective computing researchers have widely investigated how facial expressions can be used for automatically recognizing affect and mental states. Nowadays, physiological signals can also be measured by video-based techniques and utilised for emotion detection. Physiological signals are an important indicator of internal feelings and are more robust against social masking. This thesis focuses on computer vision techniques to detect facial expressions and physiological changes for recognizing non-basic and natural emotions during human-computer interaction. It covers all stages of the research process, from data acquisition through integration to application. Most previous studies focused on acquiring data from prototypic basic emotions acted out under laboratory conditions. To evaluate the proposed method under more practical conditions, two different scenarios were used for data collection. In the first scenario, a set of controlled stimuli was used to trigger the user's emotion. The second scenario aimed at capturing more naturalistic emotions that might occur during a writing activity; here, the engagement level of the participants, along with other affective states, was the target of the system. For the first time, this thesis explores how video-based physiological measures can be used in affect detection. Video-based measurement of physiological signals is a new technique that needs further improvement before it can be used in practical applications. A machine learning approach is proposed and evaluated to improve the accuracy of heart rate (HR) measurement using an ordinary camera during naturalistic interaction with a computer.
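
    As context for the video-based heart-rate measurement described above, the sketch below shows the classical spectral baseline for remote photoplethysmography: take the frame-by-frame mean green-channel intensity over a face region and pick the dominant frequency within a plausible cardiac band. The thesis proposes a machine learning approach to improve on such measurements; this baseline, with its assumed band and frame rate, is only an illustration.

```python
import numpy as np

def estimate_hr(green_means, fs, band=(0.7, 3.0)):
    """A minimal remote-PPG sketch: `green_means` is the per-frame mean
    green-channel intensity of a face ROI sampled at fs frames per second;
    HR is the dominant frequency in an assumed cardiac band (0.7-3.0 Hz,
    i.e., roughly 42-180 bpm)."""
    x = green_means - np.mean(green_means)            # remove the DC component
    amp = np.abs(np.fft.rfft(x))                      # amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f_peak = freqs[in_band][np.argmax(amp[in_band])]  # strongest cardiac peak
    return 60.0 * f_peak                              # beats per minute

# e.g., estimate_hr(roi_green_trace, fs=30) for 30 fps webcam video
```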
