6,256 research outputs found
Specialization of neural mechanisms underlying face recognition in human infants
Newborn infants respond preferentially to simple face-like patterns, raising the possibility that the face-specific regions identified in the adult cortex are functioning from birth. We sought to evaluate this hypothesis by characterizing the specificity of infants' electrocortical responses to faces in two ways: (1) comparing responses to faces of humans with those to faces of nonhuman primates; and (2) comparing responses to upright and inverted faces. Adults' face-responsive N170 event-related potential (ERP) component showed specificity to upright human faces that was not observable at any point in the ERPs of infants. A putative "infant N170" did show sensitivity to the species of the face, but the orientation of the face did not influence processing until a later stage. These findings suggest a process of gradual specialization of cortical face processing systems during postnatal development.
The face-sensitivity of the N170 component
Without abstract
Dynamics of trimming the content of face representations for categorization in the brain
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and report two main findings: (1) Over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) Concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
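The classification image technique the abstract mentions can be illustrated with a minimal reverse-correlation sketch: present a signal embedded in per-trial noise, then subtract the mean noise field on "absent" responses from the mean noise field on "present" responses. The stimulus, observer model, and dimensions below are hypothetical, not the authors' actual paradigm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 2000, 32, 32

# Hypothetical stimulus: a bright "diagnostic" patch embedded in Gaussian noise.
signal = np.zeros((h, w))
signal[10:20, 10:20] = 1.0
noise = rng.normal(0.0, 1.0, (n_trials, h, w))
stimuli = signal + noise

# Simulated observer: reports "present" when the diagnostic region looks bright
# (region mean exceeds its expected value of 1.0 under the signal alone).
responses = stimuli[:, 10:20, 10:20].mean(axis=(1, 2)) > 1.0

# Classification image: mean noise on "present" trials minus mean noise
# on "absent" trials; pixels that drove the decision light up.
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
```

The resulting image peaks over the region the simulated observer actually used, which is the core logic behind mapping diagnostic facial features from behavioral or EEG responses.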
Implicit processing of the eyes and mouth: Evidence from human electrophysiology
The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: (1) congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); (2) incongruent (e.g., open mouth only in prime and open eyes only in target, or open eyes only in prime and open mouth only in target). The identity of the faces changed between prime and target. Participants pressed a button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces and for the mouth in upright and inverted faces. Moreover, they revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and that they are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.
Time course and robustness of ERP object and face differences
Conflicting results have been reported about the earliest “true” ERP differences related to face processing, with the bulk of the literature focusing on the signal in the first 200 ms after stimulus onset. Part of the discrepancy might be explained by uncontrolled low-level differences between images used to assess the timing of face processing. In the present experiment, we used a set of faces, houses, and noise textures with identical amplitude spectra to equate energy in each spatial frequency band. The timing of face processing was evaluated using face–house and face–noise contrasts, as well as upright-inverted stimulus contrasts. ERP differences were evaluated systematically at all electrodes, across subjects, and in each subject individually, using trimmed means and bootstrap tests. Different strategies were employed to assess the robustness of ERP differential activities in individual subjects and group comparisons. We report results showing that the most conspicuous and reliable effects were systematically observed in the N170 latency range, starting at about 130–150 ms after stimulus onset.
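The robust-statistics approach described here, trimmed means compared via a percentile bootstrap, can be sketched for a single electrode and time point. The trial counts, amplitudes, and trim proportion below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical single-subject data: one ERP amplitude (µV) per trial
# at a fixed electrode and time point.
face_trials = rng.normal(-4.0, 2.0, 80)    # e.g. N170 amplitude to faces
house_trials = rng.normal(-2.0, 2.0, 80)   # amplitude to houses

def trimmed_diff(a, b, prop=0.2):
    """Difference of 20% trimmed means, robust to outlier trials."""
    return stats.trim_mean(a, prop) - stats.trim_mean(b, prop)

# Percentile bootstrap: resample trials with replacement, recompute the
# trimmed-mean difference, and read the 95% CI off the bootstrap distribution.
n_boot = 2000
boot = np.array([
    trimmed_diff(rng.choice(face_trials, face_trials.size, replace=True),
                 rng.choice(house_trials, house_trials.size, replace=True))
    for _ in range(n_boot)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0.0 <= hi)   # CI excluding zero -> reliable difference
```

Running this test independently at every electrode and time point (with appropriate correction for multiple comparisons) is one way such "ERP differences evaluated systematically" can be implemented.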
Parametric study of EEG sensitivity to phase noise during face processing
Background:
The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model.
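A phase interpolation manipulation of this kind can be sketched as follows: keep each stimulus's amplitude spectrum fixed and blend its phase spectrum with a random phase field. This is a simplified illustration with a placeholder image; published implementations (e.g. weighted mean phase methods) handle the circularity of phase angles more carefully than the linear blend used here.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_noise_stimulus(image, noise_level, rng):
    """Blend the phase spectrum of `image` with random phase while keeping
    its amplitude spectrum. noise_level=0 returns the original image;
    noise_level=1 yields a fully phase-scrambled texture."""
    f = np.fft.fft2(image)
    amp = np.abs(f)
    phase = np.angle(f)
    random_phase = rng.uniform(-np.pi, np.pi, image.shape)
    mixed = (1.0 - noise_level) * phase + noise_level * random_phase
    # Recombine and keep the real part (simplified; not exactly
    # amplitude-preserving after taking the real part).
    return np.fft.ifft2(amp * np.exp(1j * mixed)).real

face = rng.normal(size=(64, 64))            # placeholder "face" image
levels = np.arange(0.0, 1.01, 0.1)          # 0% to 100% in 10% steps
stimuli = [phase_noise_stimulus(face, c, rng) for c in levels]
```

The 11 resulting images span the continuum from intact face to pure texture while sharing one amplitude spectrum, which is what lets ERP effects be attributed to phase (structure) rather than spectral energy.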
Results:
Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity to phase noise function observed in response to faces.
Conclusion:
Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
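The single-trial regression approach referred to above can be sketched for one electrode and time point: regress each trial's ERP amplitude on phase coherence, alongside a nuisance regressor, via ordinary least squares. The trial counts, effect sizes, and drift regressor below are simulated assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical single-trial design: 60 trials at each of 11 noise levels.
n_trials = 660
coherence = np.tile(np.arange(0.0, 1.01, 0.1), 60)   # phase coherence per trial
trial_order = np.arange(n_trials) / n_trials         # nuisance: drift/fatigue

# Simulated amplitudes: more negative with higher coherence, plus drift + noise.
amplitude = -3.0 * coherence + 0.5 * trial_order + rng.normal(0.0, 1.0, n_trials)

# Design matrix (intercept, coherence, drift) and OLS fit.
X = np.column_stack([np.ones(n_trials), coherence, trial_order])
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
# beta[1] estimates how single-trial amplitude scales with phase coherence.
```

Repeating this fit at every time point and testing beta[1] against zero traces out when, in the post-stimulus epoch, the EEG first becomes sensitive to phase information.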
Atypical disengagement from faces and its modulation by the control of eye fixation in children with Autism Spectrum Disorder
By using the gap overlap task, we investigated disengagement from faces and objects in children (9–17 years old) with and without autism spectrum disorder (ASD) and its neurophysiological correlates. In typically developing (TD) children, faces elicited a larger gap effect, an index of attentional engagement, and larger saccade-related event-related potentials (ERPs), compared to objects. In children with ASD, by contrast, neither the gap effect nor ERPs differed between faces and objects. Follow-up experiments demonstrated that instructed fixation on the eyes induced a larger gap effect for faces in children with ASD, whereas instructed fixation on the mouth disrupted the larger gap effect in TD children. These results suggest a critical role of eye fixation in attentional engagement to faces in both groups.
It's all about timing: an electrophysiological examination of feedback-based learning with immediate and delayed feedback
Feedback regarding an individual's action can occur immediately or with a temporal delay. Processing of feedback that varies in its delivery time is proposed to engage different brain mechanisms. fMRI data implicate the striatum in the processing of immediate feedback, and the medial temporal lobe (MTL) in the processing of delayed feedback. The present study offers an electrophysiological examination of feedback processing in the context of timing, by studying the effects of feedback timing on the feedback-related negativity (FRN), a product of the midbrain dopamine system, and elucidating whether the N170 ERP component could capture MTL activation associated with the processing of delayed feedback. Participants completed a word-object paired-associate learning task; they received feedback 500 ms after a button press (immediate feedback condition) during the learning of two sets of 14 items, and at a delay of 6500 ms (delayed feedback condition) during the learning of the other two sets. The results indicated that while learning outcomes did not differ under the two timing conditions, event-related potentials (ERPs) pointed to differential activation of the examined ERP components. FRN amplitude was larger in the immediate than in the delayed feedback condition, and was sensitive to valence and learning only under the immediate feedback condition. Additionally, N170 amplitude was larger in the delayed than in the immediate feedback condition. Taken together, the findings of the present study support the contention that the processing of delayed feedback involves a shift away from midbrain dopamine activation to the recruitment of the MTL.