
    Explicit attention interferes with selective emotion processing in human extrastriate cortex

    BACKGROUND: Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. One reflection of this emotional capture of attention is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (~150–300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. RESULTS: Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. CONCLUSION: The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects rather than treating automaticity as an all-or-none phenomenon.
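
    A minimal sketch of how the EPN modulation described above might be quantified: mean ERP amplitude over posterior sensors in a 150–300 ms window, compared between emotional and neutral pictures. The array shapes, sampling rate, electrode indices, and condition codes below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical epoched ERP data: (n_trials, n_channels, n_samples), baseline-
# corrected, sampled at 500 Hz with stimulus onset at sample 0 (assumed setup).
# 'labels' codes picture content: 0 = neutral, 1 = pleasant, 2 = unpleasant.
sfreq = 500.0
n_trials, n_channels, n_samples = 300, 64, 400
rng = np.random.default_rng(0)
epochs = rng.normal(size=(n_trials, n_channels, n_samples)) * 1e-6  # volts
labels = rng.integers(0, 3, size=n_trials)

# Assumed posterior electrode indices and the 150-300 ms EPN window.
posterior_idx = [56, 57, 58, 59, 60, 61, 62, 63]
win = slice(int(0.150 * sfreq), int(0.300 * sfreq))

# Mean amplitude over posterior sensors and the EPN window, per trial.
epn = epochs[:, posterior_idx, win].mean(axis=(1, 2))

# EPN effect: emotional (pleasant + unpleasant) minus neutral mean amplitude.
emotional = epn[labels != 0].mean()
neutral = epn[labels == 0].mean()
print(f"EPN modulation (emotional - neutral): {(emotional - neutral) * 1e6:.2f} uV")
```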

    Illusory Stimuli Can Be Used to Identify Retinal Blind Spots

    Background. Identification of visual field loss in people with retinal disease is not straightforward, as people with eye disease are frequently unaware of substantial deficits in their visual field as a consequence of perceptual completion ("filling-in") of affected areas. Methodology. We attempted to induce a compelling visual illusion known as the induced twinkle after-effect (TwAE) in eight patients with retinal scotomas. Half of these patients experience filling-in of their scotomas such that they are unaware of the presence of their scotoma, and conventional campimetric techniques cannot be used to identify their vision loss. The region of the TwAE was compared to microperimetry maps of the retinal lesion. Principal Findings. Six of our eight participants experienced the TwAE. This effect occurred in three of the four people who filled in their scotoma. The boundary of the TwAE showed good agreement with the boundary of the lesion, as determined by microperimetry. Conclusion. For the first time, we have determined vision loss by asking patients to report the presence of an illusory percept in blind areas, rather than the absence of a real stimulus. This illusory technique is quick, accurate, and not subject to the effects of filling-in.
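
    For illustration, the agreement between the TwAE region and the microperimetry-defined lesion could be expressed as the spatial overlap of two binary visual-field maps. The Dice coefficient below is an assumed example metric with toy maps; it is not necessarily the comparison used in the paper.

```python
import numpy as np

def dice_overlap(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Dice coefficient between two binary visual-field maps on the same grid."""
    a, b = map_a.astype(bool), map_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 2D visual-field grids: True marks the affected region (hypothetical data).
twae_region = np.zeros((40, 40), dtype=bool)
twae_region[10:25, 12:28] = True          # region where the illusory twinkle is absent/reported
lesion_map = np.zeros((40, 40), dtype=bool)
lesion_map[11:26, 13:29] = True           # lesion mapped by microperimetry

print(f"Dice overlap between TwAE region and lesion: {dice_overlap(twae_region, lesion_map):.2f}")
```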

    Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain

    Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than in lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than in lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than in lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.
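
    A hedged sketch of the kind of spectral-power decoding described above: band-limited power features from hypothetical ECoG trials feed a cross-validated linear classifier. The band edges (50–150 Hz), array shapes, and classifier choice are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical ECoG trials: (n_trials, n_contacts, n_samples) at 1000 Hz,
# with 'y' coding the stimulus category (0 = geometric pattern, 1 = face).
sfreq = 1000
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(120, 100, 500))
y = rng.integers(0, 2, size=120)

# Feature: mean log power in an assumed 50-150 Hz (high-gamma) band per contact.
freqs, psd = welch(X_raw, fs=sfreq, nperseg=256, axis=-1)
band = (freqs >= 50) & (freqs <= 150)
X = np.log(psd[:, :, band].mean(axis=-1))   # (n_trials, n_contacts)

# Cross-validated linear decoder on the band-power features.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Face vs. pattern decoding accuracy: {scores.mean():.2f}")
```

    A searchlight variant would repeat the same cross-validated fit on small groups of neighboring contacts and map accuracy across the grid, which is one way to obtain the spatial profile the abstract describes.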

    Perceptual Rivalry: Reflexes Reveal the Gradual Nature of Visual Awareness

    Rivalry is a common tool to probe visual awareness: a constant physical stimulus evokes multiple, distinct perceptual interpretations (“percepts”) that alternate over time. Percepts are typically described as mutually exclusive, suggesting that a discrete (all-or-none) process underlies changes in visual awareness. Here we follow two strategies to address whether rivalry is an all-or-none process: first, we introduce two reflexes as objective measures of rivalry, pupil dilation and optokinetic nystagmus (OKN); second, we use a continuous input device (analog joystick) to allow observers a gradual subjective report. We find that the “reflexes” reflect the percept rather than the physical stimulus. Both reflexes show a gradual dependence on the time relative to perceptual transitions. Similarly, observers' joystick deflections, which are highly correlated with the reflex measures, indicate gradual transitions. Physically simulating wave-like transitions between percepts suggests piecemeal rivalry (i.e., different regions of space belonging to distinct percepts) as one possible explanation for the gradual transitions. Furthermore, the reflexes show that dominance durations depend on whether or not the percept is actively reported. In addition, reflexes respond to transitions with shorter latencies than the subjective report and show an abundance of short dominance durations. This failure to report fast changes in dominance may result from limited access of introspection to rivalry dynamics. In sum, reflexes reveal that rivalry is a gradual process, that its dynamics are modulated by the required action (response mode), and that rapid transitions in perceptual dominance can slip away from awareness.
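
    One way to illustrate the gradual-transition analysis is to average a reflex trace (e.g., pupil size) time-locked to reported perceptual transitions and compare it with the continuous joystick report. Everything below (signal names, sampling rate, window, transition times) is an assumed toy setup, not the study's code.

```python
import numpy as np

# Toy continuous recordings at 60 Hz: a reflex trace (e.g., pupil size) and the
# observer's analog joystick deflection, plus reported transition times.
sfreq = 60
t = np.arange(0, 600 * sfreq) / sfreq              # 10 minutes of samples
rng = np.random.default_rng(2)
pupil = rng.normal(size=t.size)
joystick = rng.normal(size=t.size)
transition_times = np.arange(5.0, 595.0, 7.5)      # reported switches (seconds)

def transition_locked_average(signal, times, sfreq, window=(-2.0, 2.0)):
    """Average a signal in a fixed window around each transition time."""
    pre, post = int(window[0] * sfreq), int(window[1] * sfreq)
    segments = []
    for t0 in times:
        i = int(round(t0 * sfreq))
        if i + pre >= 0 and i + post <= signal.size:
            segments.append(signal[i + pre:i + post])
    return np.mean(segments, axis=0)

pupil_avg = transition_locked_average(pupil, transition_times, sfreq)
joy_avg = transition_locked_average(joystick, transition_times, sfreq)

# A gradual (rather than step-like) rise in both averages around time zero, and a
# high correlation between them, would mirror the pattern reported above.
r = np.corrcoef(pupil_avg, joy_avg)[0, 1]
print(f"Reflex/report correlation around transitions: {r:.2f}")
```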

    The role of the amygdala in face perception and evaluation

    Faces are one of the most significant social stimuli, and the processes underlying face perception are at the intersection of cognition, affect, and motivation. Vision scientists have had tremendous success in mapping the regions for perceptual analysis of faces in posterior cortex. Based on evidence from (a) single unit recording studies in monkeys and humans; (b) human functional localizer studies; and (c) meta-analyses of neuroimaging studies, I argue that faces automatically evoke responses not only in these regions but also in the amygdala. I also argue that (a) a key property of faces represented in the amygdala is their typicality; and (b) one of the functions of the amygdala is to bias attention to atypical faces, which are associated with higher uncertainty. This framework is consistent with a number of other amygdala findings not involving faces, suggesting a general account for the role of the amygdala in perception.

    Early Category-Specific Cortical Activation Revealed by Visual Stimulus Inversion

    Visual categorization may already start within the first 100 ms after stimulus onset, in contrast with the long-held view that during this early stage all complex stimuli are processed equally and that category-specific cortical activation occurs only at later stages. The neural basis of this proposed early stage of high-level analysis is, however, poorly understood. To address this question, we used magnetoencephalography and anatomically-constrained distributed source modeling to monitor brain activity with millisecond resolution while subjects performed an orientation task on upright and upside-down images of three different stimulus categories: faces, houses and bodies. Significant inversion effects were found for all three stimulus categories between 70 and 100 ms after picture onset with a highly category-specific cortical distribution. Differential responses between upright and inverted faces were found in well-established face-selective areas of the inferior occipital cortex and right fusiform gyrus. In addition, early category-specific inversion effects were found well beyond visual areas. Our results provide the first direct evidence that category-specific processing in high-level category-sensitive cortical areas already takes place within the first 100 ms of visual processing, significantly earlier than previously thought, and suggest the existence of fast category-specific neocortical routes in the human brain.
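
    A minimal sketch of the inversion-effect contrast described above: evoked responses to upright and inverted pictures are compared sensor by sensor in a 70–100 ms window. The data shapes, sampling rate, and the simple sensor-level t-test are illustrative assumptions, not the anatomically-constrained source analysis used in the study.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical MEG epochs: (n_trials, n_sensors, n_samples) at 600 Hz, stimulus
# onset at sample 0; 'upright' flags picture orientation (assumed toy data).
sfreq = 600
rng = np.random.default_rng(3)
epochs = rng.normal(size=(200, 306, 360)) * 1e-13
upright = rng.integers(0, 2, size=200).astype(bool)

# Mean amplitude per trial and sensor in the early 70-100 ms window.
win = slice(int(0.070 * sfreq), int(0.100 * sfreq))
early = epochs[:, :, win].mean(axis=-1)             # (n_trials, n_sensors)

# Sensor-wise upright vs. inverted contrast (uncorrected, for illustration only;
# the study relied on distributed source modeling, not this sensor-level test).
t_vals, p_vals = ttest_ind(early[upright], early[~upright], axis=0)
print(f"Sensors with p < 0.01 (uncorrected): {(p_vals < 0.01).sum()}")
```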