
    Redundancy effects in the processing of emotional faces

    How does the visual system represent the ensemble statistics of visual objects? This question has received intense interest in vision research, yet most studies have focused on extracting the mean of an ensemble rather than its dispersion. This study focuses on another aspect of ensemble statistics: the redundancy of the sample. In two experiments, participants were faster at judging the facial expression and gender of multiple faces than of a single face. The redundancy gain was equivalent for multiple identical faces and for multiple faces of different identities. To test whether the redundancy gain was due to increased strength of the perceptual representation, we measured the magnitude of facial expression aftereffects. The aftereffects were equivalent when induced by a single face and by four identical faces, ruling out increased perceptual strength as an explanation for the redundancy gain. We conclude that redundant faces facilitate perception by enhancing the robustness of the representation of each face.

    EMPATH: A Neural Network that Categorizes Facial Expressions

    There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
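The six-way categorization the abstract describes can be illustrated with a minimal feed-forward sketch. Everything here is an illustrative assumption (feature dimensionality, random weights, label order); it is not the EMPATH architecture itself, only the general pattern of mapping a face-feature vector to a probability distribution over six basic emotions.

```python
import numpy as np

# Six basic emotion categories, as in the abstract; the ordering is arbitrary.
EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

rng = np.random.default_rng(1)
n_features = 64  # assumed length of the face-feature vector (hypothetical)
W = rng.normal(0.0, 0.1, (len(EMOTIONS), n_features))  # random demo weights
b = np.zeros(len(EMOTIONS))

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def categorize(face_features):
    """Map a feature vector to a probability over the six emotion labels."""
    p = softmax(W @ face_features + b)
    return dict(zip(EMOTIONS, p))

# Classify one simulated face-feature vector
probs = categorize(rng.normal(0.0, 1.0, n_features))
print(max(probs, key=probs.get))
```

A trained model would learn `W` and `b` from labeled faces; the sketch only shows the forward pass that turns features into graded category evidence, which is the property both theories in the abstract are debating.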

    After-effects and the reach of perceptual content

    In this paper, I discuss the use of after-effects as a criterion for showing that we can perceive high-level properties. According to this criterion, if a high-level property is susceptible to after-effects, this suggests that the property can be perceived rather than cognized. Defenders of the criterion claim that, since after-effects are also present for low-level, uncontroversially perceptual properties, we can safely infer that high-level after-effects are perceptual as well. Critics of the criterion, on the other hand, assimilate high-level after-effects to superficially similar effects in cognition and argue that they are a cognitive phenomenon rather than a perceptual one, and that as a result they are not a reliable guide for exploring the contents of perception. I argue against both of these views and show that high-level after-effects can be identified neither with low-level after-effects nor with cognitive biases. I suggest an intermediate position: high-level after-effects are not cognitive, but they are nonetheless not a good criterion for exploring the contents of perception.

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions before dynamically converging onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richest in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, leaving only the detailed information important for perceptual decisions over the P300.
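The classification image technique the abstract relies on can be sketched as a reverse-correlation analysis: correlate trial-by-trial stimulus noise with an observer's responses to estimate which stimulus regions drive categorization. All data below are simulated, and the "diagnostic region" is an assumption for illustration, not a feature reported in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_pixels = 1000, 256

# Random noise field shown on each trial (one row per trial)
noise = rng.normal(0.0, 1.0, (n_trials, n_pixels))

# Hypothetical diagnostic region: the simulated observer only "sees" these pixels
template = np.zeros(n_pixels)
template[100:110] = 1.0

# Simulated observer responds "yes" when the noise aligns with the template
responses = noise @ template > 0

# Classification image: mean noise on "yes" trials minus mean noise on "no" trials;
# pixels the observer used should stand out from the near-zero background
ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print(ci[100:110].mean(), ci[:100].mean())
```

The real study applies this logic to behavioral and EEG data over time, but the core computation is the same conditional averaging of stimulus noise by response.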

    Is it a face of a woman or a man? Visual mismatch negativity is sensitive to gender category.

    The present study investigated whether gender information in human faces is represented by the predictive mechanism indexed by the visual mismatch negativity (vMMN) event-related brain potential (ERP). While participants performed a continuous size-change-detection task, random sequences of cropped faces were presented in the background in an oddball setting: either various female faces were presented infrequently among various male faces, or vice versa. In Experiment 1 the inter-stimulus interval (ISI) was 400 ms, while in Experiment 2 the ISI was 2250 ms. The ISI difference had only a small effect on the P1 component; however, the subsequent negativity (N1/N170) was larger and more widely distributed at the longer ISI, reflecting different aspects of stimulus processing. As a deviant-minus-standard ERP difference, a parieto-occipital negativity (vMMN) emerged in the 200–500 ms latency range (~350 ms peak latency in both experiments). We argue that the regularity of gender in the photographs is automatically registered, and that a violation of the gender category is reflected by the vMMN. In conclusion, the results can be interpreted as evidence for the automatic activity of a predictive brain mechanism in the case of an ecologically valid category.
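The deviant-minus-standard ERP difference that defines the vMMN can be sketched as follows. Trial counts, sampling rate, and the data itself are illustrative assumptions, not the study's parameters; only the subtraction and the 200–500 ms analysis window follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500              # sampling rate in Hz (assumed for illustration)
n_samples = fs        # 1 s epochs, one value per sample

# Simulated single-trial epochs for one electrode: deviants are the rare
# oddball gender, standards the frequent one (counts are hypothetical)
standard = rng.normal(0.0, 1.0, (400, n_samples))
deviant = rng.normal(0.0, 1.0, (80, n_samples))

# Average across trials to get one ERP per condition, then subtract:
# deviant minus standard gives the difference wave in which vMMN appears
erp_std = standard.mean(axis=0)
erp_dev = deviant.mean(axis=0)
vmmn = erp_dev - erp_std

# Inspect the 200-500 ms window where the abstract reports the vMMN
win = slice(int(0.2 * fs), int(0.5 * fs))
print(vmmn[win].mean())
```

With real EEG the epochs would be baseline-corrected and artifact-rejected first, and the difference wave tested against zero in the window of interest; the sketch shows only the defining subtraction.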