
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in, and retrieved from, a pre-attentional store during this task.
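
    The spoke-shift manipulation is easy to make concrete. Below is a minimal sketch, assuming rectangle positions are expressed in degrees of visual angle relative to central fixation at the origin; the function and coordinates are illustrative, not the study's actual code:

    ```python
    import math

    def shift_along_spoke(x_deg, y_deg, shift_deg):
        """Move a point radially by shift_deg along its spoke from fixation at (0, 0)."""
        eccentricity = math.hypot(x_deg, y_deg)
        if eccentricity == 0:
            return x_deg, y_deg  # a point at fixation has no defined spoke
        scale = (eccentricity + shift_deg) / eccentricity
        return x_deg * scale, y_deg * scale

    # Example: a rectangle centred 4 degrees right of fixation, shifted outward by 1 degree.
    print(shift_along_spoke(4.0, 0.0, +1.0))  # -> (5.0, 0.0)
    ```

    Each rectangle keeps its angular position on the display but moves 1 degree closer to or farther from fixation, disrupting the global configuration while preserving individual object identities.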

    Visual scanning patterns and executive function in relation to facial emotion recognition in aging

    OBJECTIVE: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. METHODS: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. RESULTS: OA were less accurate than YA at identifying fear (p < .05, r = .44) and more accurate at identifying disgust (p < .05, r = .39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p values < .05, r values ≥ .38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. CONCLUSION: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition.
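
    One of the scanning metrics reported above, the share of looking time devoted to the top half of the face, can be sketched as follows; the (x, y, duration) fixation format and the midline boundary are assumptions for illustration, not the authors' pipeline:

    ```python
    def top_half_proportion(fixations, midline_y):
        """Fraction of total fixation duration above the face midline (image y grows downward)."""
        total = sum(duration for _, _, duration in fixations)
        top = sum(duration for _, y, duration in fixations if y < midline_y)
        return top / total if total > 0 else 0.0

    fixations = [(120, 80, 250), (130, 200, 400), (115, 90, 300)]  # x, y, duration in ms
    print(top_half_proportion(fixations, midline_y=150))  # -> 0.578...
    ```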

    Discovering Gender Differences in Facial Emotion Recognition via Implicit Behavioral Cues

    We examine the utility of implicit behavioral cues in the form of EEG brain signals and eye movements for gender recognition (GR) and emotion recognition (ER). Specifically, the examined cues are acquired via low-cost, off-the-shelf sensors. We asked 28 viewers (14 female) to recognize emotions from unoccluded (no mask) as well as partially occluded (eye and mouth masked) emotive faces. The experimental results reveal that (a) reliable GR and ER are achievable with EEG and eye features, (b) differential cognitive processing, especially for negative emotions, is observed for males and females, and (c) some of these cognitive differences manifest under partial face occlusion, as typified by the eye-mask and mouth-mask conditions. To be published in the Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction.
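
    The classification setup implied here, fusing per-trial EEG and eye features before training a standard classifier, can be sketched as below. The random features, array shapes, and choice of an SVM are placeholders for illustration, not the paper's actual pipeline:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_trials = 200
    eeg_feats = rng.normal(size=(n_trials, 32))  # e.g. per-channel band-power features
    eye_feats = rng.normal(size=(n_trials, 8))   # e.g. fixation/saccade statistics
    labels = rng.integers(0, 2, size=n_trials)   # gender (or binarized emotion) labels

    X = np.hstack([eeg_feats, eye_feats])        # simple early (feature-level) fusion
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, labels, cv=5).mean())
    ```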

    Foveal processing of emotion-informative facial features

    Certain facial features provide useful information for the recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised, and disgusted expressions improved emotion recognition compared to foveating an eye, a cheek, or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combinations of emotions used. There was no consistent evidence that reflexive first saccades targeted emotion-relevant features; instead, they targeted the feature closest to initial fixation. In a third experiment, angry, fearful, surprised, and disgusted expressions were presented for 5 seconds. The duration of task-related fixations in the eyes, brow, nose, and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth correlated positively with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes to emotion recognition, that such features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
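
    The fixation-accuracy relationships reported here are ordinary per-observer correlations. A sketch with fabricated placeholder values, purely to show the computation:

    ```python
    from scipy.stats import pearsonr

    # Per-observer totals (placeholder values, not data from the study).
    mouth_fixation_ms = [820, 640, 910, 430, 770, 550, 690, 880]
    disgust_accuracy = [0.81, 0.70, 0.88, 0.55, 0.79, 0.62, 0.74, 0.85]

    r, p = pearsonr(mouth_fixation_ms, disgust_accuracy)
    print(f"r = {r:.2f}, p = {p:.3f}")
    ```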

    Visual strategies underpinning social cognition in traumatic brain injury

    Impairments in social cognition after traumatic brain injury (TBI) are well documented but poorly understood (McDonald, 2013). Deficits in emotion perception, particularly facial affect recognition, are frequently reported in the literature (Babbage et al., 2011; Knox & Douglas, 2009), as are mentalizing impairments and difficulty in understanding sincere and sarcastic exchanges (Channon, Pellijeff & Rule, 2005). To fully understand social impairments, both low-level and high-level processes must be explored. Few studies have focused on low-level perceptual processes in facial affect recognition after TBI, and those that do typically use static social stimuli, which lack ecological validity (Alves, 2013). This thesis employed eye-tracking technology to explore the visual strategies underpinning the processing of contemporary static and dynamic social cognition tasks in a group of 18 TBI participants and 18 age-, gender-, and education-matched controls. The group affected by TBI scored significantly lower on the Movie for the Assessment of Social Cognition (MASC; Dziobek et al., 2006), the Amsterdam Dynamic Facial Expression Set (ADFES; van der Schalk, Hawk, Fischer & Doosje, 2009), and The Awareness of Social Inference Test (McDonald et al., 2003). These findings suggest that, across a range of reliable assessments, individuals with TBI displayed significant social cognition deficits, including in emotion perception and theory of mind, presenting strong evidence that social cognition is altered post-TBI. Impairments were not related to low-level visual processing as measured through eye-tracking metrics. This insight suggests that social cognition changes post-TBI are likely associated with impairments in higher-level cognitive functioning. Interestingly, the group with TBI did display some aberrant fixation patterns in response to one static and one dynamic task, but gaze patterns were similar between the groups on the remaining tasks. These non-uniform results warrant further exploration of low-level alterations post-TBI. Findings are discussed in reference to academic and clinical implications.

    Gaze-cueing of attention: Visual attention, social cognition and individual differences

    During social interactions, people's eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience.

    The Impact on Emotion Classification Performance and Gaze Behavior of Foveal versus Extrafoveal Processing of Facial Features

    At normal interpersonal distances, not all features of a face can fall within one's fovea simultaneously. Given that certain facial features are differentially informative of different emotions, does the ability to identify facially expressed emotions vary according to the feature fixated, and do saccades preferentially seek diagnostic features? Previous findings are equivocal. We presented faces for a brief time, insufficient for a saccade, at a spatial position that guaranteed that a given feature (an eye, a cheek, the central brow, or the mouth) fell at the fovea. Across two experiments, observers were more accurate and faster at discriminating angry expressions when the high spatial-frequency information of the brow was projected to their fovea than when one or other cheek or eye was. Performance in classifying fear and happiness (Experiment 1) was not influenced by whether the most informative features (eyes and mouth, respectively) were projected foveally or extrafoveally. Observers more accurately distinguished between fearful and surprised expressions (Experiment 2) when the mouth was projected to the fovea. Reflexive first saccades tended towards the left and center of the face rather than preferentially targeting emotion-distinguishing features. These results reflect the integration of task-relevant information across the face, constrained by the differences between foveal and extrafoveal processing (Peterson & Eckstein, 2012).
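
    Guaranteeing that a chosen feature falls at the fovea requires converting visual angle to screen coordinates. A minimal sketch of that standard psychophysics conversion, with assumed monitor parameters:

    ```python
    import math

    def deg_to_px(degrees, viewing_distance_cm=57.0, px_per_cm=38.0):
        """Pixels subtended by a given visual angle at a given viewing distance."""
        size_cm = 2 * viewing_distance_cm * math.tan(math.radians(degrees) / 2)
        return size_cm * px_per_cm

    # Pixel offset corresponding to a feature 2 degrees from the face centre.
    print(round(deg_to_px(2.0)))  # ~76 px at 57 cm with 38 px/cm
    ```

    With the face displaced so that, say, the mouth lands exactly on the fixation point, a presentation briefer than saccade latency ensures that feature is processed foveally.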