
    Human spontaneous gaze patterns in viewing of faces of different species

    Human studies have reported clear differences in the perceptual and neural processing of faces of different species, implying a contribution of visual experience to face perception. Can these differences be manifested in our eye-scanning patterns while extracting salient facial information? Here we systematically compared non-pet owners’ gaze patterns while they explored human, monkey, dog and cat faces in a passive viewing task. Our analysis revealed that faces of different species induced similar patterns of fixation distribution between the left and right hemi-face and among key local facial features, with the eyes attracting the highest proportion of fixations and viewing time, followed by the nose and then the mouth. Only the proportion of fixations directed at the mouth region was species-dependent and could be differentiated at the earliest stage of face viewing. It seems that our spontaneous eye-scanning patterns associated with face exploration were mainly constrained by general facial configuration; the species affiliation of the inspected faces had limited impact on gaze allocation, at least under free-viewing conditions.
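    The fixation-distribution measures described above can be illustrated with a short sketch. The following is a minimal, hypothetical example of computing the proportion of fixations and of viewing time falling in eye, nose and mouth regions of interest; the ROI boxes, fixation records and data format are assumptions for illustration, not taken from the study.

```python
# Hypothetical sketch: proportion of fixations and viewing time per facial ROI.
# The ROI boxes and fixation records are illustrative, not the study's data.
regions = {                      # (x_min, y_min, x_max, y_max) in image pixels
    "eyes":  (60, 80, 240, 140),
    "nose":  (120, 140, 180, 210),
    "mouth": (100, 210, 200, 260),
}

# Each fixation: (x, y, duration_ms)
fixations = [(150, 100, 310), (140, 180, 250), (160, 230, 180), (150, 110, 400)]

def roi_of(x, y):
    """Return the name of the ROI containing the fixation, or 'other'."""
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

counts = {name: 0 for name in list(regions) + ["other"]}
durations = dict(counts)
for x, y, dur in fixations:
    r = roi_of(x, y)
    counts[r] += 1
    durations[r] += dur

n_fix = len(fixations)
total_ms = sum(d for _, _, d in fixations)
for name in counts:
    print(f"{name}: {counts[name] / n_fix:.2f} of fixations, "
          f"{durations[name] / total_ms:.2f} of viewing time")
```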

    Consistent left gaze bias in processing different facial cues

    While viewing faces, humans often demonstrate a natural gaze bias towards the left visual field; that is, the right side of the viewee’s face is often inspected first and for longer periods. Previous studies have suggested that this gaze asymmetry is part of the gaze pattern associated with face exploration, but its relation to the perceptual processing of facial cues is unclear. In this study we recorded participants’ saccadic eye movements while they explored face images under different task instructions (free viewing, judging familiarity and judging facial expression). We observed a consistent left gaze bias in face viewing irrespective of task demands. The probability of the first fixation and the proportion of overall fixations directed at the left hemiface were indistinguishable across task instructions and across facial expressions. It seems that the left gaze bias is an automatic reflection of hemispheric lateralisation in face processing and is not necessarily correlated with the perceptual processing of a specific type of facial information.
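    As a rough illustration of the two bias measures reported here, the sketch below computes the probability that the first fixation lands on the left hemiface and the overall proportion of fixations on that side; the per-trial fixation coordinates and the vertical midline are hypothetical values, not the study's data.

```python
# Hypothetical sketch: left gaze bias metrics from per-trial fixation x-coordinates.
# "Left hemiface" is taken as x < midline in the observer's visual field.
midline = 160                                     # assumed face midline (pixels)
trials = [                                        # fixation x-coordinates per trial
    [120, 150, 200, 130],
    [140, 170, 110],
    [180, 150, 140, 120],
]

first_left = sum(t[0] < midline for t in trials) / len(trials)
all_fix = [x for t in trials for x in t]
prop_left = sum(x < midline for x in all_fix) / len(all_fix)

print(f"P(first fixation on left hemiface) = {first_left:.2f}")
print(f"Proportion of all fixations on left hemiface = {prop_left:.2f}")
```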

    Learning Generative Models with Visual Attention

    Attention has long been proposed by psychologists as important for effectively dealing with the enormous amount of sensory stimulation available in the neocortex. Inspired by visual attention models in computational neuroscience and by the need for object-centric data in generative models, we describe a framework for generative learning that uses attentional mechanisms. Attentional mechanisms can propagate signals from a region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model in which the 2D similarity transformation is part of the top-down process. A ConvNet is employed to provide good initializations during posterior inference, which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to the face regions of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images in which the face locations are not known. Comment: In the proceedings of Neural Information Processing Systems, 201
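    The attention step described here, mapping a region of interest onto an aligned canonical representation via a 2D similarity transformation, can be sketched as follows. This is not the authors' implementation; the window size, nearest-neighbour sampling and parameterization (scale, rotation, translation) are assumptions chosen for illustration.

```python
# Minimal sketch of an attentional "read" under a 2D similarity transform:
# canonical-window coordinates are scaled, rotated and translated into image
# coordinates, so downstream generative modeling only sees an aligned crop.
import numpy as np

def similarity_crop(image, scale, theta, tx, ty, out_size=24):
    """Sample an out_size x out_size canonical patch from `image` under a
    2D similarity transform, using nearest-neighbour interpolation."""
    h, w = image.shape
    coords = np.arange(out_size) - out_size / 2.0       # canonical grid, centred
    u, v = np.meshgrid(coords, coords)
    c, s = np.cos(theta), np.sin(theta)
    x = scale * (c * u - s * v) + tx                     # map to image x
    y = scale * (s * u + c * v) + ty                     # map to image y
    xi = np.clip(np.round(x).astype(int), 0, w - 1)
    yi = np.clip(np.round(y).astype(int), 0, h - 1)
    return image[yi, xi]

# Hypothetical usage: attend to a roughly 60x60 face region centred at (120, 90).
img = np.random.rand(200, 200)
patch = similarity_crop(img, scale=60 / 24, theta=0.1, tx=120, ty=90)
print(patch.shape)   # (24, 24) aligned canonical window
```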

    The interaction between gaze and facial expression in the amygdala and extended amygdala is modulated by anxiety

    Behavioral evidence indicates that angry faces are seen as more threatening, and elicit greater anxiety, when directed at the observer, whereas the influence of gaze on the processing of fearful faces is less consistent. Recent research has also found inconsistent effects of expression and gaze direction on the amygdala response to facial signals of threat. However, such studies have failed to consider the important influence of anxiety on the response to signals of threat, an influence that is well established in behavioral research and recent neuroimaging studies. Here, we investigated how individual differences in anxiety influence the interactive effect of gaze and expression on the response to angry and fearful faces in the human extended amygdala. Participants viewed images of fearful, angry and neutral faces displaying either an averted or a direct gaze. We found that state anxiety predicted an increased response in the dorsal amygdala/substantia innominata (SI) to angry faces gazing at, relative to away from, the observer. By contrast, highly state-anxious individuals showed an increased amygdala response to fearful faces that was less dependent on gaze. In addition, the effect of state anxiety and gaze on emotional intensity ratings mirrored their effect on the amygdala/SI response. These results have implications for understanding the functional role of the amygdala and extended amygdala in processing signals of threat, and are consistent with the proposed role of this region in coding the relevance or significance of a stimulus to the observer.
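    The moderation effect implied here (anxiety modulating the effect of gaze direction on the amygdala/SI response) can be approximated by an ordinary least-squares model with an interaction term. The sketch below uses simulated data and plain least squares; it is not the study's analysis pipeline.

```python
# Hypothetical sketch: does state anxiety moderate the effect of gaze direction
# on the amygdala/SI response to angry faces? Fit y ~ anxiety + gaze + anxiety*gaze.
import numpy as np

rng = np.random.default_rng(0)
n = 40
anxiety = rng.normal(size=n)                  # standardised state-anxiety scores
gaze = rng.integers(0, 2, size=n)             # 0 = averted, 1 = direct
# Simulated response with a positive anxiety-by-gaze interaction built in.
y = 0.2 * anxiety + 0.1 * gaze + 0.5 * anxiety * gaze + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), anxiety, gaze, anxiety * gaze])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "anxiety", "gaze", "anxiety_x_gaze"], beta.round(2))))
```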

    Holistic gaze strategy to categorize facial expression of varying intensities

    Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces, in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affect. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants’ categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction times and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (the eye, nose and mouth regions), however, was not affected by intensity level. Further comparison between individual facial expressions revealed that although proportional gaze allocation to individual facial features was quantitatively modulated by the viewed expression, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features when processing naturalistic facial expressions.
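    The intensity manipulation described above rests on morphing. As a rough, hypothetical illustration, the snippet below blends a neutral and a full-intensity expressive image pixel-wise at several intensity levels; a real morphing pipeline would also warp facial geometry, which this sketch omits, and the images here are random stand-ins.

```python
# Hypothetical sketch: generate expression intensities by blending a neutral and
# a full-intensity expressive face. A real morph would also warp feature geometry.
import numpy as np

neutral = np.random.rand(256, 256)        # stand-ins for aligned grayscale faces
expressive = np.random.rand(256, 256)

intensities = [0.2, 0.4, 0.6, 0.8, 1.0]   # 20% ... 100% expression intensity
morphs = {a: (1 - a) * neutral + a * expressive for a in intensities}

for a, img in morphs.items():
    print(f"intensity {a:.0%}: mean pixel value {img.mean():.3f}")
```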

    Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris)

    Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study we first established the applicability of the Visual Paired Comparison (VPC, or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when these were presented simultaneously with previously exposed, familiar objects. We then adopted this VPC procedure to assess face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating a discrimination capability, when inspecting upright dog faces, human faces and object images, but the pattern of viewing preference depended on image category: they directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces; instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images, regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans, and that they exhibit a non-specific inversion response. In addition, dogs’ discrimination responses to human and dog faces appear to differ with the type of face involved.
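    The VPC measure used here amounts to a novelty-preference score per trial. A minimal, hypothetical sketch of that computation is given below; the looking times are made up, and the chance level of 0.5 (equal looking at novel and familiar) is assumed for the test.

```python
# Hypothetical sketch: novelty preference in a Visual Paired Comparison trial,
# i.e. looking time at the novel image as a proportion of total looking time,
# tested against the chance level of 0.5 across subjects.
from scipy import stats

novel_ms = [4200, 3900, 5100, 3600, 4800]      # made-up per-dog looking times
familiar_ms = [3100, 3500, 2900, 3800, 3000]

scores = [n / (n + f) for n, f in zip(novel_ms, familiar_ms)]
t, p = stats.ttest_1samp(scores, 0.5)
print(f"mean novelty preference = {sum(scores) / len(scores):.2f}, t = {t:.2f}, p = {p:.3f}")
```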