    Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus

    The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion.

    First report of generalized face processing difficulties in Möbius sequence.

    Reverse simulation models of facial expression recognition suggest that we recognize the emotions of others by running implicit motor programmes responsible for the production of that expression. Previous work has tested this theory by examining facial expression recognition in participants with Möbius sequence, a condition characterized by congenital bilateral facial paralysis. However, a mixed pattern of findings has emerged, and it has not yet been tested whether these individuals can imagine facial expressions, a process also hypothesized to be underpinned by proprioceptive feedback from the face. We investigated this issue by examining expression recognition and imagery in six participants with Möbius sequence, and also carried out tests assessing facial identity and object recognition, as well as basic visual processing. While five of the six participants presented with expression recognition impairments, only one was impaired at the imagery of facial expressions. Further, five participants presented with other difficulties in the recognition of facial identity or objects, or in lower-level visual processing. We discuss the implications of our findings for the reverse simulation model, and suggest that facial identity recognition impairments may be more severe in the condition than has previously been noted.

    Adapting effects of emotional expression in anxiety: evidence for an enhanced late positive potential

    An adaptation paradigm was used to investigate the influence of a previously experienced visual context on the interpretation of ambiguous emotional expressions. Affective classification of fear-neutral ambiguous expressions was performed following repeated exposure to either fearful or neutral faces. There was a shift in the behavioural classification of morphs towards ‘fear’ following adaptation to neutral compared to adaptation to fear, with a non-significant trend towards the high anxiety group being more influenced by the context than the low anxiety group. The event-related potential (ERP) data revealed a more pronounced late positive potential (LPP), beginning at ~400 ms post-stimulus onset, in the high but not the low anxiety group following adaptation to neutral compared to fear. In addition, as the size of the behavioural adaptation increased there was a linear increase in the magnitude of the late-LPP. However, context-sensitivity effects are not restricted to trait anxiety, with similar effects observed with state anxiety and depression. These data support the proposal that negative moods are associated with increased sensitivity to visual contextual influences from top-down elaborative modulations, as reflected in an enhanced late positive potential deflection.

    Caucasian Infants Scan Own- and Other-Race Faces Differently

    Young infants are known to prefer own-race faces to other-race faces and recognize own-race faces better than other-race faces. However, it is entirely unclear as to whether infants also attend to different parts of own- and other-race faces differently, which may provide an important clue as to how and why the own-race face recognition advantage emerges so early. The present study used eye tracking methodology to investigate whether 6- to 10-month-old Caucasian infants (N = 37) have differential scanning patterns for dynamically displayed own- and other-race faces. We found that even though infants spent a similar amount of time looking at own- and other-race faces, with increased age, infants increasingly looked longer at the eyes of own-race faces and less at the mouths of own-race faces. These findings suggest experience-based tuning of the infant's face processing system to optimally process own-race faces that are different in physiognomy from other-race faces. In addition, the present results, taken together with recent own- and other-race eye tracking findings with infants and adults, provide strong support for an enculturation hypothesis that East Asians and Westerners may be socialized to scan faces differently due to each culture's conventions regarding mutual gaze during interpersonal communication.

    Passive and Motivated Perception of Emotional Faces: Qualitative and Quantitative Changes in the Face Processing Network

    Emotionally expressive faces are processed by a distributed network of interacting sub-cortical and cortical brain regions. The components of this network have been identified and described in large part by the stimulus properties to which they are sensitive, but as face processing research matures, interest has broadened to also probe dynamic interactions between these regions and top-down influences such as task demand and context. While some research has tested the robustness of affective face processing by restricting available attentional resources, it is not known whether face network processing can be augmented by increased motivation to attend to affective face stimuli. Short videos of people expressing emotions were presented to healthy participants during functional magnetic resonance imaging. Motivation to attend to the videos was manipulated by providing an incentive for improved recall performance. During the motivated condition, there was greater coherence among nodes of the face processing network, more widespread correlation between signal intensity and performance, and selective signal increases in a task-relevant subset of face processing regions, including the posterior superior temporal sulcus and right amygdala. In addition, an unexpected task-related laterality effect was seen in the amygdala. These findings provide strong evidence that motivation augments co-activity among nodes of the face processing network and the impact of neural activity on performance. These within-subject effects highlight the necessity to consider motivation when interpreting neural function in special populations, and to further explore the effect of task demands on face processing in healthy brains.

    Complementary neural representations for faces and words: A computational exploration

    2021 Taxonomic update of phylum Negarnaviricota (Riboviria: Orthornavirae), including the large orders Bunyavirales and Mononegavirales.

    In March 2021, following the annual International Committee on Taxonomy of Viruses (ICTV) ratification vote on newly proposed taxa, the phylum Negarnaviricota was amended and emended. The phylum was expanded by four families (Aliusviridae, Crepuscuviridae, Myriaviridae, and Natareviridae), three subfamilies (Alpharhabdovirinae, Betarhabdovirinae, and Gammarhabdovirinae), 42 genera, and 200 species. Thirty-nine species were renamed and/or moved and seven species were abolished. This article presents the updated taxonomy of Negarnaviricota as now accepted by the ICTV.