424 research outputs found

    The Impact on Emotion Classification Performance and Gaze Behavior of Foveal versus Extrafoveal Processing of Facial Features

    At normal interpersonal distances all features of a face cannot fall within one’s fovea simultaneously. Given that certain facial features are differentially informative of different emotions, does the ability to identify facially expressed emotions vary according to the feature fixated and do saccades preferentially seek diagnostic features? Previous findings are equivocal. We presented faces for a brief time, insufficient for a saccade, at a spatial position that guaranteed that a given feature – an eye, cheek, the central brow, or mouth – fell at the fovea. Across two experiments, observers were more accurate and faster at discriminating angry expressions when the high spatial-frequency information of the brow was projected to their fovea than when one or other cheek or eye was. Performance in classifying fear and happiness (Experiment 1) was not influenced by whether the most informative features (eyes and mouth, respectively) were projected foveally or extrafoveally. Observers more accurately distinguished between fearful and surprised expressions (Experiment 2) when the mouth was projected to the fovea. Reflexive first saccades tended towards the left and center of the face rather than preferentially targeting emotion-distinguishing features. These results reflect the integration of task-relevant information across the face constrained by the differences between foveal and extrafoveal processing (Peterson & Eckstein, 2012)
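
    A minimal sketch of the geometry behind projecting a chosen facial feature to the fovea: the offset between the fixation point and the feature can be expressed in degrees of visual angle and converted to screen pixels. The viewing distance, pixel density, feature offset, and function name below are illustrative assumptions, not parameters taken from the study.

        import math

        def deg_to_px(deg, viewing_distance_cm, px_per_cm):
            """Convert a visual angle in degrees to a screen offset in pixels."""
            size_cm = 2 * viewing_distance_cm * math.tan(math.radians(deg) / 2)
            return size_cm * px_per_cm

        # Illustrative values: 57 cm viewing distance and 38 px per cm. To make,
        # say, the brow fall at the fovea, the face image is shifted so that the
        # brow's location coincides with the fixation point on screen.
        offset_px = deg_to_px(2.0, viewing_distance_cm=57.0, px_per_cm=38.0)
        print(f"a 2 deg offset is roughly {offset_px:.0f} px")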

    Expanding simulation models of emotional understanding: The case for different modalities, body-state simulation prominence and developmental trajectories

    Recent models of emotion recognition suggest that when people perceive an emotional expression, they partially activate the respective emotion in themselves, providing a basis for the recognition of that emotion. Much of the focus of these models and of their evidential basis has been on sensorimotor simulation as a basis for facial expression recognition – the idea, in short, that coming to know what another feels involves simulating in your brain the motor plans and associated sensory representations engaged by the other person’s brain in producing the facial expression that you see. In this review article, we argue that simulation accounts of emotion recognition would benefit from three key extensions. First, that fuller consideration be given to simulation of bodily and vocal expressions, given that the body and voice are also important expressive channels for providing cues to another’s emotional state. Second, that simulation of other aspects of the perceived emotional state, such as changes in the autonomic nervous system and viscera, might have a more prominent role in underpinning emotion recognition than is typically proposed. Sensorimotor simulation models tend to relegate such body-state simulation to a subsidiary role, despite the plausibility of body-state simulation being able to underpin emotion recognition in the absence of typical sensorimotor simulation. Third, that simulation models of emotion recognition be extended to address how embodied processes and emotion recognition abilities develop through the lifespan. It is not currently clear how this system of sensorimotor and body-state simulation develops and in particular how this affects the development of emotion recognition ability. We review recent findings from the emotional body recognition literature and integrate recent evidence regarding the development of mimicry and interoception to significantly expand simulation models of emotion recognition

    Supramodal representations of perceived emotions in the human brain

    Basic emotional states (such as anger, fear, and joy) can be similarly conveyed by the face, the body, and the voice. Are there human brain regions that represent these emotional mental states regardless of the sensory cues from which they are perceived? To address this question, in the present study participants evaluated the intensity of emotions perceived from face movements, body movements, or vocal intonations, while their brain activity was measured with functional magnetic resonance imaging (fMRI). Using multivoxel pattern analysis, we compared the similarity of response patterns across modalities to test for brain regions in which emotion-specific patterns in one modality (e.g., faces) could predict emotion-specific patterns in another modality (e.g., bodies). A whole-brain searchlight analysis revealed modality-independent but emotion category-specific activity patterns in medial prefrontal cortex (MPFC) and left superior temporal sulcus (STS). Multivoxel patterns in these regions contained information about the category of the perceived emotions (anger, disgust, fear, happiness, sadness) across all modality comparisons (face–body, face–voice, body–voice), and independently of the perceived intensity of the emotions. No systematic emotion-related differences were observed in the overall amplitude of activation in MPFC or STS. These results reveal supramodal representations of emotions in high-level brain areas previously implicated in affective processing, mental state attribution, and theory-of-mind. We suggest that MPFC and STS represent perceived emotions at an abstract, modality-independent level, and thus play a key role in the understanding and categorization of others’ emotional mental states
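
    The cross-modal logic of the searchlight analysis can be illustrated with a hedged sketch: a classifier is trained on emotion-labelled response patterns from one modality and tested on patterns from another, so that above-chance accuracy implies a modality-independent emotion code. The scikit-learn pipeline, array shapes, and random data below are assumptions for illustration, not the authors' actual pipeline.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        # Illustrative trial-wise voxel patterns from one searchlight sphere:
        # X_*: (n_trials, n_voxels); y_*: emotion label (0-4) for each trial.
        rng = np.random.default_rng(0)
        n_trials, n_voxels = 50, 100
        X_face = rng.normal(size=(n_trials, n_voxels))
        X_body = rng.normal(size=(n_trials, n_voxels))
        y_face = rng.integers(0, 5, n_trials)
        y_body = rng.integers(0, 5, n_trials)

        # Cross-modal decoding: train on face-evoked patterns, test on
        # body-evoked patterns (and, in a full analysis, all other pairings).
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_face, y_face)
        print(f"face -> body decoding accuracy: {clf.score(X_body, y_body):.2f}")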

    Tuning the developing brain to emotional body expressions

    Reading others’ emotional body expressions is an essential social skill. Adults readily recognize emotions from body movements. However, it is unclear when in development infants become sensitive to bodily expressed emotions. We examined event-related brain potentials (ERPs) in 4- and 8-month-old infants in response to point-light displays (PLDs) of happy and fearful body expressions presented in two orientations (upright and inverted). The ERP results revealed that 8-month-olds but not 4-month-olds respond sensitively to the orientation and the emotion of the dynamic expressions. Specifically, 8-month-olds showed (i) an early (200–400 ms) orientation-sensitive positivity over frontal and central electrodes, and (ii) a late (700–1100 ms) emotion-sensitive positivity over temporal and parietal electrodes in the right hemisphere. These findings suggest that orientation-sensitive and emotion-sensitive brain processes, distinct in timing and topography, develop between 4 and 8 months of age
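
    As an illustration of how such time-window effects are commonly quantified, the sketch below averages epoched EEG amplitude within the two reported windows over hypothetical electrode clusters. The sampling rate, baseline duration, electrode indices, and random data are assumed values, not those of the study.

        import numpy as np

        # Illustrative epochs array: (n_trials, n_channels, n_samples), assumed
        # to be sampled at 500 Hz with a 200 ms pre-stimulus baseline.
        sfreq, baseline_ms = 500, 200
        rng = np.random.default_rng(0)
        epochs = rng.normal(size=(40, 32, 700))

        def mean_amplitude(epochs, tmin_ms, tmax_ms, channels):
            """Mean amplitude in a post-stimulus window over a channel cluster."""
            start = int((baseline_ms + tmin_ms) / 1000 * sfreq)
            stop = int((baseline_ms + tmax_ms) / 1000 * sfreq)
            return epochs[:, channels, start:stop].mean()

        frontal_central = [0, 1, 2, 3]         # placeholder electrode indices
        right_temporo_parietal = [20, 21, 22]  # placeholder electrode indices

        early_effect = mean_amplitude(epochs, 200, 400, frontal_central)
        late_effect = mean_amplitude(epochs, 700, 1100, right_temporo_parietal)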

    Emotional modulation of body-selective visual areas

    Emotionally expressive faces have been shown to modulate activation in visual cortex, including face-selective regions in ventral temporal lobe. Here we tested whether emotionally expressive bodies similarly modulate activation in body-selective regions. We show that dynamic displays of bodies with various emotional expressions, versus neutral bodies, produce significant activation in two distinct body-selective visual areas, the extrastriate body area (EBA) and the fusiform body area (FBA). Multi-voxel pattern analysis showed that the strength of this emotional modulation was related, on a voxel-by-voxel basis, to the degree of body selectivity, while there was no relation with the degree of selectivity for faces. Across subjects, amygdala responses to emotional bodies positively correlated with the modulation of body-selective areas. Together, these results suggest that emotional cues from body movements produce topographically selective influences on category-specific populations of neurons in visual cortex, and these increases may implicate discrete modulatory projections from the amygdala
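
    The voxel-by-voxel analysis can be sketched, under assumptions, as a correlation across the voxels of a body-selective region between two contrast maps: emotional modulation (emotional > neutral bodies) and category selectivity (bodies > control stimuli, with faces > control stimuli as the comparison). The arrays and random values below are illustrative placeholders, not the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        # Illustrative voxel-wise contrast estimates within an ROI such as EBA;
        # in practice these would come from a GLM, not a random generator.
        rng = np.random.default_rng(0)
        n_voxels = 500
        body_selectivity = rng.normal(size=n_voxels)      # bodies > control
        face_selectivity = rng.normal(size=n_voxels)      # faces > control
        emotional_modulation = rng.normal(size=n_voxels)  # emotional > neutral bodies

        # The abstract's claim: modulation tracks body selectivity voxel-by-voxel,
        # but shows no relation with face selectivity.
        r_body, _ = pearsonr(body_selectivity, emotional_modulation)
        r_face, _ = pearsonr(face_selectivity, emotional_modulation)
        print(f"body selectivity vs. modulation: r = {r_body:.2f}")
        print(f"face selectivity vs. modulation: r = {r_face:.2f}")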

    Discrimination of fearful and happy body postures in 8-month-old infants: an event-related potential study

    Responding to others’ emotional body expressions is an essential social skill in humans. Adults readily detect emotions from body postures, but it is unclear whether infants are sensitive to emotional body postures. We examined 8-month-old infants’ brain responses to emotional body postures by measuring event-related potentials (ERPs) to happy and fearful bodies. Our results revealed two emotion-sensitive ERP components: body postures evoked an early N290 at occipital electrodes and a later Nc at fronto-central electrodes that were enhanced in response to fearful (relative to happy) expressions. These findings demonstrate that: (a) 8-month-old infants discriminate between static emotional body postures; and (b) similar to infant emotional face perception, the sensitivity to emotional body postures is reflected in early perceptual (N290) and later attentional (Nc) neural processes. This provides evidence for an early developmental emergence of the neural processes involved in the discrimination of emotional body postures

    Asymmetric interference between sex and emotion in face perception

    Previous research with speeded-response interference tasks modeled on the Garner paradigm has demonstrated that task-irrelevant variations in either emotional expression or facial speech do not interfere with identity judgments, but irrelevant variations in identity do interfere with expression and facial speech judgments. Sex, like identity, is a relatively invariant aspect of faces. Drawing on a recent model of face processing according to which invariant and changeable aspects of faces are represented in separate neurological systems, we predicted asymmetric interference between sex and emotion classification. The results of Experiment 1, in which the Garner paradigm was employed, confirmed this prediction: Emotion classifications were influenced by the sex of the faces, but sex classifications remained relatively unaffected by facial expression. A second experiment, in which the difficulty of the tasks was equated, corroborated these findings, indicating that differences in processing speed cannot account for the asymmetric relationship between facial emotion and sex processing. A third experiment revealed the same pattern of asymmetric interference through the use of a variant of the Simon paradigm. To the extent that Garner interference and Simon interference indicate interactions at perceptual and response-selection stages of processing, respectively, a challenge for face processing models is to show how the same asymmetric pattern of interference could occur at these different stages. The implications of these findings for the functional independence of the different components of face processing are discussed
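
    Garner interference is conventionally quantified as the slowing of correct responses when the task-irrelevant dimension varies freely (the orthogonal, or filtering, block) relative to when it is held constant (the baseline block). The sketch below computes that difference per task; the column names and reaction times are invented for illustration and are not the experimental data.

        import pandas as pd

        # Illustrative trial-level data: one row per correct response.
        trials = pd.DataFrame({
            "task": ["emotion"] * 4 + ["sex"] * 4,
            "block": ["baseline", "baseline", "orthogonal", "orthogonal"] * 2,
            "rt_ms": [620, 640, 700, 690, 580, 575, 585, 590],
        })

        # Garner interference = mean RT (orthogonal) - mean RT (baseline), per task.
        mean_rt = trials.groupby(["task", "block"])["rt_ms"].mean().unstack()
        garner_interference = mean_rt["orthogonal"] - mean_rt["baseline"]
        print(garner_interference)  # asymmetry: sizeable for emotion, near zero for sex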

    The development of visually guided stepping

    Adults use vision during stepping and walking to fine-tune foot placement. However, the developmental profile of visually guided stepping is unclear. We asked (1) whether children use online vision to fine-tune precise steps and (2) whether precision stepping develops as part of broader visuomotor development, alongside other fundamental motor skills like reaching. With 6- (N = 11), 7- (N = 11), and 8-year-olds (N = 11), and adults (N = 15), we manipulated visual input during steps and reaches. Using motion capture, we measured step and reach error, and postural stability. We expected (1) both steps and reaches would be visually guided, (2) with similar developmental profiles, (3) foot placement biases that promote stability, and (4) correlations between postural stability and step error. Children used vision to fine-tune both steps and reaches. At all ages, foot placement was biased (albeit not in the predicted directions). Contrary to our predictions, step error was not correlated with postural stability. By 8 years, children’s step and reach error were adult-like. Despite similar visual control mechanisms, stepping and reaching had different developmental profiles: step error reduced with age whilst reach error was lower and stable with age. We argue that the development of both visually guided and non-visually guided action is limb-specific
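
    As a rough illustration of the precision measure, placement error for a step or a reach can be computed as the Euclidean distance between the recorded landing position and the target centre in the motion-capture coordinate frame. The function name, coordinates, and units below are assumptions, not the study's processing pipeline.

        import numpy as np

        def placement_error(landing_xy, target_xy):
            """Euclidean distance between landing position and target centre,
            in whatever units the motion-capture coordinates use (e.g. mm)."""
            return float(np.linalg.norm(np.asarray(landing_xy) - np.asarray(target_xy)))

        # Illustrative single-trial values (mm):
        step_error = placement_error(landing_xy=(512.0, 1203.5), target_xy=(500.0, 1200.0))
        reach_error = placement_error(landing_xy=(310.2, 450.1), target_xy=(300.0, 455.0))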

    On non-QRT Mappings of the Plane

    We construct 9-parameter and 13-parameter dynamical systems of the plane which map bi-quadratic curves to other bi-quadratic curves and return to the original curve after two iterations. These generalize the QRT maps which map each such curve to itself. The new families of maps include those that were found as reductions of integrable lattices
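
    For orientation, a brief sketch of the standard QRT construction that these maps generalize, assuming a single bi-quadratic curve; restricted to that curve, the horizontal and vertical root switches follow from Vieta's formulas, and the QRT map is their composition:

        B(x,y) \;=\; \sum_{i,j=0}^{2} a_{ij}\, x^{i} y^{j} \;=\; 0,
        \qquad
        B(x,y) \;=\; \alpha(y)\,x^{2} + \beta(y)\,x + \gamma(y)
        \;=\; \tilde{\alpha}(x)\,y^{2} + \tilde{\beta}(x)\,y + \tilde{\gamma}(x),

        \iota_{h}(x,y) \;=\; \left(\frac{\gamma(y)}{\alpha(y)\,x},\; y\right),
        \qquad
        \iota_{v}(x,y) \;=\; \left(x,\; \frac{\tilde{\gamma}(x)}{\tilde{\alpha}(x)\,y}\right),
        \qquad
        \varphi_{\mathrm{QRT}} \;=\; \iota_{v} \circ \iota_{h}.

    Each switch exchanges the two intersections of the curve with a horizontal or vertical line, so \varphi_{\mathrm{QRT}} maps the curve to itself; the maps constructed in the paper instead send one bi-quadratic curve to another and return to it after two iterations.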

    Spontaneous Chiral-Symmetry Breaking in Three-Dimensional QED with a Chern--Simons Term

    In three-dimensional QED with a Chern--Simons term we study the phase structure associated with chiral-symmetry breaking in the framework of the Schwinger--Dyson equation. We give detailed analyses of the analytical and numerical solutions for the Schwinger--Dyson equation of the fermion propagator, where the nonlocal gauge-fixing procedure is adopted to avoid wave-function renormalization for the fermion. In the absence of the Chern--Simons term, there exists a finite critical number of four-component fermion flavors, at which a continuous (infinite-order) chiral phase transition takes place and below which the chiral symmetry is spontaneously broken. In the presence of the Chern--Simons term, we find that the spontaneous chiral-symmetry-breaking transition continues to exist, but the type of phase transition turns into a discontinuous first-order transition. A simple stability argument is given based on the effective potential, whose stationary point gives the solution of the Schwinger--Dyson equation. Comment: 34 pages, RevTeX, with 9 PostScript figures appended (uuencoded)
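
    For readers less familiar with the framework, a hedged sketch of the generic Schwinger--Dyson setup used in such analyses (schematic, up to sign and gauge conventions, and not the paper's exact equations): the full fermion propagator is parameterized by a wave-function renormalization A(p) and a mass function B(p), and the fermion self-energy is written in terms of the dressed photon propagator, which here would carry the parity-odd Chern--Simons piece:

        S(p)^{-1} \;=\; A(p)\,\gamma\cdot p \;+\; B(p),
        \qquad
        \Sigma(p) \;=\; e^{2} \int \frac{d^{3}k}{(2\pi)^{3}}\;
        \gamma^{\mu}\, S(k)\, \Gamma^{\nu}(k,p)\, D_{\mu\nu}(p-k).

    With the nonlocal gauge chosen so that A(p) = 1, chiral symmetry is spontaneously broken whenever the resulting gap equation admits a nontrivial solution B(p) \neq 0.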