26 research outputs found

    Prince, Rod. Haïti: Family Business. London: Latin America Bureau, 1985. 91 p.


    Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody

    This study used event-related brain potentials (ERPs) to compare the time course of emotion processing from non-linguistic vocalizations versus speech prosody, to test whether vocalizations are treated preferentially by the neurocognitive system. Participants passively listened to vocalizations or pseudo-utterances conveying anger, sadness, or happiness as the EEG was recorded. Simultaneous effects of vocal expression type and emotion were analyzed for three ERP components (N100, P200, late positive component). Emotional vocalizations and speech were differentiated very early (N100), and vocalizations elicited stronger, earlier, and more differentiated P200 responses than speech. At later stages (450–700 ms), anger vocalizations evoked a stronger late positivity (LPC) than other vocal expressions; this effect was similar but delayed for angry speech. Individuals with high trait anxiety exhibited early, heightened sensitivity to vocal emotions (particularly vocalizations). These data provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions in the human voice.
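
    The abstract analyzes mean component amplitudes in fixed post-stimulus windows. The sketch below shows one common way to extract such measures from epoched EEG data; the array layout, sampling rate, baseline length, and the N100/P200 window bounds are illustrative assumptions (only the 450–700 ms LPC window comes from the text), not the authors' exact parameters.

    ```python
    import numpy as np

    # Assumed recording parameters, not taken from the study.
    SFREQ = 500          # samples per second
    BASELINE_MS = 200    # epoch starts 200 ms before stimulus onset
    # N100/P200 bounds are typical values (assumed); LPC is from the abstract.
    WINDOWS_MS = {"N100": (80, 140), "P200": (150, 250), "LPC": (450, 700)}

    def mean_amplitudes(epochs, sfreq=SFREQ, baseline_ms=BASELINE_MS):
        """epochs: array (n_trials, n_channels, n_times); sample 0 = -baseline_ms.

        Returns {component: array (n_trials, n_channels)} of mean voltage
        within each component's time window.
        """
        out = {}
        for name, (start_ms, end_ms) in WINDOWS_MS.items():
            # Convert ms relative to stimulus onset into sample indices.
            i0 = int((baseline_ms + start_ms) * sfreq / 1000)
            i1 = int((baseline_ms + end_ms) * sfreq / 1000)
            out[name] = epochs[:, :, i0:i1].mean(axis=2)
        return out

    # Toy usage: 40 trials, 64 channels, 200 ms baseline + 1000 ms post-stimulus.
    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(40, 64, int(1.2 * SFREQ)))
    amps = mean_amplitudes(epochs)
    print({k: v.shape for k, v in amps.items()})  # each component: (40, 64)
    ```

    The per-trial, per-channel means produced this way would then feed a factorial analysis of expression type by emotion, as the abstract describes.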

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 ms of a five-second visual array and then performed an immediate recall decision about the faces they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
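
    The time-window analysis described above amounts to binning each fixation by onset time and scoring whether it landed on the prosody-congruent face. Here is a minimal sketch of that scoring step; the Fixation record and its field names are hypothetical, and only the three window boundaries come from the abstract.

    ```python
    from dataclasses import dataclass

    # Analysis windows (ms, relative to visual array onset), from the abstract.
    WINDOWS_MS = [(0, 1250), (1250, 2500), (2500, 5000)]

    @dataclass
    class Fixation:
        onset_ms: float      # fixation onset relative to array onset
        duration_ms: float
        face_emotion: str    # emotion portrayed by the fixated face

    def score_trial(fixations, prosody_emotion):
        """Return [count, dwell_ms] of prosody-congruent looks per window."""
        scores = [[0, 0.0] for _ in WINDOWS_MS]
        for fix in fixations:
            for i, (t0, t1) in enumerate(WINDOWS_MS):
                if t0 <= fix.onset_ms < t1 and fix.face_emotion == prosody_emotion:
                    scores[i][0] += 1
                    scores[i][1] += fix.duration_ms
        return scores

    # Toy trial with fearful prosody: two looks to the fearful face, one to angry.
    trial = [Fixation(300, 220, "fear"), Fixation(900, 180, "anger"),
             Fixation(3100, 400, "fear")]
    print(score_trial(trial, "fear"))  # [[1, 220.0], [0, 0.0], [1, 400.0]]
    ```

    Aggregating these per-window counts and dwell times across trials and participants yields the congruency measures the study compares across emotions.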

    Implicit and explicit object recognition at very large visual eccentricities: no improvement after loss of central vision

    Little is known about the ability of human observers to process objects in the far periphery of their visual field, and nothing about how this ability evolves after loss of central vision. We investigated implicit and explicit recognition at two large visual eccentricities. Pictures of objects were centred at 30° or 50° eccentricity. Implicit recognition was tested through a priming paradigm. Participants (normally sighted observers and people with 10–20 years of central vision loss) categorized pictures as animal/transport both in a study phase (Block 1) and in a test phase (Block 2). In the explicit recognition task, participants decided for each picture presented in Block 2 whether it had been displayed in Block 1 ("yes"/"no"). Both visual (identical) and conceptual/lexical (same-name) priming occurred at 30° and at 50°. Explicit recognition was observed only at 30°. In people with central vision loss, testing was performed only at 50° eccentricity. The pattern of results was similar to that of normally sighted observers, but global performance was lower. The results suggest that vision at large eccentricity is mainly based on nonconscious coarse representations. Moreover, after 10–20 years of central vision loss, no evidence was found for an increased ability to use peripheral information in object recognition.
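
    In a priming paradigm like this one, the implicit measure is typically the response-time savings for repeated pictures (identical or same-name) relative to new pictures in Block 2. The sketch below illustrates that computation; the RT values and condition labels are invented for illustration, not data from the study.

    ```python
    import statistics

    def priming_ms(rts_by_condition):
        """rts_by_condition: {"new": [...], "identical": [...], "same_name": [...]}

        Returns the RT savings (ms) for each repeated condition relative
        to the mean RT for new (unstudied) pictures.
        """
        baseline = statistics.mean(rts_by_condition["new"])
        return {cond: baseline - statistics.mean(rts)
                for cond, rts in rts_by_condition.items() if cond != "new"}

    # Toy Block 2 categorization RTs (ms) for one participant.
    rts = {"new": [620, 655, 640],
           "identical": [560, 575, 590],
           "same_name": [585, 600, 610]}
    print(priming_ms(rts))  # {'identical': 63.33..., 'same_name': 40.0}
    ```

    A positive savings value in either repeated condition indicates priming, which the study reports at both 30° and 50°, even where explicit old/new recognition fails.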