46 research outputs found

    Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia

    The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, most previous studies did not consider (i) that emotional speech processing entails multiple processing steps, and (ii) that the BG may be engaged in some of these steps rather than others. In the present study we investigated three stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in qualitatively different ways. Specifically, we explored early implicit emotional speech processing (probe verification) in an ERP experiment, followed by an explicit behavioral emotion recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or to neutral sentences. In line with previous evidence, both patients and healthy controls differentiated emotional from neutral sentences in the P200 component (emotional salience detection) and in a subsequent negative-going brain wave (meaning-related processing). However, behavioral recognition (the identification stage) of emotional sentences was impaired in BG patients but not in healthy controls. The current data provide further support that the BG are involved in late, explicit stages of emotional speech processing rather than early, implicit ones.
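    The ERP analysis described here compares mean amplitudes between conditions in fixed time windows. The following is a minimal sketch of how such a comparison could be set up with MNE-Python; the file name, condition labels, and exact time windows are illustrative assumptions, not details taken from the study.

    import mne

    # Hypothetical preprocessed epochs with "emotional" and "neutral" event labels.
    epochs = mne.read_epochs("sub01_emoprosody-epo.fif")
    evoked_emo = epochs["emotional"].average()
    evoked_neu = epochs["neutral"].average()

    def mean_amplitude(evoked, tmin, tmax):
        """Mean amplitude across all channels in a time window, in microvolts."""
        return evoked.copy().crop(tmin=tmin, tmax=tmax).data.mean() * 1e6

    # P200 window (emotional salience detection), assumed 150-250 ms.
    p200_diff = mean_amplitude(evoked_emo, 0.15, 0.25) - mean_amplitude(evoked_neu, 0.15, 0.25)
    # Later negative-going wave (meaning-related processing), assumed 400-600 ms.
    late_diff = mean_amplitude(evoked_emo, 0.40, 0.60) - mean_amplitude(evoked_neu, 0.40, 0.60)

    print(f"P200 emotional-neutral difference: {p200_diff:.2f} uV")
    print(f"Late emotional-neutral difference: {late_diff:.2f} uV")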

    Hearing Feelings: Affective Categorization of Music and Speech in Alexithymia, an ERP Study

    Background: Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials. Methodology: Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing alexithymia scores, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets. Conclusions: Our results suggest a reduced sensitivity to the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required.
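    The key statistic in this kind of design is a per-participant N400 effect (incongruent minus congruent amplitude) related to an alexithymia score. A hedged sketch of that relation follows; the 300-500 ms window, the use of the TAS-20 as the alexithymia measure, and the placeholder data are all assumptions for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Placeholder values for 32 subjects: the N400 effect is the mean amplitude
    # difference (incongruent - congruent) in an assumed 300-500 ms window, in
    # microvolts; the TAS-20 is an assumed alexithymia questionnaire measure.
    n400_effect = rng.normal(-2.0, 1.0, size=32)
    tas20_scores = rng.normal(50.0, 10.0, size=32)

    # Smaller (less negative) N400 effects with higher alexithymia would appear
    # as a positive correlation here.
    r, p = stats.pearsonr(tas20_scores, n400_effect)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")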

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior toward facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance ("Someone migged the pazing") spoken in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (an emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
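    Scoring looking time in the three analysis windows amounts to clipping each fixation to a window and summing the overlap, split by voice-face congruency. Below is a minimal sketch of that step; the fixation-record layout (onset/offset in ms plus the emotion of the fixated face and of the voice) and the toy values are assumptions for illustration.

    import pandas as pd

    fixations = pd.DataFrame({
        "onset":  [120, 900, 1400, 2600, 3100],
        "offset": [400, 1200, 1900, 3000, 4200],
        "face":   ["fear", "anger", "fear", "happy", "fear"],
        "voice":  ["fear", "fear", "fear", "fear", "fear"],
    })

    windows = {"early": (0, 1250), "middle": (1250, 2500), "late": (2500, 5000)}

    for name, (w0, w1) in windows.items():
        # Clip each fixation to the window and sum the overlapping duration.
        overlap = (fixations["offset"].clip(upper=w1)
                   - fixations["onset"].clip(lower=w0)).clip(lower=0)
        match = fixations["face"] == fixations["voice"]
        print(f"{name:6s} congruent looking time: {overlap[match].sum()} ms, "
              f"incongruent: {overlap[~match].sum()} ms")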

    Mindfulness training for adolescents: A neurodevelopmental perspective on investigating modifications in attention and emotion regulation using event-related brain potentials


    Motor imagery data classification for BCI application using wavelet packet feature extraction

    Noninvasive brain imaging modalities provide an extraordinary means of monitoring the working brain. Among these modalities, electroencephalography (EEG) is the most widely used technique for measuring brain signals under different tasks, owing to its mobility, low cost, and high temporal resolution. In this paper we investigate the use of EEG signals in brain-computer interface (BCI) systems.
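    The title names wavelet packet feature extraction for motor imagery classification. As a hedged sketch of that general approach (not the paper's exact pipeline), the snippet below decomposes each EEG channel into wavelet packet subbands, uses log subband energies as features, and cross-validates a linear classifier; the wavelet family, decomposition level, classifier, and placeholder data are all illustrative assumptions.

    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def wp_energy_features(trial, wavelet="db4", level=4):
        """Log subband energies of a wavelet packet decomposition, per channel."""
        feats = []
        for channel in trial:  # trial: (n_channels, n_samples)
            wp = pywt.WaveletPacket(channel, wavelet=wavelet, maxlevel=level)
            for node in wp.get_level(level, order="freq"):
                feats.append(np.log(np.sum(node.data ** 2) + 1e-12))
        return np.array(feats)

    # Placeholder data standing in for band-pass filtered motor imagery trials:
    # 60 trials, 3 channels (e.g. C3, Cz, C4), 2 s at 250 Hz, two classes.
    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((60, 3, 500))
    y = np.repeat([0, 1], 30)  # left-hand vs right-hand imagery labels

    X = np.array([wp_energy_features(trial) for trial in X_raw])
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
    print(f"5-fold CV accuracy: {scores.mean():.2f}")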

    Foreign Literature

    A brain-computer interface (BCI) is a device that allows the user to communicate with the world without using voluntary muscle activity, i.e., using only the electrical activity of the brain. It builds on the well-studied observation that the brain reacts differently to different stimuli, as a function of the level of attention allotted to the stimulus stream and the specific processing triggered by the stimulus. In this article we present a single-trial independent component analysis (ICA) method that works with the BCI system proposed by Farwell and Donchin. It can dramatically reduce signal processing time and improve the data communication rate. This ICA method achieved 76.67% accuracy on single-trial P300 response identification.
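    The core idea, unmixing multichannel EEG into independent components and scoring single trials on a P300-like component, can be sketched as follows. The data shapes, the 300-500 ms window, and the component-selection rule are assumptions for illustration, and scikit-learn's FastICA stands in for whatever ICA variant the authors used.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    fs = 250                                      # sampling rate in Hz (assumed)
    epochs = rng.standard_normal((100, 8, 200))   # 100 trials, 8 channels, 800 ms

    # Fit ICA on the concatenated epochs (samples x channels).
    ica = FastICA(n_components=8, random_state=0)
    sources = ica.fit_transform(np.concatenate(epochs, axis=1).T)  # (samples, 8)
    sources = sources.T.reshape(8, 100, 200).transpose(1, 0, 2)    # trials first

    # Pick the component with the largest mean absolute amplitude in an assumed
    # 300-500 ms post-stimulus window as the P300-like component.
    win = slice(int(0.3 * fs), int(0.5 * fs))
    comp = np.abs(sources[:, :, win].mean(axis=(0, 2))).argmax()

    # Single-trial P300 score: mean source amplitude in the window.
    trial_scores = sources[:, comp, win].mean(axis=1)
    print(f"P300-like component: {comp}, first 5 trial scores: {trial_scores[:5]}")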