
    Event-Related Potentials and Emotion Processing in Child Psychopathology

    In recent years there has been increasing interest in the neural mechanisms underlying altered emotional processing in children and adolescents with psychopathology. This review provides a brief overview of the most up-to-date findings on event-related potentials (ERPs) to facial and vocal emotional expressions in the most common child psychopathological conditions. With regard to externalising behaviour (i.e. ADHD, CD), ERP studies show enhanced early components to anger, reflecting enhanced sensory processing, followed by reduced later components to anger, reflecting diminished cognitive-evaluative processing. With regard to internalising behaviour, research supports models of increased processing of threat stimuli, especially at later, more elaborate and effortful stages. Finally, in autism spectrum disorders, abnormalities have been observed at early visual-perceptual stages of processing. An affective-neuroscience framework for understanding child psychopathology can be valuable in elucidating underlying mechanisms and informing preventive interventions.

    Suppressing sensorimotor activity modulates the discrimination of auditory emotions but not speaker identity

    Our ability to recognize the emotions of others is a crucial feature of human social cognition. Functional neuroimaging studies indicate that activity in sensorimotor cortices is evoked during the perception of emotion. In the visual domain, right somatosensory cortex activity has been shown to be critical for facial emotion recognition. However, the importance of sensorimotor representations in modalities outside vision remains unknown. Here we use continuous theta-burst transcranial magnetic stimulation (cTBS) to investigate whether neural activity in the right postcentral gyrus (rPoG) and right lateral premotor cortex (rPM) is involved in nonverbal auditory emotion recognition. Three groups of participants completed same-different tasks on auditory stimuli, discriminating between the emotions expressed and the speakers' identities, before and after cTBS targeted at rPoG, rPM, or the vertex (control site). A task-selective deficit in auditory emotion discrimination was observed: stimulation of rPoG and rPM disrupted participants' ability to discriminate emotion, but not identity, from vocal signals. These findings suggest that sensorimotor activity may be a modality-independent mechanism that aids emotion discrimination. Copyright © 2010 the authors.
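Performance in a same-different discrimination task of this kind is typically scored with a sensitivity index such as d′, which separates perceptual sensitivity from response bias. The abstract does not report how accuracy was scored, so the following is only an illustrative sketch of one common approach; all trial counts and rates are invented:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate, n_signal, n_noise):
    """Sensitivity index d' for a same-different task, with a log-linear
    correction so that perfect rates (0 or 1) stay finite."""
    hr = (hit_rate * n_signal + 0.5) / (n_signal + 1)
    fa = (fa_rate * n_noise + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hr) - z(fa)

# Hypothetical pre/post-cTBS scores on the emotion task (invented numbers):
pre = dprime(0.85, 0.20, 40, 40)
post = dprime(0.70, 0.30, 40, 40)
print(f"d' pre: {pre:.2f}, post: {post:.2f}")
```

A task-selective cTBS effect would then appear as a pre-to-post drop in d′ for emotion discrimination at the sensorimotor sites but not at the vertex control site.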

    Common Neural Systems Associated with the Recognition of Famous Faces and Names: An Event-Related fMRI Study

    Person recognition can be accomplished through several modalities (face, name, voice). Lesion, neurophysiology and neuroimaging studies have been conducted in an attempt to determine the similarities and differences in the neural networks associated with person identity via different modality inputs. The current study used event-related functional MRI in 17 healthy participants to directly compare activation in response to randomly presented famous and non-famous names and faces (25 stimuli in each of the four categories). Findings indicated distinct areas of activation for faces and names in regions typically associated with pre-semantic perceptual processes. In contrast, overlapping brain regions were activated in areas associated with the retrieval of biographical knowledge and associated social-affective features. Specifically, activation for famous faces was primarily right-lateralized, whereas activation for famous names was left-lateralized; for both stimulus types, however, similar areas of bilateral activity were observed in the early phases of perceptual processing. Fame, irrespective of stimulus modality, engaged an extensive left-hemisphere network, with bilateral activity observed in the hippocampi, posterior cingulate, and middle temporal gyri. Findings are discussed within the framework of recent proposals concerning the neural network of person identification.

    Pitch discrimination in optimal and suboptimal acoustic environments: electroencephalographic, magnetoencephalographic, and behavioral evidence

    Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One problem in comparing the EEG/MEG and fMRI methods, however, is the acoustic noise of fMRI. In the present thesis, EEG and MEG were used in combination with behavioral techniques, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas their discrimination of a 5% frequency change could not be confirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and the exogenous N2.
Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.

The ability to distinguish high from low sounds is one of the brain's basic functions. Without it, we could not understand speech or enjoy music. Some patients and very young children cannot report themselves whether they hear a difference, but their brain responses can reveal it. Even in healthy adults, however, not enough is known about the brain processes involved in pitch discrimination. More research in this area is therefore needed, using modern brain-research methods such as event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). The ERP method reveals when the brain detects a pitch difference, whereas fMRI reveals which brain areas are active during this process. Combining the two methods can thus provide a more comprehensive picture of the brain processes underlying pitch discrimination. The fMRI method, however, suffers from one problem: the loud noise generated by the fMRI scanner, which can interfere with auditory research. This thesis investigates how pitch discrimination can be detected in the brains of adults and newborn infants, and how fMRI scanner noise affects the ERP responses elicited by auditory stimuli.
The results show that the adult brain can detect frequency differences as small as 2.5%, and that discrimination is faster at around 1000-2000 Hz than at lower or higher frequencies. The newborn brain discriminated only frequency changes larger than 20%. When fMRI scanner noise was played in the background, it attenuated brain responses to sounds in the 500-2000 Hz range more than to other sounds; responses to sounds below 500 Hz were unaffected. Regardless of whether background noise was present, the ERP response elicited by a change in sound-source location was larger than that elicited by a change in pitch. This thesis has shown that pitch discrimination can be studied effectively with the ERP method in both adults and infants. According to the results, the combination of the ERP and fMRI methods can be made more effective by taking the effects of fMRI scanner noise on ERP responses into account when designing experiments. The data can also be used to design more complex pitch-discrimination experiments, for example with patients and children.
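The MMN referred to above is conventionally obtained as a deviant-minus-standard difference wave from oddball-paradigm epochs, with its latency read off the negative peak. As an illustration only (the amplitudes, latencies, trial counts, and sampling rate below are simulated, not taken from the thesis), the computation can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 500                      # sampling rate (Hz); 1 s epochs
t = np.arange(fs) / fs

def simulate_epochs(n_trials, peak_latency, peak_amp):
    """Toy evoked response: a Gaussian deflection buried in noise."""
    evoked = peak_amp * np.exp(-((t - peak_latency) ** 2) / (2 * 0.02 ** 2))
    return evoked + rng.normal(0, 2.0, size=(n_trials, fs))

standards = simulate_epochs(400, 0.10, -1.0)   # frequent tone
deviants = simulate_epochs(80, 0.15, -3.0)     # rare pitch change

# MMN: average deviant response minus average standard response.
difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)

# Latency of the negative-going MMN peak within a 100-250 ms window.
window = (t >= 0.10) & (t <= 0.25)
mmn_latency = t[window][np.argmin(difference_wave[window])]
print(f"MMN peak latency: {mmn_latency * 1000:.0f} ms")
```

In real data the same subtraction is done on baseline-corrected, artifact-rejected epochs per channel, but the difference-wave logic is identical.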

    The Neurocognition of Prosody

    Prosody is one of the most undervalued components of language, despite the manifold purposes it fulfills. It can, for instance, help assign the correct meaning to compounds such as “white house” (linguistic function), or help a listener understand how a speaker feels (emotional function). However, brain-based models that take into account the role prosody plays in dynamic speech comprehension are still rare, probably because it has proven difficult to fully characterise the neurocognitive architecture underlying prosody. This review discusses clinical and neuroscientific evidence regarding both linguistic and emotional prosody. It will become evident that prosody processing is a multistage operation and that its temporally and functionally distinct processing steps are anchored in a functionally differentiated brain network.

    Effects of training and lung volume levels on voice onset control and cortical activation in singers

    Singers need to counteract respiratory elastic recoil at high and low lung volume levels (LVLs) to maintain consistent airflow and pressure while singing. Professionally trained singers modify their vocal and respiratory systems, creating a physiologically stable and perceptually pleasing voice quality at varying LVLs. In manuscript 1, we compared non-singers and singers on the initiation of a voiceless plosive followed by a vowel at low (30% vital capacity, VC), intermediate (50% VC), and high (80% VC) LVLs. In manuscript 2, we examined how vocal students (the singers of manuscript 1) learn to control their voice onset at varying LVLs before and after a semester of voice training within a university program. Also examined, using functional near-infrared spectroscopy (fNIRS), were the effects of training level and LVLs on cortical activation patterns between non-singers and singers (manuscript 1), and within vocal students before and after training (manuscript 2). Results revealed poorer control of voice onset in singers prior to training than in non-singers, but significant improvements in voice onset control after training, although task difficulty continued to alter voice physiology throughout. Cortical activation patterns did not change with training but continued to show increased activation during the most difficult tasks, an effect that was more pronounced after training. Professionally trained techniques for consistent, coordinated voice initiation were thus shown to alter voice onset following plosive consonants. In both non-singers and, as performance improved after training, singers, cortical activation remained greatest during tasks at low LVLs, when difficulty was highest.

    Atypical neural responses to vocal anger in attention-deficit/hyperactivity disorder

    Background: Deficits in facial emotion processing reported in attention-deficit/hyperactivity disorder (ADHD) have been linked to both early perceptual and later attentional components of event-related potentials (ERPs). However, the neural underpinnings of vocal emotion-processing deficits in ADHD have yet to be characterised. Here, we report the first ERP study of vocal affective prosody processing in ADHD.
    Methods: Event-related potentials of 6–11-year-old children with ADHD (n = 25) and typically developing controls (n = 25) were recorded as they completed a task measuring recognition of vocal prosodic stimuli (angry, happy and neutral). Audiometric assessments were conducted to screen for hearing impairments.
    Results: Children with ADHD were less accurate than controls at recognising vocal anger. Relative to controls, they displayed enhanced N100 and attenuated P300 components to vocal anger. The P300 effect was reduced, but remained significant, after controlling for N100 effects by rebaselining. Only the N100 effect remained significant when children with ADHD and comorbid conduct disorder (n = 10) were excluded.
    Conclusion: This study provides the first evidence linking ADHD to atypical neural activity during the early perceptual stages of vocal anger processing. These effects may reflect preattentive hypervigilance to vocal anger in ADHD.
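The rebaselining step mentioned in the results (re-referencing each epoch to an earlier component's window so that N100 differences do not carry forward into the P300 measure) can be sketched on a toy grand-average ERP. All amplitudes, latencies, and windows below are invented for illustration; they are not the study's values:

```python
import numpy as np

fs = 250
t = np.arange(-0.2, 0.8, 1 / fs)  # epoch from -200 to 800 ms

def mean_amplitude(erp, t, start, end):
    """Mean voltage in a latency window (a standard ERP component measure)."""
    mask = (t >= start) & (t < end)
    return erp[mask].mean()

# Toy grand-average ERP: a negative N100 plus a positive P300 (in microvolts).
erp = (-4.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
       + 6.0 * np.exp(-((t - 0.40) ** 2) / (2 * 0.08 ** 2)))

# Conventional P300 measure, relative to the pre-stimulus baseline.
p300 = mean_amplitude(erp, t, 0.30, 0.50)

# Rebaselining: subtract the mean of the N100 window, so any effect already
# present at N100 is removed before the P300 is measured.
rebaselined = erp - mean_amplitude(erp, t, 0.08, 0.12)
p300_rebaselined = mean_amplitude(rebaselined, t, 0.30, 0.50)
```

Comparing `p300` and `p300_rebaselined` across groups is one way to check whether a late effect survives after early perceptual differences are factored out.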