32 research outputs found

    Recruitment of Language-, Emotion- and Speech-Timing Associated Brain Regions for Expressing Emotional Prosody: Investigation of Functional Neuroanatomy with fMRI

    We aimed to advance understanding of prosodic emotion expression by establishing brain regions active when expressing specific emotions, those activated irrespective of the target emotion, and those whose activation intensity varied depending on individual performance. BOLD contrast data were acquired whilst participants spoke nonsense words in happy, angry, or neutral tones, or performed jaw movements. Emotion-specific analyses demonstrated that when expressing angry prosody, activated brain regions included the inferior frontal and superior temporal gyri, the insula, and the basal ganglia. When expressing happy prosody, the activated brain regions also included the superior temporal gyrus, insula, and basal ganglia, with additional activation in the anterior cingulate. Conjunction analysis confirmed that the superior temporal gyrus and basal ganglia were activated regardless of the specific emotion concerned. Nevertheless, disjunctive comparisons between the expression of angry and happy prosody established that anterior cingulate activity was significantly higher for angry prosody than for happy prosody production. Degree of inferior frontal gyrus activity correlated with the ability to express the target emotion through prosody. We conclude that expressing prosodic emotions (vs. neutral intonation) requires generic brain regions involved in comprehending numerous aspects of language, in emotion-related processes such as experiencing emotions, and in the time-critical integration of speech information.

    Temporal Processing in Low-Frequency Channels: Effects of Age and Hearing Loss in Middle-Aged Listeners

    Background: Hearing loss and age interfere with the auditory system's ability to process temporal changes in the acoustic signal. A key unresolved question is whether high-frequency sensorineural hearing loss (HFSNHL) affects temporal processing in the low-frequency region where hearing loss is minimal or nonexistent. A second unresolved question is whether changes in hearing occur in middle-aged subjects in the absence of HFSNHL. Purpose: The purpose of this study was twofold: (1) to examine the influence of HFSNHL and aging on the auditory temporal processing abilities of low-frequency auditory channels with normal hearing sensitivity and (2) to examine the relations among gap detection measures, self-assessment reports of understanding speech, and functional measures of speech perception in middle-aged individuals with and without HFSNHL. Research Design: The subject groups were matched for either age (middle age) or pure-tone sensitivity (with or without hearing loss) to study the effects of age and HFSNHL on behavioral and functional measures of temporal processing and word recognition performance. These effects were analyzed by individual repeated-measures analyses of variance. Post hoc analyses were performed for each significant main effect and interaction. The relationships among the measures were analyzed with Pearson correlations. Study Sample: Eleven normal-hearing young adults (YNH), eight normal-hearing middle-aged adults (MANH), and nine middle-aged adults with HFSNHL were recruited for this study. Normal hearing sensitivity was defined as pure-tone thresholds ≤25 dB HL for octave frequencies from 250 to 8000 Hz. HFSNHL was defined as pure-tone thresholds ≤25 dB HL from 250 to 2000 Hz and ≥35 dB HL from 3000 to 8000 Hz.
Data Collection and Analysis: Gap detection thresholds (GDTs) were measured under within-channel and between-channel conditions with the stimulus spectrum limited to regions of normal hearing sensitivity for the HFSNHL group (i.e., <2000 Hz). Self-perceived hearing problems were measured by a questionnaire (Abbreviated Profile of Hearing Aid Benefit), and word recognition performance was assessed under four conditions: quiet and babble, with and without low-pass filtering (cutoff frequency = 2000 Hz). Results: Effects of HFSNHL and age were found for gap detection, self-perceived hearing problems, and word recognition in noise. The presence of HFSNHL significantly increased GDTs for stimuli presented in regions of normal pure-tone sensitivity. In addition, middle-aged subjects with normal hearing sensitivity reported significantly more problems hearing in background noise than the young normal-hearing subjects. Significant relationships between self-report measures of hearing ability in background noise and word recognition in babble were found. Conclusions: The conclusions from the present study are twofold: (1) HFSNHL may have an off-channel impact on auditory temporal processing, and (2) presenescent changes in the auditory system of MANH subjects increased self-perceived problems hearing in background noise and decreased functional performance in background noise compared with YNH subjects.
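As an illustrative sketch, a within-channel gap-detection stimulus of the kind described above can be approximated as two band-limited noise markers separated by a silent gap, with the spectrum restricted to the region of normal hearing sensitivity (<2000 Hz). The marker duration, gap duration, and sampling rate below are assumed values for illustration, not the study's exact parameters.

```python
import numpy as np

def gap_stimulus(fs=44100, marker_ms=300, gap_ms=6, lowpass_hz=2000, seed=0):
    """Two low-pass noise markers separated by a silent gap.

    Sketch of a within-channel gap-detection stimulus limited to
    frequencies below `lowpass_hz`. All parameter defaults here are
    assumptions, not the published stimulus settings.
    """
    rng = np.random.default_rng(seed)
    n_marker = int(fs * marker_ms / 1000)
    n_gap = int(fs * gap_ms / 1000)

    def lowpass_noise(n):
        # Band-limit white noise by zeroing FFT bins above the cutoff.
        noise = rng.standard_normal(n)
        spec = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        spec[freqs > lowpass_hz] = 0
        return np.fft.irfft(spec, n)

    return np.concatenate([lowpass_noise(n_marker),
                           np.zeros(n_gap),
                           lowpass_noise(n_marker)])
```

In an adaptive GDT procedure, `gap_ms` would be varied from trial to trial to find the shortest detectable gap; a between-channel condition would instead place the two markers in different frequency regions.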

    Amplitude-Modulated Auditory Steady-State Responses in Younger and Older Listeners

    The primary purpose of this investigation was to determine whether temporal coding in the auditory system was the same for younger and older listeners. Temporal coding was assessed by amplitude-modulated auditory steady-state responses (AM ASSRs) as a physiologic measure of phase-locking capability. The secondary purpose of this study was to determine whether AM ASSRs were related to behavioral speech understanding ability. AM ASSRs showed that the ability of the auditory system to phase lock to a temporally altered signal is dependent on modulation rate, carrier frequency, and age of the listener. Specifically, the interaction of frequency and age showed that younger listeners had more phase locking than older listeners at 500 Hz. The number of phase-locked responses for the 500 Hz carrier frequency was significantly correlated with word-recognition performance. In conclusion, the effect of aging on temporal processing, as measured by phase locking with AM ASSRs, was found for low-frequency stimuli where phase locking in the auditory system should be optimal. The exploration, and use, of electrophysiologic responses to measure auditory timing analysis in humans has the potential to facilitate the understanding of speech perception difficulties in older listeners.
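The stimulus class behind an AM ASSR is a sinusoidally amplitude-modulated tone: the auditory system phase-locks to the modulation envelope, so the EEG shows energy at the modulation rate. A minimal sketch, with a 500 Hz carrier matching the frequency discussed above and an assumed 40 Hz modulation rate and 100% modulation depth:

```python
import numpy as np

def am_tone(fc=500.0, fm=40.0, depth=1.0, dur=1.0, fs=44100):
    """Sinusoidally amplitude-modulated tone.

    fc: carrier frequency (Hz); fm: modulation rate (Hz);
    depth=1.0 gives 100% modulation. Parameter defaults are
    illustrative assumptions, not the study's exact settings.
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)
```

The modulation places sidebands at fc ± fm (here 460 and 540 Hz) around the carrier; the evoked response is detected as a spectral peak at fm in the recorded EEG, not in the acoustic signal itself.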

    Auditory Steady State Responses Recorded in Multitalker Babble

    Objective: The primary purpose of this investigation was to determine the effect of multitalker babble on ASSRs in adult subjects with normal hearing (NH) and sensorineural hearing loss (HI). The secondary purpose was to investigate the relationships among ASSRs, word recognition in quiet, and word recognition in babble. Design: ASSRs were elicited by a complex mixed-modulation tonal stimulus (carrier frequencies of 500, 1500, 2500, and 4000 Hz; modulation rate of 40 or 90 Hz) presented in quiet and in babble. The level of each carrier frequency was adjusted to match the level of the multitalker babble spectrum, which was based on the long-term average speech spectrum. Word recognition in noise (WIN) performance was measured and correlated to ASSR amplitude and ASSR detection rate. Study Sample: Nineteen normal-hearing adults and nineteen adults with sensorineural hearing loss were recruited. Results and Conclusions: The presence of babble significantly reduced the ASSR detection rate and ASSR amplitude for NH subjects, but had minimal effect on ASSRs for HI subjects. In addition, babble enhanced ASSR amplitude at high stimulus levels. ASSR detection rate and ASSR amplitude recorded in quiet and babble were significantly correlated with word recognition performance for NH and HI subjects.
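A multi-carrier ASSR stimulus of the general kind described above can be sketched by summing several modulated carriers, each tagged with a slightly different modulation rate so the response at each rate indexes a different frequency region. This sketch uses amplitude modulation only (the study used a complex mixed-modulation stimulus, which also includes frequency modulation), and the per-carrier modulation rates are assumed values near the 40 Hz condition:

```python
import numpy as np

def multi_carrier_stimulus(carriers=(500, 1500, 2500, 4000),
                           mod_rates=(40, 44, 48, 52),
                           dur=1.0, fs=44100):
    """Sum of amplitude-modulated carriers, one per frequency region.

    Each carrier gets its own modulation rate so responses to the
    carriers can be separated in the EEG spectrum. Modulation rates
    here are illustrative; the study used rates near 40 or 90 Hz
    with mixed (AM + FM) modulation.
    """
    t = np.arange(int(dur * fs)) / fs
    stim = np.zeros_like(t)
    for fc, fm in zip(carriers, mod_rates):
        stim += (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
    return stim / len(carriers)  # scale down to avoid clipping
```

In a recording setup, the level of each carrier would then be adjusted independently, as in the study, to match the babble spectrum in that frequency region.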

    Hearing
