334 research outputs found

    Lipreading and Covert Speech Production Similarly Modulate Human Auditory-Cortex Responses to Pure Tones

    Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further examined the temporal dynamics of the suppression to determine whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125–8000 Hz) (1) during “lipreading,” i.e., when they watched video clips of silent articulations of the Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading. Peer reviewed

    Cortico-limbic morphology separates tinnitus from tinnitus distress

    Tinnitus is a common auditory disorder characterized by a chronic ringing or buzzing “in the ear.” Despite the auditory-perceptual nature of this disorder, a growing number of studies have reported neuroanatomical differences in tinnitus patients outside the auditory-perceptual system. Some have used this evidence to characterize chronic tinnitus as dysregulation of the auditory system, either resulting from inefficient inhibitory control or through the formation of aversive associations with tinnitus. It remains unclear, however, whether these “non-auditory” anatomical markers of tinnitus are related to the tinnitus signal itself, or merely to negative emotional reactions to tinnitus (i.e., tinnitus distress). In the current study, we used anatomical MRI to identify neural markers of tinnitus, and measured their relationship to a variety of tinnitus characteristics and other factors often linked to tinnitus, such as hearing loss, depression, anxiety, and noise sensitivity. In a new cohort of participants, we confirmed that people with chronic tinnitus exhibit reduced gray matter in ventromedial prefrontal cortex (vmPFC) compared to controls matched for age and hearing loss. This effect was driven by reduced cortical surface area, and was not related to tinnitus distress, symptoms of depression or anxiety, noise sensitivity, or other factors. Instead, tinnitus distress was positively correlated with cortical thickness in the anterior insula in tinnitus patients, while symptoms of anxiety and depression were negatively correlated with cortical thickness in subcallosal anterior cingulate cortex (scACC) across all groups. Tinnitus patients also exhibited increased gyrification of dorsomedial prefrontal cortex (dmPFC), which was more severe in those patients with constant (vs. intermittent) tinnitus awareness. Our data suggest that the neural systems associated with chronic tinnitus are different from those involved in aversive or distressed reactions to tinnitus.

    Differential electrophysiological response during rest, self-referential, and non-self-referential tasks in human posteromedial cortex

    The electrophysiological basis for higher brain activity during rest and internally directed cognition within the human default mode network (DMN) remains largely unknown. Here we use intracranial recordings in the human posteromedial cortex (PMC), a core node within the DMN, during conditions of cued rest, autobiographical judgments, and arithmetic processing. We found a heterogeneous profile of PMC responses in functional, spatial, and temporal domains. Although the majority of PMC sites showed increased broad gamma band activity (30-180 Hz) during rest, some PMC sites, proximal to the retrosplenial cortex, responded selectively to autobiographical stimuli. However, no site responded to both conditions, even though they were located within the boundaries of the DMN identified with resting-state functional imaging and similarly deactivated during arithmetic processing. These findings, which provide electrophysiological evidence for heterogeneity within the core of the DMN, will have important implications for neuroimaging studies of the DMN.

    Topological Evolution of Dynamical Networks: Global Criticality from Local Dynamics

    We evolve the network topology of an asymmetrically connected threshold network by a simple local rewiring rule: quiet nodes grow links, active nodes lose links. This leads to convergence of the average connectivity of the network towards the critical value K_c = 2 in the limit of large system size N. How this principle could generate self-organization in natural complex systems is discussed for two examples: neural networks and regulatory networks in the genome. Comment: 4 pages RevTeX, 4 figures PostScript, revised version
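    The rewiring rule described above is simple enough to sketch in a few lines. The following is a minimal toy simulation, assuming a parallel-update threshold network with ±1 weights and a heuristic activity measure; the system size, update schedule, and parameter choices are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64           # system size (the convergence result holds for large N)
K0 = 6           # initial average in-degree, chosen well above K_c = 2

# Asymmetric threshold network: W[i, j] != 0 encodes a link j -> i.
W = np.zeros((n, n))
for i in range(n):
    for j in rng.choice(n, size=K0, replace=False):
        W[i, j] = rng.choice([-1.0, 1.0])

def node_activity(W, transient=30, measure=30):
    """Run parallel threshold dynamics; flag nodes that still change state."""
    state = rng.choice([-1, 1], size=len(W))
    for _ in range(transient):                  # relax toward an attractor
        state = np.where(W @ state > 0, 1, -1)
    changed = np.zeros(len(W), dtype=bool)
    for _ in range(measure):                    # observe on the attractor
        new = np.where(W @ state > 0, 1, -1)
        changed |= new != state
        state = new
    return changed

for _ in range(1500):                           # local rewiring rule
    changed = node_activity(W)
    i = rng.integers(n)
    if changed[i]:                              # active node: lose an input link
        inputs = np.flatnonzero(W[i])
        if inputs.size:
            W[i, rng.choice(inputs)] = 0.0
    else:                                       # quiet node: grow an input link
        W[i, rng.integers(n)] = rng.choice([-1.0, 1.0])

K_avg = np.count_nonzero(W) / n
print(f"average connectivity after rewiring: {K_avg:.2f}")
```

    Starting from the chaotic regime (K0 = 6), removals dominate until the network approaches the frozen regime, where link growth takes over, so the average connectivity self-organizes toward the critical region; exact values for a finite run depend on the seed and run length.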

    Stimulus-Related Independent Component and Voxel-Wise Analysis of Human Brain Activity during Free Viewing of a Feature Film

    Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion, and hand motion, which largely overlapped the results revealed by ICA. Differences between the results of IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity that need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments.
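    The core of the annotation-fitting step can be illustrated with a toy computation. The synthetic time courses and the plain Pearson-correlation fit below are illustrative assumptions only, not the authors' actual pipeline (which used real fMRI data, ICA decomposition, and a general linear model).

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans = 200                                   # number of fMRI volumes

# Stimulus annotation, e.g. frame-wise sound energy of the film soundtrack.
annotation = rng.random(n_scans)

# Toy "IC time courses"; in practice these come from ICA of the fMRI data.
ics = rng.standard_normal((4, n_scans))
ics[0] += 2.0 * annotation                      # one network tracks the feature

# Correlate each IC time course with the annotation regressor.
r = np.array([np.corrcoef(annotation, tc)[0, 1] for tc in ics])
best = int(np.argmax(np.abs(r)))
print(f"IC {best} correlates most with the annotation (r = {r[best]:.2f})")
```

    A real analysis would additionally convolve the annotation with a hemodynamic response function and assess significance across subjects, but the matching of stimulus features to component time courses follows this same pattern.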

    Inter- versus intramodal integration in sensorimotor synchronization: a combined behavioral and magnetoencephalographic study

    Although the temporal occurrence of the pacing signal is predictable in sensorimotor synchronization tasks, normal subjects perform on-the-beat tapping to an isochronous auditory metronome with an anticipatory error. This error originates from an intermodal task, that is, subjects have to bring information from the auditory and tactile modality to coincide. The aim of the present study was to illuminate whether the synchronization error is a finding specific to an intermodal timing task and whether the underlying cortical mechanisms are modality-specific or supramodal. We collected behavioral data and cortical evoked responses by magnetoencephalography (MEG) during performance of cross- and unimodal tapping tasks. As expected, subjects showed negative asynchrony in performing an auditorily paced tapping task. However, no asynchrony emerged during tactile pacing, neither during pacing at the opposite finger nor at the toe. Analysis of cortical signals resulted in a three-dipole model best explaining tap-contingent activity in all three conditions. The temporal behavior of the sources was similar between the conditions and, thus, modality-independent. The localization of the two earlier activated sources was modality-independent as well, whereas the location of the third source varied with modality. In the auditory pacing condition it was localized in contralateral primary somatosensory cortex; during tactile pacing it was localized in contralateral posterior parietal cortex. In previous studies with auditory pacing, evidence on the functional role of this third source was contradictory: a special temporal coupling pattern argued for involvement of the source in evaluating the temporal distance between tap and click, whereas subsequent data gave no evidence for such an interpretation. The present data shed new light on this question by demonstrating differences between modalities in the localization of the third source with similar temporal behavior.

    Parallel Evolution of Auditory Genes for Echolocation in Bats and Toothed Whales

    The ability of bats and toothed whales to echolocate is a remarkable case of convergent evolution. Previous genetic studies have documented parallel evolution of nucleotide sequences in Prestin and KCNQ4, both of which are associated with voltage motility during the cochlear amplification of signals. Echolocation involves complex mechanisms. The most important factors include cochlear amplification, nerve transmission, and signal re-coding. Herein, we screen three genes that play different roles in this auditory system. Cadherin 23 (Cdh23) and its ligand, protocadherin 15 (Pcdh15), are essential for bundle motility in the sensory hair. Otoferlin (Otof) responds to nerve signal transmission in the auditory inner hair cell. Signals of parallel evolution occur in all three genes in the three groups of echolocators—two groups of bats (Yangochiroptera and Rhinolophoidea) plus the dolphin. Significant signals of positive selection also occur in Cdh23 in the Rhinolophoidea and dolphin, and in Pcdh15 in Yangochiroptera. In addition, adult echolocating bats have higher levels of Otof expression in the auditory cortex than do their embryos and non-echolocating bats. Cdh23 and Pcdh15 encode the upper and lower parts of tip-links, and both genes show signals of convergent evolution and positive selection in echolocators, implying that they may co-evolve to optimize cochlear amplification. Convergent evolution and expression patterns of Otof suggest the potential role of nerve and brain in echolocation. Our synthesis of gene sequence and gene expression analyses reveals that positive selection, parallel evolution, and perhaps co-evolution and gene expression affect multiple hearing genes that play different roles in audition, including voltage and bundle motility in cochlear amplification, nerve transmission, and brain function.

    Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults

    The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men’s and women’s voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than lowered-pitch voices, but only for women’s voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women’s voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input.

    A neural oscillations perspective on phonological development and phonological processing in developmental dyslexia

    Children’s ability to reflect upon and manipulate the sounds in words (’phonological awareness’) develops as part of natural language acquisition, supports reading acquisition, and develops further as reading and spelling are learned. Children with developmental dyslexia typically have impairments in phonological awareness. Many developmental factors contribute to individual differences in phonological development. One important source of individual differences may be the child’s sensory/neural processing of the speech signal from an amplitude modulation (~ energy or intensity variation) perspective, which may affect the quality of the sensory/neural representations (’phonological representations’) that support phonological awareness. During speech encoding, brain electrical rhythms (oscillations, rhythmic variations in neural excitability) re-calibrate their temporal activity to be in time with rhythmic energy variations in the speech signal. The accuracy of this neural alignment or ’entrainment’ process is related to speech intelligibility. Recent neural studies demonstrate atypical oscillatory function at slower rates in children with developmental dyslexia. Potential relations with the development of phonological awareness by children with dyslexia are discussed. Medical Research Council, G0400574 and G090237

    Efficacy and safety of bilateral continuous theta burst stimulation (cTBS) for the treatment of chronic tinnitus: design of a three-armed randomized controlled trial

    <p>Abstract</p> <p>Background</p> <p>Tinnitus, the perception of sound and noise in absence of an auditory stimulus, has been shown to be associated with maladaptive neuronal reorganization and increased activity of the temporoparietal cortex. Transient modulation of tinnitus by repetitive transcranial magnetic stimulation (rTMS) indicated that these areas are critically involved in the pathophysiology of tinnitus and suggested new treatment strategies. However, the therapeutic efficacy of rTMS in tinnitus is still unclear, individual response is variable, and the optimal stimulation area disputable. Recently, continuous theta burst stimulation (cTBS) has been put forward as an effective rTMS protocol for the reduction of pathologically enhanced cortical excitability.</p> <p>Methods</p> <p>48 patients with chronic subjective tinnitus will be included in this randomized, placebo-controlled, three-arm trial. The treatment consists of two trains of cTBS applied bilaterally to the secondary auditory cortex, the temporoparietal association cortex, or to the lower occiput (sham condition) every working day for four weeks. The primary outcome measure is the change of tinnitus distress as quantified by the Tinnitus Questionnaire (TQ). Secondary outcome measures are tinnitus loudness and annoyance as well as tinnitus change during and after treatment. Audiologic and speech audiometric measurements will be performed to assess potential side effects. The aim of the present trial is to investigate the effectiveness and safety of a four-week cTBS treatment for chronic tinnitus and to compare two areas of stimulation. The results will contribute to clarifying the therapeutic capacity of rTMS in tinnitus.</p> <p>Trial registration</p> <p>The trial was registered with the clinical trials register of <url>http://www.clinicaltrials.gov</url> (NCT00518024).</p>
