
    Neural correlates of the processing of co-speech gestures

    In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures: hand movements that bear a formal relationship to the content of the speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed; thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g. "She touched the mouse") were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g. animal), or a gesture supporting the less frequent subordinate meaning (e.g. computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences than to sentences accompanied by a meaningless grooming movement. The main results are that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally, and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of the gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism for determining the goal of co-speech hand movements through an observation-execution matching process.

    Acoustic, psychophysical, and neuroimaging measurements of the effectiveness of active cancellation during auditory functional magnetic resonance imaging

    Functional magnetic resonance imaging (fMRI) is one of the principal neuroimaging techniques for studying human audition, but it generates an intense background sound that hinders listening performance and confounds measures of the auditory response. This paper reports the perceptual effects of an active noise control (ANC) system that operates in the electromagnetically hostile and physically compact neuroimaging environment to provide significant noise reduction without interfering with image quality. Cancellation was first evaluated at 600 Hz, corresponding to the dominant peak in the power spectrum of the background sound and the frequency at which cancellation is maximally effective. Microphone measurements at the ear demonstrated 35 dB of acoustic attenuation [from 93 to 58 dB sound pressure level (SPL)], while masked detection thresholds improved by 20 dB (from 74 to 54 dB SPL). Considerable perceptual benefits were also obtained across other frequencies, including those corresponding to dips in the spectrum of the background sound. Cancellation also improved the statistical detection of sound-related cortical activation, especially for sounds presented at low intensities. These results confirm that ANC offers substantial benefits for fMRI research.
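    The attenuation figures above follow standard decibel arithmetic: level in dB SPL is 20·log10(p/p0) with reference pressure p0 = 20 µPa, so a drop from 93 to 58 dB SPL is a 35 dB attenuation and a roughly 56-fold reduction in sound pressure. A minimal sketch of that conversion (the helper names are ours, not from the paper):

    ```python
    P_REF = 20e-6  # reference pressure for dB SPL: 20 micropascals

    def spl_to_pressure(db_spl):
        """Convert a level in dB SPL back to sound pressure in pascals."""
        return P_REF * 10 ** (db_spl / 20)

    def attenuation_db(level_before, level_after):
        """Attenuation in dB is simply the difference of the two levels."""
        return level_before - level_after

    before, after = 93.0, 58.0              # microphone measurements at the ear
    print(attenuation_db(before, after))    # → 35.0
    ratio = spl_to_pressure(before) / spl_to_pressure(after)
    print(round(ratio, 1))                  # → 56.2 (pressure reduced ~56-fold)
    ```

    The same subtraction applies to the masked-threshold result: 74 − 54 dB SPL is the reported 20 dB improvement.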

    Towards Automatic Speech Identification from Vocal Tract Shape Dynamics in Real-time MRI

    Vocal tract configurations play a vital role in generating distinguishable speech sounds by modulating the airflow and creating different resonant cavities during speech production. They contain abundant information that can be utilized to better understand the underlying speech production mechanism. As a step towards the automatic mapping of vocal tract shape geometry to acoustics, this paper employs effective video action recognition techniques, such as Long-term Recurrent Convolutional Network (LRCN) models, to identify different vowel-consonant-vowel (VCV) sequences from the dynamic shaping of the vocal tract. Such a model typically combines a CNN-based deep hierarchical visual feature extractor with recurrent networks, making the network spatio-temporally deep enough to learn the sequential dynamics of a short video clip for video classification tasks. We use a database consisting of 2D real-time MRI of vocal tract shaping during VCV utterances by 17 speakers. The comparative performances of this class of algorithms under various parameter settings and for various classification tasks are discussed. Interestingly, the results show a marked difference in model performance between speech classification and generic sequence or video classification tasks.
    Comment: To appear in the INTERSPEECH 2018 Proceedings.
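    The LRCN idea described above, a per-frame convolutional feature extractor feeding a recurrent layer that classifies the whole clip, can be sketched as follows. This is a minimal NumPy illustration with made-up dimensions: a fixed random projection stands in for the CNN and a plain tanh RNN stands in for the LSTM, so none of the names or sizes come from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions: T MRI frames per clip, each flattened to D pixels,
    # H hidden units in the recurrent layer, C candidate VCV classes.
    T, D, H, C = 12, 64 * 64, 128, 10

    # Stand-in for the CNN: a fixed random projection of each frame
    # to an H-dimensional feature vector (a real LRCN uses conv layers).
    W_cnn = rng.standard_normal((D, H)) * 0.01

    # Plain tanh RNN in place of the LSTM used by LRCN.
    W_xh = rng.standard_normal((H, H)) * 0.01
    W_hh = rng.standard_normal((H, H)) * 0.01
    W_out = rng.standard_normal((H, C)) * 0.01

    def classify_clip(frames):
        """frames: (T, D) array of flattened MRI frames -> (C,) class probabilities."""
        h = np.zeros(H)
        for frame in frames:                     # per-frame feature extraction
            feat = np.tanh(frame @ W_cnn)        # CNN stand-in
            h = np.tanh(feat @ W_xh + h @ W_hh)  # recurrent update over time
        logits = h @ W_out                       # classify from final hidden state
        exp = np.exp(logits - logits.max())      # numerically stable softmax
        return exp / exp.sum()

    clip = rng.standard_normal((T, D))           # one synthetic VCV clip
    probs = classify_clip(clip)
    print(probs.shape, round(float(probs.sum()), 6))  # → (10,) 1.0
    ```

    In a trained LRCN the projection would be replaced by learned convolutional layers and the recurrence by an LSTM, but the overall data flow, frame features in, a single clip-level class distribution out, is the same.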

    Vestibular schwannoma and ipsilateral endolymphatic hydrops: an unusual association

    Vestibular schwannoma and endolymphatic hydrops are two conditions that may present with similar audio-vestibular symptoms. The association of the two in the same patient represents an unusual finding that may lead clinicians to errors and delays in the diagnosis and clinical management of affected subjects. We discuss the case of a patient with an intrameatal vestibular schwannoma reporting symptoms suggestive of ipsilateral endolymphatic hydrops. The patient presented with fluctuating hearing loss, tinnitus, and episodes of acute rotatory vertigo, and underwent a full audiological evaluation and contrast-enhanced magnetic resonance imaging of the brain. Clinical audio-vestibular and radiological examination confirmed the presence of coexisting vestibular schwannoma and endolymphatic hydrops. The hydrops was treated pharmacologically; the vestibular schwannoma was monitored over time with a wait-and-scan protocol using conventional MRI. The association of vestibular schwannoma and endolymphatic hydrops is rare, but represents a possible finding in clinical practice. It is therefore recommended to investigate the presence of inner ear disorders in patients with vestibular schwannoma and, similarly, to exclude this condition in patients with symptoms typical of inner ear disorders.

    Crossed Aphasia in a Patient with Anaplastic Astrocytoma of the Non-Dominant Hemisphere

    Aphasia describes a spectrum of speech impairments due to damage in the language centers of the brain. Insult to the inferior frontal gyrus of the dominant cerebral hemisphere results in Broca's aphasia, the inability to produce fluent speech. The left cerebral hemisphere has historically been considered the dominant side, a characteristic long presumed to be related to a person's handedness. However, recent studies utilizing fMRI have shown that right hemispheric dominance occurs more frequently than previously proposed, regardless of a person's handedness. Here we present a case of a right-handed patient with Broca's aphasia caused by a right-sided brain tumor. This is significant not only because the occurrence of aphasia in right-handed individuals with right hemispheric brain damage (so-called "crossed aphasia") is unusual, but also because such findings support a dissociation between hemispheric linguistic dominance and handedness. © 2017, EduRad. All rights reserved.