125 research outputs found

    Adaptation of respiratory patterns in collaborative reading

    No full text
    International audience
    Speech and variations in respiratory chest circumference were monitored in eight French dyads while they read texts with increasing constraints on mutual synchrony. In line with previous research, we find that speakers mutually adapt their respiratory patterns. However, significant alignment is observed only when speakers need to perform together, i.e. when reading in alternation or synchronously. From quiet breathing, to listening, to speech reading, we did not find the gradual asymmetric shaping of respiratory cycles generally assumed in the literature (e.g. from symmetric inhalation and exhalation phases towards short inhalation and long exhalation). In contrast, the control of breathing seems to switch abruptly between two systems: vital vs. speech production. We also find that the syllabic and respiratory cycles are strongly phased at speech onsets. This phenomenon is in agreement with the quantal nature of speech rhythm beyond the utterance, previously observed via pause durations.
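
    As an illustration of the phasing analysis mentioned above, the sketch below shows one common way to estimate the instantaneous phase of a respiratory trace at speech onsets, via a Hilbert transform. The signal names, sampling rate and band-pass limits are assumptions made for the example, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def respiratory_phase_at_onsets(chest_signal, onsets_s, fs=100.0):
    """Instantaneous respiratory phase (radians) at each speech onset.

    chest_signal : 1-D chest-circumference trace (arbitrary units) -- assumed input
    onsets_s     : speech-onset times in seconds (e.g. from a forced alignment)
    fs           : sampling rate of the respiratory trace in Hz (assumed here)
    """
    # Band-pass around typical breathing rates (~0.1-1 Hz) before extracting phase
    b, a = butter(2, [0.1, 1.0], btype="band", fs=fs)
    resp = filtfilt(b, a, np.asarray(chest_signal, dtype=float))
    phase = np.angle(hilbert(resp))  # instantaneous phase via the analytic signal
    idx = np.clip((np.asarray(onsets_s) * fs).astype(int), 0, len(phase) - 1)
    return phase[idx]

# Strong phasing at speech onsets would show up as a narrow circular distribution
# of these phase values, e.g. assessed with a Rayleigh test for non-uniformity.
```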

    A brief history of articulatory-acoustic vowel representation

    No full text
    International audience
    This paper traces the concept of the vowel space through history. It shows that even with very limited experimental means, researchers from the 17th century onwards began to organize vowel systems along perceptual dimensions, either articulatory, by means of proprioceptive introspection, or auditory. With the development of experimental devices and the growing knowledge of acoustic and articulatory theory in the 19th century, the relationship between the two dimensions tightened. By the mid-20th century, the link between articulatory parameters such as jaw opening, position of the tongue constriction, or lip rounding, and the acoustic values of formants was clear. During this period, with the increasing number of phonological descriptions of the languages of the world, and the power of computer database analysis to extract universal tendencies, the question of how vowel systems are organized arose. The paper discusses this important question, focusing on two points: (1) how auditory constraints shape the positioning of a specific set of vowels within the acoustic space, and (2) how articulatory constraints shape the maximal extension of vowel systems, the so-called maximal vowel space (MVS).
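
    For readers unfamiliar with the articulatory-acoustic representation discussed above, the sketch below draws a toy F1/F2 vowel chart in the conventional phonetic orientation. The formant values are rough, illustrative approximations for three point vowels and are not taken from the paper.

```python
import matplotlib.pyplot as plt

# Approximate, illustrative formant values (Hz) for three point vowels;
# real values vary with speaker, language and measurement method.
vowels = {"i": (300, 2300), "a": (750, 1300), "u": (300, 800)}

fig, ax = plt.subplots()
for v, (f1, f2) in vowels.items():
    ax.annotate(v, (f2, f1), fontsize=14, ha="center", va="center")
ax.set_xlim(2600, 500)   # F2 decreases left to right (front vowels on the left)
ax.set_ylim(900, 200)    # F1 increases downward (open vowels at the bottom)
ax.set_xlabel("F2 (Hz)")
ax.set_ylabel("F1 (Hz)")
ax.set_title("Acoustic vowel space (illustrative)")
plt.show()
```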

    A possible neurophysiological correlate of audiovisual binding and unbinding in speech perception

    No full text
    International audience
    Audiovisual (AV) speech integration of auditory and visual streams generally results in fusion into a single percept. One classical example is the McGurk effect, in which incongruent auditory and visual speech signals may lead to a fused percept different from either the visual or the auditory input. In a previous set of experiments, we showed that if a McGurk stimulus is preceded by an incongruent AV context (composed of incongruent auditory and visual speech materials), the amount of McGurk fusion is largely decreased. We interpreted this result in the framework of a two-stage "binding and fusion" model of AV speech perception, in which an early AV binding stage controls the fusion/decision process and is likely to produce "unbinding", with less fusion, if the context is incoherent. In order to provide further electrophysiological evidence for this binding/unbinding stage, early auditory evoked N1/P2 responses were compared here during auditory, congruent AV and incongruent AV speech perception, following either a coherent or an incoherent AV context. Following the coherent context, in line with previous electroencephalographic/magnetoencephalographic studies, visual information in the congruent AV condition was found to modify auditory evoked potentials, with a latency decrease of P2 responses compared to the auditory condition. Importantly, both P2 amplitude and latency in the congruent AV condition increased from the coherent to the incoherent context. Although potential contamination by visual responses from the visual cortex cannot be discarded, our results might provide a possible neurophysiological correlate of an early binding/unbinding process operating on AV interactions.
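
    As a rough illustration of how the N1/P2 measures mentioned above are typically quantified, the sketch below extracts peak amplitude and latency from a condition-averaged ERP within a component window. The function, window boundaries and variable names are assumptions for the example, not the authors' analysis code.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Peak amplitude and latency of an averaged ERP within a time window.

    erp      : 1-D condition-averaged waveform (e.g. a fronto-central channel, in microvolts)
    times    : time axis in seconds, same length as erp
    polarity : -1 for a negative component such as N1, +1 for P2
    """
    erp, times = np.asarray(erp), np.asarray(times)
    mask = (times >= t_min) & (times <= t_max)
    seg, seg_t = erp[mask] * polarity, times[mask]
    i = np.argmax(seg)                       # most extreme point of the chosen polarity
    return seg[i] * polarity, seg_t[i]       # amplitude (original sign) and latency

# Example windows (assumed): N1 in 80-150 ms, P2 in 150-250 ms, compared across
# coherent- vs incoherent-context conditions:
# n1_amp, n1_lat = peak_in_window(avg_av, times, 0.08, 0.15, polarity=-1)
# p2_amp, p2_lat = peak_in_window(avg_av, times, 0.15, 0.25, polarity=+1)
```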

    Speech in the mirror? Neurobiological correlates of self speech perception

    No full text
    International audience
    Self-awareness and self-recognition during action observation may partly result from a functional matching between action and perception systems. This perception-action interaction enhances the integration between sensory inputs and our own sensory-motor knowledge. We present combined EEG and fMRI studies examining the impact of self-knowledge on multisensory integration mechanisms. More precisely, we investigated this impact during auditory, visual and audio-visual speech perception. Our hypothesis was that hearing and/or viewing oneself talk would facilitate the bimodal integration process and activate sensory-motor maps to a greater extent than observing others. In both studies, half of the stimuli presented the participants’ own productions (self condition) and the other half presented an unknown speaker (other condition). For the “self” condition, we recorded videos of each participant producing /pa/, /ta/ and /ka/ syllables. For the “other” condition, we recorded videos of a speaker the participants had never met producing the same syllables. These recordings were then presented in different modalities: auditory only (A), visual only (V), audio-visual (AV) and incongruent audio-visual (AVi, where the audio and video components came from different speakers). In the EEG experiment, 18 participants had to categorize the syllables. In the fMRI experiment, 12 participants passively listened to and/or viewed the syllables. In the EEG session, audiovisual interactions were estimated by comparing auditory N1/P2 ERPs in the bimodal condition (AV) with the sum of the responses in the A-only and V-only conditions (A+V). The amplitude of P2 ERPs was lower for AV than for A+V. Importantly, N1 latencies were shorter for the “Visual-self” condition than for the “Visual-other” condition, regardless of signal type. In the fMRI session, the presentation modality had an impact on brain activation: activation was stronger for audio or audiovisual stimuli in the superior temporal auditory regions (A = AV = AVi > V), and for video or audiovisual stimuli in MT/V5 and in the premotor cortices (V = AV = AVi > A). In addition, brain activity was stronger in the “self” than in the “other” condition in both the left posterior inferior frontal gyrus and the cerebellum (lobules I-IV). In line with previous studies on multimodal speech perception, our results point to the existence of integration mechanisms for auditory and visual speech signals. Critically, they further demonstrate a processing advantage when the perceptual situation involves our own speech production. In addition, hearing and/or viewing oneself talk increased activation in the left posterior IFG and the cerebellum, regions generally responsible for predicting the sensory outcomes of action generation. Altogether, these results suggest that viewing our own utterances leads to a temporal facilitation of auditory and visual speech integration. Moreover, processing afferent and efferent signals in sensory-motor areas leads to self-awareness during speech perception.
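
    The AV versus A+V comparison described above follows a standard additive-model logic; a minimal sketch of that comparison is given below, assuming condition-averaged waveforms on a shared time axis. The variable names are placeholders, not the authors' code.

```python
import numpy as np

def additive_model_difference(avg_a, avg_v, avg_av):
    """Difference between the bimodal ERP and the sum of the unimodal ERPs.

    avg_a, avg_v, avg_av : 1-D condition-averaged waveforms (e.g. in microvolts)
    sampled on the same time axis. A reliable non-zero difference in the
    N1/P2 range is taken as evidence of audiovisual interaction (AV != A + V).
    """
    avg_a, avg_v, avg_av = map(np.asarray, (avg_a, avg_v, avg_av))
    return avg_av - (avg_a + avg_v)
```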

    Réorganisation du conduit vocal et réorganisation corticale de la parole : de la perturbation aux lèvres à la glossectomie. Études acoustiques et IRMf

    Get PDF
    21 pages
    Recovering the use of the tongue - the central articulator of speech - to produce the ten vowels of French after a glossectomy followed by lingual reconstruction using a thigh muscle (the gracilis): this was the difficult problem that patient K.H. (53 years old) managed to solve within nine months. How was he able to learn to control his new "tongue" to produce the different vowels distinctly? This is what we sought to understand in this study. We set up a longitudinal experimental follow-up of the patient, recording his acoustic productions and his brain activity just before the operation, one month after the operation, and nine months after the operation. We were thus able to track the patient's cortical recovery, namely the re-lateralization in the left hemisphere for the articulation of the vowel categories of his language, in relation to the improvement of his articulatory and acoustic performance. More fundamentally, this feat attests to phonological cortical plasticity through the compensatory equifinality of the tongue system, in this case through the interplay of motor and acoustic equivalences.

    Cerebral correlates of multimodal pointing: An fMRI study of prosodic focus, syntactic extraction, digital- and ocular-pointing

    Get PDF
    International audience
    Deixis, or pointing, plays a crucial role in language acquisition and speech communication, and can be conveyed in several modalities. The aim of this paper is to explore the cerebral substrate of multimodal pointing actions. We present an fMRI study of pointing including: 1) index-finger pointing, 2) eye pointing, 3) prosodic focus production, and 4) syntactic extraction (during speech production). Fifteen subjects were examined while they gave digital, ocular and oral responses inside the 3T imager. Results of a random-effects group analysis show that digital and prosodic pointing recruit the parietal lobe bilaterally, while ocular and syntactic pointing do not. A grammaticalization process is suggested to explain the lack of parietal activation in the syntactic condition. Further analyses are carried out on the link between digital and prosodic parietal activations.

    Représentations cérébrales des articulateurs de la parole

    Get PDF
    National audience
    In order to localize cerebral regions involved in articulatory control processes, ten subjects were examined using functional magnetic resonance imaging while executing lip, tongue and jaw movements. Although the three motor tasks activated a set of common brain areas classically involved in motor control, distinct movement representation sites were found in the motor cortex. These results support and extend previous brain imaging studies by demonstrating a sequential dorsoventral somatotopic organization of lips, jaw and tongue in the motor cortex.

    Brain activations in speech recovery process after intra-oral surgery: an fMRI study

    No full text
    International audience
    This study aims to describe the cortical and subcortical activation patterns associated with functional recovery of speech production after reconstructive mouth surgery. Our ultimate goal is to understand how the brain deals with altered relationships between motor commands and auditory/orosensory feedback, and establishes new inter-articulatory coordination to preserve speech communication abilities. We conducted a longitudinal sparse-sampling fMRI study involving orofacial, vowel and syllable production tasks in 9 patients across three sessions (one week before, one month after, and three months after surgery). Healthy subjects were recorded in parallel. Results show that, for patients in the pre-surgery session, activation patterns are in good agreement with the classical speech production network. Crucially, lower activity in sensorimotor control brain areas during orofacial and speech production movements is observed for patients in all sessions. One month after surgery, the superior parietal lobule is more activated for simple vowel production, suggesting a strong involvement of a multimodal integration process to compensate for the loss of tongue motor control. Altogether, these results indicate both altered and adaptive sensorimotor control mechanisms in these patients.
    Index Terms: Neurophonetics, fMRI, speech recovery, motor control, glossectomy, whole-brain analysis, sparse-sampling