    Speech Sensorimotor Learning through a Virtual Vocal Tract

    Studies of speech sensorimotor learning often manipulate auditory feedback by modifying isolated acoustic parameters, such as formant frequency or fundamental frequency, using near real-time resynthesis of a participant's speech. An alternative approach is to engage a participant in a total remapping of the sensorimotor working space using a virtual vocal tract. To support this approach for studying speech sensorimotor learning, we have developed a system to control an articulatory synthesizer using electromagnetic articulography data. Articulator movement data from the NDI Wave System are streamed to a Maeda articulatory synthesizer. The resulting synthesized speech provides auditory feedback to the participant. This approach allows the experimenter to generate novel articulatory-acoustic mappings. Moreover, the acoustic output of the synthesizer can be perturbed using acoustic resynthesis methods. Since no robust speech-acoustic signal is required from the participant, this system will allow for the study of sensorimotor learning in any individual, including those with severe speech disorders. In the current work we present preliminary results demonstrating that typically functioning participants can use a virtual vocal tract to produce diphthongs within a novel articulatory-acoustic workspace. Once sufficient baseline performance is established, perturbations to auditory feedback (formant shifting) can elicit compensatory and adaptive articulatory responses.
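
    The core of such a system is an experimenter-defined mapping from articulograph sensor positions to synthesizer control parameters. The sketch below illustrates that idea in Python; the sensor count, the affine mapping, and the stand-in I/O functions are illustrative assumptions, not the authors' implementation or the NDI Wave / Maeda APIs.

```python
# Minimal sketch of the articulatory-acoustic remapping at the heart of a
# virtual vocal tract: flattened EMA sensor positions are mapped to bounded
# synthesizer parameters by an experimenter-chosen affine transform.
# read_ema_frame() and synthesize() are simulated stand-ins, not the
# NDI Wave streaming API or the Maeda synthesizer.
import numpy as np

N_SENSORS = 3          # e.g., tongue tip, tongue body, jaw (x, y per sensor)
N_PARAMS = 7           # number of articulatory control parameters

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N_PARAMS, 2 * N_SENSORS))   # novel mapping
b = np.zeros(N_PARAMS)

def read_ema_frame():
    """Stand-in for one streamed EMA frame (flattened x, y positions, mm)."""
    return rng.normal(scale=5.0, size=2 * N_SENSORS)

def sensors_to_params(xy):
    """Affine remapping from sensor space to clipped synthesizer parameters."""
    return np.clip(W @ xy + b, -3.0, 3.0)

def synthesize(params, n_samples=160, fs=16000):
    """Stand-in synthesizer: a toy tone whose pitch tracks one parameter."""
    f0 = 120.0 + 10.0 * params[0]
    t = np.arange(n_samples) / fs
    return 0.1 * np.sin(2 * np.pi * f0 * t)

if __name__ == "__main__":
    for _ in range(5):                       # a few frames of the feedback loop
        params = sensors_to_params(read_ema_frame())
        audio = synthesize(params)           # would be played back (optionally
        print(params.round(2), audio.shape)  # formant-shifted) to the speaker
```

    In this sketch, changing W and b defines a new articulatory-acoustic workspace for the participant to learn, and the synthesized frames can be perturbed (for example, formant-shifted) before playback.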

    Learning to Produce Speech with an Altered Vocal Tract: The Role of Auditory Feedback

    Modifying the vocal tract alters a speaker’s previously learned acoustic–articulatory relationship. This study investigated the contribution of auditory feedback to the process of adapting to vocal-tract modifications. Subjects said the word /tɑs/ while wearing a dental prosthesis that extended the length of their maxillary incisor teeth. The prosthesis affected /s/ productions, and the subjects were asked to learn to produce “normal” /s/’s. They alternately received normal auditory feedback and noise that masked their natural feedback during productions. Acoustic analysis of the speakers’ /s/ productions showed that the distribution of energy across the spectrum moved toward that of normal, unperturbed production with increased experience with the prosthesis. However, the acoustic analysis did not show any significant differences in learning dependent on auditory feedback. By contrast, when naive listeners were asked to rate the quality of the speakers’ utterances, productions made when auditory feedback was available were judged to be closer to the subjects’ normal productions than those made when feedback was masked. The perceptual analysis showed that speakers were able to use auditory information to partially compensate for the vocal-tract modification. Furthermore, utterances produced during the masked conditions also improved over a session, demonstrating that the compensatory articulations were learned and remained available after auditory feedback was removed.
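
    One common way to summarize how fricative energy is distributed across the spectrum is the spectral centroid (center of gravity). The study's exact acoustic measure is not specified in this abstract, so the sketch below is an assumed, simplified illustration on synthetic signals.

```python
# Illustrative sketch of one way to quantify the distribution of fricative
# energy across the spectrum: the spectral centroid of an /s/ token. The
# measure, parameters, and synthetic "tokens" here are assumptions, not the
# study's actual analysis or data.
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of the signal's magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

if __name__ == "__main__":
    fs = 22050
    rng = np.random.default_rng(1)
    noise = rng.normal(size=fs // 10)                 # 100 ms of noise
    # Crude stand-ins: differenced noise for a normal /s/ (high-frequency
    # emphasis) and smoothed noise for a perturbed /s/ whose energy has
    # shifted downward in the spectrum.
    normal_s = np.diff(noise, prepend=0.0)
    perturbed_s = np.convolve(noise, np.ones(8) / 8, mode="same")
    print("normal /s/ centroid (Hz):   ", round(spectral_centroid(normal_s, fs)))
    print("perturbed /s/ centroid (Hz):", round(spectral_centroid(perturbed_s, fs)))
```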

    Motor Equivalence in Speech Production

    The first section provides a description of the concepts of “motor equivalence” and “degrees of freedom”, illustrated with a few examples of motor tasks in general and of speech production tasks in particular. The second section presents the methodology used to experimentally investigate motor equivalence phenomena in speech production. It is mainly based on paradigms that perturb the perception–action loop during ongoing speech, either by limiting the degrees of freedom of the speech motor system, by changing the physical conditions of speech production, or by modifying the feedback information. Examples are provided for each of these approaches. Implications of these studies for a better understanding of speech production and its interactions with speech perception are presented in the last section. These implications mainly concern the characterization of the mechanisms underlying interarticulatory coordination and the analysis of speech production goals.

    The role of gesture delay in coda /r/ weakening: an articulatory, auditory and acoustic study

    The cross-linguistic tendency of coda consonants to weaken, vocalize, or be deleted is shown to have a phonetic basis, resulting from gesture reduction or variation in gesture timing. This study investigates the effects of the timing of the anterior tongue gesture for coda /r/ on acoustics and on the perceived strength of rhoticity, making use of two sociolects of Central Scotland (working- and middle-class) in which coda /r/ is weakening and strengthening, respectively. Previous articulatory analysis revealed a strong tendency for these sociolects to use different coda /r/ tongue configurations: working- and middle-class speakers tend to use tip/front-raised and bunched variants, respectively. However, this finding does not explain working-class /r/ weakening. A correlational analysis in the current study showed a robust relationship between anterior lingual gesture timing, F3, and the percept of rhoticity. A linear mixed-effects regression analysis showed that both speaker social class and linguistic factors (word structure and the checked/unchecked status of the prerhotic vowel) had significant effects on tongue gesture timing and formant values. This study provides further evidence that gesture delay can be a phonetic mechanism for coda rhotic weakening and apparent loss, but social class emerges as the dominant factor driving lingual gesture timing variation.
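
    A mixed-effects analysis of the kind described might be specified as follows; the variable names, the toy data, and the random-intercept structure below are assumptions for illustration, not the study's actual dataset or model specification.

```python
# Hedged sketch of a linear mixed-effects model: gesture timing as a function
# of social class and linguistic factors, with a random intercept per speaker.
# All data here are simulated; the effect sizes are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "speaker": rng.choice([f"spk{i}" for i in range(12)], size=n),
    "social_class": rng.choice(["working", "middle"], size=n),
    "word_structure": rng.choice(["monosyllabic", "polysyllabic"], size=n),
    "vowel_status": rng.choice(["checked", "unchecked"], size=n),
})
# Simulated gesture delay (ms): later anterior gesture for working-class tokens.
df["gesture_delay"] = (
    40.0
    + 25.0 * (df["social_class"] == "working")
    + rng.normal(scale=10.0, size=n)
)

model = smf.mixedlm(
    "gesture_delay ~ social_class + word_structure + vowel_status",
    data=df,
    groups=df["speaker"],        # random intercept per speaker
)
print(model.fit().summary())
```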

    Introduction: The Fourth International Workshop on Epigenetic Robotics

    As in previous editions, this workshop aims to be a forum for multi-disciplinary research ranging from developmental psychology to the neural sciences (in their widest sense) and robotics, including computational studies. The aim is two-fold: on the one hand, understanding the brain through engineering embodied systems and, on the other hand, building artificial epigenetic systems. “Epigenetic” carries the idea that we are interested in studying development through interaction with the environment. This idea entails the embodiment of the system, its situatedness in the environment, and of course a prolonged period of postnatal development during which this interaction can actually take place. This is still a relatively new endeavor, although the seeds of the developmental robotics community were already in the air in the nineties (Berthouze and Kuniyoshi, 1998; Metta et al., 1999; Brooks et al., 1999; Breazeal, 2000; Kozima and Zlatev, 2000). A few had the intuition – see Lungarella et al. (2003) for a comprehensive review – that intelligence could not possibly be engineered simply by copying systems that are “ready made”, but rather that the development of the system plays a major role. This integration of disciplines raises the important issue of learning on the multiple scales of developmental time, that is, how to build systems that can eventually learn in any environment rather than being programmed for a specific environment. On the other hand, the hope is that robotics might become a new tool for brain science, similar to what simulation and modeling have become for the study of the motor system. Our community is still very much evolving and “under construction”, and for this reason we tried to encourage submissions from the psychology community. Additionally, we invited four neuroscientists and no roboticists for the keynote lectures. We received a record number of submissions (more than 50), and given the overall size and duration of the workshop, together with our desire to maintain a single-track format, we had to be more selective than ever in the review process (a 20% acceptance rate on full papers). This is, if not an index of quality, at least an index of the interest that gravitates around this still new discipline.

    Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

    This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production with and without jaw perturbations. This work was supported by the National Institute on Deafness and Other Communication Disorders (R01 DC02852, R01 DC01925).
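
    The compensation behaviour mentioned above can be illustrated with a much simpler, generic feedback-control sketch (it is not the model described in the paper): a feedforward command is adjusted by a fraction of the auditory error on each production. The one-dimensional "formant" state and all gains below are illustrative assumptions.

```python
# Generic sketch of feedback-based compensation for a perturbation, not the
# paper's neural model: the command converges so that the perceived output
# returns toward the intended acoustic target.
TARGET_F1 = 700.0        # intended acoustic target (Hz)
FEEDBACK_GAIN = 0.3      # fraction of auditory error corrected per production
PERTURBATION = 100.0     # externally imposed shift (e.g., jaw/lip or formant)

command = TARGET_F1      # initial feedforward command, in acoustic units
history = []
for step in range(20):
    produced = command + PERTURBATION        # perturbed output heard by speaker
    error = TARGET_F1 - produced             # auditory error signal
    command += FEEDBACK_GAIN * error         # feedback-based correction
    history.append(produced)

print("first produced F1:", round(history[0], 1))
print("final produced F1:", round(history[-1], 1), "(approaches the target)")
```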

    Speech planning in 4-year-old children versus adults: Acoustic and articulatory analyses

    This study investigates speech motor control in 4-year-old Canadian French children in comparison with adults. It focuses on measures of token-to-token variability in the production of isolated vowels and on anticipatory extra-syllabic coarticulation within V1-C-V2 sequences. Acoustic and ultrasound articulatory data were recorded. Acoustic data from 20 children and 10 adults have been analyzed. Thus far, ultrasound data have been analyzed from a subset of these participants: 6 children and 2 adults. In agreement with former studies, token-to-token variability was greater in children than in adults. Strong anticipation of V2 in V1 was found in all adults, but not in children. Most of the children showed no anticipation at all, and some of them showed a small amount of anticipation along the antero-posterior dimension only, manifested in the acoustic F2 dimension. These results are interpreted as evidence for the immaturity of children’s speech motor control from two perspectives: insufficiently stable motor control patterns for vowel production, and a lack of effectiveness in anticipating forthcoming gestures. In line with theories of optimal motor control, anticipatory coarticulation is assumed to be based on the use of internal models of the speech apparatus and the increasing maturation of these representations as speech develops.
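
    The two measures discussed above can be illustrated on simulated formant data; the values and the specific metrics below (standard deviation over repetitions, mean F2 shift in V1 by upcoming V2) are assumptions for illustration, not the study's analysis.

```python
# Hedged sketch of (1) token-to-token variability as the dispersion of F2
# across repetitions of the same isolated vowel, and (2) anticipatory
# coarticulation as the shift in V1's F2 depending on the upcoming V2 in a
# V1-C-V2 sequence. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(3)

# (1) Token-to-token variability: standard deviation of F2 over repetitions.
adult_f2 = rng.normal(loc=1500.0, scale=40.0, size=20)    # 20 tokens of one vowel
child_f2 = rng.normal(loc=1600.0, scale=120.0, size=20)
print("adult F2 SD (Hz):", adult_f2.std(ddof=1).round(1))
print("child F2 SD (Hz):", child_f2.std(ddof=1).round(1))

# (2) Anticipatory coarticulation: F2 of V1 measured separately by upcoming V2.
v1_f2_before_i = rng.normal(loc=1550.0, scale=40.0, size=20)  # V2 = /i/
v1_f2_before_u = rng.normal(loc=1450.0, scale=40.0, size=20)  # V2 = /u/
anticipation_hz = v1_f2_before_i.mean() - v1_f2_before_u.mean()
print("F2 shift in V1 attributable to V2 (Hz):", round(anticipation_hz, 1))
```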