    Two and three-dimensional visual articulatory models for pronunciation training and for treatment of speech disorders

    Visual articulatory models can be used to visualize vocal tract articulatory speech movements. This information may be helpful in pronunciation training or in the therapy of speech disorders. To test this hypothesis, speech sound recognition rates were quantified for mute animations of vocalic and consonantal speech movements generated by a 2D and a 3D visual articulatory model. The visually based speech sound recognition test (mimicry test) was performed by two groups of eight children (five to eight years old) matched in age and sex. The children were asked to mimic the mute speech movement animations for different speech sounds. Recognition rates were significantly above chance but showed no significant difference between the two models. Children older than five years are thus capable of interpreting vocal tract articulatory speech sound movements in a speech-adequate way without any preparatory training. The more complex 3D display of vocal tract articulatory movements provides no significant advantage over the visually simpler 2D midsagittal display.
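    As a minimal illustration of the "above chance" comparison (not the study's own analysis), a one-sided exact binomial test is a standard way to check whether a recognition rate exceeds a known chance level. The trial count, chance level, and hit count below are hypothetical, purely for the sketch.

        # Hypothetical sketch: exact one-sided binomial test of recognition vs. chance.
        # Trial count, chance level, and hit count are invented, not study data.
        from math import comb

        def binomial_p_above_chance(hits, trials, chance):
            """Return P(X >= hits) for X ~ Binomial(trials, chance)."""
            return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
                       for k in range(hits, trials + 1))

        # Example: if 10 speech sounds are possible, chance of guessing a mimicked
        # sound correctly is 1/10; a child scoring 6 of 20 animations correct gives
        p = binomial_p_above_chance(hits=6, trials=20, chance=0.10)
        print(f"one-sided p = {p:.4f}")  # ~0.011, i.e. significantly above chance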

    Articulation therapy for children with cleft palate using visual articulatory models and ultrasound biofeedback

    Visual biofeedback tools, such as Electropalatography (EPG), are recommended for assessing and treating speech sound disorders (SSDs) associated with Cleft Palate (CP). However, EPG is not suitable for all clients because it depends on stable dentition and on the timing of palatal repair. Ultrasound is becoming increasingly popular for treating SSDs and has no reported dependency on vocal tract structure; however, its clinical application in the CP population remains to be tested. We compared Visual Articulatory Models (VAMs) with ultrasound for the treatment of SSDs in two children with repaired submucous CP. Both children received two blocks of therapy of eight sessions each, the first block using VAMs and the second using ultrasound. Results showed that both children improved overall, with more improvement found in the first block of therapy using VAMs. https://www.internationalphoneticassociation.org/icphs/icphs2015caslpub3947pub85
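    For illustration only, "more improvement" across therapy blocks is often quantified as the change in percent consonants correct (PCC) between pre- and post-block probes; the probe scores and block labels below are hypothetical, not the study's results.

        # Hypothetical sketch: per-block gain in percent consonants correct (PCC).
        # All probe scores are invented for illustration.
        def pcc(correct, total):
            return 100.0 * correct / total

        # (pre, post) probe scores per therapy block: (consonants correct, total)
        blocks = {
            "block 1 (VAMs)":       ((30, 100), (55, 100)),
            "block 2 (ultrasound)": ((55, 100), (62, 100)),
        }

        for name, (pre, post) in blocks.items():
            gain = pcc(*post) - pcc(*pre)
            print(f"{name}: PCC gain = {gain:+.1f} points")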

    Tongue control and its implication in pronunciation training

    Pronunciation training based on speech production techniques that illustrate tongue movements is gaining popularity. However, there is insufficient evidence that learners can actually imitate tongue animations. In this paper, we argue that although controlling speech-related tongue movements is not an easy task, training with visual feedback improves this control. We investigated participants' awareness of their tongue body gestures. In a first experiment, participants were asked to perform tongue movements composed of two sets of gestures. Performance was evaluated from ultrasound imaging of the tongue recorded during the experiment; no feedback was provided. In a second experiment, a short training session was added in which participants could observe real-time ultrasound imaging of their own tongue movements, with the goal of increasing their awareness of their tongue gestures. A pretest and a posttest were carried out without any feedback. The results suggest that, without prior knowledge, it is not easy to finely control tongue body gestures. The second experiment showed a gain in performance after the short training session, suggesting that providing visual feedback, even briefly, improves tongue gesture awareness.
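    As a sketch of how such a pretest/posttest gain might be summarized (the participant scores and scoring scale below are hypothetical, not the paper's data), one can compute per-participant gains and report their mean and spread:

        # Hypothetical sketch: summarizing pretest/posttest tongue-gesture scores.
        # Scores are invented; the paper's actual metric and values may differ.
        from statistics import mean, stdev

        pretest  = [4, 5, 3, 6, 4, 5]   # gestures performed correctly (of 10)
        posttest = [7, 6, 5, 8, 6, 7]   # after a short real-time ultrasound session

        gains = [b - a for a, b in zip(pretest, posttest)]
        print(f"mean gain = {mean(gains):.2f} +/- {stdev(gains):.2f} gestures")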

    Augmented Reality

    Augmented reality (AR) is a natural development from virtual reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human-factors issues and other constraints. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section covers outdoor and mobile AR applications such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section presents a number of new and useful applications in daily living and learning.