6 research outputs found

    A 3D talking head for mobile devices based on unofficial iOS WebGL support

    In this paper we present the implementation of a WebGL Talking Head for iOS mobile devices (Apple iPhone and iPad). It works on standard MPEG-4 Facial Animation Parameters (FAPs) and speaks with the Italian version of FESTIVAL TTS. It is entirely based on real human data: 3D kinematic information is used to build the lip articulatory model and to directly drive the talking face, generating human-like facial movements. In the last year we developed the WebGL version of the avatar. WebGL, which brings 3D graphics to the web, is currently supported by the major desktop web browsers. No official support has yet been provided for the main mobile platforms, although the Firefox beta enables it on Android phones. Starting from iOS 5, WebGL is enabled only for the advertisement library class (which is intended for placing ad banners in applications). We have been able to exploit this feature to visualize and animate our WebGL talking head.
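
    As an illustration only (the abstract includes no code), the sketch below shows one plausible way a single MPEG-4 FAP value (open_jaw, FAP #3) could be turned into a vertex displacement and streamed to a WebGL buffer each frame. The vertex layout, displacement directions, MNS constant and helper names are assumptions, not the authors' implementation.

    // Hypothetical sketch: driving a few lip vertices from one MPEG-4 FAP
    // (open_jaw, FAP #3) in WebGL. Vertex layout, displacement directions
    // and the MNS value are assumed, not taken from the paper.
    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl");
    if (gl === null) throw new Error("WebGL not available");

    // Rest positions of three lower-lip vertices (x, y, z), illustrative only.
    const restPositions = new Float32Array([
      -0.02, -0.010, 0.050,
       0.00, -0.012, 0.055,
       0.02, -0.010, 0.050,
    ]);

    // Assumed per-vertex displacement direction for a jaw-opening gesture.
    const openJawDirection = new Float32Array([
      0, -1, 0,
      0, -1, 0,
      0, -1, 0,
    ]);

    // Mouth-nose separation (MNS) FAPU in model units; an assumed constant.
    const MNS = 0.03;

    const positionBuffer = gl.createBuffer();
    if (positionBuffer === null) throw new Error("Could not create buffer");
    const animated = new Float32Array(restPositions.length);

    // Apply FAP #3, whose value is expressed in MNS/1024 units, and
    // re-upload the deformed vertices to the GPU.
    function applyOpenJaw(
      ctx: WebGLRenderingContext,
      buffer: WebGLBuffer,
      fapValue: number,
    ): void {
      const displacement = (fapValue * MNS) / 1024;
      for (let i = 0; i < restPositions.length; i++) {
        animated[i] = restPositions[i] + displacement * openJawDirection[i];
      }
      ctx.bindBuffer(ctx.ARRAY_BUFFER, buffer);
      ctx.bufferData(ctx.ARRAY_BUFFER, animated, ctx.DYNAMIC_DRAW);
    }

    // A FAP stream decoded alongside the TTS output would call this per frame.
    applyOpenJaw(gl, positionBuffer, 512);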

    LUCIA: An open source 3D expressive avatar for multimodal h.m.i.

    LUCIA is an MPEG-4 facial animation system developed at ISTC-CNR. It works on standard Facial Animation Parameters and speaks with the Italian version of FESTIVAL TTS. To achieve an emotive/expressive talking head, LUCIA was built from real human data physically extracted by the ELITE optotracking movement analyzer. LUCIA can copy a real human by reproducing the movements of passive markers positioned on his or her face and recorded by the ELITE device, or it can be driven by an emotionally XML-tagged input text, thus realizing a true audio/visual emotive/expressive synthesis. Synchronization between visual and audio data is very important in order to create the correct WAV and FAP files needed for the animation. LUCIA's voice is based on the ISTC Italian version of the FESTIVAL-MBROLA packages, modified by means of an appropriate APML/VSML tagged language. LUCIA is available in two different versions: an open source framework and the "work in progress" WebGL version.
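
    The abstract's note on WAV/FAP synchronization can be pictured with the hedged sketch below, which indexes pre-computed FAP frames by the audio element's playback clock at an assumed 25 fps frame rate; the frame layout and the applyFapFrame callback are hypothetical, not part of LUCIA's actual pipeline.

    // Hypothetical sketch of keeping FAP frames in step with WAV playback.
    // The 25 fps rate, the frame type and applyFapFrame are assumptions.
    type FapFrame = Float32Array; // one value per animated FAP

    const FAP_FRAME_RATE = 25; // frames per second, an assumed rate

    function startSynchronizedPlayback(
      audio: HTMLAudioElement,
      fapFrames: FapFrame[],
      applyFapFrame: (frame: FapFrame) => void,
    ): void {
      if (fapFrames.length === 0) return;
      const tick = (): void => {
        // Pick the FAP frame that corresponds to the current audio time.
        const index = Math.min(
          Math.floor(audio.currentTime * FAP_FRAME_RATE),
          fapFrames.length - 1,
        );
        applyFapFrame(fapFrames[index]);
        if (!audio.ended) requestAnimationFrame(tick);
      };
      void audio.play().then(() => requestAnimationFrame(tick));
    }

    Reading the animation off the audio clock, rather than a separate timer, keeps lips and voice aligned even when rendering frames are dropped.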

    INTERFACE Toolkit: A New Tool for Building IVAs


    Tongue control and its implication in pronunciation training

    Pronunciation training based on speech production techniques illustrating tongue movements is gaining popularity. However, there is not sufficient evidence that learners can imitate tongue animations. In this paper, we argue that although controlling tongue movements related to speech is not an easy task, training with visual feedback improves this control. We investigated human awareness of controlling tongue body gestures. In a first experiment, participants were asked to perform tongue movements composed of two sets of gestures. This task was evaluated by observing ultrasound imaging of the tongue recorded during the experiment; no feedback was provided. In a second experiment, a short training session was added in which participants could observe real-time ultrasound imaging of their own tongue movements, with the goal of increasing their awareness of their tongue gestures. A pretest and posttest were carried out without any feedback. The results suggest that, without a priori knowledge, it is not easy to finely control tongue body gestures. The second experiment showed performance gains after a short training session, suggesting that providing visual feedback, even briefly, improves tongue gesture awareness.

    Trainable Articulatory Control Models for Visual Speech Synthesis
