
Evaluation of a Talking Head based on Appearance Models (ISCA Archive)

In this paper we describe how 2D appearance models can be applied to the problem of creating a near-videorealistic talking head. A speech corpus of a talker uttering a set of phonetically balanced training sentences is analysed using a generative model of the human face. Segments of original parameter trajectories corresponding to the synthesis unit are extracted from a codebook, normalised, blended, concatenated and smoothed before being applied to the model to give natural, realistic animations of novel utterances. We also present some early results of subjective tests conducted to determine the realism of the synthesiser.

1. Background

It is well known that speech is a multi-modal form of communication; seeing the face of the talker provides additional information […]
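As a rough illustration of the unit-concatenation pipeline described in the abstract (codebook lookup, normalisation, blending, concatenation and smoothing of appearance-model parameter trajectories), the following Python sketch shows one plausible arrangement of those steps. The specific choices here, linear time normalisation, a raised-cosine cross-fade at each join, moving-average smoothing, and naive first-candidate unit selection, are assumptions for illustration only, not the authors' implementation.

import numpy as np

def normalise(segment: np.ndarray, target_len: int) -> np.ndarray:
    """Linearly resample a (frames x params) trajectory segment to target_len frames."""
    src = np.linspace(0.0, 1.0, len(segment))
    dst = np.linspace(0.0, 1.0, target_len)
    return np.stack([np.interp(dst, src, segment[:, k])
                     for k in range(segment.shape[1])], axis=1)

def blend_concatenate(segments: list[np.ndarray], overlap: int = 4) -> np.ndarray:
    """Concatenate segments, cross-fading `overlap` frames at each join."""
    out = segments[0]
    fade = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, overlap)))[:, None]  # raised cosine
    for seg in segments[1:]:
        join = (1.0 - fade) * out[-overlap:] + fade * seg[:overlap]
        out = np.concatenate([out[:-overlap], join, seg[overlap:]], axis=0)
    return out

def smooth(traj: np.ndarray, width: int = 3) -> np.ndarray:
    """Moving-average smoothing along time to remove residual discontinuities at joins."""
    kernel = np.ones(width) / width
    return np.stack([np.convolve(traj[:, k], kernel, mode="same")
                     for k in range(traj.shape[1])], axis=1)

def synthesise(units: list[str],
               codebook: dict[str, list[np.ndarray]],
               durations: list[int]) -> np.ndarray:
    """Build an appearance-parameter trajectory for a novel utterance from stored segments."""
    chosen = [codebook[u][0] for u in units]          # naive selection: first candidate
    norm = [normalise(s, d) for s, d in zip(chosen, durations)]
    return smooth(blend_concatenate(norm))

The resulting trajectory would then drive the generative face model frame by frame to render the animation; the codebook keys, candidate ranking and smoothing parameters are all placeholders standing in for details the abstract does not specify.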