
    Analysis and Encoding of Lip Movements

    Model-based image coding has recently attracted much attention as a basis for the next generation of communication services. This article proposes a model-based image coding scheme for the mouth, aimed at capturing the visual information related to speech so that the decoded video sequence remains suitable for lip-reading. Such a coding system is basically composed of an analysis process on the transmitting side and a synthesis process on the receiving side. On the transmitting side, an encoding technique based on a deformable template of the lips is introduced, which represents the shape of the mouth in a very compact way. On the receiving side, a decoding technique for lip-movement synthesis is proposed, which animates the lips starting from a reference image by applying warping techniques driven by the proposed model.
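
    As an illustration only, not the authors' implementation, the sketch below shows how a deformable lip template can reduce the mouth shape to a handful of parameters per frame: the lip contours are modelled as parabolic arcs, and only the template parameters would need to be encoded. The parameter names (xc, yc, w, h_up, h_low) are assumptions made here for the sketch.

    import numpy as np

    def lip_template(xc, yc, w, h_up, h_low, n_points=50):
        """Return (upper, lower) lip contour points of a parabolic mouth template.

        xc, yc : mouth centre in image coordinates
        w      : half mouth width
        h_up   : height of the upper lip arch above the centre line
        h_low  : depth of the lower lip arch below the centre line
        """
        x = np.linspace(-w, w, n_points)
        # Parabolas vanish at the mouth corners (x = +/-w) and peak at the centre.
        upper = np.stack([xc + x, yc - h_up * (1.0 - (x / w) ** 2)], axis=1)
        lower = np.stack([xc + x, yc + h_low * (1.0 - (x / w) ** 2)], axis=1)
        return upper, lower

    # Only the five template parameters per frame are transmitted, not pixels.
    params_per_frame = np.array([64.0, 80.0, 22.0, 6.0, 9.0], dtype=np.float32)
    print(f"bytes per frame for the mouth shape: {params_per_frame.nbytes}")

    On the receiving side, the decoded parameters would regenerate the contours, which could then drive a warp of a reference mouth image toward the reconstructed shape, in the spirit of the synthesis stage described above.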