Automatic adaptation of face models in videophone sequences with more than one person

By Markus Kampmann and Geovanni Martinez


For coding of moving images at low bit rates, an object-based analysis-synthesis coder (OBASC) has been introduced [1]. In an OBASC, real objects are described by model objects. A model object is defined by motion, shape and color parameters, which are estimated automatically by image analysis. With the source model of moving 3D objects [2], the shape of a model object is represented by a 3D wireframe, the motion parameters describe translation and rotation of the model object in 3D space, and the color parameters denote the luminance and chrominance reflectance of the model object's surface. No a priori knowledge about the image content is exploited.

In typical videophone sequences, the head and shoulders of human persons appear in the scene. This knowledge can be exploited to improve the modelling accuracy for this kind of scene. Therefore, OBASC is extended in [3] to a knowledge-based analysis-synthesis coder (KBASC) by adaptation of the 3D face model Candide [4] to a person in the scene. In order to adapt Candide, the positions of the eyes and mouth have to be estimated. First, assuming that only one person appears in the scene, the head area is extracted by evaluating the silhouette of the person, assuming a wide upper part of the body and a narrower head. Then, the eyes and mouth positions are estimated.
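The head-area extraction step can be illustrated with a minimal sketch. This is an illustrative simplification under assumed inputs (a binary silhouette mask as a NumPy array), not the authors' exact algorithm: scanning the silhouette top-down, the shoulder line is taken as the row where the silhouette width increases most sharply, and the head area is the bounding box of the silhouette above that line.

```python
import numpy as np

def head_area_from_silhouette(mask):
    """Estimate a head bounding box from a binary person silhouette.

    Heuristic sketch (assumed, simplified): per image row, measure the
    silhouette width; the shoulder line is the row with the largest
    top-down width increase, and the head lies above it.
    Returns (row_top, row_shoulder, col_left, col_right).
    """
    widths = mask.sum(axis=1)           # silhouette width per row
    rows = np.nonzero(widths)[0]        # rows containing the person
    top = rows[0]
    # Largest top-down width jump marks the transition head -> shoulders.
    jumps = np.diff(widths[rows])
    shoulder = rows[np.argmax(jumps) + 1]
    # Horizontal extent of the silhouette within the head rows.
    cols = np.nonzero(mask[top:shoulder].any(axis=0))[0]
    return int(top), int(shoulder), int(cols[0]), int(cols[-1] + 1)

# Synthetic silhouette: a narrow head on top of a wide torso.
mask = np.zeros((120, 100), dtype=np.uint8)
mask[10:40, 40:60] = 1    # head, width 20
mask[40:120, 20:80] = 1   # torso, width 60
r0, r1, c0, c1 = head_area_from_silhouette(mask)
print(r0, r1, c0, c1)     # → 10 40 40 60
```

A real system would first have to segment the person from the background to obtain the silhouette, and the single sharpest width jump is only reliable for the one-person head-and-shoulders scenes assumed here.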

Year: 1997
Provided by: CiteSeerX