In this paper, a model-based face analysis and synthesis system is presented. The system, named VR-Face,
tracks and estimates the user's 3D head motion in real time and represents the estimated motion with a pre-rendered,
texture-mapped 3D head model. To initialize tracking, the user identifies the two eyes and one nostril on the
screen. Because tracking relies on these facial features, the background can be complex and even dynamic. When
the system fails to follow the head motion, it prompts the user with a box indicating the original face position
so that it can recover from tracking errors.
The overall performance, including both analysis and synthesis, exceeds 25 frames/sec on a PC
with a 400 MHz Pentium II MMX CPU. The system has been demonstrated under different lighting conditions
with several low-cost PC cameras.