Geometry-Assisted Image-based Rendering for Facial Analysis and Synthesis

Abstract
In this paper, we present an image-based method for the tracking and rendering of faces. We use the algorithm in an immersive video-conferencing system in which multiple participants are placed in a common virtual room. This requires viewpoint modification of dynamic objects. Since hair and uncovered areas are difficult to model by pure 3-D geometry-based warping, we add image-based rendering techniques to the system. By interpolating novel views from a 3-D image volume, natural-looking results can be achieved. The image-based component is embedded into a geometry-based approach in order to limit the number of images that have to be stored initially for interpolation. Temporally changing facial features are also warped using the approximate geometry information. Both geometry and image-cube data are jointly exploited in facial expression analysis and synthesis.

Key words: facial animation, image-based rendering, model-based coding, face tracking

Image-based rendering is a technique that has received considerable interest in computer graphics for the realistic rendering of complex scenes. Instead of modeling the shape, material, and reflection of objects as well as light sources and light exchange with high accuracy and sophisticated physical models, image-based rendering synthesizes new views of a scene by interpolating among multiple images taken with one or multiple cameras. Examples of such approaches are light fields (1) or concentric mosaics (2; 3). The use of real pictures leads to natural-looking scenes and allows the reproduction of fine structures (e.g., hair, fur, leaves) that are difficult to model with polygonal representations. Also
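To make the interpolation idea concrete, the following is a minimal sketch, not the system described in the paper, of synthesizing a novel view from a stack of stored views by cross-fading the two nearest captured viewpoints. The array layout, the `view_angles` parameter, and the purely linear blend are illustrative assumptions; the image-cube rendering discussed here would additionally combine this with geometry-based warping.

```python
import numpy as np

def render_novel_view(image_cube, view_angles, query_angle):
    """Synthesize a view for `query_angle` by blending the two stored
    views whose capture angles bracket it (simple linear cross-fade).

    image_cube  : array of shape (N, H, W, 3), views ordered by angle
    view_angles : array of shape (N,), capture angle of each stored view
    query_angle : desired virtual viewpoint angle
    """
    view_angles = np.asarray(view_angles, dtype=float)

    # Index of the first stored view at or beyond the query angle.
    hi = int(np.searchsorted(view_angles, query_angle))
    hi = int(np.clip(hi, 1, len(view_angles) - 1))
    lo = hi - 1

    # Blending weight from the angular distance to the two neighbors.
    span = view_angles[hi] - view_angles[lo]
    w = (query_angle - view_angles[lo]) / span if span > 0 else 0.0
    w = float(np.clip(w, 0.0, 1.0))

    # Cross-fade the two neighboring images.
    blended = (1.0 - w) * image_cube[lo] + w * image_cube[hi]
    return blended.astype(image_cube.dtype)
```

Pure cross-fading like this only looks plausible when neighboring views are densely sampled; limiting the number of stored images, as proposed here, is what motivates embedding the interpolation into an approximate geometry model.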