
Spatio-Temporal Image-Based Texture Atlases for Dynamic 3-D Models

By Zsolt Janko and Jean-Philippe Pons


In this paper, we propose a method for creating a high-quality spatio-temporal texture atlas from a dynamic 3-D model and a set of calibrated video sequences. By adopting an actual spatio-temporal perspective, beyond independent frame-by-frame computations, we fully exploit the very high redundancy in the input video sequences. First, we drastically cut down on the amount of texture data, and thereby we greatly enhance the portability and the rendering efficiency of the model. Second, we gather the numerous different viewpoint/time appearances of the scene, so as to recover from low resolution, grazing views, highlights, shadows and occlusions which affect some regions of the spatio-temporal model. Altogether, our method allows the synthesis of novel views from a small quantity of texture data, with an optimal visual quality throughout the sequence, with minimally visible color discontinuities, and without flickering artifacts. These properties are demonstrated on real datasets.
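To make the viewpoint-selection idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of scoring candidate cameras per mesh face, favoring frontal over grazing views and nearby over low-resolution ones. All names here (`face_quality`, `select_views`) are hypothetical; the paper's graph-cut formulation would additionally add smoothness terms across faces and frames to minimize visible seams and flickering, which this winner-take-all version omits.

```python
import numpy as np

def face_quality(face_normal, face_center, cam_center):
    """Hypothetical per-camera quality score for one face: the cosine of the
    viewing angle (0 for grazing or back-facing views) divided by squared
    distance (a proxy for projected resolution)."""
    view_dir = cam_center - face_center
    dist = np.linalg.norm(view_dir)
    view_dir = view_dir / dist
    cos_angle = max(float(np.dot(face_normal, view_dir)), 0.0)
    return cos_angle / (dist * dist)

def select_views(face_normals, face_centers, cam_centers):
    """Assign each face the best-scoring camera. A real spatio-temporal
    atlas would solve this as a labeling problem (e.g. via graph cuts)
    rather than independently per face."""
    labels = []
    for normal, center in zip(face_normals, face_centers):
        scores = [face_quality(normal, center, cam) for cam in cam_centers]
        labels.append(int(np.argmax(scores)))
    return labels
```

For example, a face whose normal points toward camera 0 and away from camera 1 is assigned camera 0, and vice versa.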

Topics: dynamic 4D models, texturing, graph-cut, ACM: I.: Computing Methodologies/I.4: IMAGE PROCESSING AND COMPUTER VISION/I.4.8: Scene Analysis/I.4.8.0: Color, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Year: 2009
DOI identifier: 10.1109/ICCVW.2009.5457481
OAI identifier: oai:HAL:inria-00435534v1