
Motion tubes for the representation of image sequences

Abstract

In this paper, we introduce a novel way to represent an image sequence which naturally exhibits the temporal persistence of its textures. Standardized representations have been thoroughly optimized, and significant further improvements have become more and more difficult to obtain. As an alternative, Analysis-Synthesis (AS) coders have focused on the use of texture within a video coder. We introduce here a new AS representation of image sequences that remains close to the classic block-based representation. By tracking textures throughout the sequence, we propose to reconstruct it from a set of moving textures, which we call motion tubes. A new motion model is then proposed which handles both continuities and discontinuities of the motion field, by hybridizing Block Matching with a low-complexity mesh-based representation. Finally, we propose a bi-predictive framework for motion tube management.
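To make the tube concept concrete, the following Python sketch illustrates only the block-matching half of the idea: a single texture patch is tracked from frame to frame, and the list of its positions over time plays the role of a motion tube. The function names (`block_matching`, `build_motion_tube`), the SAD matching criterion, and all parameters are illustrative assumptions, not the authors' implementation; the mesh-based warping and the bi-predictive tube management described in the paper are not shown.

```python
import numpy as np

def block_matching(ref_block, frame, start, search_radius=8):
    """Find the position in `frame` that best matches `ref_block`
    (sum-of-absolute-differences criterion) inside a search window."""
    h, w = ref_block.shape
    y0, x0 = start
    best_pos, best_cost = start, np.inf
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # candidate block falls outside the frame
            cost = np.abs(frame[y:y + h, x:x + w].astype(int)
                          - ref_block.astype(int)).sum()
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos

def build_motion_tube(frames, start, block_size=16):
    """Track one texture patch through a list of grayscale frames.
    The returned list of (frame index, position) pairs is the 'tube'
    associated with that patch."""
    y, x = start
    patch = frames[0][y:y + block_size, x:x + block_size]  # texture carried by the tube
    tube = [(0, start)]
    pos = start
    for t in range(1, len(frames)):
        pos = block_matching(patch, frames[t], pos)
        tube.append((t, pos))
    return patch, tube
```

In this reading, the sequence would be approximately resynthesized by pasting each tube's texture back at its tracked positions; the paper's hybrid model additionally deforms the patch with a low-complexity mesh so that the motion field can stay continuous inside objects while remaining discontinuous at their boundaries.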
