Object-based video representations: shape compression and object segmentation
Object-based video representations are considered useful for easing multimedia content production and enhancing user interactivity. Object-based video presents several new technical challenges, however.
Firstly, as with conventional video representations, compression of the video data is a requirement. For object-based representations, it is necessary to compress the shape of each video object as it moves in time. This amounts to the compression of moving binary images, which is achieved using a technique called context-based arithmetic encoding. The technique is applied to rectangular pixel blocks and is therefore consistent with the standard block-based tools of video compression. The block-based application also facilitates the exploitation of temporal redundancy in the sequence of binary shapes. For the first time, context-based arithmetic encoding is used in conjunction with motion compensation to provide inter-frame compression. The method described in this thesis has been thoroughly tested throughout the MPEG-4 core experiment process and, owing to favourable results, has been adopted as part of the MPEG-4 video standard.
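The idea behind context-based coding of binary shapes can be illustrated with a small sketch. The function names and the 4-pixel causal template below are illustrative simplifications (MPEG-4's actual templates use 10 pixels intra and 9 pixels inter, with motion-compensated neighbours in the inter case); the sketch estimates the ideal code length under an adaptive per-context probability model rather than implementing a full arithmetic coder.

```python
import math

def context_index(block, r, c):
    # Causal template: W, NW, N, NE neighbours, all already coded
    # in raster order; pixels outside the block count as background.
    def px(rr, cc):
        if 0 <= rr < len(block) and 0 <= cc < len(block[0]):
            return block[rr][cc]
        return 0
    return (px(r, c - 1) | (px(r - 1, c - 1) << 1)
            | (px(r - 1, c) << 2) | (px(r - 1, c + 1) << 3))

def estimate_bits(block):
    """Adaptive per-context model; returns the ideal code length in bits."""
    ones = [1] * 16    # Laplace-smoothed count of '1' pixels per context
    total = [2] * 16   # total pixels seen per context
    bits = 0.0
    for r in range(len(block)):
        for c in range(len(block[0])):
            ctx = context_index(block, r, c)
            p1 = ones[ctx] / total[ctx]
            p = p1 if block[r][c] else 1.0 - p1
            bits += -math.log2(p)          # ideal arithmetic-coding cost
            ones[ctx] += block[r][c]       # update the context model
            total[ctx] += 1
    return bits
```

Because each pixel is coded under a probability conditioned on its already-decoded neighbours, highly structured shapes cost far less than one bit per pixel: an all-background 8x8 block costs about 6 bits in total under this model.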
The second challenge lies in the acquisition of the video objects. Under normal conditions, a video sequence is captured as a sequence of frames and carries no inherent information about which objects it contains, let alone the shape of each object. Some means of segmenting semantic objects from general video sequences is therefore required. For this purpose, several image analysis tools may be of help; in particular, it is believed that video object tracking algorithms will be important. A new tracking algorithm is developed based on piecewise polynomial motion representations and statistical estimation tools, e.g. the expectation-maximisation method and the minimum description length principle.
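The statistical machinery can be shown in miniature. The sketch below, with illustrative names, runs expectation-maximisation on scalar per-pixel velocities to separate two motions; the thesis's piecewise polynomial motion models and MDL-based choice of model count are simplified away (two models, fixed noise level).

```python
import math

def em_two_motions(v, iters=50, sigma=0.5):
    """EM for a two-component motion mixture: each sample's velocity is
    modelled as one of two unknown object velocities plus Gaussian noise."""
    mu = [min(v), max(v)]       # crude initialisation of the two motions
    pi = [0.5, 0.5]             # mixing weights
    resp = []
    for _ in range(iters):
        # E-step: responsibility of each motion model for each sample
        resp = []
        for x in v:
            w = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sigma ** 2))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate model velocities and mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, v)) / nk
            pi[k] = nk / len(v)
    return mu, resp
```

The soft responsibilities double as a segmentation: each pixel is assigned to the motion model most responsible for its observed velocity. In the full algorithm, MDL would additionally penalise model complexity to decide how many motions the scene actually contains.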
Offline and Online Optical Flow Enhancement for Deep Video Compression
Video compression relies heavily on exploiting the temporal redundancy
between video frames, which is usually achieved by estimating and using the
motion information. The motion information is represented as optical flows in
most of the existing deep video compression networks. Indeed, these networks
often adopt pre-trained optical flow estimation networks for motion estimation.
The optical flows, however, may be less suitable for video compression due to
the following two factors. First, the optical flow estimation networks were
trained to perform inter-frame prediction as accurately as possible, but the
optical flows themselves may cost too many bits to encode. Second, the optical
flow estimation networks were trained on synthetic data, and may not generalize
well enough to real-world videos. We address these two limitations by
enhancing the optical flows in two stages: offline and online. In the offline
stage, we fine-tune a trained optical flow estimation network with the motion
information provided by a traditional (non-deep) video compression scheme, e.g.
H.266/VVC, as we believe the motion information of H.266/VVC achieves a better
rate-distortion trade-off. In the online stage, we further optimize the latent
features of the optical flows with a gradient descent-based algorithm for the
video to be compressed, so as to enhance the adaptivity of the optical flows.
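The online stage can be illustrated with a dependency-free sketch. DCVC's actual latent representation, decoder, and learned entropy model are not reproduced here; `decode` and `rate` below are toy stand-ins supplied by the caller, and gradients are taken by finite differences instead of backpropagation.

```python
def refine_latent(z, decode, target, rate, lam=0.01, lr=0.1, steps=100):
    """Per-video online refinement: gradient-descend the latent z on a
    rate-distortion objective D(decode(z), target) + lam * R(z)."""
    def loss(zz):
        rec = decode(zz)
        dist = sum((r - t) ** 2 for r, t in zip(rec, target))
        return dist + lam * rate(zz)

    eps = 1e-4
    for _ in range(steps):
        # numerical gradient, one coordinate at a time
        grad = []
        for i in range(len(z)):
            zp = list(z)
            zp[i] += eps
            grad.append((loss(zp) - loss(z)) / eps)
        z = [zi - lr * gi for zi, gi in zip(z, grad)]
    return z
```

Because only the latents are optimised, the decoder is untouched: the refined latents are entropy-coded exactly as before, which is why this kind of per-video adaptation adds no model or computational cost on the decoder side.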
We conduct experiments on a state-of-the-art deep video compression scheme,
DCVC. Experimental results demonstrate that the proposed offline and online
enhancement together achieves an average bitrate saving of 12.8% on the tested
videos, without increasing the model or computational complexity on the decoder
side.
Comment: 9 pages, 6 figures