Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
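The two-stage structure described above can be illustrated with a minimal sketch: the first stage selects, per block, the best code from a family of vector quantizers (using a Lagrangian distortion-plus-rate cost), and the second stage quantizes the block with the chosen code. The codebooks, the cost weight `lam`, and the function names here are illustrative assumptions, not the paper's actual design procedure.

```python
import numpy as np

def two_stage_encode(blocks, codebooks, lam=0.1):
    """For each block, pick the codebook (first stage) and codeword
    (second stage) minimizing distortion + lam * rate."""
    encoded = []
    for block in blocks:
        best = None
        for ci, cb in enumerate(codebooks):
            # squared-error distortion to every codeword in this codebook
            d = np.sum((cb - block) ** 2, axis=1)
            wi = int(np.argmin(d))
            # rate: bits to name the codebook plus bits to name the codeword
            rate = np.log2(len(codebooks)) + np.log2(len(cb))
            cost = d[wi] + lam * rate
            if best is None or cost < best[0]:
                best = (cost, ci, wi)
        encoded.append((best[1], best[2]))  # (codebook index, codeword index)
    return encoded

def two_stage_decode(encoded, codebooks):
    """Reconstruct each block from its (codebook, codeword) pair."""
    return np.array([codebooks[ci][wi] for ci, wi in encoded])
```

With two codebooks specialized for different sources, blocks drawn from either source are routed to the matching codebook, which is the sense in which the two-stage code "covers" a class of sources.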
Low power techniques for video compression
This paper gives an overview of low-power techniques proposed in the literature for mobile multimedia and Internet applications. Exploitable aspects of the behavior of different video compression tools are discussed. These power-efficient solutions are then classified by synthesis domain and level of abstraction. As this paper is meant to be a starting point for further research in the area, a low-power hardware and software co-design methodology is outlined at the end as a possible scenario for video-codec-on-a-chip implementations on future mobile multimedia platforms.
Source Camera Verification from Strongly Stabilized Videos
Image stabilization performed during imaging and/or post-processing poses one of the most significant challenges to photo-response non-uniformity based source camera attribution from videos. When performed digitally, stabilization involves cropping, warping, and inpainting of video frames to eliminate unwanted camera motion. Hence, successful attribution requires the inversion of these transformations in a blind manner. To address this challenge, we introduce a source camera verification method for videos that takes into account the spatially variant nature of stabilization transformations and assumes a larger degree of freedom in their search. Our method identifies transformations at a sub-frame level, incorporates a number of constraints to validate their correctness, and offers computational flexibility in the search for the correct transformation. The method also adopts a holistic approach in countering disruptive effects of other video generation steps, such as video coding and downsizing, for more reliable attribution. Tests performed on one public and two custom datasets show that the proposed method is able to verify the source of 23-30% of all videos that underwent stronger stabilization, depending on computation load, without a significant impact on false attribution.
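The core of PRNU-based verification is correlating a noise residual extracted from the video against a reference camera fingerprint, after searching over candidate inverse transformations. A minimal sketch follows, using small integer translations as a stand-in for the sub-frame warp search the abstract describes; the function names, the shift range, and the decision threshold are all illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-shape arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def verify_source(residual, fingerprint, max_shift=2, threshold=0.05):
    """Search a small grid of translations (a crude proxy for inverting
    stabilization transforms) and report the peak correlation plus a
    match decision against the threshold."""
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(fingerprint, (dy, dx), axis=(0, 1))
            best = max(best, ncc(residual, shifted))
    return best, best >= threshold
```

A real system would search spatially variant warps (not just global shifts) and validate candidates with consistency constraints, which is where the computational-flexibility trade-off discussed in the abstract arises.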