Improved bounds for the rate loss of multiresolution source codes
We present new bounds for the rate loss of multiresolution source codes (MRSCs). Considering an M-resolution code, the rate loss at the ith resolution with distortion D_i is defined as L_i = R_i - R(D_i), where R_i is the rate achievable by the MRSC at stage i. This rate loss describes the performance degradation of the MRSC compared to the best single-resolution code with the same distortion. For two-resolution source codes, there are three scenarios of particular interest: (i) when both resolutions are equally important; (ii) when the rate loss at the first resolution is 0 (L_1 = 0); (iii) when the rate loss at the second resolution is 0 (L_2 = 0). The work of Lastras and Berger (see ibid., vol. 47, pp. 918-926, Mar. 2001) gives constant upper bounds for the rate loss of an arbitrary memoryless source in scenarios (i) and (ii) and an asymptotic bound for scenario (iii) as D_2 approaches 0. We focus on the squared error distortion measure and (a) prove that for scenario (iii) L_1 < 1.1610 for all D_2 < 0.7250; (c) tighten the Lastras-Berger bound for scenario (i) from L_i ≤ 1/2 to L_i < 0.3802, i ∈ {1, 2}; and (d) generalize the bounds for scenarios (ii) and (iii) to M-resolution codes with M ≥ 2. We also present upper bounds for the rate losses of additive MRSCs (AMRSCs). An AMRSC is a special MRSC where each resolution describes an incremental reproduction and the kth-resolution reconstruction equals the sum of the first k incremental reproductions. We obtain two bounds on the rate loss of AMRSCs: one primarily good for low-rate coding and another which depends on the source entropy.
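For reference, the rate loss compares the rate of the ith resolution of the embedded code with the single-resolution rate-distortion function evaluated at the same distortion; for a memoryless Gaussian source under squared error this can be written in closed form (the Gaussian expression is shown only as an illustration, not as the general memoryless setting of the bounds above):

\[
  L_i = R_i - R(D_i), \qquad i = 1, \dots, M,
\]
\[
  R(D) = \tfrac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 \quad \text{(memoryless Gaussian source, squared-error distortion, rate in bits per sample).}
\]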
Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
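As a purely illustrative picture of the embedded-description property (not the fixed- and variable-rate design algorithms introduced in the paper), the following Python sketch builds a toy centroid tree and shows that any prefix of the bit string decodes to a valid, coarser reproduction; the median-split codebook construction and the function names are assumptions made for the example.

    import numpy as np

    def build_tree(samples, depth):
        # Split the training set in half along its first coordinate at each level and
        # store the centroid of every node; this is only a stand-in for a properly
        # designed multiresolution codebook.
        node = {"centroid": samples.mean(axis=0)}
        if depth > 0 and len(samples) >= 2:
            order = np.argsort(samples[:, 0])
            half = len(samples) // 2
            node["children"] = (build_tree(samples[order[:half]], depth - 1),
                                build_tree(samples[order[half:]], depth - 1))
        return node

    def encode(x, root):
        # Greedy embedded encoding: each emitted bit selects the child whose centroid
        # is closer to x, so every prefix of the bit string is itself a valid description.
        bits, node = [], root
        while "children" in node:
            d0 = np.linalg.norm(x - node["children"][0]["centroid"])
            d1 = np.linalg.norm(x - node["children"][1]["centroid"])
            b = int(d1 < d0)
            bits.append(b)
            node = node["children"][b]
        return bits

    def decode_prefix(bits, root, n_bits):
        # Decoding only the first n_bits gives a coarse reproduction; decoding more
        # bits refines it, which is the embedded-description property described above.
        node = root
        for b in bits[:n_bits]:
            if "children" not in node:
                break
            node = node["children"][b]
        return node["centroid"]

    # usage: 2-D source, 4-level tree, compare reproductions from 1-bit and 4-bit prefixes
    rng = np.random.default_rng(0)
    train = rng.normal(size=(4096, 2))
    tree = build_tree(train, depth=4)
    x = rng.normal(size=2)
    bits = encode(x, tree)
    coarse, fine = decode_prefix(bits, tree, 1), decode_prefix(bits, tree, 4)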
On the rate loss of multiple description source codes
The rate loss of a multiresolution source code (MRSC) describes the difference between the rate needed to achieve distortion D_i in resolution i and the rate-distortion function R(D_i). This paper generalizes the rate loss definition to multiple description source codes (MDSCs) and bounds the MDSC rate loss for arbitrary memoryless sources. For a two-description MDSC (2DSC), the rate loss of description i with distortion D_i is defined as L_i = R_i - R(D_i), i = 1, 2, where R_i is the rate of the ith description; the joint rate loss associated with decoding the two descriptions together to achieve central distortion D_0 is measured either as L_0 = R_1 + R_2 - R(D_0) or as L_12 = L_1 + L_2. We show that for any memoryless source with variance σ², there exists a 2DSC for that source with L_1 ≤ 1/2 or L_2 ≤ 1/2 and (a) L_0 ≤ 1 if D_0 ≤ D_1 + D_2 - σ², (b) L_12 ≤ 1 if 1/D_0 ≤ 1/D_1 + 1/D_2 - 1/σ², (c) L_0 ≤ L_G0 + 1.5 and L_12 ≤ L_G12 + 1 otherwise, where L_G0 and L_G12 are the joint rate losses of a Gaussian source with variance σ².
Network vector quantization
We present an algorithm for designing locally optimal vector quantizers for general networks. We discuss the algorithm's implementation and compare the performance of the resulting "network vector quantizers" to traditional vector quantizers (VQs) and to rate-distortion (R-D) bounds where available. While some special cases of network codes (e.g., multiresolution (MR) and multiple description (MD) codes) have been studied in the literature, here we present a unifying approach that both includes these existing solutions as special cases and provides solutions to previously unsolved examples.
Quantization as Histogram Segmentation: Optimal Scalar Quantizer Design in Network Systems
An algorithm for scalar quantizer design on discrete-alphabet sources is proposed. The proposed algorithm can be used to design fixed-rate and entropy-constrained conventional scalar quantizers, multiresolution scalar quantizers, multiple description scalar quantizers, and Wyner–Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for conventional fixed-rate and entropy-constrained scalar quantizers. For the other coding scenarios, it yields the best code among all codes that meet a given convexity constraint. In all cases, the algorithm's run time is polynomial in the size of the source alphabet. The derivation of the algorithm rests on a demonstrated connection between scalar quantization, histogram segmentation, and the shortest-path problem in a certain directed acyclic graph.
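The shortest-path view can be made concrete with a small sketch: treat each possible cell boundary as a node of a directed acyclic graph, each edge (i, j) as a quantizer cell whose cost is its within-cell squared-error distortion, and look for the minimum-cost path with a fixed number of edges. The Python below is a plain dynamic program over that graph for the fixed-rate case only, with illustrative names and a discretized-Gaussian example; it is not the paper's algorithm or its entropy-constrained, multiresolution, multiple description, or Wyner–Ziv variants.

    import numpy as np

    def cell_cost(x, p, i, j):
        # Distortion of one quantizer cell covering symbols i..j-1: probability-weighted
        # squared error about the cell centroid (the optimal reproduction point).
        xs, ps = x[i:j], p[i:j]
        mass = ps.sum()
        if mass == 0:
            return 0.0
        centroid = (ps * xs).sum() / mass
        return float((ps * (xs - centroid) ** 2).sum())

    def design_fixed_rate_quantizer(x, p, n_cells):
        # Nodes 0..K are threshold positions between sorted source symbols; edge (i, j)
        # is the cell covering x[i..j-1]. The optimal n_cells-level quantizer is a
        # minimum-cost path from node 0 to node K using exactly n_cells edges, found by
        # dynamic programming over (number of cells used, right boundary).
        K = len(x)
        cost = np.full((n_cells + 1, K + 1), np.inf)
        arg = np.zeros((n_cells + 1, K + 1), dtype=int)
        cost[0, 0] = 0.0
        for m in range(1, n_cells + 1):
            for j in range(m, K + 1):
                for i in range(m - 1, j):
                    c = cost[m - 1, i] + cell_cost(x, p, i, j)
                    if c < cost[m, j]:
                        cost[m, j], arg[m, j] = c, i
        # trace the thresholds of the optimal path back from node K
        bounds, j = [K], K
        for m in range(n_cells, 0, -1):
            j = arg[m, j]
            bounds.append(j)
        return cost[n_cells, K], sorted(bounds)

    # usage: 8-level fixed-rate quantizer for a discretized Gaussian histogram
    xs = np.linspace(-4, 4, 129)
    ps = np.exp(-xs ** 2 / 2)
    ps /= ps.sum()
    mse, boundaries = design_fixed_rate_quantizer(xs, ps, n_cells=8)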
Side-information Scalable Source Coding
The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that a receiver with high-quality side information can truncate the bitstream and still reconstruct in the rate-distortion sense, while a receiver with low-quality side information has to receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight when either decoder requires a lossless reconstruction, as well as in the case of degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced, meaning that both stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions for perfect scalability are given for sources satisfying a mild support condition. Using SI-scalable coding and successive-refinement Wyner-Ziv coding as basic building blocks, a complete characterization is provided for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side-information quality does not have to be monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source with Hamming distortion when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
Comment: 35 pages, submitted to the IEEE Transactions on Information Theory.
Deep Hierarchical Super-Resolution for Scientific Data Reduction and Visualization
We present an approach for hierarchical super-resolution (SR) using neural networks on an octree data representation. We train a hierarchy of neural networks, each capable of 2x upscaling in each spatial dimension between two levels of detail, and use these networks in tandem to achieve large-scale-factor super-resolution, with the achievable scale factor growing with the number of trained networks. We utilize these networks in a hierarchical super-resolution algorithm that upscales multiresolution data to a uniform high resolution without introducing seam artifacts on octree node boundaries. We evaluate the application of this algorithm in a data reduction framework by dynamically downscaling input data into an octree-based data structure that represents the multiresolution data before compression, for additional storage reduction. We demonstrate that our approach avoids the seam artifacts common to multiresolution data formats, and show how neural-network super-resolution-assisted data reduction can preserve global features better than compressors alone at the same compression ratios.
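The "networks in tandem" idea can be sketched as follows (assuming PyTorch): one small model per pair of adjacent levels of detail, each performing 2x upscaling per spatial dimension, chained so that k models give a 2^k total scale factor. The tiny residual CNN, its layer sizes, and the example shapes are placeholder assumptions, and the octree traversal and seam handling described in the paper are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Upscale2x(nn.Module):
        """Upsample a 3-D scalar field by 2x per spatial dimension, then refine it."""
        def __init__(self, channels=16):
            super().__init__()
            self.refine = nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            )

        def forward(self, x):
            up = F.interpolate(x, scale_factor=2, mode="trilinear", align_corners=False)
            return up + self.refine(up)   # learned residual correction on top of interpolation

    def hierarchical_sr(volume, models):
        # Apply the per-level models in tandem: k chained 2x models give a 2^k total
        # scale factor, i.e. the reachable factor scales with the number of networks.
        x = volume
        for model in models:
            x = model(x)
        return x

    # usage: upscale a 16^3 block to 64^3 with two (untrained) 2x models
    models = [Upscale2x(), Upscale2x()]
    block = torch.randn(1, 1, 16, 16, 16)          # (batch, channel, z, y, x)
    with torch.no_grad():
        high_res = hierarchical_sr(block, models)  # shape (1, 1, 64, 64, 64)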
Spread spectrum-based video watermarking algorithms for copyright protection
Digital technologies have seen an unprecedented expansion in recent years. The consumer can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technology are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but because of the analogue environment the subsequent copies suffered an inherent loss in quality. This was a natural way of limiting repeated copying of video material. With digital technology this barrier disappears: it is possible to make as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark while ensuring its invisibility. The combination of these methods led to a major improvement, yet the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of an attack and revert it; once the attack is reverted, the main watermark can be recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks.
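The spread-spectrum principle used to cast the watermark can be illustrated with a toy pixel-domain example in Python/NumPy: a keyed pseudo-noise pattern is added at a chosen strength, and the payload bit is read back by correlation. This is only a simplified sketch; the thesis system embeds in the wavelet domain, applies error-correction coding and human visual models, and uses a separate spatial reference watermark for registration, none of which appears here.

    import numpy as np

    def embed_bit(frame, bit, key, strength=2.0):
        # One keyed pseudo-noise (PN) pattern; the payload bit flips its sign before
        # it is added to the host frame (the "spreading" step).
        rng = np.random.default_rng(key)
        pn = rng.choice([-1.0, 1.0], size=frame.shape)
        sign = 1.0 if bit else -1.0
        return frame + strength * sign * pn, pn

    def detect_bit(received, pn):
        # Correlation detector: after removing the mean, the host image is nearly
        # orthogonal to the PN pattern, so the sign of the correlation gives the bit.
        corr = float(np.mean((received - received.mean()) * pn))
        return corr > 0.0

    # usage on a synthetic 256x256 "frame" with mild additive-noise degradation
    frame = np.random.default_rng(1).normal(128, 40, size=(256, 256))
    marked, pn = embed_bit(frame, bit=1, key=1234, strength=2.0)
    noisy = marked + np.random.default_rng(2).normal(0, 5, size=marked.shape)
    recovered = detect_bit(noisy, pn)   # expected: True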