Stack-run adaptive wavelet image compression
We report on the development of an adaptive wavelet image coder based on a stack-run representation of the quantized coefficients. The coder selects an optimal wavelet packet basis for the given image and encodes the quantization indices of significant coefficients, together with the zero runs between them, using a 4-ary arithmetic coder. Because the coder exploits the redundancies within individual subbands, its addressing complexity is much lower than that of wavelet zerotree coding algorithms. Experimental results show coding gains of up to 1.4 dB over the benchmark wavelet coding algorithm.
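The run/value structure at the heart of the stack-run representation can be sketched as follows. This is a simplified illustration, not the paper's exact coder: it only extracts the (zero-run, value) pairs, whereas the actual algorithm maps runs and magnitudes onto a 4-symbol alphabet fed to an arithmetic coder.

```python
def stack_run_pairs(coeffs):
    """Represent quantized coefficients as (zero_run, value) pairs.

    Simplified sketch of the stack-run idea: runs of zeros between
    significant coefficients are coded jointly with the values that
    terminate them. The real coder maps these onto the 4-ary alphabet
    used by the arithmetic coder.
    """
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    if run:                      # trailing zeros, flagged with value 0
        pairs.append((run, 0))
    return pairs
```

For example, `stack_run_pairs([0, 0, 3, 0, -1, 0, 0])` yields `[(2, 3), (1, -1), (2, 0)]`.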
Results on optimal biorthogonal filter banks
Optimization of filter banks for specific input statistics has been of interest in the theory and practice of subband coding. For the case of orthonormal filter banks with infinite order and uniform decimation, the problem has been completely solved in recent years. For the case of biorthogonal filter banks, significant progress has been made recently, although a number of issues still remain to be addressed. In this paper we briefly review the orthonormal case, and then present several new results for the biorthogonal case. All discussions pertain to the infinite order (ideal filter) case. The current status of research, as well as some of the unsolved problems, is described.
Spherical coding algorithm for wavelet image compression
PubMed ID: 19342336
In the recent literature, there exist many high-performance wavelet coders that use different spatially adaptive coding techniques in order to exploit the spatial energy compaction property of the wavelet transform. Two crucial issues in adaptive methods are the level of flexibility and the coding efficiency achieved while modeling different image regions and allocating bitrate within the wavelet subbands. In this paper, we introduce the "spherical coder," which provides a new adaptive framework for handling these issues in a simple and effective manner. The coder uses local energy as a direct measure to differentiate between parts of the wavelet subband and to decide how to allocate the available bitrate. As local energy becomes available at finer resolutions, i.e., in smaller windows, the coder automatically updates its decisions about how to spend the bitrate. We use a hierarchical set of variables to specify and code the local energy up to the highest resolution, i.e., the energy of individual wavelet coefficients. The overall scheme is nonredundant, meaning that the subband information is conveyed using this equivalent set of variables without the need for any side parameters. Despite its simplicity, the algorithm produces PSNR results that are competitive with the state-of-the-art coders in the literature.
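The hierarchy of local energies the abstract describes can be sketched as a simple pyramid over a subband. This shows only the bookkeeping, under the assumption of a square, power-of-two subband; the paper's coder additionally drives its bit-allocation decisions from these energies.

```python
import numpy as np

def energy_pyramid(subband):
    """Hierarchy of local energies over a wavelet subband.

    levels[0] holds per-coefficient energies; each coarser level sums
    2x2 blocks of the level below, so the single entry at the top is
    the total subband energy.
    """
    e = np.asarray(subband, dtype=float) ** 2
    levels = [e]
    while e.shape[0] > 1:
        e = (e[0::2, 0::2] + e[1::2, 0::2]
             + e[0::2, 1::2] + e[1::2, 1::2])
        levels.append(e)
    return levels
```

Reading the pyramid from coarse to fine mirrors how the coder refines its rate decisions as energy becomes available in smaller and smaller windows.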
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
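The two-stage structure can be sketched in a few lines. This is a hypothetical simplification in which each "code" in the family is just a scalar quantizer step size, chosen per block by a Lagrangian cost; the paper's families are vector quantizers, JPEG-style bit allocations, or transform codes, and the rate term here is only a crude proxy.

```python
import numpy as np

def two_stage_encode(blocks, steps, lam=1.0):
    """Two-stage coding sketch: for each block, try every code in a
    family and keep the one minimizing distortion + lam * rate; the
    winning code's index is sent as side information (first stage),
    followed by the block coded with that code (second stage).
    """
    out = []
    for block in blocks:
        x = np.asarray(block, dtype=float)
        best = None
        for i, step in enumerate(steps):
            q = np.round(x / step)
            xhat = q * step
            d = float(np.mean((x - xhat) ** 2))
            # crude second-stage rate proxy: bits to write each index
            r = float(np.mean(np.log2(np.abs(q) + 1) + 1))
            cost = d + lam * r
            if best is None or cost < best[0]:
                best = (cost, i, xhat)
        out.append((best[1], best[2]))   # (code index, reconstruction)
    return out
```

Lowering `lam` favors the fine quantizer for low-amplitude blocks; raising it pushes blocks toward cheaper, coarser codes.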
Streaming an image through the eye: The retina seen as a dithered scalable image coder
We propose the design of an original scalable image coder/decoder inspired by the mammalian retina. Our coder accounts for the time-dependent and nondeterministic behavior of the actual retina. The present work makes two main contributions: (i) we design a deterministic image coder mimicking most of the retinal processing stages, and then (ii) we introduce a retinal noise into the coding process, modeled here as a dither signal, to gain interesting perceptual features. For the first contribution, our main source of inspiration is the biologically plausible retina model called Virtual Retina. The main novelty of this coder is to show that the time-dependent behavior of the retina cells can ensure, in an implicit way, scalability and bit allocation. For the second contribution, we reconsider the inner layers of the retina and offer a possible interpretation of the non-determinism that neurophysiologists observe in their output: we model the retinal noise arising in these layers as a dither signal. The dithering process adds several interesting features to our image coder: the dither noise whitens the reconstruction error and decorrelates it from the input stimulus, and integrating it into the coder allows faster recognition of the fine details of the image during decoding. The goal of the present paper is thus twofold. First, we aim to mimic the retina as closely as possible in the design of a novel image coder while maintaining encouraging performance. Second, we bring new insight into the non-deterministic behavior of the retina.
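The whitening and decorrelation properties claimed for the dither can be illustrated with subtractive dithered quantization, a textbook stand-in for the retinal-noise model (not the paper's actual retina pipeline): uniform dither is added before uniform quantization and subtracted again at the decoder, making the reconstruction error uniform on [-step/2, step/2] and statistically independent of the input.

```python
import numpy as np

def dithered_quantize(x, step, rng):
    """Subtractive dithered quantization.

    The dither d is shared by encoder and decoder (e.g., via a common
    seed); the decoder subtracts it from the quantized value.
    """
    x = np.asarray(x, dtype=float)
    d = rng.uniform(-step / 2, step / 2, size=x.shape)  # shared dither
    q = np.round((x + d) / step) * step                 # encoder
    return q - d                                        # decoder

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 1000)
xhat = dithered_quantize(x, 0.1, rng)
```

Since the rounding error on x + d is at most step/2 in magnitude and the dither cancels, the reconstruction error |xhat - x| never exceeds step/2, regardless of the input.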
A novel entropy-constrained adaptive quantization scheme for wavelet pyramid image coding
The orthogonal wavelet transform with filters of nonlinear phase gives poor visual results in low bit rate image coding. The biorthogonal wavelet is a good substitute, which is, however, essentially nonorthogonal. A greedy steepest-descent algorithm is proposed to design an adaptive quantization scheme based on the actual statistics of the input image. Since the L2 norm of the quantization error is not preserved through the nonorthogonal transform, a quantization error estimation formula that accounts for the characteristic values of the reconstruction filters is derived and incorporated into the adaptive quantization scheme. Computer simulation results demonstrate significant SNR gains over standard coding techniques, along with comparable visual improvements.
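The greedy steepest-descent idea can be sketched with the classic one-bit-at-a-time allocation: each bit goes to the band whose (weighted) distortion drops the most. The weights here are a hypothetical stand-in for the filter-dependent error scaling the paper derives for the nonorthogonal transform, and the standard model D_i = w_i * var_i * 2^(-2 b_i) is assumed.

```python
def greedy_allocate(variances, weights, total_bits):
    """Greedy steepest-descent bit allocation.

    Per-band distortion model: w_i * var_i * 2**(-2*b_i). Each added
    bit cuts a band's distortion to one quarter, so the steepest-
    descent step gives the bit to the band with the largest current
    distortion reduction (0.75 * current distortion).
    """
    bits = [0] * len(variances)
    dist = [w * v for w, v in zip(weights, variances)]
    for _ in range(total_bits):
        gains = [d * 0.75 for d in dist]
        i = max(range(len(gains)), key=gains.__getitem__)
        bits[i] += 1
        dist[i] *= 0.25
    return bits
```

With variances [16, 1] and equal weights, the first two bits both go to the high-variance band, since even after one bit its distortion (4) still dominates.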
Subband Image Coding with Jointly Optimized Quantizers
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
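The multistage residual structure is easy to sketch: each stage quantizes the residual left by the previous one. This is a minimal scalar illustration only; the paper's stages are jointly designed vector quantizers with associated entropy coders.

```python
import numpy as np

def multistage_quantize(x, steps):
    """Multistage residual quantization (scalar sketch).

    Stage k rounds the current residual to a grid of spacing steps[k]
    and passes the leftover residual to stage k+1, so each stage only
    has to refine, not re-describe, the signal.
    """
    x = np.asarray(x, dtype=float)
    residual, indices = x.copy(), []
    for step in steps:
        q = np.round(residual / step)
        indices.append(q)
        residual = residual - q * step
    return indices, x - residual   # stage indices, reconstruction

indices, xhat = multistage_quantize(np.array([0.37, -1.26]),
                                    steps=[1.0, 0.25, 0.0625])
```

The final reconstruction error is bounded by half the last stage's step size, which is what makes the stagewise complexity-performance tradeoff easy to steer.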
Hierarchical quantization indexing for wavelet and wavelet packet image coding
In this paper, we introduce the quantization index hierarchy, which is used for efficient coding of quantized wavelet and wavelet packet coefficients. A hierarchical classification map is defined in each wavelet subband, describing the quantized data through a series of index classes. Going from the bottom to the top of the tree, neighboring coefficients are combined to form classes that represent statistics of the quantization indices of these coefficients. Higher levels of the tree are constructed iteratively by repeating this class assignment to partition the coefficients into larger subsets. The class assignments are optimized using a rate-distortion cost analysis. The optimized tree is coded hierarchically from top to bottom by coding the class membership information at each level of the tree. Context-adaptive arithmetic coding is used to improve coding efficiency. The developed algorithm produces PSNR results that are better than those of the state-of-the-art wavelet-based and wavelet packet-based coders in the literature. This research was supported by Isik University grant BAP-05B302.
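The bottom-up class hierarchy can be sketched as follows. The max-magnitude class used here is one plausible choice of per-block statistic (so a zero class at any level prunes its whole subtree); the paper instead optimizes the class assignments through a rate-distortion analysis.

```python
import numpy as np

def class_map_tree(indices):
    """Bottom-up hierarchy of index classes over a subband.

    Each level labels a 2x2 block of the level below with the maximum
    absolute quantization index in the block; decoding the tree top-
    down lets whole all-zero regions be skipped early.
    """
    level = np.abs(np.asarray(indices))
    tree = [level]
    while level.shape[0] > 1:
        level = np.maximum.reduce([level[0::2, 0::2], level[1::2, 0::2],
                                   level[0::2, 1::2], level[1::2, 1::2]])
        tree.append(level)
    return tree   # tree[-1][0, 0] is the subband-wide class
```

Coding then proceeds from `tree[-1]` downward, which matches the top-to-bottom hierarchical coding order described in the abstract.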