
    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
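
    A minimal sketch of the two-stage selection idea described above, assuming a family of pre-trained VQ codebooks and a simple Lagrangian rate-distortion cost; the codebook design and optimal parsing procedures of the actual system are not reproduced here, and all names are illustrative.

        import numpy as np

        def encode_block_two_stage(block, codebooks, lam=0.1):
            """Two-stage encoding of one image block: pick the codebook (first-stage
            index) and the codevector (second-stage index) minimizing D + lam * R."""
            x = block.ravel().astype(float)
            best = None
            for cb_idx, cb in enumerate(codebooks):                 # cb: (K, d) array of codevectors
                d2 = np.sum((cb - x) ** 2, axis=1)                  # distortion to each codevector
                k = int(np.argmin(d2))
                rate = np.log2(len(codebooks)) + np.log2(len(cb))   # fixed-length index bits
                cost = d2[k] + lam * rate
                if best is None or cost < best[2]:
                    best = (cb_idx, k, cost)
            return best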

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
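
    The range-domain matching step underlying block-based fractal coding can be sketched as below: for each range block, a domain block together with an affine map (scale s, offset o) is fitted by least squares and the best match is kept. This is a generic illustration, not the dissertation's experimental setup; the names and the contractivity bound are assumptions.

        import numpy as np

        def best_affine_match(range_block, domain_blocks, s_max=1.0):
            """Return (domain index, scale, offset, error) of the best affine
            approximation r ~ s * d + o over all candidate domain blocks."""
            r = range_block.ravel().astype(float)
            best = None
            for idx, d_blk in enumerate(domain_blocks):
                d = d_blk.ravel().astype(float)
                var = np.var(d)
                s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
                s = np.clip(s, -s_max, s_max)                # enforce contractivity
                o = r.mean() - s * d.mean()
                err = np.sum((s * d + o - r) ** 2)
                if best is None or err < best[3]:
                    best = (idx, s, o, err)
            return best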

    Hierarchical quantization indexing for wavelet and wavelet packet image coding

    In this paper, we introduce the quantization index hierarchy, which is used for efficient coding of quantized wavelet and wavelet packet coefficients. A hierarchical classification map is defined in each wavelet subband, which describes the quantized data through a series of index classes. Going from the bottom to the top of the tree, neighboring coefficients are combined to form classes that represent some statistics of the quantization indices of these coefficients. Higher levels of the tree are constructed iteratively by repeating this class assignment to partition the coefficients into larger subsets. The class assignments are optimized using a rate-distortion cost analysis. The optimized tree is coded hierarchically from top to bottom by coding the class membership information at each level of the tree. Context-adaptive arithmetic coding is used to improve coding efficiency. The developed algorithm produces PSNR results that are better than the state-of-the-art wavelet-based and wavelet packet-based coders in the literature. This research was supported by Isik University BAP-05B302 Grant.
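
    The bottom-up construction of the index tree can be illustrated with a simplified sketch in which each parent class simply records the maximum index magnitude of its 2x2 children, so an all-zero region can be signalled with a single symbol; the paper's rate-distortion-optimized class assignment is replaced here by this fixed rule, and the function is hypothetical.

        import numpy as np

        def build_index_hierarchy(q_indices, levels=3):
            """Build a class hierarchy over the quantization indices of one subband:
            tree[0] is the subband itself, tree[-1] the coarsest level."""
            tree = [np.abs(np.asarray(q_indices, dtype=int))]
            for _ in range(levels):
                cur = tree[-1]
                h, w = cur.shape
                cur = np.pad(cur, ((0, h % 2), (0, w % 2)))   # make dimensions even
                parent = cur.reshape(cur.shape[0] // 2, 2,
                                     cur.shape[1] // 2, 2).max(axis=(1, 3))
                tree.append(parent)
            return tree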

    Significance linked connected component analysis plus

    An image coding algorithm, SLCCA Plus, is introduced in this dissertation. SLCCA Plus is a wavelet-based subband coding method. In wavelet-based subband coding, the input image goes through a wavelet transform and is decomposed into a pyramid of wavelet subbands. The characteristics of the wavelet coefficients within and among subbands are then exploited to remove redundancy, and the remaining information is organized and entropy encoded. SLCCA Plus contains a series of improvements to SLCCA. Before SLCCA, there were three top-ranked wavelet image coders: the Embedded Zerotree Wavelet coder (EZW), Morphological Representation of Wavelet Data (MRWD), and Set Partitioning in Hierarchical Trees (SPIHT). They exploit either the inter-subband relation among zero wavelet coefficients or within-subband clustering. SLCCA, on the other hand, outperforms these three coders by exploiting both the inter-subband relations among coefficients and the within-subband clustering of significant wavelet coefficients. SLCCA Plus strengthens SLCCA in the following aspects: intelligent quantization, an enhanced cluster filter, potential-significant shared-zero, and improved context models. The purpose of the first three improvements is to further remove redundant information while keeping the image error as low as possible. As a result, they achieve a better trade-off between bit cost and image quality. Moreover, the improved context models lower the entropy by refining the classification of symbols in the cluster sequence and magnitude bit-planes; lower entropy means the adaptive arithmetic coding can achieve a better coding gain. For performance evaluation, SLCCA Plus is compared to SLCCA and JPEG 2000. On average, SLCCA Plus achieves 7% bit savings over JPEG 2000 and 4% over SLCCA. The comparison also shows that SLCCA Plus preserves more texture and edge detail at lower bitrates.
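
    The within-subband clustering that SLCCA-style coders rely on can be sketched as connected-component labeling of significant coefficients; the cross-subband significance linking and the SLCCA Plus refinements listed above are omitted, and the threshold, names and 8-connectivity choice are assumptions.

        import numpy as np
        from scipy import ndimage

        def significant_clusters(subband, threshold):
            """Label 8-connected clusters of coefficients whose magnitude meets
            the significance threshold; returns (label map, number of clusters)."""
            significance = np.abs(subband) >= threshold
            structure = np.ones((3, 3), dtype=bool)          # 8-connectivity
            labels, n_clusters = ndimage.label(significance, structure=structure)
            return labels, n_clusters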

    Efficient compression of motion compensated residuals


    Design and Optimization of Graph Transform for Image and Video Compression

    The main contribution of this thesis is the introduction of new methods for designing adaptive transforms for image and video compression. Exploiting graph signal processing techniques, we develop new graph construction methods targeted for image and video compression applications. In this way, we obtain a graph that is, at the same time, a good representation of the image and easy to transmit to the decoder. To do so, we investigate different research directions. First, we propose a new method for graph construction that employs innovative edge metrics, quantization and edge prediction techniques. Then, we propose to use a graph learning approach and we introduce a new graph learning algorithm targeted for image compression that defines the connectivities between pixels by taking into consideration the coding of the image signal and the graph topology in rate-distortion terms. Moreover, we also present a new superpixel-driven graph transform that uses clusters of superpixels as coding blocks and then computes the graph transform inside each region. In the second part of this work, we exploit graphs to design directional transforms. In fact, an efficient representation of the image directional information is extremely important in order to obtain high performance image and video coding. In this thesis, we present a new directional transform, called the Steerable Discrete Cosine Transform (SDCT). This new transform can be obtained by steering the 2D-DCT basis in any chosen direction. Moreover, we can also use more complex steering patterns than a single pure rotation. In order to show the advantages of the SDCT, we present a few image and video compression methods based on this new directional transform. The obtained results show that the SDCT can be efficiently applied to image and video compression and that it outperforms the classical DCT and other directional transforms. Along the same lines, we also present a new generalization of the DFT, called the Steerable DFT (SDFT). Unlike the SDCT, the SDFT can be defined in one or two dimensions. The 1D-SDFT represents a rotation in the complex plane, while the 2D-SDFT performs a rotation in the 2D Euclidean space.
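
    A generic block-level graph transform can be sketched as follows: a 4-connected grid graph is built over the pixels with intensity-dependent edge weights, and the eigenvectors of the graph Laplacian serve as the transform basis. The thesis's specific edge metrics, graph learning algorithm and SDCT steering are not reproduced; the Gaussian weight and the sigma parameter are assumptions.

        import numpy as np

        def graph_transform_coefficients(block, sigma=10.0):
            """Build a 4-connected grid graph on a 2D block, form the combinatorial
            Laplacian, and project the block onto its eigenvector basis."""
            h, w = block.shape
            n = h * w
            x = block.ravel().astype(float)
            W = np.zeros((n, n))
            for i in range(h):
                for j in range(w):
                    p = i * w + j
                    for di, dj in ((0, 1), (1, 0)):          # right and down neighbours
                        ii, jj = i + di, j + dj
                        if ii < h and jj < w:
                            q = ii * w + jj
                            wgt = np.exp(-((x[p] - x[q]) ** 2) / (2 * sigma ** 2))
                            W[p, q] = W[q, p] = wgt
            L = np.diag(W.sum(axis=1)) - W                   # combinatorial Laplacian
            _, U = np.linalg.eigh(L)                         # columns ordered by graph frequency
            return U.T @ x                                   # transform coefficients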

    Discrete Wavelet Transforms

    The discrete wavelet transform (DWT) algorithms have a firm position in the processing of signals in several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes the progress in hardware implementations of the DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended to be a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
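
    As a small example of the lifting construction mentioned above, one level of a LeGall 5/3 decomposition can be written in a few lines. This is a floating-point sketch with simple boundary extension and an even-length input assumed; it is not taken from any chapter of the book.

        import numpy as np

        def dwt53_lifting(signal):
            """One level of the 5/3 wavelet via lifting: predict, then update."""
            x = np.asarray(signal, dtype=float)
            even, odd = x[0::2], x[1::2]
            right = np.append(even[1:], even[-1])            # extend at the right edge
            detail = odd - 0.5 * (even + right)              # predict step
            left = np.insert(detail[:-1], 0, detail[0])      # extend at the left edge
            approx = even + 0.25 * (left + detail)           # update step
            return approx, detail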

    Localized temporal decorrelation for video compression

    Many of the current video compression algorithms perform analysis and coding operations in a block-wise manner. Most of them use a motion compensated DCT algorithm as the basis. Many other codecs, mostly academic and in their infancy and known as second-generation techniques, utilize region-based, contour-based and model-based techniques. Unfortunately, these second-generation methods have not been successful in gaining widespread acceptance in both the standards and the consumer world. Many of them require specialized, computationally intensive software and/or hardware. Due to these shortcomings, current block-based methods have been fine-tuned to get better performance at even very low bit rates (sub 64 kbps). Block-based motion estimation is the principal mechanism used to compensate for motion between frames in an image sequence. Although current algorithms are fast and quite effective, they fail in compensating for uncovered background areas in a frame. Solutions such as hierarchical motion estimation schemes do not work very well since there is no reference in past, and in some cases, future frames for an uncovered background, resulting in the block being transmitted as an intra block (which requires the most bandwidth among all types of blocks). This thesis introduces an intermediate stage, which compensates for these isolated uncovered areas. The intermediate stage uses a localized decorrelation technique to reduce frame-to-frame temporal redundancies. The algorithm can be easily incorporated into existing systems to achieve an even better performance and can be easily extended as a scalable video coding architecture. Experimental results show that the algorithm, used in conjunction with motion estimation, is quite effective in reducing temporal redundancies.
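
    The block-based motion estimation that the proposed intermediate stage complements can be sketched as a full-search SAD matcher; blocks that remain poorly predicted (e.g. uncovered background) would then be handed to the localized decorrelation stage. The function name, search range and SAD criterion are illustrative assumptions, not the thesis's exact configuration.

        import numpy as np

        def full_search_motion(cur_block, ref_frame, top, left, search_range=8):
            """Exhaustive search for the displacement minimizing the sum of
            absolute differences (SAD) between the current block and the
            reference frame; returns ((dy, dx), best SAD)."""
            bh, bw = cur_block.shape
            H, W = ref_frame.shape
            best_mv, best_sad = (0, 0), np.inf
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + bh > H or x + bw > W:
                        continue
                    cand = ref_frame[y:y + bh, x:x + bw].astype(int)
                    sad = np.abs(cand - cur_block.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv, best_sad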