    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
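    As a rough illustration of the two-stage structure described above, the sketch below selects, for each block, the code from a family that minimizes a Lagrangian cost D + lambda*R; the chosen index is the first-stage description and the coded block is the second stage. The uniform quantizers, rate proxy, and lambda value are illustrative assumptions standing in for the paper's code families, not the codes actually used there.

        # Minimal sketch of two-stage encoding with a hypothetical family of
        # uniform block quantizers (stand-ins for a designed code family).
        import numpy as np

        def quantize(block, step):
            """Uniform scalar quantization of a block with the given step size."""
            return np.round(block / step) * step

        def two_stage_encode(blocks, steps, lam=0.1):
            """Pick, per block, the code with lowest Lagrangian cost J = D + lam*R."""
            choices = []
            for block in blocks:
                best = None
                for idx, step in enumerate(steps):
                    rec = quantize(block, step)
                    dist = np.mean((block - rec) ** 2)             # distortion D
                    rate = block.size * np.log2(1.0 + 1.0 / step)  # crude rate proxy R
                    cost = dist + lam * rate
                    if best is None or cost < best[0]:
                        best = (cost, idx, rec)
                # (first-stage index, second-stage coded block)
                choices.append((best[1], best[2]))
            return choices

        # Example: 4x4 blocks drawn from sources with different variances pick
        # different codes from the family.
        blocks = [np.random.randn(4, 4) * s for s in (0.5, 4.0, 20.0)]
        print([idx for idx, _ in two_stage_encode(blocks, steps=[0.25, 1.0, 4.0])])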

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion theory principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits to be achieved using the multiple-model design algorithms.

    On the rate-distortion performance and computational efficiency of the Karhunen-Loeve transform for lossy data compression

    We examine the rate-distortion performance and computational complexity of linear transforms for lossy data compression. The goal is to better understand the performance/complexity tradeoffs associated with using the Karhunen-Loeve transform (KLT) and its fast approximations. Since the optimal transform for transform coding is unknown in general, we investigate the performance penalties associated with using the KLT by examining cases where the KLT fails, developing a new transform that corrects the KLT's failures in those examples, and then empirically testing the performance difference between this new transform and the KLT. Experiments demonstrate that while the KLT can, in the worst case, yield transform coding performance at least 3 dB worse than that of alternative block transforms, the performance penalty associated with using the KLT on real data sets seems to be significantly smaller, giving at most a 0.5 dB difference in our experiments. The KLT and its fast variations studied here range in complexity from O(n^2) to O(n log n) for coding vectors of dimension n. We empirically investigate the rate-distortion performance tradeoffs associated with traversing this range of options. For example, an algorithm with complexity O(n^3/2) and memory O(n) gives a 0.4 dB performance loss relative to the full KLT in our image compression experiment.
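    For reference, a minimal sketch of the KLT as used in transform coding: the basis is the eigenbasis of the empirical covariance of the training vectors, and applying the fixed basis to a vector of dimension n costs O(n^2) multiplications. The training data and dimensions below are illustrative, not those of the paper's experiments.

        # Minimal KLT sketch: train a basis from sample covariance, then
        # transform and reconstruct a vector with it.
        import numpy as np

        def klt_basis(training_vectors):
            """Eigenvectors of the sample covariance, ordered by decreasing variance."""
            X = np.asarray(training_vectors, dtype=float)
            X = X - X.mean(axis=0)
            cov = X.T @ X / len(X)
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]
            return eigvecs[:, order]

        def klt_forward(vec, basis):
            return basis.T @ vec   # decorrelated transform coefficients

        def klt_inverse(coeffs, basis):
            return basis @ coeffs  # reconstruction from coefficients

        # Example: correlated 8-dimensional vectors (illustrative data only).
        rng = np.random.default_rng(0)
        train = rng.standard_normal((1000, 8)) @ np.triu(np.ones((8, 8)))
        basis = klt_basis(train)
        x = train[0]
        assert np.allclose(klt_inverse(klt_forward(x, basis), basis), x)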

    Separable Karhunen Loeve transforms for the weighted universal transform coding algorithm

    The weighted universal transform code (WUTC) is a two-stage transform code that replaces JPEG's single, non-optimal transform code with a jointly designed collection of transform codes to achieve good performance across a broader class of possible sources. Unfortunately, the performance gains of the WUTC are achieved at the expense of significantly increased computational complexity and code storage. We here present a faster, more space-efficient WUTC algorithm that uses separable transforms in place of the direct KLT. While separable coding gives performance comparable to that of the WUTC, it requires only 1/8 of the floating-point multiplications and 1/32 of the storage of the direct KLT. Experimental results compare the performance of the new separable WUTC with both the original WUTC and other fast variations of that algorithm.
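    The complexity saving comes from replacing one full transform on the flattened block with separate row and column transforms, as in the sketch below. The 8x8 bases here are random orthogonal stand-ins for trained separable transforms, so the exact savings quoted above, which depend on the block and codebook sizes used in the paper, are not reproduced here.

        # Minimal separable-transform sketch: transform an NxN block with
        # separate row and column bases instead of one (N*N)x(N*N) transform
        # on the flattened block.
        import numpy as np

        def separable_forward(block, row_basis, col_basis):
            return row_basis.T @ block @ col_basis

        def separable_inverse(coeffs, row_basis, col_basis):
            return row_basis @ coeffs @ col_basis.T

        # Example with orthonormal 8x8 bases (random orthogonal stand-ins).
        rng = np.random.default_rng(1)
        row_basis, _ = np.linalg.qr(rng.standard_normal((8, 8)))
        col_basis, _ = np.linalg.qr(rng.standard_normal((8, 8)))
        block = rng.standard_normal((8, 8))
        coeffs = separable_forward(block, row_basis, col_basis)
        assert np.allclose(separable_inverse(coeffs, row_basis, col_basis), block)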

    Weighted universal bit allocation: optimal multiple quantization matrix coding

    We introduce a two-stage bit allocation algorithm analogous to the algorithm for weighted universal vector quantization (WUVQ). The encoder uses a collection of possible bit allocations (typically in the form of a collection of quantization matrices) rather than a single bit allocation (or single quantization matrix). We describe both an encoding algorithm for achieving optimal compression using a collection of bit allocations and a technique for designing locally optimal collections of bit allocations. We demonstrate performance on a JPEG-style coder using the mean squared error (MSE) distortion measure. On a sequence of medical brain scans, the algorithm achieves up to 2.5 dB improvement over a single bit allocation system, up to 5 dB improvement over a WUVQ with first- and second-stage vector dimensions equal to 16 and 4, respectively, and up to 12 dB improvement over an entropy-constrained vector quantizer (ECVQ) using 4-dimensional vectors.
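    A minimal sketch of the per-block choice among a collection of quantization matrices, assuming transform-domain coefficients and an empirical-entropy rate proxy: the matrix index forms the first-stage description and the quantized coefficients the second stage. The matrices, lambda value, and rate model below are illustrative assumptions, not those used in the paper.

        # Minimal sketch: choose, per block, the quantization matrix with the
        # lowest Lagrangian cost D + lambda*R.
        import numpy as np

        def quantize_block(coeffs, qmatrix):
            return np.round(coeffs / qmatrix)

        def rate_estimate(symbols):
            """Crude rate proxy: empirical entropy of the quantized symbols."""
            vals, counts = np.unique(symbols, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p)) * symbols.size

        def choose_qmatrix(coeffs, qmatrices, lam=0.05):
            best_idx, best_cost = None, np.inf
            for idx, q in enumerate(qmatrices):
                symbols = quantize_block(coeffs, q)
                rec = symbols * q
                dist = np.mean((coeffs - rec) ** 2)
                cost = dist + lam * rate_estimate(symbols)  # J = D + lambda*R
                if cost < best_cost:
                    best_idx, best_cost = idx, cost
            return best_idx

        # Example: a flat matrix versus one that quantizes high frequencies
        # more coarsely (both matrices are illustrative stand-ins).
        rng = np.random.default_rng(2)
        coeffs = rng.standard_normal((8, 8)) * 10
        flat = np.full((8, 8), 4.0)
        sloped = 2.0 + 2.0 * np.add.outer(np.arange(8), np.arange(8))
        print(choose_qmatrix(coeffs, [flat, sloped]))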