
    Some new developments in image compression

    This study is divided into two parts. The first part investigates near-lossless compression of digitized images using the entropy-coded DPCM method with a large number of quantization levels. Through this investigation, a new scheme is developed that combines the lossy and lossless DPCM methods in a common framework. The scheme uses known results on the design of predictors and quantizers that incorporate properties of human visual perception. To enhance compression performance, an adaptively generated source model with multiple contexts is employed for coding the quantized prediction errors, rather than the memoryless model of conventional DPCM. Experiments show that the scheme provides compression ratios from 4 to 11 at a peak SNR of about 50 dB for 8-bit medical images, and that the use of multiple contexts improves compression performance by about 25% to 35%. The second part of the study is devoted to lossy image compression using tree-structured vector quantization. A new design method for codebook generation is developed, together with four implementation algorithms. In the new method, an unbalanced tree-structured vector codebook is designed in a greedy fashion under a rate-distortion trade-off constraint and can then be used to implement a variable-rate compression system. Experiments show that the new method achieves very good rate-distortion performance while remaining computationally efficient. Because of the tree structure of the codebook, the method is also well suited to progressive transmission applications.
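    As a rough illustration of the first part (not the author's implementation), the following Python sketch runs a closed-loop DPCM coder with a uniform quantizer and counts prediction-error symbols in a few activity-based contexts; the predictor, quantization step, and context rule are simplified placeholders, and the rate is only estimated from the context-conditioned entropy.

        import numpy as np

        def dpcm_encode(img, step=2, n_ctx=4):
            """Near-lossless DPCM sketch: predict each pixel from its causal
            neighbours, quantize the prediction error with a uniform quantizer,
            and count symbol frequencies separately in a few activity contexts
            (a stand-in for an adaptive multi-context source model)."""
            img = img.astype(np.int32)
            h, w = img.shape
            recon = np.zeros_like(img)
            counts = [dict() for _ in range(n_ctx)]        # per-context symbol counts
            symbols = np.zeros_like(img)

            for y in range(h):
                for x in range(w):
                    left = recon[y, x - 1] if x > 0 else 128
                    top = recon[y - 1, x] if y > 0 else 128
                    pred = (left + top) // 2               # simple causal predictor
                    err = img[y, x] - pred
                    q = int(np.round(err / step))          # uniform quantizer index
                    recon[y, x] = np.clip(pred + q * step, 0, 255)
                    ctx = min(abs(left - top) // 16, n_ctx - 1)  # activity-based context
                    counts[ctx][q] = counts[ctx].get(q, 0) + 1
                    symbols[y, x] = q

            # ideal code length (bits) under the per-context model, as a rate estimate
            bits = 0.0
            for ctx_counts in counts:
                total = sum(ctx_counts.values())
                for c in ctx_counts.values():
                    bits += c * -np.log2(c / total)
            return symbols, recon, bits / img.size         # estimated bits per pixel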

    Perceptual compression of magnitude-detected synthetic aperture radar imagery

    A perceptually based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization, while texture information is quantized using low-rate vector quantization. The impact of the perceptually based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution, magnitude-detected SAR imagery; excellent image quality is obtained at bit rates at or above 1 bpp, along with graceful degradation at rates below 1 bpp.
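    A minimal numpy-only sketch of the coefficient segregation step described above, assuming a single-level Haar transform in place of the paper's multiresolution wavelet and a hypothetical magnitude threshold: large detail coefficients are kept as "edges" for fine scalar quantization and the remainder as "texture" for coarse vector quantization.

        import numpy as np

        def haar2d(x):
            """One level of a 2-D Haar transform with averaging normalization,
            so that `a` is the local 4-pixel mean; (h, v, d) are the detail subbands."""
            a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
            h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4.0
            v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4.0
            d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
            return a, (h, v, d)

        def segregate(detail, edge_thresh=8.0):
            """Shrinkage-style split of a detail subband: coefficients above the
            (hypothetical) threshold are treated as edges, the rest as texture."""
            edge_mask = np.abs(detail) >= edge_thresh
            edges = np.where(edge_mask, detail, 0.0)
            texture = np.where(edge_mask, 0.0, detail)
            return edges, texture

        # usage: local means and edges would go to high-rate scalar quantization,
        # texture coefficients would be grouped into blocks and vector quantized
        img = np.random.rand(64, 64) * 255
        means, (h, v, d) = haar2d(img)
        edges_h, texture_h = segregate(h)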

    Conjoint probabilistic subband modeling

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 125-133). By Ashok Chhabedia Popat.

    1994 Science Information Management and Data Compression Workshop

    This document is the proceedings from the "Science Information Management and Data Compression Workshop," held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    Efficient compression of motion compensated residuals

    EThOS - Electronic Theses Online Service, United Kingdom.

    Image compression techniques using vector quantization


    High efficiency block coding techniques for image data.

    By Lo Kwok-tung. Thesis (Ph.D.), Chinese University of Hong Kong, 1992. Includes bibliographical references.
    Contents: Abstract; Acknowledgements; List of Principle Symbols and Abbreviations; List of Figures; List of Tables; Table of Contents.
    Chapter 1 - Introduction: 1.1 Background - The Need for Image Compression; 1.2 Image Compression - An Overview (1.2.1 Predictive Coding - DPCM; 1.2.2 Sub-band Coding; 1.2.3 Transform Coding; 1.2.4 Vector Quantization; 1.2.5 Block Truncation Coding); 1.3 Block Based Image Coding Techniques; 1.4 Goal of the Work; 1.5 Organization of the Thesis.
    Chapter 2 - Block-Based Image Coding Techniques: 2.1 Statistical Model of Image (2.1.1 One-Dimensional Model; 2.1.2 Two-Dimensional Model); 2.2 Image Fidelity Criteria (2.2.1 Objective Fidelity; 2.2.2 Subjective Fidelity); 2.3 Transform Coding Theory (2.3.1 Transformation; 2.3.2 Quantization; 2.3.3 Coding; 2.3.4 JPEG International Standard); 2.4 Vector Quantization Theory (2.4.1 Codebook Design and the LBG Clustering Algorithm); 2.5 Block Truncation Coding Theory (2.5.1 Optimal MSE Block Truncation Coding).
    Chapter 3 - Development of New Orthogonal Transforms: 3.1 Introduction; 3.2 Weighted Cosine Transform (3.2.1 Development of the WCT; 3.2.2 Determination of α and β); 3.3 Simplified Cosine Transform (3.3.1 Development of the SCT); 3.4 Fast Computational Algorithms (3.4.1 Weighted Cosine Transform; 3.4.2 Simplified Cosine Transform; 3.4.3 Computational Requirement); 3.5 Performance Evaluation (3.5.1 Evaluation using Statistical Model; 3.5.2 Evaluation using Real Images); 3.6 Concluding Remarks; 3.7 Note on Publications.
    Chapter 4 - Pruning in Transform Coding of Images: 4.1 Introduction; 4.2 Direct Fast Algorithms for DCT, WCT and SCT (4.2.1 Discrete Cosine Transform; 4.2.2 Weighted Cosine Transform; 4.2.3 Simplified Cosine Transform); 4.3 Pruning in Direct Fast Algorithms (4.3.1 Discrete Cosine Transform; 4.3.2 Weighted Cosine Transform; 4.3.3 Simplified Cosine Transform); 4.4 Operations Saved by Using Pruning (4.4.1 Discrete Cosine Transform; 4.4.2 Weighted Cosine Transform; 4.4.3 Simplified Cosine Transform; 4.4.4 Generalization Pruning Algorithm for DCT); 4.5 Concluding Remarks; 4.6 Note on Publications.
    Chapter 5 - Efficient Encoding of DC Coefficient in Transform Coding Systems: 5.1 Introduction; 5.2 Minimum Edge Difference (MED) Predictor; 5.3 Performance Evaluation; 5.4 Simulation Results; 5.5 Concluding Remarks; 5.6 Note on Publications.
    Chapter 6 - Efficient Encoding Algorithms for Vector Quantization of Images: 6.1 Introduction; 6.2 Sub-Codebook Searching Algorithm (SCS) (6.2.1 Formation of the Sub-codebook; 6.2.2 Premature Exit Conditions in the Searching Process; 6.2.3 Sub-Codebook Searching Algorithm); 6.3 Predictive Sub-Codebook Searching Algorithm (PSCS); 6.4 Simulation Results; 6.5 Concluding Remarks; 6.6 Note on Publications.
    Chapter 7 - Predictive Classified Address Vector Quantization of Images: 7.1 Introduction; 7.2 Optimal Three-Level Block Truncation Coding; 7.3 Predictive Classified Address Vector Quantization (7.3.1 Classification of Images using Three-level BTC; 7.3.2 Predictive Mean Removal Technique; 7.3.3 Simplified Address VQ Technique; 7.3.4 Encoding Process of PCAVQ); 7.4 Simulation Results; 7.5 Concluding Remarks; 7.6 Note on Publications.
    Chapter 8 - Recapitulation and Topics for Future Investigation: 8.1 Recapitulation; 8.2 Topics for Future Investigation.
    References. Appendices: A. Statistics of Monochrome Test Images; B. Statistics of Color Test Images; C. Fortran Program Listing for the Pruned Fast DCT Algorithm; D. Training Set Images for Building the Codebook of Standard VQ Scheme; E. List of Publications.
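    For background on the block truncation coding that Chapters 2.5 and 7.2 build on, here is a minimal sketch of the classic two-level, moment-preserving BTC of a single block (the thesis's optimal-MSE and three-level variants are not reproduced here).

        import numpy as np

        def btc_block(block):
            """Classic two-level block truncation coding of one image block:
            keep the block mean and standard deviation plus a 1-bit map, and
            reconstruct with two levels that preserve the first two moments."""
            mean = block.mean()
            std = block.std()
            bitmap = block >= mean
            q = bitmap.sum()                             # pixels at or above the mean
            n = block.size
            if q in (0, n):                              # flat block: one level suffices
                return np.full_like(block, mean, dtype=float)
            a = mean - std * np.sqrt(q / (n - q))        # low reconstruction level
            b = mean + std * np.sqrt((n - q) / q)        # high reconstruction level
            return np.where(bitmap, b, a)

        # usage on an 8x8 block (textbook baseline only)
        block = np.random.randint(0, 256, (8, 8)).astype(float)
        recon = btc_block(block)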

    Depth-Map Image Compression Based on Region and Contour Modeling

    In this thesis, the problem of depth-map image compression is treated. The compilation of articles included in the thesis provides methodological contributions in the fields of lossless and lossy compression of depth-map images.

    The first group of methods addresses the lossless compression problem. The introduced methods represent the depth-map image in terms of regions and contours. In the depth-map image, a segmentation defines the regions, by grouping pixels having similar properties, and separates them using (region) contours. The depth-map image is encoded by the contours and the auxiliary information needed to reconstruct the depth values in each region. One way of encoding the contours is to describe them using two matrices of horizontal and vertical contour edges. The matrices are encoded using template context coding, where each context tree is optimally pruned. In certain contexts, the contour edges are found deterministically using only the currently available information. Another way of encoding the contours is to describe them as a sequence of contour segments. Each such segment is defined by an anchor (starting) point and a string of contour edges, equivalent to a string of chain-code symbols. Here we propose efficient ways to select and encode the anchor points and to generate contour segments by using a contour crossing point analysis and by imposing rules that help to minimize the number of anchor points.

    The regions are reconstructed at the decoder using predictive coding or the piecewise constant model representation. In the first approach, the large constant regions are found and one depth value is encoded for each such region. For the rest of the image, suitable regions are generated by constraining the local variation of the depth level from one pixel to another. The nonlinear predictors, selected specifically for each region, combine the results of several linear predictors, each fitting optimally a subset of pixels belonging to the local neighborhood. In the second approach, the depth value of a given region is encoded using the depth values of the neighboring regions already encoded. The natural smoothness of the depth variation and the mutual exclusiveness of the values in neighboring regions are exploited to efficiently predict and encode the current region's depth value.

    The second group of methods studies the lossy compression problem. In a first contribution, different segmentations are generated by varying the threshold for the local variability of the depth. A lossy depth-map image is obtained for each segmentation and is encoded based on predictive coding, quantization, and context tree coding. In another contribution, the lossy versions of one image are created either by successively merging the constant regions of the original image, or by iteratively splitting the regions of a template image using horizontal or vertical line segments. Merging and splitting decisions are taken greedily, according to the best slope towards the next point on the rate-distortion curve. An entropy coding algorithm is used to encode each image.

    We also propose a progressive coding method for coding the sequence of lossy versions of a depth-map image. The bitstream is encoded so that any lossy version of the original image can be generated, starting from a very low resolution up to lossless reconstruction. The partitions of the lossy versions into regions are assumed to be nested, so that a higher-resolution image is obtained by splitting some regions of a lower-resolution image. A current image in the sequence is encoded using a priori information from a previously encoded image: the anchor points are encoded relative to the already encoded contour points, and the depth information of the newly resulting regions is recovered using the depth value of the parent region.

    As a final contribution, the dissertation includes a study of the parameterization of planar models. The quantized heights at three pixel locations are used to compute the optimal plane for each region. The three pixel locations are selected so that the distortion due to the approximation of the plane over the region is minimized. The planar model and the piecewise constant model compete in the merging process, where the two regions to be merged are those ensuring the optimal slope in the rate-distortion curve.
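    As a small illustration of the greedy rate-distortion-slope rule used in the merging and splitting contributions (a sketch with hypothetical numbers, not the thesis code), the candidate operation chosen is the one that adds the least distortion per bit of rate saved:

        def best_merge(candidates):
            """Pick the region merge with the best rate-distortion slope.

            Each candidate is (delta_rate, delta_distortion): merging two regions
            saves rate (delta_rate < 0) but increases distortion
            (delta_distortion >= 0). The greedy choice is the candidate with the
            smallest distortion increase per bit saved, i.e. the flattest slope
            towards the next point on the rate-distortion curve."""
            def slope(c):
                d_rate, d_dist = c
                return d_dist / (-d_rate) if d_rate < 0 else float("inf")
            return min(candidates, key=slope)

        # hypothetical candidates: (bits saved as a negative delta, distortion added)
        candidates = [(-120.0, 15.0), (-40.0, 2.0), (-300.0, 90.0)]
        print(best_merge(candidates))   # (-40.0, 2.0): 0.05 distortion per bit saved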