Multiplication-free vector quantization using the L1 distortion measure and its variants
Journal Article: Vector quantization is a very powerful technique for data compression, and consequently it has attracted considerable attention. One major drawback of this approach is its extreme computational complexity. This paper first considers vector quantization that uses the L1 distortion measure for its implementation. The L1 distortion measure is very attractive from an implementational point of view, since no multiplications are required to compute it. Unfortunately, the traditional Linde-Buzo-Gray (LBG) method for designing the codebook for the L1 distortion measure can become extremely time-consuming, since it involves several computations of medians of very large arrays. We propose a gradient-based approach for codebook design that does not require any multiplications or median computations. The codebook design algorithm is then extended to a distortion measure that has piecewise-linear characteristics. Once again, by appropriate selection of the parameters of the distortion measure, the encoding as well as the codebook design can be implemented with zero multiplications. Finally, we apply our techniques to predictive vector quantization of images and demonstrate the viability of multiplication-free predictive vector quantization of image data.
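The key property of the L1 (sum-of-absolute-differences) measure — encoding with no multiplications at all — can be sketched as a nearest-codeword search. This is an illustrative sketch only: the codebook values are made up, and the paper's gradient-based codebook design is not reproduced here.

```python
# Multiplication-free nearest-codeword search under the L1 distortion
# measure: only subtractions, absolute values, and additions are used.

def l1_distortion(x, c):
    # Sum of absolute differences |x_i - c_i|; no multiplications needed.
    return sum(abs(a - b) for a, b in zip(x, c))

def encode(x, codebook):
    # Return the index of the codeword minimizing the L1 distortion.
    best_i, best_d = 0, l1_distortion(x, codebook[0])
    for i, c in enumerate(codebook[1:], start=1):
        d = l1_distortion(x, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

codebook = [(0, 0), (10, 10), (20, 0)]  # illustrative 2-D codebook
print(encode((9, 11), codebook))        # → 1 (nearest codeword is (10, 10))
```

By contrast, the usual L2 (squared-error) measure needs one multiplication per vector component, which is what makes the L1 variant attractive for hardware.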
Performance Evaluation of Hybrid Coding of Images Using Wavelet Transform and Predictive Coding
Image compression techniques are necessary for storing huge amounts of digital images in reasonable amounts of space, and for transmitting them over limited bandwidth. Several techniques such as predictive coding, transform coding, subband coding, wavelet coding, and vector quantization have been used in image coding. While each technique has some advantages, most practical systems use hybrid techniques that incorporate more than one scheme, combining the advantages of the individual schemes and enhancing coding effectiveness. This paper proposes and evaluates a hybrid coding scheme for images using wavelet transforms and predictive coding. The performance evaluation is done over a variety of parameters, such as the kind of wavelet, decomposition levels, type of quantizer, predictor coefficients, and quantization levels. The results of the evaluation are presented.
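The hybrid idea above can be illustrated in miniature: a wavelet split followed by predictive coding of one band. The Haar filter and the first-order difference predictor are assumptions chosen for brevity, not the paper's evaluated configurations.

```python
# One-level Haar wavelet split of a 1-D signal, followed by simple
# first-order predictive (difference) coding of the low-pass band.

def haar_split(x):
    # Unnormalized Haar analysis: pairwise averages (low) and
    # half-differences (high) over non-overlapping pairs.
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low, high

def dpcm(band):
    # First-order prediction: transmit the first sample, then
    # differences between neighboring samples.
    return [band[0]] + [band[i] - band[i - 1] for i in range(1, len(band))]

signal = [10, 12, 14, 13, 40, 42, 41, 39]
low, high = haar_split(signal)
print(dpcm(low), high)
```

The low-pass band keeps most of the signal energy, and the differences between its neighboring samples are small except at edges, which is the correlation a predictive stage exploits after the transform stage.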
Predictive vector quantization of images using a constrained two-dimensional autoregressive predictor
Journal Article: A novel approach to image compression using vector quantization of linear (one-step) prediction errors is presented in this paper. In order to minimize the image reconstruction error, we choose the optimum predictor coefficients (in a least-squares sense) that satisfy the additional constraint that the energy of the impulse response of the inverse reconstruction filter is bounded by a small constant C. Further, the code vectors are selected such that the reconstruction error is minimized, rather than the quantization noise of the prediction-error sequences. Examples demonstrating the excellent quality of the reconstructed images using our approach at bit rates below 0.65 bit/pixel are presented.
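A closed-loop predictive quantizer of the kind this paper builds on can be sketched in one dimension. Here a uniform scalar quantizer and a single hypothetical coefficient `a` stand in for the paper's constrained 2-D autoregressive predictor and its vector quantizer of prediction errors.

```python
# Closed-loop (DPCM-style) predictive quantization along one image row.
# The predictor operates on the *reconstructed* previous pixel, so the
# encoder and decoder stay in sync and the reconstruction error is
# bounded by half the quantizer step.

def quantize(e, step):
    # Uniform mid-tread scalar quantizer (stand-in for the paper's VQ).
    return step * round(e / step)

def predictive_code(row, a=0.95, step=4):
    recon = []
    prev = 0.0                          # reconstructed previous pixel
    for x in row:
        pred = a * prev                 # one-step linear prediction
        e_q = quantize(x - pred, step)  # quantized prediction error
        prev = pred + e_q               # decoder-side reconstruction
        recon.append(prev)
    return recon

row = [100, 104, 103, 99]
print(predictive_code(row))
```

Predicting from the reconstructed (rather than original) neighbor is what keeps quantization error from accumulating; the paper's constraint on the inverse filter's impulse-response energy serves the related goal of keeping the reconstruction error, not just the prediction-error quantization noise, small.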
Spatially Directional Predictive Coding for Block-based Compressive Sensing of Natural Images
A novel coding strategy for block-based compressive sensing named spatially directional predictive coding (SDPC) is proposed, which efficiently utilizes the intrinsic spatial correlation of natural images. At the encoder, for each block of compressive sensing (CS) measurements, the optimal prediction is selected from a set of prediction candidates that are generated by four designed directional predictive modes. Then, the resulting residual is processed by scalar quantization (SQ). At the decoder, the same prediction is added onto the dequantized residuals to produce the quantized CS measurements, which are exploited for CS reconstruction. Experimental results substantiate significant improvements achieved by SDPC-plus-SQ in rate-distortion performance as compared with SQ alone and DPCM-plus-SQ.
Comment: 5 pages, 3 tables, 3 figures; published at the IEEE International Conference on Image Processing (ICIP) 2013. Code available: http://idm.pku.edu.cn/staff/zhangjian/SDPC
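The encoder loop described above can be sketched as follows. The four directional modes are simplified here to two neighbor-based candidates, and all measurement values are illustrative; this is not the authors' released code.

```python
# Sketch of the SDPC encoder idea: for each block's CS measurement
# vector, pick the best of several prediction candidates, then
# scalar-quantize the residual. Only the mode index and quantized
# residual need to be transmitted.

def sad(a, b):
    # Sum of absolute differences between two measurement vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def encode_block(y, candidates, step=2):
    # Choose the candidate minimizing SAD to the current measurements.
    mode = min(range(len(candidates)), key=lambda i: sad(y, candidates[i]))
    # Uniform scalar quantization of the prediction residual.
    residual = [round((v - p) / step) for v, p in zip(y, candidates[mode])]
    return mode, residual

left = [8.0, 6.0, 7.0]   # measurements of the left-neighbor block
top = [3.0, 2.0, 4.0]    # measurements of the top-neighbor block
y = [7.5, 6.5, 6.0]      # measurements of the current block
print(encode_block(y, [left, top]))
```

The decoder repeats the same candidate generation from already-decoded blocks, so the chosen prediction can be re-added to the dequantized residual without side information beyond the mode index.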
Data compression techniques applied to high resolution high frame rate video technology
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near term future technology for implementation of video data compression in high speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended
A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images
Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy of the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
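The quantizer-selection problem at the heart of such a rate controller can be illustrated with a toy greedy scheme. The rate/distortion models below are textbook high-rate approximations for a uniform quantizer, not the paper's onboard estimators or anything specific to CCSDS-123.

```python
# Toy per-region quantizer selection: pick a step size for each region
# so the total estimated rate meets a target while distortion stays low.
import math

def rd_for_step(variance, step):
    # High-rate approximations for a uniform scalar quantizer:
    # rate ~ 0.5*log2(12*var/step^2) bits/sample, MSE ~ step^2/12.
    rate = max(0.0, 0.5 * math.log2(12.0 * variance / step ** 2))
    dist = step ** 2 / 12.0
    return rate, dist

def pick_steps(variances, steps, target_rate):
    # Greedy Lagrangian-style loop: start with the finest step everywhere,
    # then repeatedly coarsen the region whose coarsening costs the least
    # distortion per bit saved, until the target rate is met.
    choice = [0] * len(variances)  # index into `steps` per region
    def total_rate():
        return sum(rd_for_step(v, steps[c])[0] for v, c in zip(variances, choice))
    while total_rate() > target_rate:
        best, best_slope = None, None
        for i, c in enumerate(choice):
            if c + 1 >= len(steps):
                continue  # this region is already at the coarsest step
            r0, d0 = rd_for_step(variances[i], steps[c])
            r1, d1 = rd_for_step(variances[i], steps[c + 1])
            slope = (d1 - d0) / max(r0 - r1, 1e-9)  # distortion per saved bit
            if best_slope is None or slope < best_slope:
                best, best_slope = i, slope
        if best is None:
            break  # every region already at its coarsest quantizer
        choice[best] += 1
    return [steps[c] for c in choice]

# Two regions (high- and low-variance), candidate steps, 7 bits/sample total.
print(pick_steps([100.0, 10.0], steps=[1, 2, 4, 8], target_rate=7.0))
```

The real difficulty the paper addresses, which this sketch ignores, is that in a predictive encoder the chosen quantizers feed back into the prediction loop, so rate and distortion estimates must account for that coupling.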
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) in which intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than entropy-constrained residual vector quantization, with lower computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder, as well as to the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
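The multistage residual VQ structure the method builds on can be sketched with two stages: the second stage quantizes the residual left by the first, and the decoder sums the selected codewords. The codebooks below are illustrative, and the conditional entropy coding layer — the paper's actual contribution — is omitted.

```python
# Minimal two-stage residual VQ: encode picks the nearest codeword at
# each stage, decode sums the selected codewords.

def nearest(x, codebook):
    # Index of the codeword with minimum squared-error distortion.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])))

def rvq_encode(x, stage1, stage2):
    i1 = nearest(x, stage1)
    residual = [a - b for a, b in zip(x, stage1[i1])]
    i2 = nearest(residual, stage2)      # second stage refines the first
    return i1, i2

def rvq_decode(i1, i2, stage1, stage2):
    return [a + b for a, b in zip(stage1[i1], stage2[i2])]

stage1 = [(0.0, 0.0), (10.0, 10.0)]          # coarse codebook
stage2 = [(0.0, 0.0), (1.0, -1.0), (-1.0, 1.0)]  # residual codebook
i1, i2 = rvq_encode((11.0, 9.2), stage1, stage2)
print(rvq_decode(i1, i2, stage1, stage2))
```

Sending the stage indices one after another is also what gives the structure its natural progressive-transmission behavior: decoding only the first index already yields a coarse reconstruction.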
Map online system using internet-based image catalogue
Digital maps carry geodata, such as coordinates, that is important in topographic and thematic maps and especially meaningful in the military field. Because the maps carry this information, the image files are very large; the larger the file, the more storage is required and the longer the loading time. These conditions make raw map images unsuitable for an image-catalogue approach in an Internet environment. With compression techniques, the image size can be reduced while the image quality is preserved without much change. This report focuses on image compression using wavelet technology, which currently performs better than many other image compression techniques. The compressed images are applied to a system called Map Online, which uses an Internet-based image-catalogue approach. The system allows users to buy maps online and to download the maps they have bought, in addition to searching for maps using several meaningful keywords. The system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.