Locally adaptive vector quantization: Data compression with feature preservation
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. The algorithm provides high-speed, one-pass compression, adapts to any data source, and requires no a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications that improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. On various images, the performance of LAVQ with irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but at much higher speed, giving the algorithm potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. As a lossless data compressor, LAVQ performs comparably to Lempel-Ziv-based algorithms while using far less memory during coding.
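The one-pass, adaptive behaviour described above can be sketched as follows. This is a hypothetical simplification, not the published algorithm: the codebook size, the distance threshold, and the move-to-front update rule are all illustrative assumptions.

```python
import numpy as np

def lavq_encode(vectors, codebook_size=8):
    """One-pass locally adaptive VQ sketch: the codebook starts empty and
    adapts as vectors arrive, so no prior source statistics are needed.
    Threshold and eviction policy are assumptions for illustration."""
    codebook = []          # list of codeword arrays, most recent first
    indices = []           # emitted codeword indices
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if codebook:
            dists = [np.sum((v - c) ** 2) for c in codebook]
            best = int(np.argmin(dists))
        else:
            best = -1
        if best < 0 or dists[best] > 0.5:      # poor match: adapt codebook
            codebook.insert(0, v)
            codebook = codebook[:codebook_size]  # evict least-recent entry
            indices.append(0)                    # index of the fresh entry
        else:
            indices.append(best)
            codebook.insert(0, codebook.pop(best))  # move-to-front update
    return indices, codebook
```

Because the codebook is updated identically on both sides, a decoder replaying the same rules can reconstruct the sequence from the indices alone.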
Real-time video compression using DVQ and suffix trees
Video processing is a wide and varied subject area, and video compression is an important but difficult problem within it. Several methods and standards address the problem with varying degrees of success, depending on the performance measures adopted. The present work focuses on the real-time aspect of video processing. In particular, we propose a real-time video compression algorithm based on differential vector quantization (DVQ) and the suffix tree. Differential vector quantization is a relatively new approach to efficient data compression; the present work integrates the compression provided by DVQ with the speed of the suffix tree data structure to develop a new real-time video compression scheme. Traditionally, suffix trees are used for string searching. In the present work, we exploit the unique structure of the suffix tree to represent image data on a tree as a DVQ dictionary. To support the special characteristics of natural images and video, the traditional suffix tree is extended to handle k errors in the matching. The result is an orders-of-magnitude speedup in the matching process, making it possible to compress video in real time without any special hardware. Experimental results demonstrate the performance of the proposed methodology.
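The k-error matching criterion at the heart of the scheme can be illustrated in isolation. A suffix tree makes this search sublinear; the linear scan below is only a sketch of the acceptance rule, and the dictionary layout and tolerance value are assumptions.

```python
def match_with_k_errors(block, dictionary, k=2):
    """Return the index of the first dictionary block that differs from
    `block` in at most k positions, or None if no such entry exists
    (in which case a DVQ coder would add the block to the dictionary)."""
    for i, cand in enumerate(dictionary):
        errors = sum(1 for a, b in zip(block, cand) if a != b)
        if errors <= k:
            return i
    return None
```

Tolerating a few mismatched pixels lets natural-image blocks reuse dictionary entries far more often than exact matching would.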
Real-time demonstration hardware for enhanced DPCM video compression algorithm
The lack of available wideband digital links, together with the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder), has kept the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, now recognize the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA likewise recognizes the benefits of, and trend toward, digital video compression for transmission of high-quality video from space, and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally uses a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM; the non-adaptive predictor and multilevel Huffman coder together set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm produces broadcast-quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to perform the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results.
Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals wherever high-quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems, or cable television distribution to system headends and direct-to-the-home).
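The DPCM core described above (fixed predictor plus non-uniform quantizer, with the quantized errors then passed to an entropy coder) can be sketched as follows. The previous-sample predictor and the quantizer levels are illustrative assumptions, not NASA's actual design.

```python
def quantize(e, levels=(-16, -4, 0, 4, 16)):
    """Non-uniform quantizer: map a prediction error to the nearest of a
    small, non-uniformly spaced set of reconstruction levels (values are
    illustrative only)."""
    return min(levels, key=lambda q: abs(q - e))

def dpcm_encode(samples):
    """DPCM with a fixed (non-adaptive) previous-sample predictor:
    transmit quantized prediction errors instead of raw samples.
    The encoder tracks the decoder's reconstruction to avoid drift."""
    recon_prev = 0
    codes = []
    for s in samples:
        e = s - recon_prev      # prediction error
        q = quantize(e)         # quantized error (what gets Huffman-coded)
        codes.append(q)
        recon_prev += q         # decoder-tracked reconstruction
    return codes

def dpcm_decode(codes):
    recon, prev = [], 0
    for q in codes:
        prev += q
        recon.append(prev)
    return recon
```

In the hardware system, the `codes` stream would feed the multilevel Huffman coder, whose variable output rate is what the buffer control algorithm must absorb.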
Quadtree partitioning scheme of color image based
Image segmentation is an essential complementary process in digital image processing and computer vision, yet practice mostly relies on simple techniques, such as fixed partitioning schemes and global thresholding, chosen for their simplicity and popularity despite their inefficiency. This paper introduces a new split-merge segmentation process for a quadtree scheme of colour images, based on exploiting the spatial information embedded within each band and the spectral information between bands. The results show that this technique is efficient in terms of segmentation quality and time, and can be used in standard techniques as an alternative to a fixed partitioning scheme.
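The split stage of such a scheme can be sketched as a recursive homogeneity test on one band; the range-based homogeneity criterion, threshold, and minimum block size are assumptions for illustration (the paper's criterion also uses inter-band spectral information).

```python
import numpy as np

def quadtree_split(img, x, y, size, thresh=10.0, min_size=2):
    """Split stage of split-merge segmentation: keep a block whole if it is
    homogeneous (intensity range below `thresh`), otherwise split it into
    four quadrants recursively. Returns leaf blocks as (x, y, size) tuples."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_split(img, x + dx, y + dy, half, thresh, min_size)
    return leaves
```

A merge pass would then recombine adjacent leaves whose statistics are similar, which is where the adaptivity over a fixed partitioning scheme comes from.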
Techniques for lossless image compression
Popular lossless image compression techniques in use today belong to the Lempel-Ziv family of encoders. These techniques are generic in nature and do not take full advantage of the two-dimensional correlation of digital image data: they process a one-dimensional stream of data, replacing repetitions with smaller codes. Techniques for Lossless Image Compression introduces a new model for lossless image compression that consists of two stages: transformation and encoding. Transformation takes advantage of the correlative properties of the data, modifying it to maximize the effectiveness of the encoding stage. Encoding replaces data symbols that occur frequently, or in repeated groups, with codes represented in a smaller number of bits. The techniques presented in this thesis include descriptions of Lempel-Ziv encoders in use today as well as several new techniques based on the transformation-and-encoding model. Example compression ratios achieved by each technique on a sample set of gray-scale cardiac images are provided for comparison.
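The two-stage model can be sketched with stand-ins for each stage: a previous-sample difference as the decorrelating transform (an assumption, not one of the thesis's transforms) and a generic dictionary/entropy encoder (zlib) as the encoding stage.

```python
import zlib
import numpy as np

def transform_encode(pixels):
    """Two-stage lossless model: transformation (difference from the
    previous sample, computed modulo 256) followed by generic encoding."""
    p = np.asarray(pixels, dtype=np.uint8)
    residual = p - np.roll(p, 1)        # uint8 arithmetic wraps mod 256
    residual[0] = p[0]                  # first sample carried as-is
    return zlib.compress(residual.tobytes())

def transform_decode(blob):
    """Invert both stages: decompress, then undo the difference."""
    residual = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
    return (np.cumsum(residual.astype(np.uint32)) % 256).astype(np.uint8)
```

On smooth image data, the residuals cluster near zero, so the encoding stage sees far more repetition than it would in the raw pixels, which is exactly the point of the transformation stage.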
Vector quantization
During the past ten years, vector quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. This survey sketches the basic ideas behind the design of vector quantizers and comments on the state of the art and current research efforts.
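The core design loop behind most vector quantizers alternates two conditions: assign each training vector to its nearest codeword, then move each codeword to the centroid of its assignments. A minimal generalized-Lloyd (LBG-style) sketch, with random initialization as an assumption:

```python
import numpy as np

def design_codebook(training, k=2, iters=10, seed=0):
    """Generalized Lloyd codebook design sketch: alternate nearest-codeword
    assignment and centroid update until the codebook settles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(training, dtype=float)
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distances: (n_vectors, k)
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)   # centroid update
    return codebook

def quantize_vec(v, codebook):
    """Encode a vector as the index of its nearest codeword."""
    return int(((codebook - np.asarray(v, dtype=float)) ** 2).sum(axis=1).argmin())
```

Transmitting only the index (log2 k bits per vector) instead of the vector itself is the source of VQ's rate reduction.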
Design of a digital voice data compression technique for orbiter voice channels
Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps, with good voice quality, speaker recognition, and robustness in the presence of error bursts as the criteria. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or only minimally degraded at Viterbi decoder bit error rates of 0.001 and 0.01. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in Space Shuttle orbiters.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
High order entropy coding is a powerful technique for exploiting high order statistical dependencies, but the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method first quantizes the input image using a high order entropy-constrained residual vector quantizer, then codes the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
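The residual-VQ structure underlying the method can be sketched in isolation: each stage quantizes what the previous stages left over, so the sum of the chosen codewords approximates the input. The codebooks below are assumed given (the paper designs them under an entropy constraint, which this sketch omits).

```python
import numpy as np

def rvq_encode(v, stages):
    """Residual VQ sketch: `stages` is a list of per-stage codebooks
    (2-D arrays, one codeword per row). Returns the stage indices and
    the final residual, which a lossless scheme would entropy-code."""
    residual = np.asarray(v, dtype=float).copy()
    indices = []
    for cb in stages:
        i = int(((cb - residual) ** 2).sum(axis=1).argmin())
        indices.append(i)
        residual = residual - cb[i]     # pass the leftover to the next stage
    return indices, residual

def rvq_decode(indices, stages):
    """Reconstruct by summing the selected codeword from each stage."""
    return sum(cb[i] for cb, i in zip(stages, indices))
```

Cascading small codebooks this way approaches the reach of one huge codebook at a fraction of the search and storage cost, which is why residual VQ makes high order methods tractable.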
Novel entropy coding and its application to the compression of 3D image and video signals
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The broadcast industry is moving future digital television towards super-high-resolution TV (4K or 8K) and/or 3D TV, which will ultimately increase the demand on data rate and, subsequently, the demand for highly efficient codecs. One of the technologies researchers consider promising for the industry in the next few years is 3D integral imaging and video, owing to its simplicity and its ability to mimic reality without any viewer aid. A central challenge of 3D integral technology is to improve the compression algorithms so that they suit the high resolution and exploit the characteristics of the format. The research scope of this thesis is the design of novel coding for 3D integral image and video compression. First, to address the compression of 3D integral imaging, the research proposes a novel entropy coding, implemented initially on traditional 2D image content for comparison with common standards, and then applied to 3D integral images and video; this approach seeks high performance, namely high image quality and low bit rate, together with low computational complexity. Second, a new algorithm is proposed to improve and develop the transform techniques: initially a new adaptive 3D-DCT algorithm, then a hybrid 3D DWT-DCT algorithm that exploits the advantages of each technique while avoiding the artifacts each suffers from. Finally, the proposed entropy coding is applied to 3D integral video together with a proposed algorithm based on calculating the motion vector on the average viewpoint for each frame, seeking to minimize complexity and reduce coding time without affecting performance as judged by the Human Visual System (HVS). A number of block matching techniques are investigated to determine which is best suited to the proposed 3D integral video algorithm.
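Block matching, the step being compared above, can be sketched with the simplest strategy, exhaustive full search minimizing the sum of absolute differences (SAD); block size and search radius are illustrative assumptions, and faster strategies (three-step, diamond search) trade accuracy for speed.

```python
import numpy as np

def full_search(ref, cur_block, bx, by, radius=2):
    """Exhaustive block matching: find the motion vector (dx, dy) that
    minimizes SAD between `cur_block` (at bx, by in the current frame)
    and candidate blocks in the reference frame `ref`, searching a
    window of +/-radius pixels."""
    n = cur_block.shape[0]
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue            # candidate falls outside the frame
            cand = ref[y:y + n, x:x + n].astype(int)
            sad = np.abs(cand - cur_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```

In the thesis's setting the search would run on the average viewpoint of each integral frame rather than per viewpoint, which is the source of the claimed complexity reduction.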