    LBGS: a smart approach for very large data sets vector quantization

    Abstract In this paper, LBGS, a new parallel/distributed technique for vector quantization, is presented. It derives from the well-known LBG algorithm and has been designed for very complex problems in which both large data sets and large codebooks are involved. Several heuristics have been introduced to make it suitable for implementation on parallel/distributed hardware. These lead to a slight deterioration of the quantization error with respect to the serial version but a large improvement in computing efficiency.
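
    The LBGS technique builds on the classical LBG (generalized Lloyd) iteration. As context, a minimal serial sketch of that baseline is given below, assuming squared-error distortion and a random codebook initialisation; the parallel/distributed heuristics of LBGS itself are not reproduced, and all names are illustrative.

    import numpy as np

    def lbg(data, codebook_size, iters=20, seed=0):
        """Classical serial LBG / generalized Lloyd iteration (sketch)."""
        rng = np.random.default_rng(seed)
        # Initialise the codebook with randomly chosen training vectors.
        codebook = data[rng.choice(len(data), codebook_size, replace=False)].astype(float)
        for _ in range(iters):
            # Nearest-codeword assignment under squared-error distortion.
            dist = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = dist.argmin(axis=1)
            # Centroid update; empty cells keep their previous codeword.
            for k in range(codebook_size):
                members = data[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook, labels

    # Example: a 64-word codebook for 10,000 random 8-dimensional vectors.
    X = np.random.default_rng(1).normal(size=(10_000, 8))
    cb, idx = lbg(X, 64)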

    An adaptive vector quantization scheme

    Vector quantization is known to be an effective compression scheme for achieving a low bit rate, minimizing communication channel bandwidth and reducing digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap to using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced whose simple architecture makes it inherently suitable for hardware implementation. It allows fast encoding and decoding because it requires only addition and subtraction operations.
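
    The abstract does not spell out the encoder architecture. As one illustration of how a vector quantizer can be limited to addition and subtraction, the sketch below performs the nearest-codeword search with the sum-of-absolute-differences (L1) distortion, which avoids multiplications; the metric choice and function names are assumptions for illustration, not the paper's scheme.

    import numpy as np

    def encode_l1(vectors, codebook):
        """Nearest-codeword search using only add/subtract operations (L1 metric)."""
        indices = np.empty(len(vectors), dtype=np.int64)
        for i, v in enumerate(vectors):
            # |v - c| summed per codeword: subtraction, sign removal, addition.
            sad = np.abs(codebook - v).sum(axis=1)
            indices[i] = sad.argmin()
        return indices

    def decode(indices, codebook):
        """Decoding reduces to a table lookup of the selected codewords."""
        return codebook[indices]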

    Automatic facial recognition based on facial feature analysis


    Dimension reduction of image and audio space

    Reducing the data necessary for storage or transmission is a desirable goal in the digital video and audio domain. Compression schemes strive to reduce the amount of storage space or bandwidth needed to keep or move the data. Data reduction can be accomplished by removing or recoding visually or audibly unnecessary data, thus aiding the compression phase of the data processing. The characterization and identification of data that can be successfully removed or reduced is the purpose of this work. New philosophy, theory, and methods for data processing are presented towards the goal of data reduction. The philosophy and theory developed in this work establish a foundation for high-speed data reduction suitable for multimedia applications. The developed methods encompass motion detection and edge detection as features of the systems. The philosophy of energy-flow analysis in video processing enables the consideration of noise in digital video data. Research into noise versus motion leads to an efficient and successful method of identifying motion in a sequence. Analysis of the underlying statistical properties of vector quantization provides insight into its performance characteristics and leads to successful improvements in application: three theorems are developed and proved, establishing the statistical distributions and probability densities of various metrics of the vector quantization process. From these properties, an intelligent and efficient algorithm design is developed and tested. The performance improvements in both time and quality are established through algorithm analysis and empirical testing, and the empirical results are presented.
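
    The thesis's energy-flow analysis is not described in enough detail here to reproduce. As a minimal illustration of separating sensor noise from genuine motion, the sketch below flags pixels whose inter-frame difference exceeds an assumed noise threshold; the threshold value and function name are hypothetical.

    import numpy as np

    def motion_mask(prev_frame, curr_frame, noise_threshold=12):
        """Flag pixels whose inter-frame change exceeds an assumed noise level."""
        # Cast to a signed type so the subtraction of uint8 frames cannot wrap.
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff > noise_threshold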

    New Method to Reduce the Size of Codebook in Vector Quantization of Images

    Abstract The vector quantization method for image compression inherently requires the generation of a codebook, which has to be available to both the encoding and decoding processes. This necessitates attaching the codebook whenever a compressed image is stored or transmitted. To improve the overall efficiency of the vector quantization method, a means of reducing the codebook size is needed. In this paper, a new method is presented in which the suggested algorithm reduces the size of the codebook generated in vector quantization. The reduction is performed by sorting the codewords of the codebook and computing the differences between adjacent codewords. Huffman coding (lossless compression) is then applied to these differences to reduce the size of the codebook.
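
    The described pipeline (sort the codewords, difference neighbouring ones, entropy-code the residuals) can be sketched as follows. The sorting key, the Huffman implementation, and the function names are illustrative assumptions; the paper's exact procedure may differ.

    import heapq
    from collections import Counter
    import numpy as np

    def huffman_code(symbols):
        """Build a Huffman code (symbol -> bitstring) from an iterable of symbols."""
        freq = Counter(symbols)
        if len(freq) == 1:                  # degenerate single-symbol alphabet
            return {next(iter(freq)): "0"}
        heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            w1, _, c1 = heapq.heappop(heap)
            w2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, [w1 + w2, tiebreak, merged])
            tiebreak += 1
        return heap[0][2]

    def compress_codebook(codebook):
        """Sort codewords, difference adjacent ones, Huffman-code the differences."""
        # Sort codewords (lexicographically here) so neighbours are similar and
        # the adjacent differences cluster around zero.
        order = np.lexsort(codebook.T[::-1])
        sorted_cb = codebook[order]
        diffs = np.diff(sorted_cb, axis=0)          # small-valued residuals
        table = huffman_code(diffs.ravel().tolist())
        bits = "".join(table[s] for s in diffs.ravel().tolist())
        # The first codeword, the code table and the bitstream are enough to
        # rebuild the sorted codebook at the decoder.
        return sorted_cb[0], table, bits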

    Low bit rate speech transmission: classified vector excitation coding

    Vector excitation coding (VXC) is a speech digitisation technique growing in popularity. Problems associated with VXC systems are high computational complexity and poor reconstruction of plosives. The Pairwise Nearest Neighbour (PNN) clustering algorithm is proposed as an efficient method of codebook design. It is demonstrated to preserve plosives better than the Linde-Buzo-Gray (LBG) algorithm [34] and to maintain similar quality to LBG for other speech. Classification of the residual is then studied. This reduces codebook search complexity and enables a shortcut in the computation of the PNN algorithm to be exploited.
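
    As context, a minimal sketch of the Pairwise Nearest Neighbour merge rule in its naive form is given below: every training vector starts as its own cluster, and the cheapest merge is applied repeatedly until the target codebook size is reached. The computational shortcut exploited in the paper is not reproduced, and all names are illustrative.

    import numpy as np

    def pnn_codebook(training_vectors, codebook_size):
        """Naive O(N^3) Pairwise Nearest Neighbour codebook design."""
        centroids = [v.astype(float) for v in training_vectors]
        counts = [1] * len(centroids)
        while len(centroids) > codebook_size:
            best = None
            for i in range(len(centroids)):
                for j in range(i + 1, len(centroids)):
                    # Increase in total distortion caused by merging clusters i and j.
                    w = counts[i] * counts[j] / (counts[i] + counts[j])
                    cost = w * np.sum((centroids[i] - centroids[j]) ** 2)
                    if best is None or cost < best[0]:
                        best = (cost, i, j)
            _, i, j = best
            n = counts[i] + counts[j]
            centroids[i] = (counts[i] * centroids[i] + counts[j] * centroids[j]) / n
            counts[i] = n
            del centroids[j], counts[j]
        return np.array(centroids)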

    Image compression using vector quantization and lossless index coding


    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth, which explains the many efforts currently being invested in the area of data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, image, and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest number of bits without losing the essential information content within it. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part that carries detail beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part that is neither redundant nor irrelevant. Decompressed images are usually observed by humans, so their fidelity is subject to the capabilities and limitations of the human visual system. This paper provides a survey of various image compression techniques, their limitations, and compression rates, and highlights current research in medical image compression.