2,579 research outputs found

    An improvement on codebook search for vector quantization

    Presents a simple but effective algorithm to speed up the codebook search in a vector quantization scheme when an MSE criterion is used. A considerable reduction in the number of operations is achieved. The algorithm was originally designed for image vector quantization, in which the samples of the image signal (pixels) are positive, although it can be used with any positive-negative signal with only minor modifications.
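    The abstract does not spell out which speedup is used; as one illustration of how a sequential MSE codebook search can be accelerated, here is a minimal sketch (not from the paper) of the classic partial-distortion elimination technique, which abandons a candidate codeword as soon as its partial squared-error sum exceeds the best distance found so far:

```python
def nearest_codeword(x, codebook):
    """Index of the codeword with minimum squared error to x, using
    partial-distortion elimination to skip hopeless candidates early."""
    best_idx, best_dist = 0, float("inf")
    for i, c in enumerate(codebook):
        dist = 0.0
        for xj, cj in zip(x, c):
            dist += (xj - cj) ** 2
            if dist >= best_dist:   # partial sum already worse: abandon
                break
        else:                       # loop completed: new best candidate
            best_idx, best_dist = i, dist
    return best_idx
```

    The full squared error is computed only for codewords that remain competitive, which is where the reduction in operations comes from.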

    Image coding using entropy-constrained reflected residual vector quantization

    Residual vector quantization (RVQ) is a structurally constrained vector quantization (VQ) paradigm. RVQ employs a multipath search and therefore has a higher encoding cost than sequential single-path search. Reflected residual vector quantization (Ref-RVQ), a design that imposes additional symmetry on the codebook, was developed later to achieve a jointly optimized RVQ structure with single-path search. The constrained Ref-RVQ codebook exhibits an increase in distortion. However, it was conjectured that the Ref-RVQ codebook has a lower output entropy than the multipath RVQ codebook. Therefore, the Ref-RVQ design was generalized to include noiseless entropy coding, and we apply it to image coding. The method is referred to as entropy-constrained Ref-RVQ (EC-Ref-RVQ). Since the RVQ scheme can implement very large dimensional vector quantization designs such as 16×16 and 32×32 VQs, it is highly successful in extracting linear and non-linear correlation among image pixels. We implement these large dimensional vectors with the EC-Ref-RVQ scheme to realize a computationally less demanding image-RVQ design. Simulation results demonstrate that EC-Ref-RVQ, while maintaining single-path search, provides a 1 dB improvement in PSNR for image data over the multipath EC-RVQ.

    Design and analysis of entropy-constrained reflected residual vector quantization

    Residual vector quantization (RVQ) is a vector quantization (VQ) paradigm which imposes structural constraints on the encoder in order to reduce the encoding search burden and memory storage requirements of an unconstrained VQ. Jointly optimized RVQ (JORVQ) is an effective design algorithm for minimizing the overall quantization error. Reflected residual vector quantization (RRVQ) is an alternative design algorithm for the RVQ structure with a smaller computational burden; it works by imposing an additional symmetry constraint on the RVQ codebook design. The savings in computation were accompanied by an increase in distortion. However, an RRVQ codebook, being structured in nature, is expected to provide lower output entropy. Therefore, we generalize RRVQ to include noiseless entropy coding; the method is referred to as entropy-constrained RRVQ (EC-RRVQ). Simulation results show that EC-RRVQ outperforms RRVQ by 4 dB for memoryless Gaussian and Laplacian sources. In addition, for the same synthetic sources, EC-RRVQ provides an improvement over other entropy-constrained designs, such as entropy-constrained JORVQ (EC-JORVQ). The design performs equally well on image data. In comparison with EC-JORVQ, EC-RRVQ is simpler and delivers better performance.
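    To make the RVQ structure described above concrete, here is a minimal sketch (an illustration, not the paper's implementation) of sequential single-path RVQ encoding and decoding: each stage quantizes the residual left by the previous stage, and reconstruction sums the selected codewords.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Sequential single-path RVQ encoding: at each stage, pick the
    codeword nearest (in squared error) to the current residual."""
    residual = np.asarray(x, dtype=float)
    indices = []
    for cb in codebooks:
        cb = np.asarray(cb, dtype=float)
        i = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        indices.append(i)
        residual = residual - cb[i]   # pass what is left to the next stage
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected stage codewords."""
    return sum(np.asarray(cb, dtype=float)[i]
               for i, cb in zip(indices, codebooks))
```

    The multipath (M-path) variant would instead keep several candidate index sequences alive at each stage, which is the extra encoding cost the single-path designs avoid.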


    Application of an Annular/Sphere Search Algorithm for Speaker Recognition

    In this work, an alternative search algorithm for the vector quantization codebook is applied as a way to improve the performance of an automatic speaker recognition system. The search algorithm is based on geometrical properties of the vector space, defining annular and spherical regions instead of using a full search method. The speaker recognition system is intended to identify a suspect among a small group of persons using low-quality recordings, working as a text-independent automatic speaker recognition system. Because the recognition rate required in forensic applications is extremely important, the use of good discrimination algorithms can reduce the risk of bad decisions. The performance of the system under such conditions is reported. Despite the few speaker samples available for training, a high recognition rate was obtained, and an improvement in recognition rate over the full search method was found.
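    The abstract does not give the paper's exact annular/spherical construction; the following sketch (an assumption on my part, not the authors' algorithm) shows the underlying geometric idea using the norm bound |‖x‖ − ‖c‖| ≤ ‖x − c‖, which lets the search skip every codeword whose norm falls outside an annulus around the input's norm:

```python
import numpy as np

def annular_search(x, codebook):
    """Nearest-codeword search pruned with | ||x|| - ||c|| | <= ||x - c||:
    codewords whose norm lies outside the annulus of radius sqrt(best)
    around ||x|| cannot beat the best candidate found so far."""
    x = np.asarray(x, dtype=float)
    cb = np.asarray(codebook, dtype=float)
    norms = np.linalg.norm(cb, axis=1)
    xnorm = np.linalg.norm(x)
    order = np.argsort(np.abs(norms - xnorm))   # most promising first
    best_idx = int(order[0])
    best = float(np.sum((cb[best_idx] - x) ** 2))
    for i in order[1:]:
        if (norms[i] - xnorm) ** 2 >= best:     # outside the annulus:
            break                               # all later candidates too
        d = float(np.sum((cb[i] - x) ** 2))
        if d < best:
            best_idx, best = int(i), d
    return best_idx
```

    Because candidates are visited in order of norm proximity, the first candidate falling outside the annulus terminates the search, avoiding a full scan.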

    A mean-removed variation of weighted universal vector quantization for image coding

    Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full search vector quantization followed by entropy coding, at the cost of increased complexity. In the proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook. The chosen codebook's codewords are then used to encode the resulting residuals. Application of the mean-removed system to the medical data set achieves up to a 0.5 dB improvement at no rate expense.
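    The mean-removed idea can be sketched as follows (a hypothetical minimal version treating each codebook's 'prediction' as a vector mean; function names and the exhaustive codebook selection are illustrative, not from the paper):

```python
import numpy as np

def mean_removed_encode(x, codebooks, means):
    """Pick the (codebook, codeword) pair minimizing squared error,
    after subtracting each codebook's 'prediction' mean from x."""
    x = np.asarray(x, dtype=float)
    best = None
    for b, (cb, m) in enumerate(zip(codebooks, means)):
        resid = x - np.asarray(m, dtype=float)   # mean removal
        cb = np.asarray(cb, dtype=float)
        i = int(np.argmin(((cb - resid) ** 2).sum(axis=1)))
        d = float(np.sum((cb[i] - resid) ** 2))
        if best is None or d < best[0]:
            best = (d, b, i)
    _, b, i = best
    return b, i

def mean_removed_decode(b, i, codebooks, means):
    """Reconstruction adds the codebook's mean back to its codeword."""
    return np.asarray(codebooks[b][i], dtype=float) + np.asarray(means[b], dtype=float)
```

    Subtracting the per-codebook mean lets small residual codebooks cover supervectors with very different DC levels, which is where the rate-free gain comes from.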

    Generalized residual vector quantization for large scale data

    Vector quantization is an essential tool for tasks involving large scale data, for example, large scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method named residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large scale benchmark datasets for large scale search, classification and object retrieval, and compare GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency. Comment: published at the International Conference on Multimedia and Expo 201
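    As a rough illustration of what "iteratively minimizes quantization error" can mean on top of an RVQ encoding (the details of GRVQ are in the paper; this sketch only shows one plausible refinement loop I am assuming for illustration), each stage's index can be revisited and re-chosen given the codewords currently selected at all other stages:

```python
import numpy as np

def grvq_style_refine(x, codebooks, indices, n_iter=3):
    """Iteratively re-pick each stage's codeword against the residual
    left by all OTHER stages, reducing total quantization error beyond
    the greedy sequential RVQ assignment."""
    x = np.asarray(x, dtype=float)
    cbs = [np.asarray(cb, dtype=float) for cb in codebooks]
    indices = list(indices)
    for _ in range(n_iter):
        for s, cb in enumerate(cbs):
            # reconstruction contributed by every stage except s
            others = sum(cbs[t][indices[t]]
                         for t in range(len(cbs)) if t != s)
            resid = x - others
            indices[s] = int(np.argmin(((cb - resid) ** 2).sum(axis=1)))
    return indices
```

    Each re-assignment can only keep the squared error the same or decrease it, so the loop converges to a locally optimal set of stage indices.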

    An iterative joint codebook and classifier improvement algorithm for finite-state vector quantization

    A finite-state vector quantizer (FSVQ) is a multi-codebook system in which the current state (or codebook) is chosen as a function of the previously quantized vectors. The authors introduce a novel iterative algorithm for joint codebook and next-state function design of full search finite-state vector quantizers. They consider the fixed-rate case, for which no optimal design strategy is known. A locally optimal set of codebooks is designed for the training data, and then the predecessors of the training vectors associated with each codebook are appropriately labelled and used in designing the classifier. The algorithm iterates between next-state function and state codebook design until it arrives at a suitable solution. The proposed design consistently yields better performance than the traditional FSVQ design method (under identical state space and codebook constraints).
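    A minimal sketch of the FSVQ encoding loop described above (the next-state function here is a caller-supplied stand-in; in the paper it is designed jointly with the codebooks rather than fixed in advance):

```python
import numpy as np

def fsvq_encode(vectors, state_codebooks, next_state, s0=0):
    """FSVQ encoding: the codebook used for each input vector is selected
    by the current state; the next state is a function of the current
    state and the index just chosen, so the decoder can track it too."""
    state, out = s0, []
    for x in vectors:
        cb = np.asarray(state_codebooks[state], dtype=float)
        i = int(np.argmin(((cb - np.asarray(x, dtype=float)) ** 2).sum(axis=1)))
        out.append((state, i))
        state = next_state(state, i)   # deterministic, so no side info needed
    return out
```

    Because the state sequence is a deterministic function of the transmitted indices, only the indices need to be sent; the decoder replays the same state transitions to pick the matching codebooks.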