26,255 research outputs found

    Fast K-dimensional tree-structured vector quantization encoding method for image compression

    This paper presents a fast K-dimensional tree-based search method to speed up the encoding process for vector quantization. The method is designed especially for very large codebooks and is based on a local search rather than a global search over the whole feature space. The relations between the proposed method and several existing fast algorithms are discussed. Simulation results demonstrate that, with little preprocessing and memory cost, the new algorithm reduces encoding time significantly while matching the encoding quality of other existing fast algorithms.
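    The core idea, nearest-codeword lookup through a k-d tree instead of an exhaustive scan, can be sketched as follows. This is not the paper's specific algorithm; it is a minimal stand-in using SciPy's `cKDTree`, with a hypothetical random codebook.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
codebook = rng.standard_normal((1024, 16))  # 1024 codewords over 16-dim image blocks
blocks = rng.standard_normal((200, 16))     # image blocks to encode

tree = cKDTree(codebook)          # built once per codebook
_, indices = tree.query(blocks)   # nearest-codeword index for each block
```

The tree is built once per codebook, so the per-block query cost is what matters; for large codebooks the tree query visits far fewer codewords than the brute-force scan while returning the same indices.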

    An Efficient Coding Method for Teleconferencing Video and Confocal Microscopic Image Sequences

    In this paper we propose a three-dimensional vector quantization based video coding scheme. The algorithm uses a 3D pyramidal-codebook model with an adaptive codebook for compression. The pyramidal codebook helps achieve high compression for scenes with modest motion, while the adaptive vector quantization algorithm trains the codebook for optimal performance over time. Distinguishing features of our algorithm are its excellent performance, due to its adaptation to the video composition, and its excellent compression, due to the codebook approach. We also propose an efficient codebook-based post-processing technique which enables the vector quantizer to better preserve correlation. Based on the special codebook pattern imposed by the post-processing technique, a window-based fast search (WBFS) algorithm is proposed. The WBFS algorithm not only accelerates the vector quantization process but also yields better rate-distortion performance. The proposed approach can be used both for teleconferencing videos and to compress images obtained from confocal laser scanning microscopy (CLSM). The results show that the proposed method gives higher subjective and objective quality of reconstructed images at a better compression ratio, and produces more acceptable results when image processing filters such as edge detection are applied to reconstructed images. The experimental results demonstrate that the proposed method outperforms the H.261 teleconferencing compression standard and LBG-based vector quantization.
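    The basic codebook operation all of this builds on, mapping each block to its nearest codeword and reconstructing from the index, can be sketched as below. This is plain VQ on a hypothetical random codebook, not the paper's pyramidal or adaptive machinery.

```python
import numpy as np

def vq_encode(blocks, codebook):
    # index of the nearest codeword (squared Euclidean distance) per block
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    # reconstruction is a simple table lookup
    return codebook[indices]

rng = np.random.default_rng(1)
codebook = rng.standard_normal((64, 12))  # e.g. 64 codewords for 2x2x3 video blocks
blocks = rng.standard_normal((50, 12))
idx = vq_encode(blocks, codebook)
recon = vq_decode(idx, codebook)
```

The compression comes from transmitting only `idx` (here 6 bits per 12-sample block); a 3D scheme like the paper's applies the same lookup to spatio-temporal blocks.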

    An Efficient Index for Visual Search in Appearance-based SLAM

    Vector quantization can be a computationally expensive step in visual bag-of-words (BoW) search when the vocabulary is large. A BoW-based appearance SLAM system needs to tackle this problem for efficient real-time operation. We propose an effective method to speed up the vector quantization process in BoW-based visual SLAM. To this aim we employ a graph-based nearest neighbor search (GNNS) algorithm, and experimentally show that it can outperform the state of the art. The graph-based search structure used in GNNS can be integrated efficiently into the BoW model and the SLAM framework. The graph-based index, a k-NN graph, is built over the vocabulary words and can be extracted from the BoW vocabulary construction procedure by adding one iteration to the k-means clustering, at small extra cost. Moreover, exploiting the fact that images acquired for appearance-based SLAM are sequential, the GNNS search can be initialized judiciously, which considerably increases the speedup of the quantization process.
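    A toy sketch of the graph idea: build a k-NN graph over the vocabulary, then answer a query by greedy descent, repeatedly hopping to whichever neighbor is closest to the query. This is a simplification of GNNS (the full algorithm uses multiple restarts and returns the best point visited); all names and the random vocabulary are mine.

```python
import numpy as np

def build_knn_graph(points, k):
    # k-NN graph over the vocabulary: the k nearest neighbors of each node
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def gnns_query(query, points, graph, start):
    # greedy descent: hop to the neighbor closest to the query until no
    # neighbor improves; the result is an approximate nearest neighbor
    cur = start
    cur_d = ((points[cur] - query) ** 2).sum()
    while True:
        neigh = graph[cur]
        d = ((points[neigh] - query) ** 2).sum(axis=1)
        j = d.argmin()
        if d[j] >= cur_d:
            return cur
        cur, cur_d = neigh[j], d[j]

rng = np.random.default_rng(2)
vocab = rng.standard_normal((300, 8))
graph = build_knn_graph(vocab, k=10)
q = rng.standard_normal(8)
word = gnns_query(q, vocab, graph, start=0)
```

The sequential-image trick in the abstract corresponds to choosing `start` as the word the same feature quantized to in the previous frame, so the descent usually terminates after very few hops.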

    Scalable Image Retrieval by Sparse Product Quantization

    Fast approximate nearest neighbor (ANN) search for high-dimensional feature indexing and retrieval is the crux of large-scale image retrieval. A recent promising technique is product quantization, which indexes high-dimensional image features by decomposing the feature space into a Cartesian product of low-dimensional subspaces and quantizing each of them separately. Despite the promising results reported, this quantization approach follows the typical hard assignment of traditional quantization methods, which may result in large quantization errors and thus inferior search performance. Unlike existing approaches, in this paper we propose a novel approach called Sparse Product Quantization (SPQ) that encodes high-dimensional feature vectors as sparse representations. We optimize the sparse representations of the feature vectors by minimizing their quantization errors, so that the resulting representation stays essentially close to the original data in practice. Experiments show that the proposed SPQ technique not only compresses data but is also an effective encoding technique. We obtain state-of-the-art results for ANN search on four public image datasets, and the promising results of content-based image retrieval further validate the efficacy of our proposed method.
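    The baseline that SPQ improves on, standard hard-assignment product quantization, can be sketched as follows: split each vector into m sub-vectors, learn a small codebook per subspace with k-means, and store one centroid index per subspace. This illustrates plain PQ, not the sparse variant proposed in the paper; the tiny k-means and all names are mine.

```python
import numpy as np

def pq_train(data, m, ksub, iters=15, seed=0):
    # split the D dims into m subspaces and run a small k-means in each
    rng = np.random.default_rng(seed)
    codebooks = []
    for sub in np.split(data, m, axis=1):
        cb = sub[rng.choice(len(sub), size=ksub, replace=False)].copy()
        for _ in range(iters):
            idx = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
            for j in range(ksub):
                members = sub[idx == j]
                if len(members):
                    cb[j] = members.mean(axis=0)
        codebooks.append(cb)
    return codebooks

def pq_encode(x, codebooks):
    # hard assignment: one centroid index per subspace,
    # i.e. m * log2(ksub) bits per vector
    m = len(codebooks)
    codes = [((sub[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
             for sub, cb in zip(np.split(x, m, axis=1), codebooks)]
    return np.stack(codes, axis=1)

rng = np.random.default_rng(3)
data = rng.standard_normal((500, 8))
cbs = pq_train(data, m=2, ksub=16)
codes = pq_encode(data, cbs)
```

The hard `argmin` in `pq_encode` is exactly the assignment step the abstract criticizes: each sub-vector snaps to a single centroid, and SPQ replaces this with an optimized sparse combination to reduce the quantization error.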

    A high-speed codebook design algorithm for ECVQ using angular constraint with search space partitioning

    Information Systems, Graduate School of Natural Science and Technology, Kanazawa University; Faculty of Engineering, Kanazawa University. In this paper, we propose a fast codebook generation algorithm for entropy-constrained vector quantization (ECVQ). The algorithm uses an angular constraint and employs a suitable hyperplane to partition the codebook and image data, in order to reduce the search area and accelerate the search process in codebook design. This allows significant acceleration of the codebook design process. Experimental results are presented on image block data. These results show that our new algorithm performs better than previously known methods.
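    For context, the selection rule that ECVQ design repeatedly evaluates (and that pruning schemes like the one above accelerate) picks the codeword minimizing distortion plus a Lagrangian rate term. The sketch below shows only that standard rule, not the paper's angular-constraint or hyperplane partitioning; the random codebook and code lengths are hypothetical.

```python
import numpy as np

def ecvq_encode(blocks, codebook, lengths, lam):
    # choose the codeword minimizing distortion + lambda * code length,
    # where lengths[i] ~ -log2 p(i) is the entropy-coded length of index i
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return (d2 + lam * lengths[None, :]).argmin(axis=1)

rng = np.random.default_rng(4)
codebook = rng.standard_normal((32, 4))
lengths = rng.uniform(2.0, 8.0, size=32)  # hypothetical code lengths in bits
blocks = rng.standard_normal((40, 4))
idx = ecvq_encode(blocks, codebook, lengths, lam=0.1)
```

At `lam=0` this degenerates to plain nearest-codeword VQ; larger `lam` biases the encoder toward cheap (short-code) codewords, trading distortion for rate.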

    Hashing for Similarity Search: A Survey

    Similarity search (nearest neighbor search) is the problem of finding, in a large database, the data items whose distances to a query item are smallest. Various methods have been developed to address this problem, and recently a lot of effort has been devoted to approximate search. In this paper, we present a survey of one of the main solutions, hashing, which has been widely studied since the pioneering work on locality-sensitive hashing. We divide hashing algorithms into two main categories: locality-sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measures, and search schemes in the hash coding space.
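    A minimal example of the first category: random-hyperplane LSH for cosine similarity, where each bit records which side of a random hyperplane the vector falls on, so nearby vectors tend to share many bits. This is one classic member of the family the survey covers; the function name and sizes are mine.

```python
import numpy as np

def lsh_code(x, planes):
    # one bit per random hyperplane: which side of the plane x falls on
    return (x @ planes.T > 0).astype(np.uint8)

rng = np.random.default_rng(5)
planes = rng.standard_normal((16, 32))  # 16-bit codes for 32-dim vectors
v = rng.standard_normal(32)
code = lsh_code(v[None, :], planes)[0]
```

Because each bit depends only on the sign of a dot product, the code is invariant to positive scaling of `v`, and the expected Hamming distance between two codes grows with the angle between the vectors, which is what makes bucketing on these codes a valid approximate search scheme.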