
    State of the art in 2D content representation and compression

    Deliverable D1.3 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D3.1 of the project.

    DESIGN OF NEURO-WAVELET BASED VECTOR QUANTIZER FOR IMAGE COMPRESSION

    This paper presents a novel approach to designing a vector quantizer for image compression. Vector quantization (VQ) compresses image data by comparing each input vector against a previously designed codebook; the output for each vector is the index of the codeword with minimum distortion. A wavelet decomposition of the signal is used to exploit inter- and intra-band correlation, permitting a more flexible partitioning of higher-dimensional vector spaces, so that the image is compressed with little loss of information. The paper also provides a comparative study of various vector quantization methods in terms of simplicity, storage space, robustness, and transfer time, together with a survey of vector quantization methods for image compression and of the application of the self-organizing feature map (SOFM).
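
    To make the codebook search concrete, the following is a minimal sketch of the encode/decode steps described above; it assumes the codebook has already been trained elsewhere (e.g. by LBG or an SOFM), and all names, shapes, and sizes are illustrative rather than taken from the paper.

        import numpy as np

        def vq_encode(blocks, codebook):
            """Map each image block (a row vector) to the index of the nearest
            codeword, i.e. the codeword with minimum squared-error distortion."""
            # Pairwise squared Euclidean distances: |b|^2 - 2 b.c + |c|^2
            d2 = (np.sum(blocks**2, axis=1, keepdims=True)
                  - 2.0 * blocks @ codebook.T
                  + np.sum(codebook**2, axis=1))
            return np.argmin(d2, axis=1)

        def vq_decode(indices, codebook):
            """Reconstruct blocks by simple table lookup of the received indices."""
            return codebook[indices]

        # Toy usage: 1000 random 4x4 blocks, 64-codeword codebook.
        rng = np.random.default_rng(0)
        blocks = rng.standard_normal((1000, 16))
        codebook = rng.standard_normal((64, 16))  # stand-in for a trained codebook
        indices = vq_encode(blocks, codebook)
        reconstruction = vq_decode(indices, codebook)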

    A HIGH SPEED VLSI ARCHITECTURE FOR DIGITAL SPEECH WATERMARKING WITH COMPRESSION

    The need to provide copyright protection for multimedia data such as speech, images, and video through digital watermarking is growing rapidly as applications in these areas expand, and digital watermarking has received considerable attention in recent years. A hardware system based solely on DSP processors is fast but may require more area, cost, or power if the target application demands a large amount of parallel processing. An FPGA co-processor can provide as many as 550 parallel multiply-and-accumulate operations on a single device; FPGAs excel at processing large amounts of data in parallel, but they are not optimized for tasks such as periodic coefficient updates or decision-making control. Combining an FPGA with a DSP processor therefore delivers an attractive solution for a wide range of applications. This paper presents a hardware implementation of digital speech watermarking combined with speech compression and encryption on such a heterogeneous platform. The proposed architecture is observed to attain high speed while using an economical amount of area.
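
    The abstract leaves the embedding scheme unspecified, so the sketch below uses one common choice, additive spread-spectrum watermarking of speech frames, purely as a software illustration of the frame-parallel arithmetic such a platform would accelerate; the key, frame length, and embedding strength are assumptions, not the paper's values.

        import numpy as np

        def embed_ss_watermark(speech, bits, key=42, frame_len=256, alpha=0.005):
            """Illustrative additive spread-spectrum embedding: each payload bit
            modulates a key-derived pseudo-noise sequence added to one frame."""
            assert len(speech) >= len(bits) * frame_len, "signal too short for payload"
            rng = np.random.default_rng(key)
            out = np.asarray(speech, dtype=np.float64).copy()
            for i, bit in enumerate(bits):
                pn = rng.choice([-1.0, 1.0], size=frame_len)  # pseudo-noise chips
                start = i * frame_len
                out[start:start + frame_len] += alpha * (1.0 if bit else -1.0) * pn
            return out

        def detect_ss_watermark(signal, n_bits, key=42, frame_len=256):
            """Correlate each frame with the same PN sequence; the sign of the
            correlation recovers the bit (real detectors whiten the host first)."""
            rng = np.random.default_rng(key)
            bits = []
            for i in range(n_bits):
                pn = rng.choice([-1.0, 1.0], size=frame_len)
                frame = signal[i * frame_len:(i + 1) * frame_len]
                bits.append(int(frame @ pn > 0))
            return bits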

    Image Watermarking in Higher-Order Gradient Domain


    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, have no bottleneck or single point of failure, and are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus to the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
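
    As an illustration of the averaging primitive that underlies these algorithms, here is a minimal sketch of pairwise randomized gossip on a fully connected network; the topology, round count, and sensor values are assumptions chosen for clarity, not taken from the article.

        import numpy as np

        def randomized_gossip(values, n_rounds=5000, seed=0):
            """Pairwise randomized gossip for distributed averaging: each round,
            two random nodes exchange values and both keep the pair average.
            Every node's value converges to the global mean."""
            rng = np.random.default_rng(seed)
            x = np.asarray(values, dtype=float).copy()
            for _ in range(n_rounds):
                # Complete graph for simplicity; real networks pick a neighbor.
                i, j = rng.choice(len(x), size=2, replace=False)
                x[i] = x[j] = 0.5 * (x[i] + x[j])
            return x

        # Toy network: 50 sensor readings; after gossiping, every node holds
        # (numerically) the true network-wide average.
        readings = np.random.default_rng(1).uniform(20.0, 30.0, size=50)
        final = randomized_gossip(readings)
        assert np.allclose(final, readings.mean(), atol=1e-6)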

    Rate-Distortion Optimized Vector SPIHT for Wavelet Image Coding

    In this paper, a novel image coding scheme using rate-distortion optimized vector quantization of wavelet coefficients is presented. A vector set partitioning algorithm is used to locate significant wavelet vectors, which are classified into a number of classes based on their energies, reducing the complexity of the vector quantization. The set partitioning bits are reused to indicate the vector classification indices, saving the bits otherwise needed to code the classification overhead. A set of codebooks of different sizes is designed for each class of vectors, and a Lagrangian optimization algorithm is employed to select an optimal codebook for each vector. The proposed coding scheme can thus trade off the number of bits used to code each vector against the corresponding distortion. Experimental results show that the proposed method outperforms other zerotree-structured embedded wavelet coding schemes such as SPIHT and SFQ, and is competitive with JPEG2000.
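
    The Lagrangian selection step can be read as follows: each candidate codebook is scored for a given vector by J = D + λR, where D is the distortion of its best codeword and R the bits needed to index into that codebook, and the cheapest codebook wins. The sketch below is a minimal rendering of that idea, with illustrative codebook sizes and an arbitrary λ, not the paper's configuration.

        import numpy as np

        def best_codebook_for_vector(v, codebooks, lam):
            """Pick, for one wavelet vector v, the codebook minimizing the
            Lagrangian cost J = D + lam * R, where D is the squared error of the
            best codeword and R = log2(codebook size) is the index rate in bits."""
            best = None
            for cb_id, cb in enumerate(codebooks):
                d2 = np.sum((cb - v) ** 2, axis=1)  # distortion of every codeword
                k = int(np.argmin(d2))
                rate = np.log2(len(cb))             # bits to address this codebook
                cost = d2[k] + lam * rate
                if best is None or cost < best[0]:
                    best = (cost, cb_id, k)
            return best  # (Lagrangian cost, codebook index, codeword index)

        # Toy usage: three codebooks of sizes 16/64/256 for a length-4 vector.
        rng = np.random.default_rng(0)
        codebooks = [rng.standard_normal((n, 4)) for n in (16, 64, 256)]
        cost, cb_id, k = best_codebook_for_vector(rng.standard_normal(4),
                                                  codebooks, lam=0.1)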

    Image compression techniques using vector quantization


    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but it transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine": fractal image compression is effective to that extent, but not more so than comparable standard vector quantisation techniques.
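
    The affine block mapping at the core of fractal (PIFS) coding can be sketched as follows: for each range block, search a pool of downsampled domain blocks for the scale and offset that reproduce it best in the least-squares sense. This is the generic textbook formulation rather than the dissertation's own code; block sizes, the domain pool, and the contractivity bound on the scale are all assumptions.

        import numpy as np

        def encode_range_block(r, domains):
            """For one flattened range block r, find the domain block and affine
            map s*d + o that best approximate r in the least-squares sense."""
            best = None
            for d_id, d in enumerate(domains):  # domains: downsampled, flattened
                dc, rc = d - d.mean(), r - r.mean()
                denom = float(dc @ dc)
                # Least-squares fit of r ~ s*d + o; |s| < 1 keeps the map contractive.
                s = float(np.clip((dc @ rc) / denom, -0.9, 0.9)) if denom > 0 else 0.0
                o = float(r.mean() - s * d.mean())
                err = float(np.sum((s * d + o - r) ** 2))
                if best is None or err < best[0]:
                    best = (err, d_id, s, o)
            return best  # (error, domain index, scale, offset)

        # Decoding iterates the stored maps from an arbitrary start image; the
        # contractive scales guarantee convergence to an approximation of the
        # original, at whatever resolution the iteration is carried out.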