
    Perceptual Copyright Protection Using Multiresolution Wavelet-Based Watermarking And Fuzzy Logic

    In this paper, an efficient DWT-based watermarking technique is proposed to embed signatures in images, both to attest owner identification and to discourage unauthorized copying. The paper introduces a fuzzy inference filter that chooses the larger-entropy coefficients in which to embed watermarks. Unlike most previous watermarking frameworks, which embed watermarks in the larger coefficients of the inner coarser subbands, the proposed technique uses a context model and a fuzzy inference filter to embed watermarks in the larger-entropy coefficients of the coarser DWT subbands. The proposed approach allows the casting strength of the watermark to be adapted for transparency and for robustness to general image-processing attacks such as smoothing, sharpening, and JPEG compression. The approach does not need the original host image to extract the watermarks. The schemes are shown to provide very good results in both image transparency and robustness. Comment: 13 pages, 7 figures.
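
    The sketch below illustrates the kind of embedding step described above: compute a DWT, rank the coefficients of a coarse detail subband by local entropy, and multiplicatively cast watermark bits into the top-ranked positions. It assumes PyWavelets; the wavelet, window size, embedding strength, and the use of a plain entropy ranking in place of the paper's fuzzy inference filter are all illustrative simplifications, not the authors' exact method.

```python
# Entropy-guided DWT watermark embedding -- a minimal sketch, not the
# authors' algorithm. Assumes PyWavelets (pywt); wavelet, window size,
# and alpha are illustrative choices.
import numpy as np
import pywt

def local_entropy(band, win=3):
    """Shannon entropy of |coefficient| magnitudes in a sliding window."""
    pad = win // 2
    padded = np.pad(np.abs(band), pad, mode="reflect")
    out = np.zeros(band.shape)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            patch = padded[i:i + win, j:j + win].ravel()
            p = patch / (patch.sum() + 1e-12)
            out[i, j] = -(p * np.log2(p + 1e-12)).sum()
    return out

def embed(image, bits, alpha=0.05, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    cH, cV, cD = (c.copy() for c in coeffs[1])   # coarsest detail subbands
    ent = local_entropy(cH)
    order = np.column_stack(np.unravel_index(np.argsort(ent, axis=None)[::-1],
                                             cH.shape))[:len(bits)]
    for (r, c), b in zip(order, bits):           # multiplicative casting
        cH[r, c] *= (1 + alpha) if b else (1 - alpha)
    coeffs[1] = (cH, cV, cD)
    return pywt.waverec2(coeffs, wavelet)
```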

    Flat zones filtering, connected operators, and filters by reconstruction

    This correspondence deals with the notion of connected operators. Starting from the definition for operators acting on sets, it is shown how to extend it to operators acting on functions. Typically, a connected operator acting on a function is a transformation that enlarges the partition of the space created by the flat zones of the function. It is shown that from any connected operator acting on sets one can construct a connected operator for functions (although this is not the only way of generating connected operators for functions). Moreover, the concept of a pyramid is introduced in a formal way. It is shown that, if a pyramid is based on connected operators, the flat zones of the functions increase with the level of the pyramid; in other words, the flat zones are nested. Filters by reconstruction are defined and their main properties are presented. Finally, some examples of application of connected operators and use of flat zones are described.
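
    As a concrete example of a filter by reconstruction, the sketch below performs an opening by reconstruction, which removes small bright structures while leaving the contours (flat zones) of the surviving objects untouched. It assumes scikit-image; the test image and structuring-element size are arbitrary illustrative choices.

```python
# Opening by reconstruction -- a minimal sketch of a filter by
# reconstruction. Assumes scikit-image; image and footprint size are
# illustrative.
import numpy as np
from skimage import data, morphology

image = data.camera().astype(float)

# Marker: an erosion of the image; mask: the image itself.
footprint = morphology.disk(5)
marker = morphology.erosion(image, footprint)

# Reconstruction by dilation of the marker under the mask. Being a
# connected operator, it only merges flat zones and creates no new contours.
opened_by_rec = morphology.reconstruction(marker, image, method="dilation")
```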

    Vector quantization

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state of the art and current research efforts.
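
    A minimal numpy sketch of the standard design procedure alluded to above, the generalized Lloyd (LBG) algorithm, is given below; the codebook size, iteration count, and random initialization are illustrative choices rather than recommendations from the survey.

```python
# Generalized Lloyd (LBG) vector quantizer design -- a minimal sketch.
# Codebook size, iterations, and initialization are illustrative.
import numpy as np

def train_vq(vectors, codebook_size=64, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), codebook_size,
                                  replace=False)].astype(float)
    for _ in range(iters):
        # Nearest-neighbor partition of the training set.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Centroid (conditional mean) update for each non-empty cell.
        for k in range(codebook_size):
            cell = vectors[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)        # transmitted indices, log2(K) bits each
    return idx, codebook[idx]     # indices and reconstructed vectors
```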

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
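
    The sketch below conveys the two-stage structure in miniature: a small family of candidate codes is fixed in advance, and for each block the encoder transmits the index of the best code followed by the block encoded with that code. Here the "family of codes" is just a set of uniform quantizer step sizes standing in for the collections of vector quantizers, bit allocations, or transforms described above; the block size, step sizes, rate proxy, and Lagrange multiplier are illustrative assumptions, not the paper's design.

```python
# Two-stage (first-stage code selection + second-stage coding) -- a minimal
# sketch. The candidate codes, rate proxy, and constants are illustrative.
import numpy as np

STEPS = [2.0, 8.0, 32.0]           # the "family of codes" (first-stage choices)
LAMBDA = 0.1                       # rate-distortion trade-off

def code_block(block, step):
    q = np.round(block / step)
    recon = q * step
    distortion = ((block - recon) ** 2).mean()
    rate = np.abs(q).sum() + 1.0   # crude rate proxy: magnitudes + side info
    return recon, distortion, rate

def encode_two_stage(image, bs=8):
    out = np.empty(image.shape)
    choices = []                   # first-stage indices, one per block
    for i in range(0, image.shape[0], bs):
        for j in range(0, image.shape[1], bs):
            block = image[i:i + bs, j:j + bs].astype(float)
            # First stage: pick the code minimizing the Lagrangian cost.
            best = min((code_block(block, s) + (k,) for k, s in enumerate(STEPS)),
                       key=lambda t: t[1] + LAMBDA * t[2])
            out[i:i + bs, j:j + bs] = best[0]
            choices.append(best[3])
    return out, choices
```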

    Improving Embedded Image Coding Using Zero Block - Quad Tree

    The traditional multi-bitstream approach to the heterogeneity issue is very constrained and inefficient for multi-bit-rate applications. Multi-bitstream coding techniques allow partial decoding at various resolution and quality levels. Several scalable coding algorithms have been proposed in the international standards over the past decade, but these earlier methods can only accommodate relatively limited decoding properties. To achieve efficient image coding, a multiresolution compression technique is used. To exploit the multiresolution structure of the image, wavelet transforms are employed. The wavelet transform decomposes the image into its fundamental resolutions, but the transformed coefficients are non-integer values, resulting in a variable bit stream; this constrains bit-rate control and slows operation. To overcome this limitation, hierarchical tree-based coding is implemented, which exploits the relation between the wavelet scale levels and generates the code stream for transmission.
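
    The sketch below illustrates the zero-block quad-tree idea on a single wavelet subband: a block whose coefficients are all insignificant at the current threshold is signalled with a single bit, otherwise the block is split into four quadrants and tested recursively. The threshold and leaf handling are illustrative assumptions, not the paper's exact coder.

```python
# Zero-block quad-tree significance coding -- a minimal sketch of one
# significance pass over a wavelet subband. Threshold and leaf handling
# are illustrative.
import numpy as np

def zero_block_code(band, threshold, bits):
    """Append significance bits for `band` to `bits` in quad-tree order."""
    if np.abs(band).max() < threshold:
        bits.append(0)                       # entire block is a zero block
        return
    bits.append(1)
    if band.size == 1:                       # significant leaf coefficient
        bits.append(int(band.flat[0] < 0))   # sign bit
        return
    h, w = band.shape
    for quad in (band[:h // 2, :w // 2], band[:h // 2, w // 2:],
                 band[h // 2:, :w // 2], band[h // 2:, w // 2:]):
        if quad.size:
            zero_block_code(quad, threshold, bits)

# Example: one significance pass over a random 8x8 "subband".
band = np.random.default_rng(1).normal(scale=4.0, size=(8, 8))
bits = []
zero_block_code(band, threshold=8.0, bits=bits)
```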

    Finite state lattice vector quantization for wavelet-based image coding

    IEEE International Symposium on Circuits and Systems, Hong Kong, China, 9-12 June 1997. It is well known that there exists strong energy correlation between the various subbands of a real-world image. A new and powerful technique, Finite State Vector Quantization (FSVQ), has been introduced to fully exploit the self-similarity of the image across different scales in the wavelet domain. Lattices in R^N have considerable structure, and hence Lattice VQ (LVQ) offers the promise of design simplicity and reduced-complexity encoding. The combination of FSVQ and LVQ gives rise to the so-called FSLVQ, which proves successful in exploiting the energy correlation across scales while remaining simple to implement.
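
    The structure that makes lattice VQ simple is visible in the quantization step itself: finding the nearest lattice point needs no codebook search. The sketch below shows the classic round-and-fix rule for the D_n lattice (integer vectors with even coordinate sum); the input scaling is an illustrative assumption and the finite-state machinery of FSLVQ is omitted.

```python
# Nearest-point search in the D_n lattice -- a minimal sketch of the
# lattice quantization step used inside an LVQ/FSLVQ coder.
import numpy as np

def nearest_dn(x):
    """Nearest D_n lattice point to the real vector x."""
    f = np.round(x)                           # nearest integer vector
    if int(f.sum()) % 2 == 0:                 # even coordinate sum: in D_n
        return f
    # Otherwise flip the rounding of the coordinate with the largest error.
    k = np.argmax(np.abs(x - f))
    f[k] += 1.0 if x[k] > f[k] else -1.0
    return f

# Example: quantize a 4-D vector of wavelet coefficients.
v = np.array([1.7, -0.2, 3.4, 0.6])
print(nearest_dn(v))                          # [2., -0., 3., 1.]
```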

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between this form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic-signal-model-based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
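
    The block-affine search described above is sketched below: each range block is matched to a spatially contracted domain block under a brightness/contrast map s*D + o fitted by least squares, and the image is represented by the chosen domain index and (s, o) per block. The exhaustive search, the omission of block isometries, quantisation of (s, o), and any contractivity bound on s are illustrative simplifications.

```python
# Fractal (block-affine) encoding of one range block -- a minimal sketch.
# No isometries, parameter quantisation, or contractivity constraint.
import numpy as np

def contract(block):
    """Shrink a 2B x 2B domain block to B x B by 2x2 averaging."""
    return 0.25 * (block[0::2, 0::2] + block[1::2, 0::2]
                   + block[0::2, 1::2] + block[1::2, 1::2])

def encode_range_block(range_block, domains):
    """Return (best domain index, s, o) minimizing ||s*D + o - R||^2."""
    r = range_block.ravel()
    best = None
    for k, dom in enumerate(domains):
        d = contract(dom).ravel()
        A = np.column_stack([d, np.ones_like(d)])     # columns: D, 1
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = ((s * d + o - r) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, k, s, o)
    return best[1:]
```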