    Image Compression and Watermarking scheme using Scalar Quantization

    This paper presents a new compression technique and an image watermarking algorithm based on the Contourlet Transform (CT). For image compression, an energy-based quantization is used; scalar quantization is explored for image watermarking. The CT uses a double filter bank structure: a Laplacian Pyramid (LP) captures point discontinuities and is followed by a Directional Filter Bank (DFB) that links these point discontinuities. The coefficients of the downsampled low-pass image from the LP decomposition are re-ordered in a pre-determined manner, and a prediction algorithm is used to reduce the entropy (bits/pixel). In addition, the CT coefficients are quantized according to the energy of each band. The proposed algorithm is superior to JPEG in terms of reduced blocking artifacts. The results are also compared with the wavelet transform (WT); the CT outperforms the WT when the image contains more contours. The watermark image is embedded in the low-pass image of the contourlet decomposition and can be extracted with minimal error. In terms of PSNR, the visual quality of the watermarked image is exceptional. The proposed algorithm is robust to many image attacks and is suitable for copyright-protection applications.
    Comment: 11 pages, IJNGN Journal 201
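
    As a rough illustration of the energy-based quantization step, the sketch below scalar-quantizes a set of subband coefficient arrays with a step size scaled by each band's energy. It assumes the LP/DFB subbands have already been computed by some contourlet implementation; the energy-to-step mapping and the base_step value are hypothetical choices for illustration, not the rule used in the paper.

```python
import numpy as np

def energy_based_quantize(subbands, base_step=8.0):
    """Scalar-quantize each subband with a step scaled by its energy.

    `subbands` is a list of 2-D coefficient arrays (e.g. the bands of a
    contourlet or wavelet decomposition).  The energy-to-step mapping is
    an illustrative choice: higher-energy bands get a finer step.
    """
    energies = [float(np.mean(band ** 2)) for band in subbands]
    max_energy = max(energies) + 1e-12
    quantized = []
    for band, energy in zip(subbands, energies):
        step = base_step * np.sqrt(max_energy / (energy + 1e-12))
        indices = np.round(band / step).astype(np.int32)
        quantized.append((indices, step))
    return quantized

def dequantize(quantized):
    """Reconstruct coefficients from (indices, step) pairs."""
    return [indices.astype(np.float64) * step for indices, step in quantized]
```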

    Multiple Content Adaptive Intelligent Watermarking Schemes for the Protection of Blocks of a Document Image

    Most documents contain different types of information, such as white space, static information, dynamic information, or a mix of static and dynamic information. In this paper, multiple watermarking schemes are proposed for protecting the information content. The proposed approach comprises three phases. In Phase-1, the edges of the source document image are extracted and the edge image is decomposed into blocks of uniform size. In Phase-2, GLCM features such as energy, homogeneity, contrast and correlation are extracted from each block, and the blocks are classified as no-information, static, dynamic, or mixed static-and-dynamic content blocks; adjacent blocks of the same type are merged into a single block. Each block is watermarked in Phase-3. The type and amount of watermarking applied are decided intelligently and adaptively based on the classification of the blocks, which improves embedding capacity and reduces the time complexity incurred during watermarking. Experiments are conducted exhaustively on all the images in the corpus. The experimental evaluations show better classification of segments based on the information content of each block. The proposed technique also outperforms existing watermarking schemes on document images in terms of robustness, accuracy of tamper detection, and recovery.
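
    As a sketch of Phase-2, the snippet below extracts the four GLCM features with scikit-image and applies a toy rule-based classifier. The distance/angle settings and the thresholds are hypothetical placeholders for illustration only, not the paper's decision rules.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def block_glcm_features(block):
    """Compute the four GLCM features for a 2-D uint8 image block."""
    glcm = graycomatrix(block, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("energy", "homogeneity", "contrast", "correlation")}

def classify_block(features, energy_thr=0.95,
                   contrast_low=10.0, contrast_high=100.0):
    """Toy classifier; all thresholds are hypothetical placeholders."""
    if features["energy"] > energy_thr:
        return "no-information"   # near-uniform block, mostly white space
    if features["contrast"] < contrast_low:
        return "static"           # low-contrast, stable content
    if features["contrast"] > contrast_high:
        return "dynamic"          # high-contrast, variable content
    return "mixed"                # in between: mix of static and dynamic
```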

    Graph-Cut Rate Distortion Algorithm for Contourlet-Based Image Compression

    The geometric features of images, such as edges, are difficult to represent. When a redundant transform is used for their extraction, the compression challenge is even more difficult. In this paper we present a new rate-distortion optimization algorithm based on graph theory that can efficiently encode the coefficients of a critically sampled, non-orthogonal, or even redundant transform, such as the contourlet decomposition. The basic idea is to construct a specialized graph such that its minimum cut minimizes the energy functional. We propose to apply this technique to rate-distortion Lagrangian optimization in subband image coding. The method yields good compression results compared to the state-of-the-art JPEG2000 codec, as well as a general improvement in visual quality.
    Index Terms — subband image coding, rate-distortion allocation
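
    For context, the Lagrangian objective being optimized has the form J = D + λR. The sketch below performs a plain independent-per-subband allocation over a set of candidate quantizer steps; it only illustrates that cost and is not the graph-cut (minimum-cut) construction proposed in the paper. Distortion is measured as MSE and rate is estimated from the empirical entropy of the quantization indices.

```python
import numpy as np

def lagrangian_allocate(subbands, candidate_steps, lam):
    """Pick, per subband, the quantizer step minimizing D + lam * R."""
    chosen = []
    for band in subbands:
        best_cost, best_step = None, None
        for step in candidate_steps:
            indices = np.round(band / step)
            recon = indices * step
            distortion = float(np.mean((band - recon) ** 2))  # MSE
            _, counts = np.unique(indices, return_counts=True)
            p = counts / counts.sum()
            rate = float(-(p * np.log2(p)).sum())             # bits per coefficient
            cost = distortion + lam * rate
            if best_cost is None or cost < best_cost:
                best_cost, best_step = cost, step
        chosen.append(best_step)
    return chosen
```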

    Multiresolution Methods in Face Recognition
