
    JPEG steganography: A performance evaluation of quantization tables

    The two most important aspects of any image-based steganographic system are the imperceptibility and the capacity of the stego image. This paper evaluates the performance and efficiency of using optimized quantization tables instead of the default JPEG tables within JPEG steganography. We found that using optimized tables significantly improves the quality of stego-images. Moreover, we used this optimization strategy to generate a 16x16 quantization table to be used in place of the default table. The quality of stego-images was greatly improved when these optimized tables were used. This led us to propose a new hybrid steganographic method in order to increase the embedding capacity. This new method is based on both 2-LSB embedding and the Jpeg-Jsteg method. For each 16x16 quantized DCT block, the least two significant bits (2-LSBs) of each middle-frequency coefficient are modified to embed two secret bits. Additionally, the Jpeg-Jsteg embedding technique is used for the low-frequency DCT coefficients, without modifying the DC coefficient. Our experimental results show that the proposed approach provides a higher information-hiding capacity than the other methods tested. Furthermore, the quality of the produced stego-images is better than that of methods which use the default tables.
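    A minimal sketch of the 2-LSB embedding step is given below, assuming the quantized coefficients are processed in zigzag order; the index range chosen for the middle-frequency band and the sign/magnitude handling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative 2-LSB embedding in quantized DCT coefficients (zigzag order).
# MID_FREQ_RANGE is an assumed middle-frequency band for a 16x16 block
# (256 coefficients); the paper's exact coefficient selection may differ.
MID_FREQ_RANGE = range(64, 192)

def embed_2lsb(coeffs_zigzag, secret_bits):
    """Write pairs of secret bits into the 2 LSBs of middle-frequency coefficients."""
    out = np.array(coeffs_zigzag, dtype=int)
    bits = iter(secret_bits)
    for i in MID_FREQ_RANGE:
        try:
            b1, b2 = next(bits), next(bits)
        except StopIteration:
            break  # payload exhausted
        sign = -1 if out[i] < 0 else 1
        mag = abs(int(out[i]))
        # Clear the two least significant bits, then insert the secret pair.
        mag = (mag & ~0b11) | (b1 << 1) | b2
        out[i] = sign * mag
    return out
```

    Extraction would simply read back the two LSBs of the same coefficients in the same order.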

    Learning Convolutional Networks for Content-weighted Image Compression

    Lossy image compression is generally formulated as a joint rate-distortion optimization to learn the encoder, quantizer, and decoder. However, the quantizer is non-differentiable, and discrete entropy estimation is usually required for rate control, which makes it very challenging to develop a convolutional neural network (CNN)-based image compression system. In this paper, motivated by the observation that local information content varies spatially within an image, we suggest that the bit rate of different parts of the image should be adapted to the local content, with the content-aware bit rate allocated under the guidance of a content-weighted importance map. The sum of the importance map can thus serve as a continuous alternative to discrete entropy estimation for controlling the compression rate. A binarizer is adopted to quantize the output of the encoder, since the binarization scheme is also directly defined by the importance map. Furthermore, a proxy function is introduced for the binary operation in backward propagation to make it differentiable. The encoder, decoder, binarizer, and importance map can therefore be jointly optimized in an end-to-end manner using a subset of the ImageNet database. Experiments show that, at low bit rates, our system significantly outperforms JPEG and JPEG 2000 in terms of the structural similarity (SSIM) index and produces much better visual results, with sharp edges, rich textures, and fewer artifacts.
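    The non-differentiable binarizer with a proxy gradient can be sketched as below, assuming a PyTorch-style autograd function, a hard 0.5 threshold, and a clipped-identity backward pass; these are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

# Sketch of a binarizer with a proxy gradient for backward propagation.
# The 0.5 threshold and the clipped-identity backward pass are assumptions
# for illustration; the paper's exact proxy function may differ.

class Binarizer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Hard, non-differentiable binarization of encoder outputs in [0, 1].
        return (x > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Proxy: pass gradients through unchanged where the input lies in [0, 1]
        # and block them elsewhere, so training can proceed despite the threshold.
        grad_input = grad_output.clone()
        grad_input[(x < 0) | (x > 1)] = 0
        return grad_input

def binarize(x):
    return Binarizer.apply(x)

# Example use on (hypothetical) encoder outputs:
# codes = binarize(torch.sigmoid(encoder_features))
```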

    Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements

    This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the positioning of the vision sensors. It concentrates on the setting where images are encoded with compressed linear measurements. We propose a geometry-based correlation model in order to describe the common information in pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. We first identify prominent visual features by computing a sparse approximation of a reference image with a dictionary of geometric basis functions. We then pose a regularized optimization problem to estimate the corresponding features in correlated images given by quantized linear measurements. The estimated features have to comply with the compressed information and to represent consistent transformations between images. The correlation model is given by the relative geometric transformations between corresponding features. We then propose an efficient joint decoding algorithm that estimates the compressed images such that they stay consistent with both the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in multi-view datasets. In addition, it provides decoding performance that compares favorably with independent coding solutions as well as with state-of-the-art distributed coding schemes based on disparity learning.
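    The feature-identification step (sparse approximation of the reference image over a dictionary of geometric basis functions) can be illustrated with a generic matching-pursuit routine; the unit-norm column dictionary and the fixed atom budget below are placeholder assumptions, not the paper's actual atom parametrization or stopping rule.

```python
import numpy as np

# Generic matching pursuit over a dictionary with unit-norm atoms stored as
# columns. The dictionary construction (geometric atoms) and the fixed atom
# budget are placeholders for the sparse-approximation step described above.

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedily pick n_atoms columns of `dictionary` that best approximate `signal`."""
    residual = np.asarray(signal, dtype=float).copy()
    coeffs, indices = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeffs.append(correlations[k])
        indices.append(k)
        # Remove the selected atom's contribution from the residual.
        residual = residual - correlations[k] * dictionary[:, k]
    return np.array(coeffs), np.array(indices), residual
```

    The selected atoms and their coefficients then serve as the visual features whose transformations across views define the correlation model.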