    Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions

    An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order marginals of the covertext and stegotext distributions. Furthermore, no loss occurs if the covertext distribution is uniform and the distortion metric is cyclically symmetric; steganographic capacity is then achieved by randomized linear codes. Our framework may also be useful for developing computationally secure steganographic systems that have near-optimal communication performance.
    Comment: To appear in IEEE Trans. on Information Theory, June 2008; ignore Version 2 as the file was corrupted.
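
    As a minimal illustration of the perfect-security condition in the uniform-covertext special case mentioned in the abstract, the Python sketch below replaces a uniform i.i.d. covertext block with a one-time-pad-style encoding under a shared secret key, so the stegotext is again uniform i.i.d. and exactly matches the covertext distribution. This is not the paper's binning and permutation construction, and it ignores the distortion constraint; the alphabet size Q and the names embed/extract are illustrative choices.

```python
import secrets

Q = 256  # alphabet size (bytes); an assumed choice for illustration


def embed(message: bytes, key: bytes) -> bytes:
    """Replace a uniform i.i.d. covertext block with (message + key) mod Q.

    Because the key is uniform and independent of the message, the output is
    uniform i.i.d., exactly the covertext distribution, so no statistical test
    can distinguish stegotext from covertext (perfect security). Note that this
    sketch ignores the distortion constraint treated in the paper.
    """
    assert len(key) == len(message)
    return bytes((m + k) % Q for m, k in zip(message, key))


def extract(stegotext: bytes, key: bytes) -> bytes:
    """Invert the embedding with the shared secret key."""
    return bytes((s - k) % Q for s, k in zip(stegotext, key))


# usage: a fresh uniform key shared between encoder and decoder
msg = b"hidden payload"
key = secrets.token_bytes(len(msg))
stego = embed(msg, key)
assert extract(stego, key) == msg
```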

    A study on the false positive rate of Stegdetect

    In this paper we analyse Stegdetect, one of the well-known image steganalysis tools, to study its false positive rate. In doing so, we process more than 40,000 images randomly downloaded from the Internet using Google Images, together with 25,000 images from the ASIRRA (Animal Species Image Recognition for Restricting Access) public corpus. The aim of this study is to help digital forensic analysts, who may need to examine a large number of image files during an investigation, to better understand the capabilities and the limitations of steganalysis tools like Stegdetect. The results show that the rate of false positives generated by Stegdetect depends strongly on the chosen sensitivity value and is generally quite high. These findings should help forensic experts interpret their results more accurately and take false positive rates into consideration. Additionally, we provide a detailed statistical analysis of the results to study the difference in detection between selected groups, close groups and different groups of images. This method can be applied to any steganalysis tool, giving the analyst a better understanding of the detection results, especially when no prior information about the tool's false positive rate is available.
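
    A sketch of how such a false-positive study could be scripted is shown below: run Stegdetect over a corpus of known-clean JPEGs at several sensitivity values and count how many files it flags. This assumes the classic stegdetect command line with a -s sensitivity option and one output line per file containing the word "negative" for clean verdicts; flag names and output format may differ between versions, and the clean_images directory is a placeholder.

```python
import subprocess
from pathlib import Path


def false_positive_rate(clean_dir: str, sensitivity: float) -> float:
    """Run Stegdetect over a directory of known-clean JPEGs and return the
    fraction of files it flags as containing hidden data (false positives).

    Assumes the classic `stegdetect -s <sensitivity>` CLI, which prints one
    line per file and the word "negative" for files it considers clean.
    """
    images = sorted(Path(clean_dir).glob("*.jpg"))
    if not images:
        return 0.0
    result = subprocess.run(
        ["stegdetect", "-s", str(sensitivity)] + [str(p) for p in images],
        capture_output=True, text=True, check=False,
    )
    flagged = sum(
        1 for line in result.stdout.splitlines()
        if line.strip() and "negative" not in line and "error" not in line.lower()
    )
    return flagged / len(images)


# sweep the sensitivity value, as the study does, on a known-clean corpus
for s in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"sensitivity={s}: FPR={false_positive_rate('clean_images', s):.3%}")
```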

    On the Removal of Steganographic Content from Images

    Steganography is primarily used for the covert transmission of information, whether for legitimate or malicious purposes. The primary purpose of this work is to build a firewall that thwarts this transmission, achieved through radiometric and geometric operations. These operations degrade the quality of the cover image; however, the quality can be restored to some extent by a deconvolution operation. The deconvolved image is then subjected to steganalysis to verify the absence of stego content. Experimental results show PSNR values between 35 dB and 45 dB and SSIM values around 0.96, which are above the acceptable threshold. Our method can suppress the stego content to a large extent irrespective of the embedding algorithm, in both the spatial and transform domains. Using RS steganalysis, difference image histograms and the chi-square attack, we verified that 95 per cent of the stego content embedded in the spatial domain was removed by our showering techniques. We also verified that 100 per cent of the stego content was removed in the transform domain, with PSNR between 30 dB and 45 dB and SSIM between 0.67 and 0.99. The percentage of stego content removed in both domains was measured using the bit error rate and a first-order Markov feature.
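
    The sketch below illustrates the general idea, not the authors' exact pipeline, using SciPy and scikit-image: a radiometric operation (Gaussian blur) and a geometric operation (a small rotation and its inverse) perturb the image enough to disturb fragile embedded payloads, Wiener deconvolution restores visual quality, and PSNR/SSIM quantify the remaining distortion. The PSF size, blur sigma, rotation angle and Wiener balance parameter are arbitrary illustrative values, and the scikit-image camera test image stands in for a cover or stego image.

```python
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float, restoration
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def gaussian_psf(size=9, sigma=1.0):
    """Discrete Gaussian point-spread function used for both blur and deconvolution."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()


def shower(image, psf, angle=0.5):
    """Radiometric (blur) and geometric (small rotation and inverse rotation)
    operations intended to destroy fragile hidden payloads with only a slight
    perturbation of the pixels."""
    blurred = ndimage.convolve(image, psf, mode="reflect")
    rotated = ndimage.rotate(blurred, angle, reshape=False, mode="reflect")
    return ndimage.rotate(rotated, -angle, reshape=False, mode="reflect")


def restore(image, psf):
    """Wiener deconvolution to recover visual quality after the blur."""
    return np.clip(restoration.wiener(image, psf, balance=0.01), 0, 1)


cover = img_as_float(data.camera())   # stand-in for a (possibly stego) image
psf = gaussian_psf(size=9, sigma=1.0)
cleaned = restore(shower(cover, psf), psf)

print("PSNR:", peak_signal_noise_ratio(cover, cleaned, data_range=1.0))
print("SSIM:", structural_similarity(cover, cleaned, data_range=1.0))
```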