
    Effect of Cover Quantization on Steganographic Fisher Information


    Steganographer Identification

    Conventional steganalysis detects the presence of steganography within single objects. In the real world, we may face a more complex scenario in which one or more of multiple users, called actors, are guilty of using steganography; this is typically defined as the Steganographer Identification Problem (SIP). One might use conventional steganalysis algorithms to separate stego objects from cover objects and then identify the guilty actors, but guilty actors may be missed because of the resulting false alarms. To deal with the SIP, most state-of-the-art methods use unsupervised learning. In these solutions, each actor holds multiple digital objects, from which a set of feature vectors is extracted. Well-defined distances between these feature sets are computed to measure the similarity between the corresponding actors. By applying clustering or outlier detection, the most suspicious actor(s) are judged to be the steganographer(s). Although the SIP needs further study, existing works identify the steganographer(s) well when non-adaptive steganographic embedding is applied. In this chapter, we present foundational concepts and review advanced methodologies for the SIP. The chapter is self-contained and intended as a tutorial introducing the SIP in the context of media steganography.
    Comment: A tutorial with 30 pages
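
The pipeline sketched in this abstract (extract a feature set per actor, compute set-to-set distances, flag the outlier) can be illustrated with a minimal Python sketch. The mean pairwise Euclidean distance and the toy Gaussian features below are illustrative assumptions standing in for the set distances and steganalysis features used in the SIP literature.

```python
import numpy as np

def set_distance(A, B):
    """Mean pairwise Euclidean distance between two feature sets
    (rows are feature vectors); a simple stand-in for the set
    distances used in steganographer identification."""
    diffs = A[:, None, :] - B[None, :, :]
    return np.linalg.norm(diffs, axis=2).mean()

def identify_steganographer(actor_features):
    """actor_features: dict mapping actor id -> (num_objects, dim) array
    of feature vectors extracted from that actor's objects.
    Returns the actor whose feature set is, on average, farthest
    from all other actors (a simple outlier-detection rule)."""
    actors = list(actor_features)
    n = len(actors)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = set_distance(actor_features[actors[i]], actor_features[actors[j]])
            D[i, j] = D[j, i] = d
    avg_dist = D.sum(axis=1) / (n - 1)   # mean distance to the other actors
    return actors[int(np.argmax(avg_dist))]

# Toy usage: nine innocent actors plus one whose features are shifted,
# mimicking the statistical disturbance introduced by embedding.
rng = np.random.default_rng(0)
features = {f"actor{i}": rng.normal(0.0, 1.0, size=(20, 8)) for i in range(9)}
features["actor9"] = rng.normal(0.8, 1.0, size=(20, 8))  # the "guilty" actor
print(identify_steganographer(features))  # expected: actor9
```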

    Development and implementation of hash function for generating hashed message


    Limits of Reliable Communication with Low Probability of Detection on AWGN Channels

    We present a square root limit on the amount of information transmitted reliably and with low probability of detection (LPD) over additive white Gaussian noise (AWGN) channels. Specifically, if the transmitter has AWGN channels to an intended receiver and a warden, both with non-zero noise power, we prove that $o(\sqrt{n})$ bits can be sent from the transmitter to the receiver in $n$ channel uses while lower-bounding $\alpha+\beta\geq 1-\epsilon$ for any $\epsilon>0$, where $\alpha$ and $\beta$ respectively denote the warden's probabilities of a false alarm when the sender is not transmitting and of a missed detection when the sender is transmitting. Moreover, in most practical scenarios, a lower bound on the noise power of the channel between the transmitter and the warden is known, and $O(\sqrt{n})$ bits can be sent in $n$ LPD channel uses. Conversely, attempting to transmit more than $O(\sqrt{n})$ bits either results in detection by the warden with probability one or a non-zero probability of decoding error at the receiver as $n\rightarrow\infty$.
    Comment: Major revision in v2; context, especially the relationship to steganography, updated; discussion of secret key length added; results unchanged from the previous version. Minor revision in v3. Major revision in v4: derivations clarified (appendix added) and context, especially the relationship to previous work in communication, updated; results unchanged from the previous revision.
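
To make the scaling explicit, the relations below restate the abstract's result in LaTeX; $n$, $\alpha$, $\beta$, and $\epsilon$ follow the abstract, while the payload notation $B(n)$ is an illustrative assumption.

```latex
% Covertness constraint: the warden's detector cannot do much better than
% blind guessing, i.e. its total error probability stays near one.
\[
  \alpha + \beta \;\geq\; 1 - \epsilon \qquad \text{for any fixed } \epsilon > 0 .
\]
% Square-root law: under this constraint the reliably transmittable payload
% B(n) over n AWGN channel uses scales as
\[
  B(n) = O\!\left(\sqrt{n}\right)
  \quad\Longrightarrow\quad
  \lim_{n\to\infty} \frac{B(n)}{n} = 0 ,
\]
% so the covert rate per channel use vanishes even though the absolute
% number of covert bits grows without bound.
```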

    Minimizing embedding impact in steganography using trellis-coded quantization


    Dynamic hashing technique for bandwidth reduction in image transmission

    Hash functions are widely used in secure communication systems to generate message digests for detecting unauthorized changes to files. An encrypted hashed message, or digital signature, is used in many applications, such as authentication, to ensure data integrity. It is difficult to ensure message authenticity when sending over high-bandwidth, highly accessible networks, especially on insecure channels. Two issues that need to be addressed are the large size of the hashed message and the high bandwidth required. A collaborative approach between an encoded hash message and steganography provides highly secure hidden data. The aim of this research is to propose a new method for producing a dynamic, smaller encoded hash message with reduced bandwidth. The encoded hash message is embedded into an image as a stego-image to avoid an additional file, and consequently the bandwidth is reduced. The receiver extracts the encoded hash and the dynamic hashed message from the received file at the same time. If the hash decrypted with the public key matches the hash computed from the received file, the message is considered authentic. To enhance the robustness of the hashed message, it is compressed, encoded, or both before the hashed data is embedded into the image. The proposed algorithm achieved the lowest dynamic size (1 KB), with no fixed length of the original file, compared to the MD5, SHA-1, and SHA-2 hash algorithms. The robustness of the hashed message was tested against substitution, replacement, and collision attacks to check whether the same message is ever detected in the output. The results show that the probability of the same hashed message appearing in the output is close to 0%, compared to the MD5 and SHA algorithms. Among the benefits of the proposed algorithm is computational efficiency; for messages smaller than 1600 bytes, the hashed file reduced the original file by up to 8.51%.
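
The hash-then-embed-then-verify flow described in this abstract can be sketched as follows. The paper's dynamic hash and encoding are not specified here, so this sketch substitutes SHA-256 for the digest and a plain LSB embed over raw pixel bytes for the steganographic step, and it omits the public-key encryption of the hash; every function name is illustrative.

```python
import hashlib

def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Embed payload bits into the least significant bits of pixel bytes.
    Stand-in for the paper's embedding step; no capacity/length header."""
    bits = [(byte >> (7 - k)) & 1 for byte in payload for k in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for payload")
    stego = bytearray(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def extract_lsb(pixels: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the LSBs of pixel bytes."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for k in range(8):
            byte = (byte << 1) | (pixels[i * 8 + k] & 1)
        out.append(byte)
    return bytes(out)

# Sender: hash the file and embed the digest into the cover image's pixels.
# (A real scheme would encrypt the digest with a private key to form a
# signature; that step is omitted to keep the sketch dependency-free.)
message = b"patient report v1"
digest = hashlib.sha256(message).digest()          # 32-byte hashed message
cover = bytearray(range(256)) * 2                  # toy "pixel" buffer
stego = embed_lsb(cover, digest)

# Receiver: extract the digest and compare with a fresh hash of the file.
recovered = extract_lsb(stego, len(digest))
print(recovered == hashlib.sha256(message).digest())  # True -> authentic
```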

    Optimization of medical image steganography using n-decomposition genetic algorithm

    Protecting patients' confidential information is a critical concern in medical image steganography. The Least Significant Bit (LSB) technique has been widely used for secure communication; however, it is susceptible to imperceptibility and security risks because pixels are manipulated directly, and ASCII patterns present further limitations. Consequently, sensitive medical information is subject to loss or alteration. Despite attempts to optimize LSB, these issues persist because (1) the optimization formulation suffers from non-valid implicit constraints, causing inflexibility in reaching the optimal embedding, (2) the search process lacks convergence, since the message length significantly affects the size of the solution space, and (3) application customizability is limited, as different data require more flexibility in controlling the embedding process. To overcome these limitations, this study proposes a technique known as the n-decomposition genetic algorithm. The algorithm uses a variable-length search to identify the best locations to embed the secret message, incorporating constraints to avoid local-minimum traps. The methodology consists of five main phases: (1) initial investigation, (2) formulating an embedding scheme, (3) constructing a decomposition scheme, (4) integrating the schemes' design into the proposed technique, and (5) evaluating the proposed technique's performance on medical datasets from kaggle.com. The proposed technique showed resistance to statistical analysis, evaluated using Reversible Statistical (RS) analysis and histograms. It also demonstrated superior imperceptibility and security, measured by MSE and PSNR on the Chest and Retina datasets (MSE of 0.0557 and 0.0550; PSNR of 60.6696 and 60.7287, respectively). On the Brain dataset, however, the benchmark outperforms the proposed technique because of the homogeneous nature of the images and their extensive black background. This research contributes a genetic-based decomposition approach to medical image steganography and provides a technique that offers improved security without compromising efficiency and convergence. Further validation is required to determine its effectiveness in real-world applications.
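
A rough sketch of a genetic search over embedding locations is given below; it is not the paper's n-decomposition scheme (whose chromosome encoding and constraints the abstract does not detail) but a minimal GA that evolves candidate pixel-index lists and scores them by the MSE their LSB changes would introduce.

```python
import random

MESSAGE_BITS = [1, 0, 1, 1, 0, 0, 1, 0]             # toy secret payload
COVER = [random.randrange(256) for _ in range(64)]   # toy 8x8 "image"

def mse(positions):
    """MSE introduced by forcing MESSAGE_BITS into the LSBs of COVER at
    the chosen pixel positions (the embedding-impact fitness)."""
    err = 0
    for pos, bit in zip(positions, MESSAGE_BITS):
        new = (COVER[pos] & 0xFE) | bit
        err += (new - COVER[pos]) ** 2
    return err / len(COVER)

def random_individual():
    """A chromosome is a list of distinct pixel indices, one per bit."""
    return random.sample(range(len(COVER)), len(MESSAGE_BITS))

def crossover(a, b):
    """Single-point crossover that avoids duplicate positions."""
    cut = random.randrange(1, len(a))
    child = a[:cut] + [p for p in b if p not in a[:cut]]
    return child[:len(a)]

def mutate(ind, rate=0.2):
    """Randomly resample some positions, keeping them distinct."""
    out = list(ind)
    for i in range(len(out)):
        if random.random() < rate:
            choice = random.randrange(len(COVER))
            if choice not in out:
                out[i] = choice
    return out

def evolve(pop_size=30, generations=50):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)                     # lower embedding impact is better
        survivors = pop[:pop_size // 2]       # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        pop = survivors + children
    return min(pop, key=mse)

best = evolve()
print("best positions:", best, "impact (MSE):", mse(best))
```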

    The square root law of steganographic capacity for Markov covers.

    It is a well-established result that the steganographic capacity of perfectly secure stegosystems grows linearly with the number of cover elements: secure steganography has a positive rate. In practice, however, neither the Warden nor the Steganographer has perfect knowledge of the cover source, and thus it is unlikely that perfectly secure stegosystems for complex covers, such as digital media, will ever be constructed. This justifies studying the secure capacity of imperfect stegosystems. Recent theoretical results from batch steganography, supported by experiments with blind steganalyzers, point to an emerging paradigm: whether steganography is performed in a large batch of cover objects or in a single large object, there is a wide range of practical situations in which the secure capacity rate is vanishing. In particular, the absolute size of the secure payload appears to grow only with the square root of the cover size. In this paper, we study the square root law of steganographic capacity and give a formal proof of this law for imperfect stegosystems, assuming that the cover source is a stationary Markov chain and the embedding changes are mutually independent. © 2009 SPIE, The International Society for Optical Engineering
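
A compact way to see why the payload scales with the square root of the cover size is the standard heuristic below; the notation ($\beta_n$ for the change rate, $M_n$ for the payload, a KL-divergence detectability bound) is an illustrative convention rather than notation taken from the paper, and the formal proof for Markov covers is considerably more involved.

```latex
% With n cover elements and mutually independent embedding changes made at
% rate \beta_n, the cover/stego KL divergence of an imperfect stegosystem
% grows on the order of
\[
  D\!\left(P_{\mathrm{cover}} \,\|\, P_{\mathrm{stego}}\right) \;\approx\; c\, n\, \beta_n^{2},
  \qquad c > 0 .
\]
% Keeping the Warden's detection advantage bounded requires this divergence
% to stay bounded, hence \beta_n = O(1/\sqrt{n}), and the secure payload
\[
  M_n \;\approx\; n\,\beta_n \;=\; O\!\left(\sqrt{n}\right)
\]
% grows only with the square root of the cover size, while the rate M_n/n
% vanishes.
```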