
    Steganographer Identification

    Conventional steganalysis detects the presence of steganography within single objects. In the real world, however, we may face a more complex scenario in which one or more of multiple users, called actors, are guilty of using steganography; this is typically defined as the Steganographer Identification Problem (SIP). One might use conventional steganalysis algorithms to separate stego objects from cover objects and then identify the guilty actors, but the guilty actors may be missed amid a large number of false alarms. To deal with the SIP, most state-of-the-art methods use unsupervised learning. In these solutions, each actor holds multiple digital objects, from which a set of feature vectors can be extracted. Well-defined distances between these feature sets are computed to measure the similarity between the corresponding actors. By applying clustering or outlier detection, the most suspicious actor(s) will be judged to be the steganographer(s). Though the SIP needs further study, existing works can identify the steganographer(s) reliably when non-adaptive steganographic embedding is applied. In this chapter, we present foundational concepts and review advanced methodologies for the SIP. The chapter is self-contained and intended as a tutorial introducing the SIP in the context of media steganography. Comment: a 30-page tutorial.
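
    To make the pipeline above concrete, here is a minimal Python sketch of the unsupervised SIP approach, assuming NumPy and SciPy are available; the Gaussian-kernel MMD set distance and the mean-distance outlier score are illustrative choices, not the specific methods surveyed in the chapter:

        import numpy as np
        from scipy.spatial.distance import cdist

        def mmd2(feats_a, feats_b, sigma=1.0):
            """Squared maximum mean discrepancy between two sets of feature
            vectors, with a Gaussian kernel (one common way to define a
            distance between actors' feature sets)."""
            k = lambda x, y: np.exp(-cdist(x, y, "sqeuclidean") / (2 * sigma**2))
            return (k(feats_a, feats_a).mean() + k(feats_b, feats_b).mean()
                    - 2 * k(feats_a, feats_b).mean())

        def most_suspicious_actor(actor_features):
            """actor_features: list of (n_objects_i, d) arrays, one per actor.
            Scores each actor by its mean distance to all other actors and
            returns the index of the outlier, i.e. the suspected steganographer."""
            n = len(actor_features)
            dist = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    dist[i, j] = dist[j, i] = mmd2(actor_features[i],
                                                   actor_features[j])
            return int(np.argmax(dist.sum(axis=1) / (n - 1)))

    In practice the feature vectors would come from a steganalysis feature extractor (e.g. a rich model) applied to each actor's images, and clustering can replace the simple outlier score when several guilty actors are expected.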

    Side-Information For Steganography Design And Detection

    Today, the most secure steganographic schemes for digital images embed secret messages while minimizing a distortion function that describes the local complexity of the content. Distortion functions are heuristically designed to predict the modeling error, in other words, how difficult it would be to detect a single change to the original image in any given area. This dissertation investigates how both the design and the detection of such content-adaptive schemes can be improved with the use of side-information. We distinguish two types of side-information, public and private. Public side-information is available to the sender and, at least in part, to anybody else who can observe the communication. Content complexity is a typical example of public side-information; while it is commonly used for steganography, it can also be used for detection. In this work, we propose a modification to rich-model-style feature sets in both the spatial and JPEG domains that informs such feature sets of the content complexity. Private side-information is available only to the sender. Previous uses of private side-information in steganography were very successful but limited to steganography in JPEG images, and the constructions were based on heuristics with little theoretical foundation. This work remedies that deficiency by introducing a scheme that generalizes the previous approach to an arbitrary domain. We also put forward a theoretical investigation of how to incorporate side-information based on a model of images. Finally, we propose to use a novel type of side-information in the form of multiple exposures for JPEG steganography.
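
    As a toy illustration of distortion-minimizing, content-adaptive embedding (not the dissertation's side-informed construction), the sketch below builds a local-variance cost map, assuming NumPy and SciPy, and makes +/-1 changes at the cheapest pixels; practical schemes couple such costs with syndrome-trellis codes rather than embedding greedily:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def cost_map(img):
            # Toy content-adaptive cost: a change is cheap where local
            # variance is high (texture) and expensive in smooth areas.
            img = img.astype(np.float64)
            mean = uniform_filter(img, size=3)
            var = np.maximum(uniform_filter(img**2, size=3) - mean**2, 0.0)
            return 1.0 / (np.sqrt(var) + 1e-3)

        def embed(img, bits):
            # Carry one message bit in the LSB of each of the cheapest
            # len(bits) pixels, changing by +/-1 only when LSBs disagree.
            stego = img.astype(np.int16).ravel()
            order = np.argsort(cost_map(img).ravel())[:len(bits)]
            for bit, idx in zip(bits, order):
                if (stego[idx] & 1) != bit:
                    stego[idx] += -1 if stego[idx] == 255 else 1
            return stego.reshape(img.shape).astype(np.uint8)

    A real embedder would also have to ensure the receiver can locate the payload (e.g. via a shared key and wet-paper or STC coding), which is exactly where side-information enters the design.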

    Machine learning based digital image forensics and steganalysis

    The security and trustworthiness of digital images have become crucial issues due to the ease of malicious processing. Therefore, research on image steganalysis (determining whether a given image has secret information hidden inside) and image forensics (determining the origin and authenticity of a given image and revealing the processing history it has gone through) has become crucial to the digital society. In this dissertation, the steganalysis and forensics of digital images are treated as pattern classification problems so as to make advanced machine learning (ML) methods applicable. Three topics are covered: (1) architectural design of convolutional neural networks (CNNs) for steganalysis, (2) statistical feature extraction for camera model classification, and (3) real-world tampering detection and localization. For covert communications, steganography is used to embed secret messages into images by altering pixel values slightly. Since advanced steganography alters pixel values in image regions where changes are hard to detect, traditional ML-based steganalytic methods, which rely heavily on sophisticated hand-crafted features, have been pushed to their limits. To overcome this difficulty, in-depth studies are conducted and reported in this dissertation to transfer the success of CNNs in computer vision to steganalysis. The outcomes reported in this dissertation are: (1) a proposed CNN architecture incorporating the domain knowledge of steganography and steganalysis, and (2) ensemble methods of CNNs for steganalysis. The proposed CNN is currently one of the best classifiers against steganography. Camera model classification aims at assigning a given image to the source camera model that captured it, based on the statistics of image pixel values. For this, two types of statistical features are designed to capture the traces left by in-camera image processing algorithms: the first is Markov transition probabilities modeling block-DCT coefficients for JPEG images; the second is based on histograms of local binary patterns obtained in both the spatial and wavelet domains. The designed features serve as the input to train support vector machines, which achieved the best classification performance at the time the features were proposed. The last part of this dissertation documents the solutions delivered by the author's team to the First Image Forensics Challenge organized by the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. In the competition, all the fake images involved were doctored with popular image-editing software to simulate the real-world scenario of tampering detection (determining whether a given image has been tampered with) and localization (determining which pixels have been tampered with). In Phase 1 of the Challenge, advanced steganalysis features were successfully migrated to tampering detection. In Phase 2, an efficient copy-move detector, equipped with PatchMatch as a fast approximate nearest-neighbor search method, was developed to identify duplicated regions within images. With these tools, the author's team won the runner-up prize in both phases of the Challenge.
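
    A common way to encode the "domain knowledge of steganography" into a CNN is to fix the first layer to a high-pass residual filter, such as the 5x5 KV kernel from the rich-model literature, so the network operates on noise residuals rather than image content. The PyTorch sketch below is deliberately small and illustrative; it is not the dissertation's exact architecture:

        import torch
        import torch.nn as nn

        # Fixed 5x5 high-pass "KV" kernel from the SRM family: suppresses
        # image content so the network sees the stego noise residual.
        KV = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                           [ 2., -6.,   8., -6.,  2.],
                           [-2.,  8., -12.,  8., -2.],
                           [ 2., -6.,   8., -6.,  2.],
                           [-1.,  2.,  -2.,  2., -1.]]) / 12.0

        class StegoCNN(nn.Module):
            """Minimal residual-first steganalysis CNN (illustrative)."""
            def __init__(self):
                super().__init__()
                self.hpf = nn.Conv2d(1, 1, 5, padding=2, bias=False)
                self.hpf.weight.data = KV.view(1, 1, 5, 5)
                self.hpf.weight.requires_grad = False  # keep the filter fixed
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
                    nn.AvgPool2d(2),
                    nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.fc = nn.Linear(16, 2)             # cover vs. stego

            def forward(self, x):
                x = self.hpf(x)        # (N, 1, H, W) noise residual
                x = self.features(x)
                return self.fc(x.flatten(1))

    Ensembling, in this setting, typically means training several such networks (on different data splits or initializations) and averaging their softmax outputs.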

    Steganography and Steganalysis in Digital Multimedia: Hype or Hallelujah?

    In this tutorial, we introduce the basic theory behind Steganography and Steganalysis, and present some recent algorithms and developments in these fields. We show how existing techniques relate to Image Processing and Computer Vision, point out several trendy applications of Steganography and Steganalysis, and list a few great research opportunities just waiting to be addressed.

    Recent Advances in Steganography

    Steganography is the art and science of communicating in a way that hides the very existence of the communication. Steganographic technologies are an important part of the future of security and privacy on open systems such as the Internet. This book focuses on a relatively new field of study in steganography, introducing readers to various concepts of steganography and steganalysis. The book includes a brief history of steganography and surveys steganalysis methods with respect to their modeling techniques. Some new steganographic techniques for hiding secret data in images are presented. Furthermore, steganography in speech is reviewed, and a new approach for hiding data in speech signals is introduced.

    Image statistical frameworks for digital image forensics

    Advances in digital cameras, scanners, printers, image-editing tools, smartphones, tablet personal computers, and high-speed networks have made the digital image a common medium for visual information. Such a medium can easily be created, duplicated, distributed, or tampered with, which makes it necessary to trace back its authenticity or processing history. Digital image forensics is an emerging research area that aims to resolve this problem and has grown in popularity over the past decade. On the other hand, anti-forensics has emerged over the past few years as a relatively new branch of research, aiming at revealing the weaknesses of forensic technology. Together, these two sides of research push digital image forensic technologies to a higher level. Three major contributions are presented in this dissertation. First, an effective multi-resolution image statistical framework for digital image forensics of a passive-blind nature is presented in the frequency domain. The framework is generated by applying the Markovian rake transform to the image luminance component; the Markovian rake transform is the application of a Markov process to difference arrays derived from quantized block discrete cosine transform (DCT) 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry vital information for image forensics, for instance for tampering detection. The proposed scheme, the shrink-and-zoom (SAZ) attack, is based simply on image resizing with bilinear interpolation. The effectiveness of SAZ has been evaluated against two promising double JPEG compression detection schemes, and the outcome reveals that the attack is effective, especially in the cases where the first quality factor is lower than the second. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting, depending on the application of interest. Its efficacy is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable steGO), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner into image regions whose statistics are difficult to model; 2) image recapture detection (IRD). The outcomes of the evaluations suggest that the proposed framework is effective, not only for detecting local changes (in line with the nature of HUGO) but also for detecting global differences (the nature of IRD).
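
    To illustrate the Markov-transition idea behind the framework (simplified here to a single block size and the horizontal direction only, with NumPy assumed; the full "rake" concatenates such matrices across multiple block sizes and scan directions), one can clip a difference array of quantized DCT coefficients to [-T, T] and use the empirical transition matrix as a feature vector:

        import numpy as np

        def markov_features(coeffs, T=4):
            """Transition-probability features from a 2-D array of quantized
            coefficients. Returns a flattened (2T+1) x (2T+1) matrix of
            P(D[i, j+1] = n | D[i, j] = m) for m, n in [-T, T]."""
            # Horizontal difference array, clipped to [-T, T].
            d = coeffs[:, :-1].astype(np.int64) - coeffs[:, 1:]
            d = np.clip(d, -T, T)
            # Count transitions between horizontally adjacent differences.
            src = d[:, :-1].ravel() + T
            dst = d[:, 1:].ravel() + T
            counts = np.zeros((2 * T + 1, 2 * T + 1))
            np.add.at(counts, (src, dst), 1)
            # Row-normalize into conditional probabilities.
            row = counts.sum(axis=1, keepdims=True)
            return (counts / np.maximum(row, 1)).ravel()

    With T = 4 this yields 81 features per direction and block size; the multi-resolution character of the framework comes from concatenating these across settings.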

    Edge-based image steganography


    RABS: Rule-Based Adaptive Batch Steganography
