    Side-Information For Steganography Design And Detection

    Today, the most secure steganographic schemes for digital images embed secret messages while minimizing a distortion function that describes the local complexity of the content. Distortion functions are heuristically designed to predict the modeling error, in other words, how difficult it would be to detect a single change to the original image in any given area. This dissertation investigates how both the design and detection of such content-adaptive schemes can be improved with the use of side-information. We distinguish two types of side-information, public and private. Public side-information is available to the sender and, at least in part, to anybody else who can observe the communication. Content complexity is a typical example of public side-information. While it is commonly used for steganography, it can also be used for detection. In this work, we propose a modification to rich-model-style feature sets in both the spatial and JPEG domains to inform such feature sets of the content complexity. Private side-information is available only to the sender. Its previous use in steganography was very successful but limited to JPEG images, and the constructions were heuristic, with little theoretical foundation. This work remedies that deficiency by introducing a scheme that generalizes the previous approach to an arbitrary domain. We also put forward a theoretical investigation of how to incorporate side-information based on a model of images. Finally, we propose a novel type of side-information in the form of multiple exposures for JPEG steganography.
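
    As a rough illustration of the content-adaptive costs discussed above, the sketch below assigns low embedding cost to textured regions and high cost to smooth ones using a simple local-variance measure; it is only a toy stand-in, not one of the distortion functions or side-informed schemes studied in the dissertation, and all names and parameters are assumptions.

```python
# A minimal sketch of a content-adaptive embedding cost map, assuming a
# simple local-variance complexity measure (not the distortion functions
# studied in the dissertation).
import numpy as np

def embedding_costs(image: np.ndarray, block: int = 3, eps: float = 1e-3) -> np.ndarray:
    """Assign a high cost to smooth regions and a low cost to textured ones."""
    img = image.astype(np.float64)
    pad = block // 2
    padded = np.pad(img, pad, mode="reflect")
    costs = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + block, j:j + block]
            # Local variance as a crude content-complexity estimate.
            costs[i, j] = 1.0 / (window.var() + eps)
    return costs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(64, 64))
    rho = embedding_costs(cover)
    print("smooth-area cost vs. textured-area cost ratio:", rho.max() / rho.min())
```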

    Enhancing reliability and efficiency for real-time robust adaptive steganography using cyclic redundancy check codes

    The development of multimedia and deep learning technologies brings new challenges to steganography and steganalysis. Meanwhile, robust steganography, a class of techniques aimed at covert communication over lossy channels, has become a new research hotspot in the field of information hiding. To improve communication reliability and efficiency for current real-time robust steganography methods, this paper proposes a concatenated code composed of Syndrome-Trellis codes (STC) and cyclic redundancy check (CRC) codes. The enhanced robust adaptive steganography framework proposed in this paper is characterized by strong error-detection capability, high coding efficiency, and low embedding costs. On this basis, three adaptive steganographic methods resisting JPEG compression and detection are proposed. The fault tolerance of the proposed methods is then analyzed using the residual model of JPEG compression to obtain appropriate coding parameters. Experimental results show that the proposed methods are significantly more robust against compression and are more difficult to detect with statistics-based steganalytic methods.
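
    The sketch below illustrates only the CRC error-detection layer of such a concatenated code, using the standard CRC-32 from Python's zlib; the Syndrome-Trellis coding stage and the paper's actual CRC parameters are not reproduced, and the helper names are assumptions.

```python
# A minimal sketch of the CRC error-detection layer only; the concatenation
# with Syndrome-Trellis codes (STC) described in the paper is not shown.
import zlib

def crc_encode(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def crc_check(block: bytes) -> bool:
    """Return True if the received block passes the CRC-32 check."""
    payload, received = block[:-4], int.from_bytes(block[-4:], "big")
    return zlib.crc32(payload) == received

if __name__ == "__main__":
    sent = crc_encode(b"secret message fragment")
    corrupted = bytes([sent[0] ^ 0x01]) + sent[1:]   # simulate a channel error
    print(crc_check(sent), crc_check(corrupted))      # True False
```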

    Image processing algorithms employing two-dimensional Karhunen-Loeve Transform

    In the fields of image processing and pattern recognition there is an important problem of acquiring, gathering, storing, and processing large volumes of data. The most frequently used way to reduce these data is compression, which in many cases also speeds up further computations. One of the most frequently employed approaches is handling images by means of Principal Component Analysis and the Karhunen-Loeve Transform, well-known statistical tools used in many areas of applied science. Their main property is the ability to reduce the volume of data required for its optimal representation while preserving its specific characteristics. The paper presents selected image processing algorithms, such as compression, scrambling (coding), and information embedding (steganography), and their realizations employing the two-dimensional Karhunen-Loeve Transform (2DKLT), which is superior to the standard one-dimensional KLT since it represents images while respecting their spatial properties. The principles of the KLT and 2DKLT, as well as sample implementations and experiments performed on standard benchmark datasets, are presented. The results show that the 2DKLT employed in the above applications gives clear advantages over certain standard transforms, such as the DCT, FFT, and wavelets.
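
    A minimal sketch of block-wise 2DKLT compression is given below, assuming row and column bases estimated from the image's own 8x8 blocks and a truncated coefficient reconstruction; the paper's scrambling and embedding applications, and its exact transform construction, are not shown.

```python
# A minimal sketch of separable 2D KLT compression on 8x8 blocks; mean
# subtraction and the paper's full pipeline are omitted for brevity.
import numpy as np

def blocks_of(image: np.ndarray, b: int = 8) -> np.ndarray:
    h, w = (image.shape[0] // b) * b, (image.shape[1] // b) * b
    img = image[:h, :w].astype(np.float64)
    return img.reshape(h // b, b, w // b, b).transpose(0, 2, 1, 3).reshape(-1, b, b)

def klt_bases(blocks: np.ndarray):
    """Eigenvector bases of the mean row and column scatter matrices."""
    c_row = np.mean([x @ x.T for x in blocks], axis=0)
    c_col = np.mean([x.T @ x for x in blocks], axis=0)
    _, u = np.linalg.eigh(c_row)   # eigenvalues in ascending order
    _, v = np.linalg.eigh(c_col)
    return u[:, ::-1], v[:, ::-1]  # strongest components first

def compress(block: np.ndarray, u: np.ndarray, v: np.ndarray, k: int = 4):
    coeffs = u.T @ block @ v                          # forward 2D KLT
    return u[:, :k] @ coeffs[:k, :k] @ v[:, :k].T     # truncated reconstruction

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.integers(0, 256, size=(64, 64))
    blks = blocks_of(image)
    u, v = klt_bases(blks)
    err = np.mean([(compress(x, u, v) - x) ** 2 for x in blks])
    print("mean squared reconstruction error:", err)
```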

    An Overview of Steganography for the Computer Forensics Examiner (Updated Version, February 2015)

    Steganography is the art of covered or hidden writing. The purpose of steganography is covert communication: to hide the existence of a message from a third party. This paper is intended as a high-level technical introduction to steganography for those unfamiliar with the field. It is directed at forensic computer examiners who need a practical understanding of steganography without delving into the mathematics, although references are provided to some of the ongoing research for readers who need or want additional detail. Although this paper provides a historical context for steganography, the emphasis is on digital applications, focusing on hiding information in online image or audio files. Examples of software tools that employ steganography to hide data inside other files, as well as software to detect such hidden data, are also presented. An edited version was originally published in the July 2004 issue of Forensic Science Communications.
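
    To make the idea of hiding data in image files concrete, the sketch below shows plain least-significant-bit (LSB) embedding, one of the simplest techniques covered in such overviews; it is not the method of any particular tool discussed in the paper, and the function names are illustrative.

```python
# A minimal sketch of LSB embedding in a grayscale image array; real tools
# add encryption, bit spreading, and image-format handling.
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()                               # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite least-significant bits
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    stego = embed_lsb(cover, b"hidden")
    print(extract_lsb(stego, 6))   # b'hidden'
```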

    Natural Image Statistics for Digital Image Forensics

    We describe a set of natural image statistics built upon two multi-scale image decompositions, the quadrature mirror filter pyramid decomposition and the local angular harmonic decomposition. These image statistics consist of first- and higher-order statistics that capture certain statistical regularities of natural images. We propose to apply these statistics, together with classification techniques, to three problems in digital image forensics: (1) differentiating photographic images from computer-generated photorealistic images, (2) generic steganalysis, and (3) rebroadcast image detection. We also apply these image statistics to traditional art authentication, for forgery detection and for identifying the artists of a work. For each application we show the effectiveness of these image statistics and analyze their sensitivity and robustness.
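
    The sketch below computes simple first-order subband statistics (mean, standard deviation, skewness, kurtosis) as classification features, substituting a standard wavelet decomposition from PyWavelets for the quadrature mirror filter pyramid and omitting the higher-order prediction-error statistics; it is a simplified stand-in for the feature set described above.

```python
# A minimal sketch of first-order subband statistics as features, assuming
# a standard wavelet decomposition in place of the QMF pyramid.
import numpy as np
import pywt

def subband_features(image: np.ndarray, levels: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(image.astype(np.float64), "db4", level=levels)
    feats = []
    for detail in coeffs[1:]:                 # skip the approximation band
        for band in detail:                   # horizontal, vertical, diagonal
            x = band.ravel()
            m, s = x.mean(), x.std()
            skew = np.mean(((x - m) / s) ** 3)
            kurt = np.mean(((x - m) / s) ** 4) - 3.0
            feats.extend([m, s, skew, kurt])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, size=(128, 128))
    print(subband_features(img).shape)        # 3 levels x 3 bands x 4 statistics
```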

    Software for efficient file elimination in computer forensics investigations

    Computer forensics investigators, far more than practitioners of any other forensic discipline, must process an ever-increasing volume of data. Fortunately, computer processing speed has kept pace, and new processes are continuously being automated to sort through this voluminous data. There remains an unfulfilled need for a simple, streamlined, standalone public tool that automates the forensic analysis of files on a hard disk drive under investigation. A software tool has been developed to dramatically reduce the number of files that an investigator must individually examine. This tool utilizes the National Institute of Standards and Technology (NIST) National Software Reference Library (NSRL) database to automatically identify files by comparing hash values of files on the hard drive under investigation against those of known good files (e.g., unaltered application files) and known bad files (e.g., exploits). The tool then provides a much smaller list of unknown files to be closely examined.
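
    A minimal sketch of the hash-based elimination idea is shown below, assuming known-good and known-bad hash sets have already been loaded (for example, from NSRL reference data); the actual tool's input formats, hash choices, and reporting are not reproduced, and all identifiers are hypothetical.

```python
# A minimal sketch of hash-based file triage: eliminate known-good files,
# flag known-bad ones, and keep the remainder for manual examination.
import hashlib
from pathlib import Path

def sha1_of(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha1()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest().upper()

def triage(root: Path, known_good: set[str], known_bad: set[str]):
    """Split files under root into known-bad and unknown; drop known-good."""
    flagged, unknown = [], []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        digest = sha1_of(path)
        if digest in known_good:
            continue                        # eliminate from further review
        (flagged if digest in known_bad else unknown).append(path)
    return flagged, unknown

if __name__ == "__main__":
    bad, todo = triage(Path("."), known_good=set(), known_bad=set())
    print(f"{len(bad)} flagged, {len(todo)} files left to examine")
```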

    Determining digital image origin using sensor imperfections
