
    A Spatial Domain Image Steganography Technique Based on Plane Bit Substitution Method

    Steganography is the art and science of hiding information by embedding data into cover media. In this paper we propose a new method of information hiding in digital images in the spatial domain. The method uses the Plane Bit Substitution Method (PBSM), in which message bits are embedded into the pixel values of an image. We first propose a Steganography Transformation Machine (STM) that performs the binary operations needed to manipulate the original image using least significant bit (LSB) matching. Second, we apply pixel encryption and decryption techniques, which we evaluate both theoretically and experimentally. Our experiments support a discriminating analysis of stego and cover images at the level of individual pixels under PBSM combined with the LSB operator.
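    The paper's PBSM/STM machinery is not spelled out in the abstract, so the following is only a minimal sketch of the underlying bit-plane idea it builds on: replacing the least significant bit of each pixel with a message bit. All names are illustrative, not from the paper.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message_bits: list) -> np.ndarray:
    """Replace the LSB of the first len(message_bits) pixels with message bits."""
    stego = cover.copy().ravel()
    if len(message_bits) > stego.size:
        raise ValueError("message longer than cover capacity")
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear the LSB, then set it to the bit
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> list:
    """Read the message back from the LSBs of the first n_bits pixels."""
    return [int(p & 1) for p in stego.ravel()[:n_bits]]

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract_lsb(embed_lsb(cover, bits), len(bits)) == bits
```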

    Performance Evaluation of Exponential Discriminant Analysis with Feature Selection for Steganalysis

    The performance of supervised learning-based steganalysis depends on the choice of both the classifier and the features which represent the image. Features extracted from images may contain irrelevant and redundant components, which make them inefficient for machine learning. Relevant features not only decrease the processing time needed to train a classifier but also provide better generalisation. The linear discriminant classifier commonly used for classification may not classify non-linearly separable data well. Recently, exponential discriminant analysis (EDA), a variant of linear discriminant analysis (LDA), has been proposed; it transforms the scatter matrices to a new space by distance diffusion mapping. This gives EDA much more discriminant power to classify non-linearly separable data and helps improve classification accuracy in comparison to LDA. In this paper, the performance of EDA in conjunction with feature selection methods is investigated. For feature selection, Kullback divergence, Chernoff distance and linear regression measures are used to determine relevant features from higher-order statistics of images. The performance is evaluated in terms of classification error and computation time. Experimental results show that exponential discriminant analysis in conjunction with linear regression performs significantly better in terms of both the classification error and the computation time of training the classifier. Defence Science Journal, 2012, 62(1), pp. 19-24. DOI: http://dx.doi.org/10.14429/dsj.62.143
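    The core EDA step replaces the LDA scatter matrices with their matrix exponentials before solving the discriminant eigenproblem; the sketch below illustrates that step under the usual LDA definitions. Variable names are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import expm, eigh

def eda_projection(X: np.ndarray, y: np.ndarray, n_components: int) -> np.ndarray:
    """Return a projection maximizing exp(Sb) against exp(Sw) (EDA)."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Distance-diffusion step: the matrix exponential spreads the eigen-spectrum,
    # which is what gives EDA extra discriminant power on non-linear data.
    eigvals, eigvecs = eigh(expm(Sb), expm(Sw))  # generalized eigenproblem
    return eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
```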

    Information similarity metrics in information security and forensics

    We study two information similarity measures, relative entropy and the similarity metric, and methods for estimating them. Relative entropy can be readily estimated with existing compression-based algorithms. The similarity metric, based on algorithmic complexity, proves more difficult to estimate because algorithmic complexity itself is not computable. We again turn to compression for estimating the similarity metric. Previous studies rely on the compression ratio as an indicator when choosing compressors to estimate the similarity metric; this assumption, however, is fundamentally flawed. We propose a new method to benchmark compressors for estimating the similarity metric. To demonstrate its use, we propose quantifying the security of a stegosystem using the similarity metric. Unlike other measures of steganographic security, the similarity metric is not only a true distance metric, but is also universal in the sense that it is asymptotically minimal among all computable metrics between two objects; it therefore accounts for all similarities between two objects. In contrast, relative entropy, a widely accepted steganographic security definition, only takes into consideration the statistical similarity between two random variables. As an application, we present a general method for benchmarking stegosystems. The method is general in the sense that it is not restricted to any covertext medium and can therefore be applied to a wide range of stegosystems. For demonstration, we analyze several image stegosystems using the newly proposed similarity metric as the security metric. The results show the true security limits of stegosystems regardless of the chosen security metric or the existence of steganalysis detectors. In other words, this makes it possible to show that a stegosystem with a large similarity metric is inherently insecure, even if it has not yet been broken.
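    The standard compression-based estimator of the similarity metric is the normalized compression distance (NCD); a minimal sketch follows, using bz2 purely as an example compressor, since the thesis's point is precisely that the compressor must be benchmarked rather than chosen by compression ratio alone.

```python
import bz2

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(bz2.compress(x))
    cy = len(bz2.compress(y))
    cxy = len(bz2.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Values near 0 indicate similar objects; values near 1, unrelated ones.
print(ncd(b"steganography" * 50, b"steganography" * 50))  # close to 0
print(ncd(b"steganography" * 50, bytes(range(256)) * 3))  # much larger
```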

    Building a dataset for image steganography

    Image steganography and steganalysis techniques discussed in the literature rely on datasets created from cover images obtained from the public domain, acquired from Internet sources, or collected manually. This often leads to challenges in validating, benchmarking, and reproducing reported techniques in a consistent manner. It is our view that the steganography/steganalysis research community would benefit from the availability of common datasets, thus promoting transparency and academic integrity. In this research, we have considered four aspects in building a dataset for image steganography: image acquisition, pre-processing, steganographic technique, and embedding rate.
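    As a rough illustration of how those four aspects could fit together in a pipeline, here is a hedged sketch; the `embed` stand-in simulates embedding by flipping LSBs at a given rate and is not a technique from the paper, and Pillow/NumPy are assumed for image handling.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def embed(img: Image.Image, payload_bpp: float) -> Image.Image:
    """Stand-in embedder: flips the LSB of a random fraction of pixels."""
    arr = np.array(img)
    mask = np.random.rand(*arr.shape) < payload_bpp
    arr[mask] ^= 1
    return Image.fromarray(arr)

def build_dataset(raw_dir: str, out_dir: str, rates=(0.1, 0.4), size=(512, 512)):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(raw_dir).glob("*.jpg")):       # 1) acquisition
        img = Image.open(img_path).convert("L").resize(size)   # 2) pre-processing
        img.save(out / f"{img_path.stem}_cover.png")
        for rate in rates:                                     # 4) embedding rate
            stego = embed(img, rate)                           # 3) technique
            stego.save(out / f"{img_path.stem}_stego_{rate}.png")
```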

    Hunting wild stego images, a domain adaptation problem in digital image forensics

    Digital image forensics is a field encompassing camera identification, forgery detection and steganalysis. Statistical modeling and machine learning have been successfully applied within the academic community of this maturing field. Still, large gaps exist between academic results and the applications used by practicing forensic analysts, especially when the target samples are drawn from a different population than the data in a reference database. This thesis contains four published papers aimed at narrowing this gap in three different areas: mobile stego-app detection, digital image steganalysis and camera identification. It is the first work to explore a way of extending academic methods to real-world images created by apps. New ideas and methods are developed for target images with very rich flexibility in embedding rates, embedding algorithms, exposure settings and camera sources. The experimental results demonstrate that the proposed methods work well, even for devices not included in the reference database.
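    The "different population" problem named here is the classic domain-shift setting. One simple, widely used baseline from the broader literature (not a method from this thesis) is CORrelation ALignment (CORAL), which re-colors source-domain features to match the target-domain covariance before a detector is trained; a minimal sketch follows.

```python
import numpy as np

def coral(source: np.ndarray, target: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Align second-order statistics of source features to the target domain."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)
    # Whiten with the source covariance, then re-color with the target's.
    whiten = np.linalg.inv(np.linalg.cholesky(cs).T)
    recolor = np.linalg.cholesky(ct).T
    return source @ whiten @ recolor

# Train any steganalysis classifier on coral(source_features, target_features)
# instead of the raw source features when the target camera/app is unseen.
```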

    Detecting original image using histogram, DFT and SVM

    Information hiding for covert communication is rapidly gaining momentum. With sophisticated techniques being developed in steganography, steganalysis needs to be universal. In this paper we propose Universal Steganalysis using Histogram, Discrete Fourier Transform and SVM (SHDFT). A stego image has irregular statistical characteristics compared to its cover image. Using the histogram and the DFT, statistical features are generated to train a one-class SVM to discriminate between cover and stego images. The SHDFT algorithm is found to be efficient and fast, since the number of statistical features is small compared to existing algorithms.
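    A minimal sketch of the SHDFT recipe as the abstract describes it: concatenate histogram and DFT-magnitude statistics into a short feature vector and train a one-class SVM on cover images only. The exact feature set below is an assumption, and scikit-learn supplies the classifier.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def shdft_features(img: np.ndarray) -> np.ndarray:
    """img: 2-D uint8 array. Histogram bins plus DFT-magnitude moments."""
    hist, _ = np.histogram(img, bins=32, range=(0, 256), density=True)
    mag = np.abs(np.fft.fft2(img.astype(float)))
    dft_stats = np.array([mag.mean(), mag.std(), np.median(mag)])
    return np.concatenate([hist, dft_stats])

# Train on covers only; at test time, -1 flags statistically irregular (suspect) images.
covers = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(50)]
X = np.stack([shdft_features(c) for c in covers])
clf = OneClassSVM(gamma="scale", nu=0.05).fit(X)
print(clf.predict(X[:5]))  # +1 = cover-like
```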

    Methods of covert communication of speech signals based on a bio-inspired principle

    This work presents two speech hiding methods based on a bio-inspired concept known as the adaptation ability of speech signals. A cryptographic model uses the adaptation to transform a secret message into a non-sensitive target speech signal, so that the scrambled speech signal remains intelligible. The residual intelligibility is extremely low, which makes the scheme appropriate for transmitting secure speech signals. In a steganographic model, on the other hand, the adapted speech signal is hidden in a host signal using indirect or direct substitution. In the first case the scheme is known as Efficient Wavelet Masking (EWM), and in the second case as improved EWM (iEWM). While EWM proved to be highly transparent statistically, iEWM proved to be highly robust against signal manipulations. Finally, in order to transmit secure speech signals in real-time operation, a hardware-based scheme is proposed.
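    EWM and iEWM themselves are not reproduced here; the sketch below only illustrates the generic wavelet-domain substitution family they belong to, hiding a scaled-down secret in the finest detail coefficients of a host signal. PyWavelets is assumed, and all parameters are illustrative.

```python
import numpy as np
import pywt

def hide_in_wavelets(host: np.ndarray, secret: np.ndarray, alpha: float = 0.05):
    coeffs = pywt.wavedec(host, "db4", level=3)
    detail = coeffs[-1]                 # finest-scale details are least audible
    n = min(len(detail), len(secret))
    detail[:n] = alpha * secret[:n]     # direct substitution, scaled down
    return pywt.waverec(coeffs, "db4")

t = np.linspace(0, 1, 4096)
host = np.sin(2 * np.pi * 440 * t)      # stand-in for a host speech signal
secret = np.sin(2 * np.pi * 200 * t)    # stand-in for the adapted secret signal
stego = hide_in_wavelets(host, secret)
```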

    Image statistical frameworks for digital image forensics

    The advances of digital cameras, scanners, printers, image editing tools, smartphones, tablet personal computers, as well as high-speed networks, have made the digital image a conventional medium for visual information. Creation, duplication, distribution, or tampering of such a medium can be easily done, which makes it necessary to be able to trace the authenticity or history of the medium. Digital image forensics is an emerging research area that aims to resolve this problem and has grown in popularity over the past decade. On the other hand, anti-forensics has emerged over the past few years as a relatively new branch of research, aiming at revealing the weaknesses of forensic technology. These two sides of research push digital image forensic technologies to the next higher level. Three major contributions are presented in this dissertation, as follows. First, an effective multi-resolution image statistical framework for digital image forensics of a passive-blind nature is presented in the frequency domain. The image statistical framework is generated by applying the Markovian rake transform to the image luminance component; the Markovian rake transform is the application of a Markov process to difference arrays derived from quantized block discrete cosine transform 2-D arrays with multiple block sizes. The efficacy and universality of the framework are then evaluated in two major applications of digital image forensics: 1) digital image tampering detection; 2) classification of computer graphics and photographic images. Second, a simple yet effective anti-forensic scheme is proposed, capable of obfuscating double JPEG compression artifacts, which may carry vital information for image forensics, for instance digital image tampering detection. The proposed scheme, the shrink-and-zoom (SAZ) attack, is simply based on image resizing and bilinear interpolation. The effectiveness of SAZ has been evaluated on two promising double JPEG compression detection schemes, and the outcome reveals that the proposed scheme is effective, especially in cases where the first quality factor is lower than the second quality factor. Third, an advanced textural image statistical framework in the spatial domain is proposed, utilizing local binary pattern (LBP) schemes to model local image statistics on various kinds of residual images, including higher-order ones. The proposed framework can be implemented in either a single- or multi-resolution setting, depending on the application of interest. The efficacy of the proposed framework is evaluated on two forensic applications: 1) steganalysis, with emphasis on HUGO (Highly Undetectable steGO), an advanced steganographic scheme that embeds hidden data in a content-adaptive manner, locally, into image regions that are difficult to model statistically; 2) image recapture detection (IRD). The outcomes of the evaluations suggest that the proposed framework is effective not only for detecting local changes, in line with the nature of HUGO, but also for detecting global differences (the nature of IRD).
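    For the third contribution, a minimal sketch of the residual-plus-LBP idea: form a first-order difference residual and summarize it with a local binary pattern histogram via scikit-image. The dissertation's residual set and LBP variants are richer than this single example.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def residual_lbp_hist(img: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """img: 2-D uint8 array. Horizontal first-order residual -> LBP histogram."""
    residual = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    lbp = local_binary_pattern(residual, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist  # feature vector for a downstream classifier

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(residual_lbp_hist(img))
```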

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Information hiding, which embeds a watermark/message in a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered an appealing technology to complement conventional cryptographic processes in the field of multimedia security, by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking tries to emphasize the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium has a hidden message in it and, if possible, recovering that hidden message. It can be used to measure the security performance of information hiding techniques, meaning that a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to the Human Visual System (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking into account this trade-off; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models and to introduce a universal 3D steganalytic method under this framework. The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis studies. Chapter 2 surveys previous information hiding techniques for digital images, 3D models and other media, as well as image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of the mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals. Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information. Motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model; the use of PCA makes the watermarking method robust against common 3D operations, such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of hidden messages embedded by one well-known watermarking method. By contrast, the focus of Chapter 7 is on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework which has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models by existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated on five state-of-the-art 3D watermarking/steganographic methods. Moreover, being universal, it can be used as a benchmark for measuring the anti-steganalysis performance of existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes this thesis and suggests some potential directions for future work.
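    A rough sketch of the Chapter 6 ingredients as summarized above: a PCA-derived reference point and a histogram of vertex distances to it. How exactly the chapter derives the point from the principal axes is not stated here, so the offset used below is an assumption for illustration only.

```python
import numpy as np

def pca_distance_histogram(vertices: np.ndarray, bins: int = 64):
    """vertices: (N, 3) array. Returns the distance histogram and reference point."""
    centroid = vertices.mean(axis=0)
    centered = vertices - centroid
    # Principal axes = right singular vectors of the centered vertex cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    ref_point = centroid + vt[0] * centered.std()   # offset along the first axis
    dists = np.linalg.norm(vertices - ref_point, axis=1)
    hist, _ = np.histogram(dists, bins=bins)
    return hist, ref_point

# Rotation, translation and vertex reordering move the centroid and axes
# consistently, so the distance histogram is approximately invariant to them,
# which is the robustness property the chapter exploits.
```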

    Machine learning based digital image forensics and steganalysis

    The security and trustworthiness of digital images have become crucial issues due to the simplicity of malicious processing. Therefore, research on image steganalysis (determining whether a given image has secret information hidden inside) and image forensics (determining the origin and authenticity of a given image and revealing the processing history the image has gone through) has become crucial to the digital society. In this dissertation, the steganalysis and forensics of digital images are treated as pattern classification problems so as to make advanced machine learning (ML) methods applicable. Three topics are covered: (1) architectural design of convolutional neural networks (CNNs) for steganalysis, (2) statistical feature extraction for camera model classification, and (3) real-world tampering detection and localization. For covert communication, steganography is used to embed secret messages into images by altering pixel values slightly. Since advanced steganography alters pixel values in image regions that are hard to detect, traditional ML-based steganalytic methods, which rely heavily on sophisticated manual feature design, have been pushed to their limits. To overcome this difficulty, in-depth studies are conducted and reported in this dissertation so as to carry the success achieved by CNNs in computer vision over to steganalysis. The outcomes achieved and reported in this dissertation are: (1) a proposed CNN architecture incorporating the domain knowledge of steganography and steganalysis, and (2) ensemble methods of CNNs for steganalysis. The proposed CNN is currently one of the best classifiers against steganography. Camera model classification aims at assigning a given image to its source camera model based on the statistics of image pixel values. For this, two types of statistical features are designed to capture the traces left by in-camera image processing algorithms. The first is Markov transition probabilities modeling block-DCT coefficients for JPEG images; the second is based on histograms of local binary patterns obtained in both the spatial and wavelet domains. The designed features serve as input to train support vector machines, which achieved the best classification performance at the time the features were proposed. The last part of this dissertation documents the solutions delivered by the author's team to the First Image Forensics Challenge organized by the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. In the competition, all the fake images involved were doctored by popular image-editing software to simulate the real-world scenario of tampering detection (determining whether a given image has been tampered with) and localization (determining which pixels have been tampered with). In Phase-1 of the Challenge, advanced steganalysis features were successfully migrated to tampering detection. In Phase-2 of the Challenge, an efficient copy-move detector equipped with PatchMatch as a fast approximate nearest-neighbor search method was developed to identify duplicated regions within images. With these tools, the author's team won the runner-up prizes in both phases of the Challenge.
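    As a hedged illustration of "incorporating the domain knowledge of steganography" into a CNN (not the dissertation's actual architecture), the PyTorch sketch below fixes the first layer to the well-known 5x5 KV high-pass kernel, so image content is suppressed and stego noise is emphasized before the learned layers.

```python
import torch
import torch.nn as nn

# The KV high-pass kernel commonly used as a fixed pre-filter in steganalysis CNNs.
KV = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                   [ 2., -6.,   8., -6.,  2.],
                   [-2.,  8., -12.,  8., -2.],
                   [ 2., -6.,   8., -6.,  2.],
                   [-1.,  2.,  -2.,  2., -1.]]) / 12.0

class TinyStegoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hpf = nn.Conv2d(1, 1, 5, padding=2, bias=False)
        self.hpf.weight.data = KV.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False          # fixed, not learned
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, 2)                     # cover vs. stego logits

    def forward(self, x):
        x = self.features(self.hpf(x))
        return self.fc(x.flatten(1))

logits = TinyStegoCNN()(torch.randn(4, 1, 256, 256))
print(logits.shape)  # torch.Size([4, 2])
```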