170 research outputs found

    Using multiple re-embeddings for quantitative steganalysis and image reliability estimation

    Get PDF
    Quantitative steganalysis aims at estimating the amount of payload embedded inside a document. In this paper, JPEG images are considered: using a re-embedding-based methodology, the number of embedding changes originally performed on the image by a stego source can be estimated, with a slight improvement over classical quantitative steganalysis methods. The major advance of this methodology is that it also yields a confidence interval on the estimated payload. This confidence interval in turn makes it possible to evaluate the difficulty of an image, in steganalysis terms, by estimating the reliability of the output. The regression technique is the OP-ELM, and the reliability is estimated using a linear approximation. The methodology is applied with a publicly available stego algorithm, regression model and database of images. It is generic and can be used for any quantitative steganalysis problem of this class.
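    The core ingredients named in the abstract (re-embedding known payloads, a regression model, and a linear approximation yielding a confidence interval) can be illustrated with a small sketch. This is only an assumed reconstruction, not the paper's OP-ELM pipeline: extract_features, embed_random and the pre-trained regressor are hypothetical stand-ins with a scikit-learn-style predict method.

        # Minimal sketch of the re-embedding idea (hypothetical helpers, not the paper's code):
        #   extract_features(img) -> 1-D numpy feature vector of a JPEG image
        #   embed_random(img, n)  -> the image with n additional embedding changes
        import numpy as np

        def estimate_changes_with_interval(img, regressor, extra=(500, 1000, 2000, 4000)):
            """Estimate the original number of embedding changes and a rough 95% interval."""
            added = np.array([0.0, *extra])
            predicted = []
            for n in added:
                probe = embed_random(img, int(n)) if n else img       # re-embed a known payload
                feats = extract_features(probe).reshape(1, -1)
                predicted.append(float(regressor.predict(feats)[0]))  # predicted total changes
            predicted = np.array(predicted)
            # Linear approximation: predicted_total ~ slope * added + original_changes,
            # so the intercept estimates the original payload; the residual spread gives
            # a crude confidence half-width around that estimate.
            slope, original = np.polyfit(added, predicted, 1)
            resid = predicted - (slope * added + original)
            half_width = 1.96 * resid.std(ddof=2) / np.sqrt(len(added))
            return original, (original - half_width, original + half_width)

    A wide interval signals a "difficult" image whose payload estimate should be trusted less, which is the reliability notion the abstract describes.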

    Steganographer Identification

    Full text link
    Conventional steganalysis detects the presence of steganography within single objects. In the real world, we may face a more complex scenario in which one or several of multiple users, called actors, are guilty of using steganography; this is typically defined as the Steganographer Identification Problem (SIP). One might use conventional steganalysis algorithms to separate stego objects from cover objects and then identify the guilty actors. However, the guilty actors may be missed among a number of false alarms. To deal with the SIP, most state-of-the-art methods use unsupervised learning-based approaches. In their solutions, each actor holds multiple digital objects, from which a set of feature vectors can be extracted. Well-defined distances between these feature sets are computed to measure the similarity between the corresponding actors. By applying clustering or outlier detection, the most suspicious actor(s) will be judged as the steganographer(s). Though the SIP needs further study, existing works are able to identify the steganographer(s) well when non-adaptive steganographic embedding has been applied. In this chapter, we present foundational concepts and review advanced methodologies in the SIP. This chapter is self-contained and intended as a tutorial introducing the SIP in the context of media steganography. Comment: a tutorial with 30 pages.
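    The distance-then-outlier-detection pipeline summarized above can be sketched as follows. The linear-kernel maximum mean discrepancy and the median-distance outlier score are illustrative choices of distance and detector, not any specific published SIP method.

        import numpy as np

        def mmd_linear(X, Y):
            """Linear-kernel maximum mean discrepancy between two feature sets."""
            return float(np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0)) ** 2)

        def identify_steganographer(actor_features):
            """Flag the most suspicious actor.

            actor_features: list of (n_images, n_features) arrays, one per actor,
            e.g. steganalysis features extracted from each actor's images.
            """
            k = len(actor_features)
            dist = np.zeros((k, k))
            for i in range(k):
                for j in range(i + 1, k):
                    dist[i, j] = dist[j, i] = mmd_linear(actor_features[i], actor_features[j])
            np.fill_diagonal(dist, np.nan)
            # An actor whose feature set is far from everyone else's is flagged as the outlier.
            scores = np.nanmedian(dist, axis=1)
            return int(np.argmax(scores))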

    Edge-based image steganography

    Get PDF

    Improve Steganalysis by MWM Feature Selection

    Get PDF

    Performance Evaluation of Exponential Discriminant Analysis with Feature Selection for Steganalysis

    Get PDF
    The performance of supervised learning-based steganalysis depends on the choice of both the classifier and the features which represent the image. Features extracted from images may contain irrelevant and redundant components, which makes them inefficient for machine learning. Relevant features not only decrease the time needed to train a classifier but also provide better generalisation. A linear discriminant classifier, which is commonly used for classification, may not classify non-linearly separable data well. Recently, exponential discriminant analysis (EDA), a variant of linear discriminant analysis (LDA), has been proposed; it transforms the scatter matrices into a new space by a distance diffusion mapping. This gives EDA much more discriminant power to classify non-linearly separable data and helps improve classification accuracy in comparison to LDA. In this paper, the performance of EDA in conjunction with feature selection methods is investigated. For feature selection, Kullback divergence, Chernoff distance and linear regression measures are used to determine relevant features from higher-order statistics of images. The performance is evaluated in terms of classification error and computation time. Experimental results show that exponential discriminant analysis in conjunction with linear regression performs significantly better in terms of both classification error and the computation time of training the classifier. Defence Science Journal, 2012, 62(1), pp. 19-24. DOI: http://dx.doi.org/10.14429/dsj.62.143
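    To make the "distance diffusion mapping" concrete, here is a hedged sketch of EDA as the abstract describes it: the ordinary LDA scatter matrices are passed through the matrix exponential before the usual generalized eigendecomposition. It is an illustration of the published EDA formulation, not the exact code evaluated in the paper.

        import numpy as np
        from scipy.linalg import eigh, expm

        def eda_fit(X, y, n_components=1):
            """Projection matrix for exponential discriminant analysis.

            X: (n_samples, n_features) steganalysis features, y: class labels.
            Features should be scaled so the matrix exponentials stay numerically tractable.
            """
            d = X.shape[1]
            mean_all = X.mean(axis=0)
            Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
            for c in np.unique(y):
                Xc = X[y == c]
                mc = Xc.mean(axis=0)
                Sw += (Xc - mc).T @ (Xc - mc)                            # within-class scatter
                Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)   # between-class scatter
            # Distance diffusion mapping: exponentiate both scatter matrices, then solve
            # the same generalized eigenproblem as LDA.
            vals, vecs = eigh(expm(Sb), expm(Sw))
            order = np.argsort(vals)[::-1]
            return vecs[:, order[:n_components]]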

    Hunting wild stego images, a domain adaptation problem in digital image forensics

    Get PDF
    Digital image forensics is a field encompassing camera identification, forgery detection and steganalysis. Statistical modeling and machine learning have been successfully applied in the academic community of this maturing field. Still, large gaps exist between academic results and the applications used by practicing forensic analysts, especially when the target samples are drawn from a different population than the data in a reference database. This thesis contains four published papers aimed at narrowing this gap in three different areas: mobile stego app detection, digital image steganalysis and camera identification. It is the first work to explore a way of extending academic methods to real-world images created by apps. New ideas and methods are developed for target images with very rich flexibility in embedding rates, embedding algorithms, exposure settings and camera sources. The experimental results show that the proposed methods work well, even for devices that are not included in the reference database.

    Building a dataset for image steganography

    Get PDF
    Image steganography and steganalysis techniques discussed in the literature rely on datasets created from cover images obtained from the public domain, acquired from Internet sources, or collected manually. This often leads to challenges in validating, benchmarking, and reproducing reported techniques in a consistent manner. It is our view that the steganography/steganalysis research community would benefit from the availability of common datasets, thus promoting transparency and academic integrity. In this research, we consider four aspects in building a dataset for image steganography: image acquisition, pre-processing, steganographic techniques, and embedding rate.
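    As a stand-in illustration of those four aspects, the sketch below acquires covers from a local directory, applies simple pre-processing, embeds with a basic ±1 LSB-matching simulator as a placeholder for whatever steganographic techniques a dataset actually targets, and varies the embedding rate. Paths, image format, sizes and rates are all hypothetical, not the paper's choices.

        from pathlib import Path
        import numpy as np
        from PIL import Image

        def lsb_matching(pixels, rate, rng):
            """Simulate +/-1 LSB matching at `rate` bits per pixel with a random payload."""
            stego = pixels.astype(np.int16)
            mask = rng.random(stego.shape) < rate          # pixels that carry a payload bit
            flip = mask & (rng.random(stego.shape) < 0.5)  # roughly half of them need a change
            stego[flip] += rng.choice([-1, 1], size=int(flip.sum()))
            return np.clip(stego, 0, 255).astype(np.uint8)

        def build_dataset(cover_dir, out_dir, rates=(0.1, 0.2, 0.4), size=(512, 512), seed=0):
            rng = np.random.default_rng(seed)
            out_dir = Path(out_dir)
            for path in sorted(Path(cover_dir).glob("*.png")):        # acquisition
                cover = np.array(Image.open(path).convert("L").resize(size))  # pre-processing
                (out_dir / "cover").mkdir(parents=True, exist_ok=True)
                Image.fromarray(cover).save(out_dir / "cover" / path.name)
                for rate in rates:                                     # embedding rates
                    sub = out_dir / f"stego_{rate:.1f}"
                    sub.mkdir(parents=True, exist_ok=True)
                    Image.fromarray(lsb_matching(cover, rate, rng)).save(sub / path.name)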

    Information similarity metrics in information security and forensics

    Get PDF
    We study two information similarity measures, relative entropy and the similarity metric, and methods for estimating them. Relative entropy can be readily estimated with existing compression-based algorithms. The similarity metric, based on algorithmic complexity, proves more difficult to estimate because algorithmic complexity itself is not computable. We again turn to compression for estimating the similarity metric. Previous studies rely on the compression ratio as an indicator for choosing compressors to estimate the similarity metric; this assumption, however, is fundamentally flawed. We propose a new method to benchmark compressors for estimating the similarity metric. To demonstrate its use, we propose to quantify the security of a stegosystem using the similarity metric. Unlike other measures of steganographic security, the similarity metric is not only a true distance metric, but is also universal in the sense that it is asymptotically minimal among all computable metrics between two objects; it therefore accounts for all similarities between two objects. In contrast, relative entropy, a widely accepted steganographic security definition, only takes into consideration the statistical similarity between two random variables. As an application, we present a general method for benchmarking stegosystems. The method is general in the sense that it is not restricted to any covertext medium and can therefore be applied to a wide range of stegosystems. For demonstration, we analyze several image stegosystems using the newly proposed similarity metric as the security metric. The results show the true security limits of stegosystems regardless of the chosen security metric or the existence of steganalysis detectors. In other words, this makes it possible to show that a stegosystem with a large similarity metric is inherently insecure, even if it has not yet been broken.
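    The compression-based estimate of the similarity metric alluded to here is commonly computed as the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. A minimal sketch with zlib as the compressor follows; the thesis's point is precisely that the compressor must be benchmarked rather than chosen by compression ratio alone.

        import zlib

        def c(data: bytes) -> int:
            """Compressed length, used as a stand-in for Kolmogorov complexity."""
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance between two byte strings."""
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Example usage: a cover file and its stego version should yield a small NCD
        # if embedding left the two objects algorithmically similar.
        # ncd(open("cover.pgm", "rb").read(), open("stego.pgm", "rb").read())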