155 research outputs found

    Statistical Detection of LSB Matching Using Hypothesis Testing Theory

    This paper investigates the detection of information hidden by the Least Significant Bit (LSB) matching scheme. In a theoretical context where the image parameters are known, two important results are presented. First, hypothesis testing theory allows us to design the Most Powerful (MP) test. Second, a study of the MP test lets us calculate its statistical performance analytically in order to guarantee a prescribed false-alarm probability. In practice, when detecting LSB matching, the unknown image parameters have to be estimated. Based on the local estimator used in the Weighted Stego-image (WS) detector, a practical test is presented. A numerical comparison with state-of-the-art detectors shows the good performance of the proposed tests and highlights the relevance of the proposed methodology.
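    The practical test can be illustrated with a toy, hedged sketch (illustrative only, not the paper's MP test): in the spirit of the WS detector, a local estimator predicts each pixel from its neighbours, and the energy of the prediction residual rises once the ±1 noise of LSB matching is added. The smooth 1-D "cover", the embedding rate, and the plain averaging estimator are all assumptions for the demo.

```python
# Toy illustration of WS-style residual detection of LSB matching.
# Not the paper's MP test: cover model, rate and estimator are assumed.
import math
import random

def ws_statistic(pixels):
    """Mean squared residual between each pixel and its local estimate."""
    s = 0.0
    for i in range(1, len(pixels) - 1):
        est = (pixels[i - 1] + pixels[i + 1]) / 2.0   # local estimator
        s += (pixels[i] - est) ** 2
    return s / (len(pixels) - 2)

def lsb_match(pixels, rate, rng):
    """LSB matching: with probability `rate`, add or subtract 1."""
    out = []
    for p in pixels:
        if rng.random() < rate:
            p = max(0, min(255, p + rng.choice((-1, 1))))
        out.append(p)
    return out

rng = random.Random(0)
cover = [128 + int(5 * math.sin(i / 3)) for i in range(2000)]  # smooth cover
stego = lsb_match(cover, rate=1.0, rng=rng)
# embedding raises the residual energy, which a threshold test can detect
```

    Calibrating a threshold on this statistic with a guaranteed false-alarm probability is exactly what requires the statistical machinery of the paper; here only the qualitative effect is shown.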

    LSB steganography with improved embedding efficiency and undetectability


    An information theoretic image steganalysis for LSB steganography

    Steganography hides data within a media file in an imperceptible way; steganalysis exposes steganography by applying detection measures. Traditionally, steganalysis has revealed steganography by targeting perceptible and statistical properties, which in turn drives the development of more secure steganography schemes. In this work, we target LSB image steganography using entropy and joint entropy metrics for steganalysis. First, the embedded image is processed for feature extraction and analyzed through its entropy and its joint entropy with the corresponding original image. Second, SVM and ensemble classifiers are trained on the analysis results; the classifiers' decisions discriminate cover images from stego images. The scheme is further applied to attacked stego images to check detection reliability. Performance evaluation is conducted over grayscale image datasets, comparing the information gain from the entropy and joint entropy metrics. The results show that the entropy of a suspected image is better preserved by embedding than its joint entropy: before a histogram attack, the detection rate is 70% with the entropy metric and 98% with the joint entropy metric; after the attack, the entropy metric drops to a 30% detection rate while the joint entropy metric still achieves 93%. Joint entropy therefore proves to be the better steganalysis measure, with 93% detection accuracy and fewer false alarms under varying hiding ratios.
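    A minimal sketch of the two features (the 2-bit toy pixels and function names are illustrative, not from the paper): the marginal entropy can be identical for cover and stego images, while the joint entropy of the (cover, stego) pixel pairs exposes the embedding changes.

```python
# Shannon entropy and joint entropy as steganalysis features (toy demo).
from collections import Counter
import math

def entropy(values):
    """Shannon entropy in bits of a sequence of symbols."""
    n = len(values)
    counts = Counter(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def joint_entropy(a, b):
    """Shannon entropy in bits of the paired symbols (a_i, b_i)."""
    return entropy(list(zip(a, b)))

cover = [0, 0, 1, 1, 2, 2, 3, 3]
stego = [0, 1, 1, 0, 2, 3, 3, 2]       # LSBs flipped in half the pixels
# entropy(cover) == entropy(stego) == 2.0 bits: the marginal is preserved
# joint_entropy(cover, stego) == 3.0 bits: the pairing reveals the changes
```

    This mirrors the abstract's finding: the suspected image's own entropy is largely preserved by embedding, so the joint statistic with the cover is the more discriminative feature.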

    The role of side information in steganography

    The goal of digital steganography is to hide a secret communication in digital media; the common approach is to hide the secret message in empirical cover objects. We are the first to define Steganographic Side Information (SSI); our definition captures all relevant properties of SSI, is justified information-theoretically, and explains its common usage. All recent steganographic schemes use SSI to identify suitable areas for the embedding change. We develop a targeted attack on four widely used variants of SSI and show that our attack detects them almost perfectly. We argue that the steganographic competition must be framed in terms of game theory. We present a game-theoretical framework that captures all relevant properties of such a steganographic system, instantiate it with five different models, and solve each model for its game-theoretically optimal strategies, which a steganographer should follow. Inspired by our solutions, we give a new paradigm for secure adaptive steganography, the so-called equalizer embedding strategies.
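    The game-theoretic view can be sketched with a generic 2x2 zero-sum toy (an illustration, not one of the thesis' five models): the steganographer chooses where to embed, the detector chooses where to look, and the equilibrium mixes both options, which is why deterministic side-information rules are attackable.

```python
# Mixed equilibrium of a 2x2 zero-sum game (no saddle point assumed).
def solve_2x2_zero_sum(A):
    """A[i][j] is the row player's loss (here: detection probability).
    Returns (p, q, value): row plays row 0 with prob p, column plays
    column 0 with prob q, value is the equilibrium detection probability."""
    (a, b), (c, d) = A
    den = a - b - c + d
    p = (d - c) / den
    q = (d - b) / den
    value = (a * d - b * c) / den
    return p, q, value

# Illustrative payoffs: the detector catches embedding in a textured
# region rarely (0.1) but in a flat region often (0.9).
A = [[0.1, 0.5],   # steganographer embeds textured; detector checks (textured, flat)
     [0.9, 0.2]]   # steganographer embeds flat
p, q, v = solve_2x2_zero_sum(A)
# at equilibrium the detector is indifferent between its two columns
```

    Both players randomize at the optimum; a steganographer who always embeds in the "safe" textured region would be detected with probability 0.5 instead of the equilibrium value v ≈ 0.39.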

    Theoretical model of the FLD ensemble classifier based on hypothesis testing theory

    The FLD ensemble classifier is a widely used machine learning tool for steganalysis of digital media, owing to its efficiency on high-dimensional feature sets. This paper explains how the classifier can be formulated within the framework of optimal detection by using an accurate statistical model of the base learners' projections together with hypothesis testing theory. A substantial advantage of this formulation is the ability to establish the test's properties theoretically, including its false-alarm probability and power, and the flexibility to use criteria of optimality other than the conventional total probability of error. Numerical results on real images show the sharpness of the theoretically established results and the relevance of the proposed methodology.
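    For readers unfamiliar with the classifier itself, here is a minimal sketch of the FLD ensemble idea (an assumed, simplified interface, not the authors' implementation): each base learner is a Fisher Linear Discriminant trained on a random subset of the features, and the ensemble decides by majority vote; the synthetic Gaussian "cover" and "stego" features are stand-ins for real steganalysis features.

```python
# Sketch of an FLD ensemble: random feature subspaces + majority vote.
import numpy as np

rng = np.random.default_rng(0)

def fld_direction(X0, X1, reg=1e-6):
    """Fisher discriminant direction for two classes (regularized)."""
    Sw = np.cov(X0.T) + np.cov(X1.T) + reg * np.eye(X0.shape[1])
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

def train_ensemble(X0, X1, n_learners=21, d_sub=5):
    learners = []
    for _ in range(n_learners):
        idx = rng.choice(X0.shape[1], d_sub, replace=False)  # random subspace
        w = fld_direction(X0[:, idx], X1[:, idx])
        # threshold halfway between the projected class means
        thr = 0.5 * (X0[:, idx] @ w).mean() + 0.5 * (X1[:, idx] @ w).mean()
        learners.append((idx, w, thr))
    return learners

def predict(learners, x):
    votes = sum((x[idx] @ w) > thr for idx, w, thr in learners)
    return int(votes > len(learners) / 2)    # majority vote

# synthetic cover (class 0) vs. stego (class 1) feature vectors
X0 = rng.normal(0.0, 1.0, (400, 20))
X1 = rng.normal(0.6, 1.0, (400, 20))
ens = train_ensemble(X0, X1)
```

    The paper's contribution is precisely to model the distribution of the base learners' projections so that the vote threshold can be set for a target false-alarm probability instead of the ad hoc midpoint used here.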

    Noise and chaos contributions in fast random bit sequence generated from broadband optoelectronic entropy sources

    During the last four years, chaotic waveforms for random number generation have attracted deep interest within the community of analogue broadband chaotic optical systems. Earlier investigations of chaos-based RNG were proposed in the 1990s and early 2000s, but were mainly based on piecewise linear (PL) 1D maps, with bit rates determined by the analogue electronic processing capabilities needed to realize the PL nonlinear function. Optical chaos came with promises of much higher bit rates, and of entropy sources based on high-complexity (high-dimensional) continuous-time (differential) dynamics. More specifically, in 2009 Reidler et al. published a paper entitled "An optical ultrafast random bit generator", presenting a physical random number generator based on a chaotic semiconductor laser and claimed to potentially reach the extremely high rate of 300 Gb/s. We report on analysis and experiments of their method, leading to a discussion of the actual origin of the obtained randomness. Through standard signal theory arguments, we show that the binary randomness obtained from this method can be interpreted as a complex mixing operated on the initial analogue entropy source. Our analysis suggests an explanation for the already reported finding that the method does not necessarily require any specific deterministic property (i.e., chaos) from the physical signal used as the entropy source. The quality of the bit stream's randomness is found to result from "aliasing" phenomena introduced by the post-processing, in both the sampling and the quantization operations. As an illustration, random bit sequences extracted from different entropy sources are investigated: optoelectronic noise is used as a non-deterministic entropy source, while an electro-optic phase chaotic signal, as well as simulations of its deterministic model, are used as deterministic entropy sources. In all cases, the extracted bit sequence exhibits excellent randomness.
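    The aliasing argument can be sketched numerically (all parameters are illustrative, not from the cited works): undersampling a fast carrier and keeping only the least significant bits of the ADC code folds many analogue levels onto each bit pattern, so even a largely deterministic waveform with a little added noise yields balanced-looking bits.

```python
# Sampling + LSB-quantization "aliasing" as a randomness extractor (toy).
import math
import random

def extract_bits(samples, adc_bits=8, keep_lsb=2):
    """Quantize to adc_bits on [-1, 1] and keep the keep_lsb lowest bits."""
    bits = []
    for s in samples:
        code = int((s + 1.0) / 2.0 * (2 ** adc_bits - 1))   # ideal ADC
        for k in range(keep_lsb):
            bits.append((code >> k) & 1)
    return bits

rng = random.Random(1)
# fast deterministic carrier, heavily undersampled, plus small noise
samples = [math.sin(12345.6 * n) * 0.9 + rng.gauss(0, 0.02) for n in range(4000)]
samples = [max(-1.0, min(1.0, s)) for s in samples]
bits = extract_bits(samples)
ones = sum(bits) / len(bits)     # fraction of ones in the output stream
```

    The fraction of ones lands near 1/2 even though the carrier is deterministic, which is the paper's point: the post-processing, not chaos per se, is responsible for much of the observed randomness quality.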

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Information hiding, which embeds a watermark or message in a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It is widely considered an appealing technology to complement conventional cryptographic processes in multimedia security, embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking emphasizes the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium carries a hidden message and, if possible, at recovering that message. It can be used to measure the security of information hiding techniques: a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to Human Vision Systems (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms that take this trade-off into account; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting hidden information in 3D models and to introduce a universal 3D steganalytic method under this framework.
    The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis studies. Chapter 2 surveys previous information hiding techniques for digital images, 3D models and other media, as well as image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of the mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals: Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, Chapter 4 also presents a high-capacity 3D steganographic algorithm capable of controlling embedding distortion. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information; motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models based on modifying the histogram of the distances from the model vertices to a point in 3D space; that point is determined by applying Principal Component Analysis (PCA) to the cover model, which makes the watermarking method robust against common 3D operations such as rotation, translation and vertex reordering. Chapter 6 also develops a 3D-specific steganalytic algorithm to detect the messages embedded by one well-known watermarking method. By contrast, Chapter 7 focuses on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework originally developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect messages hidden in 3D models by existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm is evaluated on five state-of-the-art 3D watermarking/steganographic methods; moreover, being universal, it can serve as a benchmark for measuring the anti-steganalysis performance of existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes the thesis and suggests potential directions for future work.
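    As a toy sketch of coordinate-domain embedding in a mesh (a generic quantization scheme chosen for illustration; the thesis' algorithms are more sophisticated and explicitly control normal/curvature distortion), one bit can be hidden in each vertex's x-coordinate by rounding it to an even or odd multiple of a quantization step delta, which also bounds the spatial distortion by about 1.5*delta per vertex.

```python
# Parity quantization of vertex coordinates: 1 bit per vertex (toy demo).
def embed_bit(x, bit, delta=1e-4):
    """Round x to a multiple of delta whose parity encodes `bit`."""
    q = round(x / delta)
    if q % 2 != bit:
        q += 1
    return q * delta

def extract_bit(x, delta=1e-4):
    return round(x / delta) % 2

vertices = [(0.123456, 0.2, 0.3), (0.654321, 0.1, 0.9), (0.111111, 0.5, 0.4)]
message = [1, 0, 1]
stego = [(embed_bit(v[0], b), v[1], v[2]) for v, b in zip(vertices, message)]
recovered = [extract_bit(v[0]) for v in stego]
```

    The trade-off studied in Chapters 3 and 4 shows up even here: a larger delta gives a more robust embedding but a larger perturbation of vertex positions, and hence of normals and curvature.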

    Non-Stationary Process Monitoring for Change-Point Detection With Known Accuracy: Application to Wheels Coating Inspection

    This paper addresses the problem of monitoring a non-stationary process online to detect abrupt changes in the process mean value. Two main challenges are addressed. First, the monitored process is non-stationary, i.e., it naturally changes over time, and it is necessary to distinguish those "regular" process changes from abrupt changes resulting from potential failures. Second, the paper aims at industrial applications in which the performance of the detection method must be accurately controlled. A novel sequential method, based on two fixed-length windows, is proposed to detect abrupt changes with guaranteed accuracy while dealing with the non-stationary process: the first window is used to estimate the non-stationary process parameters, whereas the second window is used to execute the detection. A study of the proposed method's performance provides analytical expressions for the test's statistical properties, making it possible to bound the false-alarm probability for a given number of observations while maximizing the detection power as a function of a given detection delay. The method is then applied to wheel-coating monitoring using an imaging system. Numerical results on a large set of wheel images show the efficiency of the proposed approach and the sharpness of the theoretical study.
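    A schematic version of the two-window idea can be sketched as follows (assumed details: the "regular" non-stationarity is modelled as a local linear trend, and the threshold stands in for the paper's calibrated bound): window 1 fits the trend by least squares, and window 2 tests whether the mean has jumped away from the extrapolated trend.

```python
# Two fixed-length windows: estimate the trend, then test for a jump.
import statistics

def detect_change(x, w1=80, w2=20, threshold=3.0):
    est = x[-(w1 + w2):-w2]                    # estimation window
    n = len(est)
    t = list(range(n))
    tbar = statistics.mean(t)
    xbar = statistics.mean(est)
    b = sum((ti - tbar) * xi for ti, xi in zip(t, est)) \
        / sum((ti - tbar) ** 2 for ti in t)    # least-squares slope
    a = xbar - b * tbar                        # least-squares intercept
    sigma = statistics.stdev(xi - (a + b * ti) for ti, xi in zip(t, est))
    # detection window: standardized mean residual vs. extrapolated trend
    resid = [x[-w2 + i] - (a + b * (n + i)) for i in range(w2)]
    z = statistics.mean(resid) / (sigma / w2 ** 0.5)
    return abs(z) > threshold

# drifting process without / with an abrupt mean change at sample 100
base = [0.05 * i + 0.01 * (-1) ** i for i in range(120)]
fault = base[:100] + [v + 1.0 for v in base[100:]]
```

    The paper's contribution is to derive the distribution of a statistic of this kind analytically, so that `threshold` can be set to meet a prescribed false-alarm probability rather than chosen ad hoc.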

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book collecting peer-reviewed research on advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics should find it worthwhile reading.

    Statistical decision methods in the presence of linear nuisance parameters and despite imaging system heteroscedastic noise: Application to wheel surface inspection

    This paper proposes a novel method for fully automatic anomaly detection on objects inspected with an imaging system. In order to address the inspection of a wide range of objects and to allow the detection of any anomaly, an original adaptive linear parametric model is proposed; the great flexibility of this adaptive model offers high accuracy for a wide range of complex surfaces while preserving the detection of small defects. In addition, because the proposed model remains linear, hypothesis testing theory can be applied to design a test whose statistical performance is analytically known. Another important novelty of this paper is that it takes into account the specific heteroscedastic noise of imaging systems: in such systems, the noise level depends on the pixels' intensity, which must be carefully accounted for to guarantee the test's statistical properties. The proposed detection method is then applied to wheel surface inspection using an imaging system. Due to the nature of the wheels, the different elements are analyzed separately. Numerical results on a large set of real images show both the accuracy of the proposed adaptive model and the sharpness of the ensuing statistical test.
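    The two ingredients can be sketched in a hedged 1-D toy (a straight-line background stands in for the paper's adaptive linear model, and the variance law `var = A + B * intensity` is the standard raw-sensor noise model, assumed here): the background mean is fitted as a linear nuisance, and residuals are normalized by the pixel-wise standard deviation before thresholding, so bright regions do not trigger false alarms.

```python
# Anomaly detection with a linear nuisance model and heteroscedastic noise.
import random

A_NOISE, B_NOISE = 2.0, 0.04       # var(pixel) = A + B * intensity (assumed)

def detect_anomalies(pixels, threshold=4.5):
    """Fit a linear background mean, then flag pixels whose normalized
    residual exceeds the threshold."""
    n = len(pixels)
    tbar = (n - 1) / 2
    xbar = sum(pixels) / n
    b = sum((i - tbar) * p for i, p in enumerate(pixels)) \
        / sum((i - tbar) ** 2 for i in range(n))
    a = xbar - b * tbar            # nuisance parameters of the background
    flags = []
    for i, p in enumerate(pixels):
        mu = a + b * i
        sigma = (A_NOISE + B_NOISE * max(mu, 0.0)) ** 0.5  # intensity-dependent
        flags.append(abs(p - mu) / sigma > threshold)
    return flags

rng = random.Random(3)
clean = [50 + 0.5 * i + rng.gauss(0, (2.0 + 0.04 * (50 + 0.5 * i)) ** 0.5)
         for i in range(200)]
defect = list(clean)
defect[120] += 30                  # small local anomaly on the surface
```

    Normalizing by the intensity-dependent sigma is the point of the heteroscedastic model: with a single global noise level, the threshold would either miss small defects in dark regions or fire spuriously in bright ones.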