53 research outputs found

    Computational intelligence-based steganalysis comparison for RCM-DWT and PVA-MOD methods

    This research article proposes a data hiding technique that improves the data hiding procedure and secures data transmission with the help of a contrast mapping technique along with the advanced data encryption standard. High data hiding capacity, image quality, and security are the key measures of steganography. Of these three measures, the number of bits that can be hidden in a single cover pixel, i.e., bits per pixel (bpp), is particularly important, and many researchers are working to improve it. We propose an improved high-capacity data hiding method that maintains acceptable image quality (above 30 dB) and achieves an embedding capacity higher than that of methods proposed in recent years. The proposed method uses a notational system, achieves a high embedding rate of 4 bpp, and maintains good visual quality. To measure the efficiency of the proposed information hiding methodology, a simulation system was developed that includes some of the impairments caused by a communication system. PSNR (Peak Signal-to-Noise Ratio) is used to verify the robustness of the images, and the proposed work is further verified through noise analysis. To evaluate the defending performance under attack, RS steganalysis is used.
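
    PSNR, the fidelity measure cited above, is a standard quantity; the following minimal Python sketch shows how it is typically computed for 8-bit images (the function name and array handling are illustrative, not taken from the paper).

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray) -> float:
    """Peak Signal-to-Noise Ratio between two 8-bit images (higher is better)."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    max_pixel = 255.0
    return 10.0 * np.log10(max_pixel ** 2 / mse)

# A stego image is usually considered of acceptable quality when PSNR exceeds ~30 dB.
```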

    An Analysis of Perturbed Quantization Steganography in the Spatial Domain

    Steganography is a form of secret communication in which a message is hidden in a harmless cover object, concealing the very existence of the message. Due to the potential for abuse by criminals and terrorists, much research has also gone into the field of steganalysis, the art of detecting and deciphering a hidden message. As novel steganographic hiding algorithms become publicly known, researchers exploit these methods by finding statistical irregularities between clean digital images and images containing hidden data. This creates an ongoing race between the two fields and requires constant countermeasures on the part of steganographers in order to maintain truly covert communication. This research effort extends previous work on perturbed quantization (PQ) steganography by examining its applicability to the spatial domain. Several different information-reducing transformations are implemented along with the PQ system to study their effect on the security of the system as well as on its steganographic capacity. Additionally, a new statistical attack is formulated for detecting ±1 embedding techniques in color images. Results from performing state-of-the-art steganalysis reveal that the system is less detectable than comparable hiding methods: grayscale images embedded with message payloads of 0.4 bpp are detected only 9% more accurately than by random guessing, and color images embedded with payloads of 0.2 bpp are detected only 6% more reliably than by random guessing.
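
    The ±1 embedding targeted by the new statistical attack is commonly realised as LSB matching; the Python sketch below is a generic illustration of that idea with assumed function and parameter names, not the thesis's PQ system.

```python
import numpy as np

def lsb_matching_embed(cover: np.ndarray, bits: np.ndarray, seed: int = 0) -> np.ndarray:
    """Embed a bit-stream by +/-1 changes: a pixel is altered only when its LSB
    disagrees with the message bit, and the direction (+1 or -1) is random."""
    rng = np.random.default_rng(seed)
    stego = cover.astype(np.int16).flatten()
    for i, bit in enumerate(bits):
        if stego[i] % 2 != bit:
            step = rng.choice([-1, 1])
            # keep the value inside the valid 8-bit range
            if stego[i] == 0:
                step = 1
            elif stego[i] == 255:
                step = -1
            stego[i] += step
    return stego.clip(0, 255).astype(np.uint8).reshape(cover.shape)
```

    A payload of 0.4 bpp, as reported above, corresponds to a bit-stream whose length is 40% of the number of cover pixels.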

    Hunting wild stego images, a domain adaptation problem in digital image forensics

    Digital image forensics is a field encompassing camera identification, forgery detection and steganalysis. Statistical modeling and machine learning have been successfully applied in the academic community of this maturing field. Still, large gaps exist between academic results and the applications used by practicing forensic analysts, especially when the target samples are drawn from a different population than the data in a reference database. This thesis contains four published papers aimed at narrowing this gap in three different areas: mobile stego-app detection, digital image steganalysis and camera identification. It is the first work to explore a way of extending academic methods to real-world images created by apps. New ideas and methods are developed for target images with wide variation in embedding rates, embedding algorithms, exposure settings and camera sources. The experimental results show that the proposed methods work well, even for devices not included in the reference database.

    Receiver Operating Characteristic (ROC) Graph to Determine the Most Suitable Pairs Analysis Threshold Value

    Steganography is the art of hiding the information that is to be sent from one party to another. Information can be hidden in images, text, audio or video. Steganography allows communication to take place without anyone other than the intended party noticing that a message is being transmitted. This paper explains the use of the receiver operating characteristic (ROC) graph to address the misclassification of images into stegogramme and non-stegogramme classes by the Pairs Analysis detection technique. The threshold value that discriminates between the two classes is identified so as to reduce the false negative (FN) rate.
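
    As a rough illustration of the procedure described above, the following Python sketch (scikit-learn assumed available; the labels, detector scores and acceptable FN rate are placeholders) derives candidate thresholds from an ROC curve and picks one that keeps the false-negative rate low.

```python
import numpy as np
from sklearn.metrics import roc_curve

# labels: 1 = stegogramme, 0 = non-stegogramme; scores: Pairs Analysis statistic
labels = np.array([0, 0, 0, 1, 1, 1, 0, 1])
scores = np.array([0.05, 0.10, 0.20, 0.35, 0.40, 0.60, 0.15, 0.55])

fpr, tpr, thresholds = roc_curve(labels, scores)
fnr = 1.0 - tpr  # false-negative rate at each candidate threshold

# pick the largest threshold whose FN rate is still within the acceptable level
acceptable_fn = 0.10
candidates = thresholds[fnr <= acceptable_fn]
best_threshold = candidates.max() if candidates.size else thresholds.min()
print(f"chosen threshold: {best_threshold:.2f}")
```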

    Classifiers and machine learning techniques for image processing and computer vision

    Advisor: Siome Klein Goldenstein. Doctoral thesis (Doctor in Computer Science, Computer Engineering programme), Universidade Estadual de Campinas, Instituto de Computação. In this work, we propose the use of classifiers and machine learning techniques to extract useful information from data sets (e.g., images) in order to solve important problems in Image Processing and Computer Vision. We are particularly interested in: two- and multi-class image categorization, hidden message detection, discrimination between natural and digitally forged images, authentication, and multi-classification. To start with, we present a comparative and critical survey of the state of the art in digital image forensics and hidden message detection. Our objective is to show the potential of the existing solutions and, more importantly, to point out their limitations. In this study, we show that most problems in this area come down to two common issues in Machine Learning: which features to select and which learning techniques to use. Furthermore, we discuss the legal and ethical aspects of image forensics, such as the use of digital photographs by criminals. We then introduce a technique for image forensic analysis, tested in the context of hidden message detection and of image classification into categories such as indoors, outdoors, computer generated, and artworks. This multi-class classification raises some important questions: how do we solve a multi-class problem so as to combine, for instance, image classification features based on color, texture, shape, and silhouette without worrying too much about the pre-processing and normalization of the combined feature vector? How do we take advantage of several different classifiers, each one specialized in and best configured for a specific set of features or classes in confusion? To cope with these problems, we present a feature and classifier fusion technique for the multi-class scenario based on combinations of binary classifiers. We validate our approach with a real application for the automatic classification of fruits and vegetables. Finally, we address another interesting problem: how can powerful binary classifiers be used in the multi-class context more efficiently and effectively? In this context, we present a technique for combining binary (base) classifiers to solve problems in the general multi-classification setting.
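
    The idea of combining binary (base) classifiers into a multi-class decision can be illustrated with scikit-learn's one-vs-one wrapper; the dataset and base classifier below are placeholders rather than the thesis's actual feature and classifier fusion scheme.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

# placeholder features/labels standing in for e.g. colour, texture and shape descriptors
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# one binary base classifier per pair of classes; their votes are combined
# into a single multi-class decision
model = OneVsOneClassifier(LinearSVC(dual=False)).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```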

    Adaptive spatial image steganography and steganalysis using perceptual modelling and machine learning

    Image steganography is a method for communicating secret messages under cover images: a sender embeds the secret messages into a cover image according to an algorithm and sends the resulting image to the receiver, who extracts the secret messages with the predefined algorithm. To counter this kind of technique, image steganalysis aims to detect the presence of secret messages. After many years of development, current image steganography uses adaptive algorithms for embedding the secrets, which automatically find complex areas in the cover source to avoid being noticed. Meanwhile, image steganalysis has advanced to universal steganalysis, which does not require knowledge of the steganographic algorithm. With the development of computational hardware, i.e., Graphics Processing Units (GPUs), some computationally expensive techniques have become practical, i.e., Convolutional Neural Networks (CNNs), which bring a large improvement to detection tasks in image steganalysis. To defend against such attacks, new techniques are also being developed to improve the security of image steganography; these include designing more scientific cost functions, the key component of adaptive steganography, and generating stego images using knowledge of the CNNs. Several contributions are made to both image steganography and steganalysis in this thesis. Firstly, inspired by the Ranking Priority Profile (RPP), a new cost function for adaptive image steganography is proposed that uses two-dimensional Singular Spectrum Analysis (2D-SSA) and the Weighted Median Filter (WMF) in its design. The RPP comprises three rules for designing a cost function, i.e., the Complexity-First rule, the Clustering rule and the Spreading rule. The 2D-SSA is employed in selecting the key components and clustering the embedding positions, following the Complexity-First rule and the Clustering rule, while the Spreading rule is followed by smoothing the image produced by 2D-SSA with the WMF. The proposed algorithm has improved performance over four benchmarking approaches against non-shared selection channel attacks, provides comparable performance in selection-channel-aware scenarios, with the best results observed when the relative payload is 0.3 bpp or larger, and is much faster than other model-based methods. Secondly, for image steganalysis, to tackle more complex datasets that are closer to real scenarios and to push image steganalysis further towards real-life applications, an Enhanced Residual Network with self-attention ability, ERANet, is proposed. By employing a more mathematically sophisticated way of extracting effective features from the images together with a global self-attention technique, ERANet can capture the stego signal in the deeper layers, making it suitable for the more complex situations in the new datasets. The proposed Enhanced Low-Level Feature Representation Module can easily be mounted on other CNNs for selecting the most representative features. Although it comes with a slight extra computational cost, comprehensive experiments on the BOSSbase and ALASKA#2 datasets have demonstrated the effectiveness of the proposed methodology. Lastly, for image steganography, using knowledge from the CNNs, a novel post-cost-optimization algorithm is proposed. Without modifying the original stego image or the original cost function of the steganography, and without training a Generative Adversarial Network (GAN), the proposed method uses the gradient maps from a well-trained CNN to represent the cost, while the original cost map of the steganography indicates the embedding positions. The method smooths the gradient maps before adjusting the cost, which solves the boundary problem of CNNs having multiple subnets. Extensive experiments validate the effectiveness of the proposed method, which provides state-of-the-art performance and, compared to existing work, is also efficient in computing time. In short, this thesis makes three major contributions to image steganography and steganalysis using perceptual modelling and machine learning: a novel cost function and a post-cost-optimization function for adaptive spatial image steganography, which help protect the secret messages, and, for image steganalysis, a new CNN architecture that combines multiple techniques to provide state-of-the-art performance. Future directions indicating potential research are also discussed.
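
    A highly simplified Python sketch of the post-cost-optimization idea described above, assuming a precomputed cost map and a CNN gradient map as inputs (the median-filter window, position-selection rule and normalisation are assumptions, not the thesis's exact algorithm):

```python
import numpy as np
from scipy.ndimage import median_filter

def adjust_costs(cost_map: np.ndarray, gradient_map: np.ndarray,
                 window: int = 3, select_fraction: float = 0.4) -> np.ndarray:
    """Smooth the CNN gradient magnitudes, then use them to rewrite the embedding
    cost at the positions that the original cost map marks as likely embedding sites."""
    smoothed = median_filter(np.abs(gradient_map), size=window)  # smooth before adjusting
    # the cheapest original costs indicate the likely embedding positions
    threshold = np.quantile(cost_map, select_fraction)
    positions = cost_map <= threshold
    new_cost = cost_map.copy()
    g = smoothed[positions]
    g = (g - g.min()) / (g.max() - g.min() + 1e-12)  # normalise to [0, 1]
    # smaller (smoothed) gradient -> cheaper cost at that position
    new_cost[positions] = g * cost_map.max()
    return new_cost
```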

    Information similarity metrics in information security and forensics

    We study two information similarity measures, relative entropy and the similarity metric, and methods for estimating them. Relative entropy can be readily estimated with existing compression-based algorithms. The similarity metric, based on algorithmic complexity, proves more difficult to estimate because algorithmic complexity itself is not computable. We again turn to compression for estimating the similarity metric. Previous studies rely on the compression ratio as an indicator for choosing compressors to estimate the similarity metric; this assumption, however, is fundamentally flawed. We propose a new method to benchmark compressors for estimating the similarity metric. To demonstrate its use, we propose to quantify the security of a stegosystem using the similarity metric. Unlike other measures of steganographic security, the similarity metric is not only a true distance metric, but also universal in the sense that it is asymptotically minimal among all computable metrics between two objects. Therefore, it accounts for all similarities between two objects. In contrast, relative entropy, a widely accepted steganographic security definition, only takes into consideration the statistical similarity between two random variables. As an application, we present a general method for benchmarking stegosystems. The method is general in the sense that it is not restricted to any covertext medium and can therefore be applied to a wide range of stegosystems. For demonstration, we analyze several image stegosystems using the newly proposed similarity metric as the security metric. The results show the true security limits of stegosystems regardless of the chosen security metric or the existence of steganalysis detectors. In other words, this makes it possible to show that a stegosystem with a large similarity metric is inherently insecure, even if it has not yet been broken.
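
    A common compression-based estimator of the similarity metric is the normalized compression distance (NCD); the minimal Python sketch below uses zlib as a stand-in compressor purely for illustration (the thesis argues that the choice of compressor should itself be benchmarked rather than assumed).

```python
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized compression distance: an estimate of the (uncomputable)
    similarity metric using a real compressor in place of Kolmogorov complexity."""
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)

# values near 0 indicate similar objects, values near 1 indicate dissimilar ones
print(ncd(b"steganography" * 100, b"steganography" * 99 + b"steganalysis"))
```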

    Exploiting similarities between secret and cover images for improved embedding efficiency and security in digital steganography

    The rapid advancements in digital communication technology and the huge increase in computing power have generated exponential growth in the use of the Internet for commercial, governmental and social interactions that involve the transmission of a variety of complex data and multimedia objects. Securing the content of sensitive as well as personal transactions over open networks while ensuring the privacy of information has become essential but increasingly challenging. The information and multimedia security research area therefore attracts growing interest, and its scope of applications is expanding significantly. Communication security mechanisms have been investigated and developed to protect information privacy, with encryption and steganography providing the two most obvious solutions. Encrypting a secret message transforms it into noise-like data that is observable but meaningless, while steganography conceals the very existence of secret information by hiding it in mundane communication that does not attract unwelcome snooping. Digital steganography is concerned with using images, videos and audio signals as cover objects for hiding secret bit-streams. Media files are suitable for such purposes because of their high degree of redundancy and because they are the most widely exchanged type of digital data. Over the last two decades there has been a plethora of research aiming to develop new hiding schemes that overcome the various challenges relating to imperceptibility of the hidden secrets, payload capacity, embedding efficiency and robustness against steganalysis attacks. Most existing techniques treat secrets as random bit-streams even when dealing with non-random signals such as images, which adds to the difficulty of these challenges. This thesis is devoted to investigating and developing steganography schemes for embedding secret images in image files. While many existing schemes perform well with respect to one or more of the above objectives, we aim to achieve optimal performance in terms of all of them. We are only concerned with embedding secret images in the spatial domain of cover images. The main difficulty in addressing the different challenges stems from the fact that embedding unavoidably changes cover image pixel values, even though these changes may not be easy for the human eye to detect. These pixel changes are a consequence of dissimilarity between the cover LSB plane and the secret-image bit-stream, and they result in changes to the statistical parameters of stego-image bit-planes as well as to local image features. Steganalysis tools exploit these effects to mount targeted as well as blind attacks. These challenges are usually dealt with by randomising the changes to the LSB, by using different or multiple bit-planes to embed one or more secret bits with elaborate schemes, or by embedding in noise-tolerant regions. Our innovative approach to these challenges is first to develop image procedures and models that increase the similarity between the cover image LSB plane and the secret image bit-stream. This is achieved in two novel steps, involving manipulation of both the secret image and the cover image prior to embedding, that result in a higher 0:1 ratio in both the secret bit-stream and the cover pixels' LSB plane.
    For the secret images, we exploit the fact that image pixel values are in general neither uniformly distributed, as is the case for random secrets, nor spatially stationary. We develop three secret image pre-processing algorithms that transform the secret image bit-stream so as to increase its 0:1 ratio. Two of these are similar, one operating in the spatial domain and the other in the wavelet domain; in both cases, the most frequent pixel values are mapped onto bytes containing more 0s. The third method processes blocks by subtracting their means from their pixel values, thereby reducing the number of bits required to represent these blocks; in other words, it also shortens the secret image bit-stream without loss of information. We demonstrate that these algorithms yield a significant increase in the secret image bit-stream 0:1 ratio, with the wavelet-domain algorithm performing best at an 80% ratio. For the cover images, we exploit the fact that pixel value decomposition schemes based on Fibonacci or other defining sequences, which differ from the usual binary scheme, expand the number of bit-planes and may thereby help increase the 0:1 ratio of the cover image LSB plane. We investigate some existing techniques of this kind and demonstrate that they indeed lead to an increased 0:1 ratio in the corresponding cover image LSB plane. We also develop a new extension of the binary decomposition scheme, which is the best-performing one with a 77% ratio. We exploit this two-step strategy to propose a bit-plane mapping embedding technique, instead of bit-plane replacement, that makes every cover pixel usable for secret embedding. This is motivated by the observation that non-binary pixel decomposition schemes also reduce the number of possible patterns in the first three bit-planes to 4 or 5 instead of 8. We demonstrate that the combination of the mapping-based embedding scheme and the two-step strategy produces stego-images with minimal distortion, i.e., it reduces the number of cover pixels changed by message embedding and increases embedding efficiency. We also demonstrate that these schemes yield reasonable stego-image quality and are robust against all the targeted steganalysis tools, but not against the blind SRM tool. Finally, we identify possible future work to achieve robustness against SRM at some payload rates and to further improve stego-image quality.
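
    As a rough illustration of two ingredients discussed above, the Python sketch below measures a bit-plane's 0:1 ratio (read here simply as the fraction of zeros) and decomposes pixel values over a Fibonacci base, which yields more bit-planes than the usual binary expansion; the helper names and the greedy decomposition are illustrative of existing Fibonacci schemes, not the thesis's new extension.

```python
import numpy as np

FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # enough terms for 8-bit pixels

def fibonacci_decompose(value: int) -> list[int]:
    """Zeckendorf-style decomposition: greedy expansion over Fibonacci weights,
    producing 12 'bit-planes' for an 8-bit pixel instead of 8."""
    bits = []
    for weight in reversed(FIB):
        if weight <= value:
            bits.append(1)
            value -= weight
        else:
            bits.append(0)
    return bits[::-1]  # least significant plane first

def zero_ratio(bitplane: np.ndarray) -> float:
    """Fraction of zeros in a bit-plane (a simple reading of the 0:1 ratio)."""
    return float(np.mean(bitplane == 0))

pixels = np.random.default_rng(0).integers(0, 256, size=10_000)
lsb_plane = np.array([fibonacci_decompose(int(p))[0] for p in pixels])
print(f"zero ratio of the Fibonacci LSB plane: {zero_ratio(lsb_plane):.2f}")
```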

    Data Hiding and Its Applications

    Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods such as digital watermarking and steganography are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.