128 research outputs found
HUBFIRE - A multi-class SVM based JPEG steganalysis using HBCL statistics and FR Index
Blind steganalysis attempts to detect steganographic data without prior knowledge of either the embedding algorithm or the 'cover' image. This paper proposes new features for blind JPEG steganalysis that combine Huffman Bit Code Length (HBCL) statistics and the file size to resolution ratio (FR Index); the proposed Huffman Bit File Index Resolution (HUBFIRE) algorithm uses these features to build a classifier based on a multi-class Support Vector Machine (SVM). JPEG images spanning a wide range of resolutions are used to create a 'stego-image' database with three embedding schemes: an advanced Least Significant Bit encoding technique that embeds in the spatial domain, the transform-domain scheme JPEG Hide-and-Seek, and Model Based Steganography, which employs adaptive embedding. The use of a multi-class SVM over the proposed HUBFIRE features for statistical steganalysis has not previously been explored by steganalysts. Experiments demonstrate the model's accuracy over a wide range of payloads and embedding schemes.
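The abstract does not specify how the HBCL and FR Index features are computed, so the following is a minimal sketch under stated assumptions: HBCL statistics are approximated by the per-bit-length Huffman code counts parsed from the JPEG DHT segments, the FR Index is taken as file size in bytes divided by pixel count, and scikit-learn's SVC stands in for the multi-class SVM. The function names (hbcl_counts, fr_index, features) are illustrative, not from the paper.

```python
# Hedged sketch of HBCL + FR Index feature extraction (assumptions noted above).
import os
import struct
from PIL import Image
from sklearn.svm import SVC

def hbcl_counts(jpeg_path):
    """Number of Huffman codes of each bit length 1..16, summed over all DHT tables."""
    counts = [0] * 16
    with open(jpeg_path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xC4:                           # DHT segment: one or more tables
            j, end = i + 4, i + 2 + seg_len
            while j + 17 <= end:
                lengths = data[j + 1:j + 17]         # codes per bit length 1..16
                counts = [c + l for c, l in zip(counts, lengths)]
                j += 17 + sum(lengths)               # skip the symbol values
        if marker == 0xDA:                           # start of scan: stop parsing headers
            break
        i += 2 + seg_len
    return counts

def fr_index(jpeg_path):
    """File size to resolution ratio (bytes per pixel)."""
    w, h = Image.open(jpeg_path).size
    return os.path.getsize(jpeg_path) / float(w * h)

def features(jpeg_path):
    return hbcl_counts(jpeg_path) + [fr_index(jpeg_path)]

# Multi-class SVM over the features, as the abstract describes:
# clf = SVC(kernel="rbf").fit([features(p) for p in train_paths], train_labels)
```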
Hunting wild stego images, a domain adaptation problem in digital image forensics
Digital image forensics is a field encompassing camera identification, forgery detection and steganalysis. Statistical modeling and machine learning have been successfully applied in the academic community of this maturing field. Still, large gaps exist between academic results and applications used by practicing forensic analysts, especially when the target samples are drawn from a different population than the data in a reference database.
This thesis contains four published papers aimed at narrowing this gap in three different areas: mobile stego app detection, digital image steganalysis and camera identification. It is the first work to explore extending academic methods to real-world images created by mobile apps. New ideas and methods are developed for target images with great flexibility in embedding rates, embedding algorithms, exposure settings and camera sources. The experimental results show that the proposed methods work well, even for devices that are not included in the reference database.
An Analysis of Perturbed Quantization Steganography in the Spatial Domain
Steganography is a form of secret communication in which a message is hidden in a harmless cover object, concealing the very existence of the message. Due to the potential for abuse by criminals and terrorists, much research has also gone into the field of steganalysis, the art of detecting and deciphering hidden messages. As novel steganographic hiding algorithms become publicly known, researchers exploit these methods by finding statistical irregularities between clean digital images and images containing hidden data. This creates an ongoing race between the two fields and requires constant countermeasures on the part of steganographers in order to maintain truly covert communication. This research effort extends previous work on perturbed quantization (PQ) steganography by examining its applicability to the spatial domain. Several information-reducing transformations are implemented along with the PQ system to study their effect on the security of the system as well as on its steganographic capacity. Additionally, a new statistical attack is formulated for detecting ±1 embedding techniques in color images. Results from state-of-the-art steganalysis reveal that the system is less detectable than comparable hiding methods. Grayscale images embedded with message payloads of 0.4 bpp are detected only 9% more accurately than by random guessing, and color images embedded with payloads of 0.2 bpp are detected only 6% more reliably than by random guessing.
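The ±1 (LSB matching) family targeted by the new statistical attack can be illustrated with a short sketch. This is not the paper's perturbed quantization system; the seeded PRNG path selection and the function name plus_minus_one_embed are illustrative assumptions.

```python
# Hedged sketch of generic +/-1 (LSB matching) embedding in the spatial domain.
import numpy as np

def plus_minus_one_embed(cover, bits, key=1234):
    """Embed a bit sequence by +/-1 changes so each chosen pixel's LSB matches its bit."""
    stego = cover.astype(np.int16).ravel().copy()
    rng = np.random.default_rng(key)
    path = rng.permutation(stego.size)[:len(bits)]   # pseudo-random embedding path
    for pos, bit in zip(path, bits):
        if (stego[pos] & 1) != bit:
            step = rng.choice([-1, 1])
            # keep values inside the valid 8-bit range
            if stego[pos] == 0:
                step = 1
            elif stego[pos] == 255:
                step = -1
            stego[pos] += step
    return stego.reshape(cover.shape).astype(np.uint8)

# Example: a 0.4 bpp payload in a grayscale cover, as in the reported experiments.
# cover = np.array(Image.open("cover.png").convert("L"))
# payload = np.random.default_rng(7).integers(0, 2, int(0.4 * cover.size))
# stego = plus_minus_one_embed(cover, payload)
```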
Information similarity metrics in information security and forensics
We study two information similarity measures, relative entropy and the similarity metric, and methods for estimating them. Relative entropy can be readily estimated with existing compression-based algorithms. The similarity metric, based on algorithmic complexity, proves more difficult to estimate because algorithmic complexity itself is not computable. We again turn to compression for estimating the similarity metric. Previous studies rely on the compression ratio as an indicator for choosing compressors to estimate the similarity metric; this assumption, however, is fundamentally flawed. We propose a new method to benchmark compressors for estimating the similarity metric. To demonstrate its use, we propose to quantify the security of a stegosystem using the similarity metric. Unlike other measures of steganographic security, the similarity metric is not only a true distance metric but is also universal in the sense that it is asymptotically minimal among all computable metrics between two objects; it therefore accounts for all similarities between two objects. In contrast, relative entropy, a widely accepted steganographic security definition, only takes into consideration the statistical similarity between two random variables. As an application, we present a general method for benchmarking stegosystems. The method is general in the sense that it is not restricted to any covertext medium and can therefore be applied to a wide range of stegosystems. For demonstration, we analyze several image stegosystems using the newly proposed similarity metric as the security metric. The results show the true security limits of stegosystems regardless of the chosen security metric or the existence of steganalysis detectors. In other words, this makes it possible to show that a stegosystem with a large similarity metric is inherently insecure, even if it has not yet been broken.
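In practice, the compression-based estimate of the similarity metric is the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. The sketch below uses zlib as a stand-in compressor; the thesis's point is precisely that the choice of compressor should be benchmarked rather than picked by compression ratio alone.

```python
# Hedged sketch: normalized compression distance as an estimate of the similarity metric.
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    c = lambda b: len(zlib.compress(b, level))       # C(.) approximated by zlib
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Benchmarking idea: a consistently large NCD between cover and stego objects
# indicates the stegosystem is inherently detectable, regardless of any
# particular steganalysis detector.
# ncd(open("cover.pgm", "rb").read(), open("stego.pgm", "rb").read())
```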
Secure digital documents using Steganography and QR Code
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
With the increasing use of the Internet, several problems have arisen regarding the processing of electronic documents, including content filtering and content retrieval/search. Document security has also taken centre stage, including copyright protection and broadcast monitoring. There is an acute need for an effective tool that can establish the identity, location and time of a document's creation, so that it can be determined whether its contents were tampered with afterwards. Owing to the sensitivity of the large amounts of data processed on a daily basis, verifying the authenticity and integrity of a document is more important now than it has ever been, and document authenticity verification has unsurprisingly become a centre of attention in research. This research is therefore concerned with creating a tool that addresses the above problem. It proposes the use of a Quick Response (QR) code as a message carrier for Text Key-print, a novel method which employs the basic elements of the language (i.e. the characters of the alphabet) to establish the authenticity of electronic documents by transforming their physical structure into a logical structured relationship. The resultant dimensional matrix is converted into a binary stream and encapsulated, together with a serial number or URL, inside a QR code to form a digital fingerprint mark. For hiding the QR code, two image steganography techniques were developed, one in the spatial domain and one in the transform domain. In the spatial domain, three methods were proposed and implemented based on least significant bit insertion, with a pseudorandom number generator used to scatter the message over a set of arbitrary pixels. These methods use the three colour channels of the RGB model to embed one, two or three bits per eight-bit channel, giving three different hiding capacities. The second technique is an adaptive transform-domain approach in which a threshold value is calculated at a predefined location to determine the embedding strength. The quality of the generated stego images was evaluated using both objective (PSNR) and subjective (DSCQS) methods to ensure the reliability of the proposed methods. The experimental results revealed that PSNR is not a strong indicator of perceived stego-image quality, although it remains a reasonable indicator of actual image quality. Since the visual difference between the cover and the stego image must be imperceptible to the human visual system, human observers with different qualifications and experience in image processing were asked to evaluate the perceived quality of the cover and stego images, and the subjective responses were analysed using statistical measures describing the distribution of the assessors' scores. The proposed scheme thus presents an alternative approach to protecting digital documents beyond the traditional techniques of digital signatures and watermarking.
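As a rough illustration of the spatial-domain technique described above, the sketch below embeds message bits into the least significant bits of pseudo-randomly scattered pixel/channel positions (the one-bit-per-channel variant) and computes the PSNR used for objective evaluation. The function names and the particular PRNG-based scattering are assumptions for illustration, not the thesis's implementation.

```python
# Hedged sketch: PRNG-scattered LSB embedding over RGB channels, plus PSNR.
import numpy as np

def lsb_scatter_embed(cover_rgb, bits, key=2023):
    """Embed bits in the LSBs of pseudo-randomly selected (pixel, channel) slots."""
    stego = cover_rgb.copy().reshape(-1)             # flatten H x W x 3
    rng = np.random.default_rng(key)
    slots = rng.permutation(stego.size)[:len(bits)]  # scattered embedding positions
    stego[slots] = (stego[slots] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return stego.reshape(cover_rgb.shape)

def psnr(cover, stego):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Hypothetical usage: hide the binary stream of a QR code image and check quality.
# cover = np.array(Image.open("document_cover.png").convert("RGB"))
# qr_bits = np.unpackbits(np.frombuffer(qr_png_bytes, dtype=np.uint8))
# stego = lsb_scatter_embed(cover, qr_bits); print(psnr(cover, stego))
```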
Natural Image Statistics for Digital Image Forensics
We describe a set of natural image statistics built upon two multi-scale image decompositions, the quadrature mirror filter pyramid decomposition and the local angular harmonic decomposition. These statistics consist of first- and higher-order statistics that capture certain statistical regularities of natural images. We propose to apply them, together with classification techniques, to three problems in digital image forensics: (1) differentiating photographic images from computer-generated photorealistic images, (2) generic steganalysis, and (3) rebroadcast image detection. We also apply these image statistics to traditional art authentication, for forgery detection and for identifying the artists of an artwork. For each application we show the effectiveness of these image statistics and analyze their sensitivity and robustness.
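As a rough stand-in for the feature set described above, the sketch below computes first-order statistics (mean, variance, skewness, kurtosis) of wavelet detail subbands using a standard separable decomposition from PyWavelets; the actual QMF pyramid, the higher-order linear-prediction error statistics and the local angular harmonic decomposition are omitted, so this is only an approximation of the described features.

```python
# Hedged sketch: first-order subband statistics in the spirit of the feature set above.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def subband_stats(gray_image, wavelet="db4", levels=3):
    """Mean, variance, skewness and kurtosis of each detail subband."""
    coeffs = pywt.wavedec2(gray_image.astype(np.float64), wavelet, level=levels)
    feats = []
    for level_detail in coeffs[1:]:                  # (cH, cV, cD) per scale
        for band in level_detail:
            v = band.ravel()
            feats += [v.mean(), v.var(), skew(v), kurtosis(v)]
    return np.array(feats)

# These feature vectors would then be fed to a classifier (e.g. an SVM) to
# separate photographic from photorealistic images, or cover from stego images.
```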
Smart techniques and tools to detect Steganography - a viable practice to Security Office Department
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management.
The Internet is today a commodity and a way of being connected to the world. It is through the Internet that most information is shared and where people run their businesses. However, some people make malicious use of it.
Cyberattacks have been increasing in recent years, targeting people and organizations in order to perform illegal actions. Cyber criminals are always looking for new ways to deliver malware to victims and launch an attack.
Millions of users share images and photos on their social networks, and they generally consider them safe to use. Contrary to what most people think, images can contain a malicious payload and perform harmful actions.
Steganography is the technique of hiding data which, combined with media files, can be used to conceal malicious code. This problem, amplified by the continuous sharing of media files across widely used digital platforms, may become a worldwide threat of malicious content sharing. As with phishing, people and organizations must be trained to be suspicious of inappropriate content and to implement a proper set of actions to reduce the probability of infection when accessing files that are supposed to be inoffensive.
This study aims to help people and organizations by assembling a toolbox of tools and techniques to assist in dealing with such situations. A theoretical overview is given of related concepts such as steganalysis, also touching on deep learning and machine learning to assess the range of their applicability in finding detection solutions and handling these situations. In addition, understanding the current main technologies, architectures and users' hurdles plays an important role in designing and developing the proposed toolbox artifact.