11 research outputs found

    An information theoretic image steganalysis for LSB steganography

    Steganography hides data within a media file in an imperceptible way. Steganalysis exposes steganography by applying detection measures. Traditionally, steganalysis has revealed steganography by targeting perceptible and statistical properties, which in turn has driven the development of more secure steganography schemes. In this work, we target LSB image steganography using entropy and joint entropy metrics for steganalysis. First, the embedded image is processed for feature extraction and then analyzed with entropy and joint entropy against the corresponding original image. Second, SVM and ensemble classifiers are trained on the analysis results. The classifiers' decisions discriminate cover images from stego images. The scheme is further applied to attacked stego images to check detection reliability. Performance evaluation of the proposed scheme is conducted on grayscale image datasets. We analyzed LSB-embedded images by comparing the information gain from the entropy and joint entropy metrics. The results show that the entropy of a suspect image is better preserved than its joint entropy: before a histogram attack, the detection rate is 70% with the entropy metric and 98% with the joint entropy metric, while after an attack the entropy metric drops to a 30% detection rate and the joint entropy metric still gives 93%. Joint entropy therefore proves the better steganalysis measure, with 93% detection accuracy and fewer false alarms under varying hiding ratios.
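
    To make the two metrics concrete, here is a minimal NumPy sketch of marginal and joint Shannon entropy as the abstract describes them, with a toy LSB embedding; the actual feature-extraction step and classifier training are not reproduced.

        # Minimal sketch (not the authors' exact pipeline) of the two metrics
        # the abstract compares: Shannon entropy of a suspect image, and the
        # joint entropy of the suspect image with its cover image.
        import numpy as np

        def entropy(img: np.ndarray) -> float:
            """Shannon entropy (bits) of an 8-bit grayscale image."""
            hist = np.bincount(img.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]                      # drop empty bins to avoid log(0)
            return float(-(p * np.log2(p)).sum())

        def joint_entropy(cover: np.ndarray, suspect: np.ndarray) -> float:
            """Joint Shannon entropy (bits) of co-located pixel pairs."""
            pairs = cover.ravel().astype(np.int64) * 256 + suspect.ravel()
            hist = np.bincount(pairs, minlength=256 * 256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        # Toy demonstration: LSB embedding barely moves the marginal entropy,
        # while the (cover, stego) joint entropy shifts far more visibly.
        rng = np.random.default_rng(0)
        cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        stego = (cover & 0xFE) | rng.integers(0, 2, size=cover.shape, dtype=np.uint8)
        print(entropy(cover), entropy(stego))              # nearly identical
        print(joint_entropy(cover, cover), joint_entropy(cover, stego))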

    Designing Secure and Survivable Stegosystems

    Steganography, the art and science of carrying out hidden communication, is an emerging sub-discipline of information security. Unlike cryptography, steganography conceals the existence of a secret message by embedding it in an innocuous container digital media, thereby enabling unobtrusive communication over insecure channels. Detection and extraction of steganographic content is another challenge for the information security professional, an activity commonly known as steganalysis. Recent progress in steganalysis has posed a challenge for the design and development of stegosystems with high levels of security and survivability. In this paper, different strategies have been presented that can be used to escape detection and foil an eavesdropper with high technical capabilities and adequate infrastructure. Based on the strengths and weaknesses of current steganographic schemes, ideas have been advanced to make detection and destruction of hidden information more difficult.

    PIRANHA: an engine for a methodology of detecting covert communication via image-based steganography

    In current cutting-edge steganalysis research, model-building and machine learning have been utilized to detect steganography. However, these models are computationally and cognitively cumbersome, and each is targeted at one and only one type of steganography. The model built and utilized in this thesis has shown capability in detecting a class or family of steganography, while also demonstrating that it is viable to construct a minimalist model for steganalysis. The notion of detecting steganographic primitives or families is one that has not been discussed in the literature, and would serve well as a first-pass steganographic detection methodology. The model built here serves this end well, and it must be kept in mind that the model presented is posited to work as a front-end broad-pass filter for some of the more computationally advanced and directed steganalytic algorithms currently in use. This thesis attempts to convey a view of steganography and steganalysis that is more utilitarian and immediately useful in everyday scenarios. This is vastly different from a good many publications that treat the topic as one relegated only to cloak-and-dagger information passing. The subsequent view of steganography as primarily a communications tool usable by petty information brokers and the like directs the text and helps ensure that the notion of steganography as a digital dead-drop box is abandoned in favor of a more grounded approach. As such, the model presented underperforms the specialized models in the current literature, but it also makes use of a large image sample space (747 images) with images that are contextually diverse and representative of those seen in wide use. In future applications by law-enforcement or corporate officials, it is hoped that the model presented in this thesis can aid in rapid and targeted responses without placing undue strain on an eventual human operator. To that end, a design constraint adopted for this research favored a False Negative over a False Positive; this helps ensure that, in the event of an alert, it is worthwhile to apply a more directed attack against the flagged image.
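
    A hedged sketch of the broad-pass idea described above: the scoring statistic below is a stand-in, not PIRANHA's model, but it shows how a deliberately conservative alert threshold trades false positives for false negatives in a two-stage pipeline.

        # Illustrative two-stage pipeline (the feature is a stand-in, not
        # PIRANHA's): a cheap first-pass score plus a high alert threshold,
        # so false alarms are rare and every alert justifies the cost of a
        # slower, specialised steganalytic attack.
        import numpy as np

        def cheap_score(img: np.ndarray) -> float:
            """Stand-in first-pass statistic: how close the LSB plane is to
            pure noise (0.5 ones-rate), a crude hint of LSB-style embedding."""
            lsb_rate = float(np.mean(img & 1))
            return 1.0 - 2.0 * abs(lsb_rate - 0.5)   # 1.0 = perfectly noise-like

        ALERT_THRESHOLD = 0.98   # tuned high on purpose: prefer a miss
                                 # (false negative) over a false alarm

        def front_end_filter(images):
            """Yield only the images worth handing to a directed detector."""
            for img in images:
                if cheap_score(img) > ALERT_THRESHOLD:
                    yield img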

    Solving the threat of LSB steganography within data loss prevention systems

    With the recent spate of data loss breaches in industry and commerce, especially the large number of Advanced Persistent Threats, companies are increasing their network boundary security. As network defences are enhanced through the use of Data Loss Prevention (DLP) systems, attackers seek new ways of exploiting and extracting confidential data. In large-scale organisations this is often done by internal parties using steganography. Successful use of steganography makes the exfiltration of confidential data hard to detect, capable of escaping even the most sophisticated DLP systems. This thesis provides two effective solutions to prevent data loss through LSB image steganography, with the potential to be applied in industrial DLP systems.
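
    The abstract does not detail the thesis's two solutions. As context, one widely known countermeasure in this space is for a DLP egress filter to re-randomise the LSB plane of outbound images, which destroys any LSB payload while changing each pixel by at most one level. A minimal sketch with Pillow and NumPy (filenames hypothetical):

        # Hedged illustration, not the thesis's method: neutralise LSB
        # payloads by overwriting the least significant bit plane of every
        # outbound image with fresh random bits.
        import numpy as np
        from PIL import Image

        def scrub_lsb(path_in: str, path_out: str) -> None:
            img = np.array(Image.open(path_in).convert("RGB"), dtype=np.uint8)
            noise = np.random.default_rng().integers(0, 2, img.shape, dtype=np.uint8)
            scrubbed = (img & 0xFE) | noise        # overwrite every LSB
            Image.fromarray(scrubbed).save(path_out, format="PNG")

        # scrub_lsb("outbound.png", "outbound_clean.png")  # hypothetical filenames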

    Acta Cybernetica: Volume 24, Number 4.

    JRevealPEG: A Semi-Blind JPEG Steganalysis Tool Targeting Current Open-Source Embedding Programs

    Steganography in computer science refers to the hiding of messages or data within other messages or data; the detection of these hidden messages is called steganalysis. Digital steganography can be used to hide any type of file or data, including text, images, audio, and video, inside other text, image, audio, or video data. While steganography can be used to hide data legitimately for non-malicious purposes, it is also frequently used maliciously. This paper proposes JRevealPEG, a software tool written in Python that aids the detection of steganography in JPEG images by identifying a targeted set of open-source embedding tools. It is hoped that JRevealPEG will further research into effective steganalysis techniques, ultimately helping to identify the source of hidden and possibly sensitive or malicious messages, and contributing to efforts to thwart the activities of bad actors.
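
    As an illustration of the kind of signature check such a tool can run (not necessarily one JRevealPEG itself uses), the sketch below flags JPEG files that carry data after the end-of-image marker, a fingerprint of some naive embedding programs:

        # Hedged sketch of one semi-blind signature check for JPEGs: some
        # naive embedding programs simply append the payload after the
        # end-of-image (EOI) marker, FF D9.
        def bytes_after_eoi(path: str) -> int:
            """Count bytes trailing the last JPEG EOI marker (FF D9)."""
            with open(path, "rb") as f:
                data = f.read()
            eoi = data.rfind(b"\xff\xd9")
            if eoi < 0:
                raise ValueError("no EOI marker: file is not a complete JPEG")
            # rfind is deliberately conservative: embedded EXIF thumbnails
            # carry their own EOI markers, so counting from the last one
            # avoids false alarms on ordinary JPEGs.
            return len(data) - (eoi + 2)

        # Hypothetical usage: any trailing bytes make the file worth a closer look.
        # if bytes_after_eoi("suspect.jpg") > 0: flag_for_directed_analysis()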

    AN OBJECT-BASED MULTIMEDIA FORENSIC ANALYSIS TOOL

    With the enormous increase in the use and volume of photographs and videos, multimedia-based digital evidence now plays an increasingly fundamental role in criminal investigations. With that increase, however, it is becoming time-consuming and costly for investigators to analyse content manually. Within the research community, the focus on multimedia content has tended to be on highly specialised scenarios such as tattoo identification, number plate recognition, and child exploitation. An investigator's ability to search multimedia data by keyword (an approach that already exists within forensic tools for character-based evidence) could provide a simple and effective way of identifying relevant imagery. This thesis proposes and demonstrates the value of a multi-algorithmic approach, via fusion, to achieve the best image annotation performance. The results show that among existing systems the highest average recall was achieved by Imagga at 53%, while the proposed multi-algorithmic system achieved 77% across the selected datasets. Subsequently, a novel Object-based Multimedia Forensic Analysis Tool (OM-FAT) architecture is proposed. OM-FAT automates the identification and extraction of annotation-based evidence from multimedia content. Besides making multimedia data searchable, the OM-FAT system enables investigators to perform various forensic analyses (search using annotations, metadata, object matching, text similarity and geo-tracking) to help them understand the relationships between artefacts, thus reducing the time taken to perform an investigation and the investigator's cognitive load. It enables investigators to ask higher-level, more abstract questions of the data, and then find answers to the essential questions of an investigation: what, who, why, how, when, and where. The research includes a detailed illustration of the architectural requirements, the engines, and the complete design of the system workflow, which represents a full case management system. To highlight the ease of use and demonstrate the system's ability to correlate multimedia, a prototype was developed. The prototype integrates the functionalities of the OM-FAT tool and demonstrates how the system would help digital investigators find pieces of evidence among a large number of images, starting from the acquisition stage and ending at the reporting stage, with less effort and in less time. The Higher Committee for Education Development in Iraq (HCED).
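
    A minimal sketch of the fusion idea, with hypothetical engine names and weights (the thesis's actual algorithms and datasets are not reproduced): a tag is kept when its weighted confidence across annotation engines clears a threshold, which is how fusion can beat any single engine's recall.

        # Score-level fusion across image-annotation engines (placeholders,
        # not the thesis's setup): accumulate each tag's weighted confidence
        # and keep the tags that clear a threshold.
        from collections import defaultdict

        def fuse_annotations(outputs: dict[str, dict[str, float]],
                             weights: dict[str, float],
                             threshold: float = 0.5) -> list[str]:
            """outputs maps engine name -> {tag: confidence in [0, 1]}."""
            scores: dict[str, float] = defaultdict(float)
            total = sum(weights.values())
            for engine, tags in outputs.items():
                for tag, conf in tags.items():
                    scores[tag] += weights[engine] * conf / total
            return sorted(t for t, s in scores.items() if s >= threshold)

        print(fuse_annotations(
            {"engineA": {"car": 0.9, "road": 0.6},    # hypothetical engines
             "engineB": {"car": 0.8, "tree": 0.7}},
            weights={"engineA": 1.0, "engineB": 1.0}))
        # ['car'] -- only the tag both engines agree on clears the threshold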

    Intelligent watermarking of long streams of document images

    Digital watermarking has numerous applications in the imaging domain, including (but not limited to) fingerprinting, authentication, and tampering detection. Because of the trade-off between watermark robustness and image quality, the heuristic parameters associated with digital watermarking systems need to be optimized. A common strategy for this optimization formulation of digital watermarking, known as intelligent watermarking (IW), is to employ evolutionary computing (EC) to optimize these parameters for each image, at a computational cost that is infeasible for practical applications. However, in industrial applications involving streams of document images, one can expect instances of problems to reappear over time. Computational cost can therefore be saved by preserving the knowledge of previous optimization problems in a separate archive (memory) and employing that memory to speed up, or even replace, optimization for future similar problems. That is the basic principle behind the research presented in this thesis. Although similarity in the image space can lead to similarity in the problem space, there is no guarantee of this, and for that reason knowledge about the image space is not employed at all. Instead, this research investigates strategies to appropriately represent, compare, store and sample from problem instances. The objective behind these strategies is to allow a comprehensive representation of a stream of optimization problems that avoids re-optimization whenever a previously seen problem provides solutions as good as those re-optimization would produce, but at a fraction of its cost. Another objective is to give IW systems a predictive capability that allows costly fitness evaluations to be replaced with cheaper regression models whenever re-optimization cannot be avoided. To this end, IW of streams of document images is first formulated as the optimization of a stream of recurring problems, and a Dynamic Particle Swarm Optimization (DPSO) technique is proposed to tackle it. This technique is based on a two-tiered memory of static solutions: memory solutions are re-evaluated for every new image, and the re-evaluated fitness distribution is compared with the stored fitness distribution as a means of measuring the similarity between the two problem instances (change detection); a sketch of this step follows below. In simulations involving homogeneous streams of bi-tonal document images, the proposed approach resulted in a 95% decrease in computational burden with little impact on watermarking performance; optimization cost was severely decreased by replacing re-optimizations with recall of previously seen solutions. Next, the problem of representing the stream of optimization problems in a compact manner is addressed, so that new optimization concepts can be incorporated into previously learned concepts incrementally. The proposed strategy is based on a Gaussian Mixture Model (GMM) representation, trained with the parameter and fitness data of all intermediate (candidate) solutions of a given problem instance; GMM sampling replaces the selection of individual memory solutions during change detection. Simulation results demonstrate that such a memory of GMMs is more adaptive and can thus better tackle the optimization of embedding parameters for heterogeneous streams of document images than the approach based on a memory of static solutions.
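
    The change-detection step above can be sketched as a two-sample distribution test; the Kolmogorov-Smirnov statistic used here is one reasonable choice, an assumption standing in for whichever comparison the thesis specifies.

        # Hedged sketch of change detection: re-evaluate the memory's
        # solutions on the new image and test whether the fitness sample
        # looks like the stored one.
        import numpy as np
        from scipy.stats import ks_2samp

        def is_recurring_problem(stored_fitness: np.ndarray,
                                 reevaluated_fitness: np.ndarray,
                                 alpha: float = 0.05) -> bool:
            """True when the two fitness samples are statistically alike,
            i.e. stored solutions can be recalled instead of re-optimizing."""
            _, p_value = ks_2samp(stored_fitness, reevaluated_fitness)
            return p_value > alpha

        # Hypothetical samples: memory solutions scored on the old vs new image.
        old = np.array([0.81, 0.79, 0.83, 0.80, 0.82, 0.78])
        new = np.array([0.80, 0.82, 0.79, 0.81, 0.83, 0.80])
        print(is_recurring_problem(old, new))  # True -> recall memory, skip DPSO run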
    Finally, the knowledge held in the memory of GMMs is employed to decrease the computational cost of re-optimization. To this end, the GMM is employed in regression mode during re-optimization, replacing part of the costly fitness evaluations in a strategy known as surrogate-based optimization. Optimization is split into two levels: the first relies primarily on regression, while the second relies primarily on exact fitness values and provides a safeguard for the whole system. Simulation results demonstrate that the use of surrogates allows for better adaptation in situations involving significant variations in problem representation, as when the set of attacks employed in the fitness function changes. Overall, the intelligent watermarking system proposed in this thesis is well suited to optimizing streams of recurring optimization problems. The quality of the resulting solutions for both homogeneous and heterogeneous image streams is comparable to that obtained through full optimization, at a fraction of the computational cost: the number of fitness evaluations is 97% smaller than that of full optimization for homogeneous streams and 95% smaller for highly heterogeneous streams of document images. The proposed method is general and can easily be adapted to other applications involving streams of recurring problems.
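
    A sketch of the GMM-in-regression-mode surrogate described above, assuming a joint GMM fitted over (parameter, fitness) vectors with scikit-learn; the prediction is the responsibility-weighted mixture of per-component Gaussian conditional means. This is an illustration of the technique, not the thesis's implementation.

        # Joint GMM over (parameters, fitness) used as a surrogate: predict
        # the conditional mean of fitness given a candidate's parameters.
        import numpy as np
        from scipy.stats import multivariate_normal
        from sklearn.mixture import GaussianMixture

        def fit_surrogate(params: np.ndarray, fitness: np.ndarray, k: int = 3):
            """params: (n, d) past candidates; fitness: (n,) exact evaluations."""
            joint = np.column_stack([params, fitness])
            return GaussianMixture(n_components=k, covariance_type="full",
                                   random_state=0).fit(joint)

        def predict_fitness(gmm: GaussianMixture, x: np.ndarray) -> float:
            """E[fitness | params = x], with d = len(x)."""
            d = x.size
            w = np.empty(gmm.n_components)
            m = np.empty(gmm.n_components)
            for k in range(gmm.n_components):
                mu, cov = gmm.means_[k], gmm.covariances_[k]
                mu_x, mu_y = mu[:d], mu[d]
                s_xx, s_yx = cov[:d, :d], cov[d, :d]
                # component responsibility for x, from the marginal over params
                w[k] = gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, s_xx)
                # Gaussian conditional mean: mu_y + S_yx S_xx^-1 (x - mu_x)
                m[k] = mu_y + s_yx @ np.linalg.solve(s_xx, x - mu_x)
            w /= w.sum()
            return float(w @ m)

        # Hypothetical usage inside the regression-based optimization level:
        # gmm = fit_surrogate(past_params, past_fitness)
        # cheap_estimate = predict_fitness(gmm, candidate)  # replaces an exact evaluation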

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
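
    A minimal illustration of the "search beyond local optimality" point: simulated annealing occasionally accepts a worse neighbour, which a purely greedy method never does. The toy objective below is illustrative, not an AMC benchmark.

        # Simulated annealing escaping local minima: worse moves are accepted
        # with a temperature-controlled probability that decays over time.
        import math
        import random

        def objective(x: float) -> float:
            return math.sin(5 * x) + 0.5 * x * x   # many local minima near 0

        def simulated_annealing(x0=4.0, t0=2.0, cooling=0.995, steps=5000):
            random.seed(0)
            x, best, t = x0, x0, t0
            for _ in range(steps):
                candidate = x + random.uniform(-0.5, 0.5)
                delta = objective(candidate) - objective(x)
                # always accept improvements; accept worse moves with prob e^(-delta/t)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    x = candidate
                if objective(x) < objective(best):
                    best = x
                t *= cooling
            return best

        best = simulated_annealing()
        print(round(best, 3), round(objective(best), 3))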