221 research outputs found

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that predicts future events by learning from past events and historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without human intervention. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In our Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach supports effective methods for handling the dynamic nature of security attacks by extracting and analyzing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image and color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.

    HUBFIRE - A multi-class SVM based JPEG steganalysis using HBCL statistics and FR Index

    Blind steganalysis attempts to detect steganographic data without prior knowledge of either the embedding algorithm or the 'cover' image. This paper proposes new features for blind JPEG steganalysis that combine Huffman Bit Code Length (HBCL) statistics with the file size to resolution ratio (FR index); the proposed Huffman Bit File Index Resolution (HUBFIRE) algorithm uses these features to build a classifier based on a multi-class Support Vector Machine (SVM). JPEG images spanning a wide range of resolutions are used to create a 'stego-image' database employing three embedding schemes: an advanced Least Significant Bit (LSB) encoding technique, which embeds in the spatial domain; JPEG Hide-and-Seek, a transform-domain embedding scheme; and Model Based Steganography, which employs an adaptive embedding technique. The use of a multi-class SVM over these HUBFIRE features for statistical steganalysis is an approach not yet explored by steganalysts. Experiments conducted demonstrate the model's accuracy over a wide range of payloads and embedding schemes.
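    A minimal sketch of how such features might be computed, under one plausible reading of the abstract: HBCL statistics are taken as the normalized histogram of Huffman code lengths declared in the JPEG's DHT segments, and the FR index as compressed file size divided by pixel count. The helper names and the final SVM step are illustrative assumptions, not the authors' code.

```python
import os
import struct

import numpy as np
from PIL import Image
from sklearn.svm import SVC

def hbcl_histogram(jpeg_path):
    """Normalized histogram of Huffman code lengths (1-16) over all DHT tables."""
    counts = np.zeros(16, dtype=float)
    with open(jpeg_path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seglen = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xC4:  # DHT segment holding one or more Huffman tables
            j, end = i + 4, i + 2 + seglen
            while j + 17 <= end:
                # each table: 1 class/id byte, 16 code-length counts, symbols
                lengths = np.frombuffer(data[j + 1:j + 17], dtype=np.uint8)
                counts += lengths
                j += 17 + int(lengths.sum())
        if marker == 0xDA:  # start of scan: entropy-coded data follows
            break
        i += 2 + seglen
    return counts / counts.sum() if counts.sum() else counts

def fr_index(jpeg_path):
    """File-size-to-resolution ratio: compressed bytes per pixel."""
    w, h = Image.open(jpeg_path).size
    return os.path.getsize(jpeg_path) / float(w * h)

def features(jpeg_path):
    return np.append(hbcl_histogram(jpeg_path), fr_index(jpeg_path))

# Multi-class SVM over cover images and the three embedding schemes,
# given placeholder lists `paths` and integer `labels` (0-3):
#   X = np.array([features(p) for p in paths])
#   clf = SVC(kernel="rbf").fit(X, np.array(labels))
```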

    Watermarking techniques using knowledge of host database

    Ph.D., Doctor of Philosophy

    Improved methods and system for watermarking halftone images

    Watermarking is becoming increasingly important for content control and authentication. Watermarking seamlessly embeds data in media to provide additional information about that media. Unfortunately, watermarking schemes developed for continuous-tone images cannot be directly applied to halftone images: many existing watermarking methods require characteristics that are implicit in continuous-tone images but absent from halftone images. With this in mind, it seems reasonable to develop watermarking techniques specific to halftones that are equipped to work in the binary image domain. In this thesis, existing techniques for halftone watermarking are reviewed, and improvements are developed to increase performance and overcome their limitations. Post-halftone watermarking methods work on existing halftones. Data Hiding Cell Parity (DHCP) embeds data in the parity domain instead of in individual pixels. Data Hiding Mask Toggling (DHMT) works by encoding two bits in the 2x2 neighborhood of a pseudorandom location. Dispersed Pseudorandom Generator (DPRG), on the other hand, is a preprocessing step that takes place before image halftoning; DPRG disperses the watermark embedding locations to achieve better visual results. Using the Modified Peak Signal-to-Noise Ratio (MPSNR) metric, the proposed techniques outperform existing methods by 5-20%, depending on the image type and method considered. Field programmable gate arrays (FPGAs) are ideal for solutions that require the flexibility of software while retaining the performance of hardware. Using VHDL, an FPGA-based halftone watermarking engine was designed and implemented for the Xilinx Virtex XCV300. The system was designed to watermark both pre-existing halftones and halftones obtained from grayscale images. The design utilizes 99% of the available FPGA resources and runs at 33 MHz; it could be applied in a scanner or printer at the hardware level without adversely affecting performance.
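    As a rough illustration of parity-domain embedding in the spirit of DHCP, the sketch below carries one payload bit in the parity of the black-pixel count of each cell, toggling at most one pixel per cell to force the right parity. The cell size, the fixed toggle position, and the raster bit ordering are assumptions for illustration, not the thesis's exact scheme (which, for instance, works with pseudorandom locations).

```python
import numpy as np

def embed_parity(halftone, bits, cell=4):
    """halftone: 2-D array of 0/1 pixels; bits: sequence of 0/1 to embed."""
    img = halftone.copy()
    h, w = img.shape
    it = iter(bits)
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            try:
                bit = next(it)
            except StopIteration:
                return img  # payload exhausted
            block = img[r:r + cell, c:c + cell]
            if int(block.sum()) % 2 != bit:
                # toggle one pixel (here: the cell's top-left corner,
                # an illustrative choice) so the cell parity matches the bit
                block[0, 0] ^= 1
    return img

def extract_parity(img, nbits, cell=4):
    h, w = img.shape
    out = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            if len(out) == nbits:
                return out
            out.append(int(img[r:r + cell, c:c + cell].sum()) % 2)
    return out

# round trip on a random toy halftone
rng = np.random.default_rng(0)
ht = (rng.random((64, 64)) > 0.5).astype(np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract_parity(embed_parity(ht, payload), len(payload)) == payload
```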

    Data Hiding and Its Applications

    Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods such as digital watermarking and steganography are becoming ever more relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.
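    As a toy illustration of the simplest data-hiding primitive in this space, the sketch below performs least-significant-bit (LSB) embedding: payload bits overwrite the lowest bit of successive pixels, changing each intensity by at most one. This is a generic textbook example, not a method from the book.

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with the payload."""
    out = pixels.copy()
    flat = out.ravel()  # view: writes land in `out`
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=pixels.dtype)
    return out

def lsb_extract(pixels, nbits):
    """Read the payload back from the lowest bit of each pixel."""
    return (pixels.ravel()[:nbits] & 1).tolist()

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
msg = [1, 0, 1, 1, 0, 1, 0, 0]
assert lsb_extract(lsb_embed(img, msg), len(msg)) == msg
```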

    Efficient image duplicate detection based on image analysis

    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other, unrelated images. The proposed method is referred to as content-based since it relies only on content analysis techniques rather than on image tagging, as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the known original images. The classification is performed in four steps. In the first step, the test image is described using global statistics about its content. In the second step, the most likely originals are efficiently selected using a spatial indexing technique called the R-Tree. In the third step, binary detectors estimate the probability that the test image is a duplicate of each original selected in the second step; each original known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. The fourth and last step consists in choosing the most probable original, namely the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors that use the same image description but rely on simpler distance functions rather than a classification algorithm. Additional experiments compare the proposed system with existing state-of-the-art methods: it also outperforms the perceptual distance function method, which uses similar statistics to describe the image, and while it is slightly outperformed by the key-points method, it is five to ten times less complex in terms of computational requirements. Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
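    A hedged sketch of the four-step pipeline described above, with stand-ins where the abstract leaves details open: a coarse intensity histogram plays the role of the global statistics, and scikit-learn's NearestNeighbors replaces the R-Tree (same role of cheaply shortlisting candidate originals, different data structure). The per-original detectors follow the abstract's support-vector-classifier design.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def describe(img, bins=16):
    """Step 1: global statistics about the image -- here a gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

class DuplicateDetector:
    def __init__(self, originals, training_sets, k=3):
        # one descriptor per known original, indexed for fast shortlisting
        self.features = np.array([describe(o) for o in originals])
        self.index = NearestNeighbors(n_neighbors=k).fit(self.features)
        # Step 3 machinery: one binary detector per original, trained on
        # descriptors of its duplicates (label 1) vs. unrelated images (label 0)
        self.detectors = [SVC(probability=True).fit(X, y)
                          for X, y in training_sets]

    def classify(self, img):
        f = describe(img).reshape(1, -1)
        _, candidates = self.index.kneighbors(f)   # Step 2: shortlist originals
        probs = {i: self.detectors[i].predict_proba(f)[0, 1]
                 for i in candidates[0]}           # Step 3: duplicate probability
        return max(probs, key=probs.get)           # Step 4: most probable original
```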

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis calls for highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing are data of vital importance. This is evident in the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, particularly for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
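    For readers new to the topic, a minimal example of the quantity the volume is organized around: the Shannon entropy of an image's gray-level distribution, H = -Σ p_i · log2(p_i). This is a generic textbook computation, not a method from any particular article.

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (bits) of the image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()  # probabilities of occupied gray levels
    return max(0.0, float(-(p * np.log2(p)).sum()))

flat = np.full((32, 32), 128, dtype=np.uint8)  # single gray level
noisy = np.random.default_rng(0).integers(0, 256, (32, 32), dtype=np.uint8)
print(image_entropy(flat))    # 0.0 bits: a constant image carries no information
print(image_entropy(noisy))   # close to 8 bits for uniform noise
```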