20 research outputs found

    A Framework for Multimedia Data Hiding (Security)

    With the proliferation of multimedia data such as images, audio, and video, robust digital watermarking and data hiding techniques are needed for copyright protection, copy control, annotation, and authentication. While many techniques have been proposed for digital color and grayscale images, not all of them can be directly applied to binary document images. The difficulty lies in the fact that changing pixel values in a binary document can introduce irregularities that are highly visible. Only a limited number of papers have proposed new techniques and ideas for document image watermarking and data hiding. In this paper, we present an overview and summary of recent developments on this important topic, and discuss key issues such as the robustness and data hiding capacity of the different techniques.

    Information similarity metrics in information security and forensics

    We study two information similarity measures, relative entropy and the similarity metric, and methods for estimating them. Relative entropy can be readily estimated with existing algorithms based on compression. The similarity metric, based on algorithmic complexity, proves more difficult to estimate because algorithmic complexity itself is not computable. We again turn to compression for estimating the similarity metric. Previous studies rely on the compression ratio as an indicator for choosing compressors to estimate the similarity metric. This assumption, however, is fundamentally flawed. We propose a new method to benchmark compressors for estimating the similarity metric. To demonstrate its use, we propose to quantify the security of a stegosystem using the similarity metric. Unlike other measures of steganographic security, the similarity metric is not only a true distance metric, but it is also universal in the sense that it is asymptotically minimal among all computable metrics between two objects. Therefore, it accounts for all similarities between two objects. In contrast, relative entropy, a widely accepted steganographic security definition, only takes into consideration the statistical similarity between two random variables. As an application, we present a general method for benchmarking stegosystems. The method is general in the sense that it is not restricted to any covertext medium and can therefore be applied to a wide range of stegosystems. For demonstration, we analyze several image stegosystems using the newly proposed similarity metric as the security metric. The results show the true security limits of stegosystems regardless of the chosen security metric or the existence of steganalysis detectors. In other words, this makes it possible to show that a stegosystem with a large similarity metric is inherently insecure, even if it has not yet been broken.
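    The compression-based estimate referred to above is commonly computed as a normalized compression distance, replacing the uncomputable algorithmic complexity with compressed length. The following is a minimal sketch of that estimator; the use of Python's zlib is purely illustrative, since choosing (and benchmarking) the compressor is precisely what the paper addresses.

```python
# Minimal sketch: compression-based estimate of the similarity metric
# (normalized compression distance). zlib is only a placeholder compressor;
# which compressor to trust is what the proposed benchmarking method decides.
import zlib

def clen(data: bytes) -> int:
    """Length in bytes of the compressed representation of `data`."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: the similarity metric with the
    Kolmogorov complexity K(.) replaced by compressed length C(.)."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    cover = b"example cover data " * 100
    stego = cover[:-5] + b"altrd"          # a slightly modified copy
    other = bytes(range(256)) * 8          # an unrelated object
    print(ncd(cover, stego))   # small distance: the objects share most content
    print(ncd(cover, other))   # distance near 1: little shared content
```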

    Triple scheme based on image steganography to improve imperceptibility and security

    A foremost priority in the information technology and communication era is achieving an effective and secure steganography scheme for information hiding. Digital images are commonly used as covers for steganography owing to the redundancy in their representation, which makes hidden data difficult for intruders to perceive. Nevertheless, any steganography system launched over the internet can be attacked once the stego cover is recognized. At present, the design and development of an effective image steganography system face several challenging issues, including low capacity, poor security, and weak imperceptibility. To overcome these issues, a new decomposition scheme was proposed for image steganography with a new approach known as the Triple Number Approach (TNA). In this study, three main stages were used to achieve the objectives and overcome the issues of image steganography, beginning with image and text preparation, followed by embedding and culminating in extraction; a final evaluation stage employed several evaluations to benchmark the results. This study makes several contributions. The first is a Triple Text Coding Method (TTCM), related to the preparation of secret messages prior to the embedding process. The second is a Triple Embedding Method (TEM), related to the embedding process. The third concerns security criteria based on a new partitioning of an image known as the Image Partitioning Method (IPM). The IPM proposes a random pixel selection based on partitioning the image into three phases with three iterations of the Hénon Map function. An enhanced Huffman coding algorithm was utilised to compress the secret message before the TTCM process. A standard dataset from the Signal and Image Processing Institute (SIPI), containing color and grayscale images of 512 x 512 pixels, was utilised in this study. Different parameters were used to test the performance of the proposed scheme in terms of security and imperceptibility (image quality). For image quality, four important measurements were used: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Square Error (MSE) and histogram analysis. For security, two measurements were used: Human Visual System (HVS) and Chi-square (X2) attacks. In terms of PSNR and SSIM, the results obtained for the Lena grayscale image were 78.09 dB and 1, respectively. Meanwhile, the scheme achieved strong results against the HVS and X2 attacks when compared to existing schemes in the literature. Based on these findings, the proposed scheme gives evidence of increased capacity, imperceptibility, and security, overcoming the existing issues.
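    The abstract states only that the IPM selects pixels pseudo-randomly using iterations of the Hénon map over three image partitions. The sketch below illustrates the general idea of turning a Hénon orbit into a keyed pixel-visiting order; the seed, parameters and index mapping are illustrative assumptions, not the scheme's actual keying.

```python
# Illustrative sketch: deriving a pseudo-random pixel visiting order from
# the Henon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n.
# The seed (x0, y0) plays the role of a secret key in this toy example.
import numpy as np

def henon_pixel_order(height, width, x0=0.1, y0=0.1, a=1.4, b=0.3):
    """Return a keyed permutation of (row, col) pixel coordinates."""
    n = height * width
    x, y = x0, y0
    keys = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x   # one Henon iteration
        keys[i] = x
    order = np.argsort(keys)                # chaotic values -> permutation
    return [(int(i // width), int(i % width)) for i in order]

if __name__ == "__main__":
    # First few pixels an embedder keyed with (0.1, 0.1) would visit
    # in a 512 x 512 cover image.
    print(henon_pixel_order(512, 512)[:5])
```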

    Buyer-seller watermarking protocol in digital cinema

    Master's thesis (Master of Science).

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment, to journalism, to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensics capabilities that relate to media attribution, integrity and authenticity verification, and counter forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.

    Exploiting similarities between secret and cover images for improved embedding efficiency and security in digital steganography

    The rapid advancements in digital communication technology and the huge increase in computing power have generated exponential growth in the use of the Internet for various commercial, governmental and social interactions that involve the transmission of a variety of complex data and multimedia objects. Securing the content of sensitive as well as personal transactions over open networks, while ensuring the privacy of information, has become essential but increasingly challenging. Therefore, the information and multimedia security research area attracts more and more interest, and its scope of applications expands significantly. Communication security mechanisms have been investigated and developed to protect information privacy, with encryption and steganography providing the two most obvious solutions. Encrypting a secret message transforms it into noise-like data that is observable but meaningless, while steganography conceals the very existence of secret information by hiding it in mundane communication that does not attract unwelcome snooping. Digital steganography is concerned with using images, videos and audio signals as cover objects for hiding secret bit-streams. Media files are suitable for such purposes owing to their high degree of redundancy and because they are the most widely exchanged digital data. Over the last two decades, there has been a plethora of research aiming to develop new hiding schemes that overcome the variety of challenges relating to imperceptibility of the hidden secrets, payload capacity, efficiency of embedding and robustness against steganalysis attacks. Most existing techniques treat secrets as random bit-streams, even when dealing with non-random signals such as images, which may compound these challenges. This thesis is devoted to investigating and developing steganography schemes for embedding secret images in image files. While many existing schemes have been developed to perform well with respect to one or more of the above objectives, we aim to achieve optimal performance in terms of all of them. We shall only be concerned with embedding secret images in the spatial domain of cover images. The main difficulty in addressing the different challenges stems from the fact that embedding unavoidably changes cover image pixel values, although these changes may not be easy for the human eye to detect. These pixel changes are a consequence of dissimilarity between the cover LSB plane and the secret-image bit-stream, and they result in changes to the statistical parameters of stego-image bit-planes as well as to local image features. Steganalysis tools exploit these effects to model targeted as well as blind attacks. These challenges are usually dealt with by randomising the changes to the LSB, using different or multiple bit-planes to embed one or more secret bits with elaborate schemes, or embedding in certain regions that are noise-tolerant. Our innovative approach to dealing with these challenges is first to develop image procedures and models that increase the similarity between the cover image LSB plane and the secret image bit-stream. This is achieved in two novel steps, involving manipulation of both the secret image and the cover image prior to embedding, that result in a higher 0:1 ratio in both the secret bit-stream and the cover pixels' LSB plane.
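    A minimal illustration of the quantity being optimized: under plain LSB replacement (used here only as a baseline, not the thesis' mapping-based scheme), a pixel changes exactly where its LSB disagrees with the secret bit, so raising the 0:1 ratio on both sides lowers the expected number of changed pixels. The 77% and 80% figures below are the ratios reported later in this abstract.

```python
# Sketch: expected fraction of changed pixels under plain LSB replacement,
# assuming cover LSBs and secret bits are independent with given 0-ratios.
def expected_change_rate(p0_cover: float, p0_secret: float) -> float:
    """A position must be flipped whenever a 0 meets a 1 or a 1 meets a 0."""
    return p0_cover * (1 - p0_secret) + (1 - p0_cover) * p0_secret

# Ordinary binary LSB plane (~50% zeros) and a random secret (~50% zeros):
print(expected_change_rate(0.50, 0.50))   # 0.50
# Ratios reported in the abstract: 77% zeros in the cover LSB plane,
# 80% zeros in the pre-processed secret bit-stream:
print(expected_change_rate(0.77, 0.80))   # ~0.34
```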
    For the secret images, we exploit the fact that image pixel values are, in general, neither uniformly distributed, as is the case for random secrets, nor spatially stationary. We shall develop three secret image pre-processing algorithms that transform the secret image bit-stream to increase its 0:1 ratio. Two of these are similar, but one operates in the spatial domain and the other in the Wavelet domain; in both cases, the most frequent pixels are mapped onto bytes containing more 0s. The third method processes blocks by subtracting their means from their pixel values, thereby reducing the number of bits required to represent these blocks; in other words, this third algorithm also shortens the secret image bit-stream without loss of information. We shall demonstrate that these algorithms yield a significant increase in the secret image bit-stream 0:1 ratio, with the Wavelet-domain algorithm performing best at an 80% ratio. For the cover images, we exploit the fact that pixel value decomposition schemes based on Fibonacci or other defining sequences that differ from the usual binary scheme expand the number of bit-planes and may thereby help increase the 0:1 ratio in the cover image LSB plane. We investigate some such existing techniques and demonstrate that these schemes indeed lead to an increased 0:1 ratio in the corresponding cover image LSB plane. We also develop a new extension of the binary decomposition scheme, which is the best-performing one with a 77% ratio. We exploit the above two-step strategy to propose a bit-plane mapping embedding technique, instead of bit-plane replacement, so that each cover pixel becomes usable for secret embedding. This is motivated by the observation that non-binary pixel decomposition schemes also reduce the number of possible patterns for the first three bit-planes to 4 or 5 instead of 8. We shall demonstrate that the combination of the mapping-based embedding scheme and the two-step strategy produces stego-images with minimal distortion, i.e. it reduces the number of cover pixel changes after message embedding and increases embedding efficiency. We shall also demonstrate that these schemes result in reasonable stego-image quality and are robust against all the targeted steganalysis tools, but not against the blind SRM tool. We shall finally identify possible future work to achieve robustness against SRM at some payload rates and to further improve stego-image quality.
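    To make the decomposition idea concrete, the sketch below computes a Fibonacci (Zeckendorf) representation of an 8-bit pixel value: each value in 0..255 is written as a sum of non-consecutive Fibonacci numbers, giving 12 planes instead of 8 and noticeably more 0s per pixel. This illustrates the class of schemes the thesis investigates; the thesis' own extended binary decomposition is not reproduced here.

```python
# Sketch: greedy Zeckendorf (Fibonacci) decomposition of an 8-bit value.
# Because no two consecutive Fibonacci numbers can both be used, the
# resulting 12 planes contain a higher proportion of 0s than binary planes.
FIBS = [233, 144, 89, 55, 34, 21, 13, 8, 5, 3, 2, 1]   # largest first

def fibonacci_bits(value):
    """Return 12 bits; bits[i] weights FIBS[i], bits[-1] is the new LSB plane."""
    bits, remaining = [], value
    for f in FIBS:
        if f <= remaining:
            bits.append(1)
            remaining -= f
        else:
            bits.append(0)
    return bits

if __name__ == "__main__":
    for v in (0, 77, 255):
        b = fibonacci_bits(v)
        assert sum(f * bit for f, bit in zip(FIBS, b)) == v   # reconstructs v
        print(v, b, "zero ratio =", round(b.count(0) / len(b), 2))
```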

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments. This means that images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before being processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
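    A rough sketch of where the transformation sits in the evaluation pipeline: the classifier never sees raw pixels, only the delineation map computed from them. The CORF push-pull operator itself is not reproduced; delineation_map below is a hypothetical stand-in (plain gradient magnitude) marking where the real operator would be applied.

```python
# Sketch of the pipeline under test: transform first, then classify.
# `delineation_map` is only a placeholder for the CORF push-pull operator.
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float) -> np.ndarray:
    """Perturb a [0, 1] grayscale image with additive Gaussian noise."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def delineation_map(img: np.ndarray) -> np.ndarray:
    """Placeholder contour response (finite-difference gradient magnitude)."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def classify(img: np.ndarray, model):
    """The CNN sees the delineation map, never the raw (possibly noisy) image."""
    return model(delineation_map(img))

# Usage, assuming `model` is a trained CNN (e.g. an AlexNet adapted to 28x28
# Fashion-MNIST inputs) and `x` is a test image scaled to [0, 1]:
#   y_clean = classify(x, model)
#   y_noisy = classify(add_gaussian_noise(x, sigma=0.1), model)
```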

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
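    As a small illustration of the volume's central quantity, the sketch below estimates the Shannon entropy of an 8-bit grayscale image from its histogram; the function and test images are purely illustrative.

```python
# Sketch: Shannon entropy H = -sum_i p_i log2 p_i of an 8-bit grayscale
# image, with p_i the relative frequency of gray level i.
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Entropy in bits per pixel, estimated from the 256-bin histogram."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    flat = np.full((64, 64), 128, dtype=np.uint8)              # one gray level
    noisy = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    print(image_entropy(flat))    # 0.0 bits: perfectly predictable
    print(image_entropy(noisy))   # close to 8 bits: nearly uniform histogram
```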