
    Digital watermark technology in security applications

    With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity. Among many modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current stage of development, digital watermarking technologies are not as mature as other competing technologies for supporting identity authentication systems. This work presents performance improvements for two classes of digital watermarking techniques and investigates the issue of watermark synchronisation. Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences that have a high level of orthogonality to the cover vector. The Hadamard-matrix-based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so that watermarking capacity and image fidelity can be improved compared to using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences. The advantages of both classes of orthogonalisation methods are significant. Another watermarking method introduced in the thesis is based on writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which offer the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively. Comparisons between orthogonal and non-orthogonal codes used in this watermarking method are also made. It is found that fidelity and robustness are contradictory and cannot be optimised simultaneously. Comparisons are also made between all proposed methods.
The comparisons focus on three major performance criteria: fidelity, capacity and robustness. From two different viewpoints, the conclusions are not the same. From a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity advantage is also significant. However, from the power-ratio point of view, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations. Watermark synchronisation is first provided by high-contrast frames around the watermarked image. Edge-detection filters are used to detect the high-contrast borders of the captured image. By scanning the pixels from the border to the centre, the locations of the detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frames; the estimated regression function gives the rotation angle as the slope of the rotated frames. Scaling is corrected by re-sampling the upright image to the original size. A theoretically studied method that can synchronise the captured image to sub-pixel accuracy is also presented. Using invariant transforms and the "symmetric phase-only matched filter", the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array's elements reveal rotation, translation and scaling information through two filtering processes.
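The orthogonalisation idea above can be illustrated with a minimal sketch: a binary ±1 spreading sequence is greedily sign-flipped until it is quasi-orthogonal to the cover vector, so that a plain correlation detector recovers the embedded bit. This is a simplified stand-in for the thesis's "Sorting and Cancelling" method, not its actual algorithm; all names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def quasi_orthogonal_sequence(cover, n_iter=200):
    """Greedily flip signs of a random +/-1 sequence so that its
    correlation with the cover vector approaches zero."""
    seq = rng.choice([-1.0, 1.0], size=cover.size)
    for _ in range(n_iter):
        dot = seq @ cover
        if dot == 0:
            break
        gains = -2 * seq * cover          # change in dot product per flip
        i = np.argmin(np.abs(dot + gains))
        if abs(dot + gains[i]) >= abs(dot):
            break                         # no flip improves orthogonality
        seq[i] = -seq[i]
    return seq

cover = rng.normal(0, 10, size=256)       # host signal (e.g. transform coefficients)
seq = quasi_orthogonal_sequence(cover)

bit, alpha = 1, 0.8                       # message bit in {+1,-1}, embedding strength
marked = cover + alpha * bit * seq        # additive spread-spectrum embedding

# correlation detector: sign of <marked, seq> recovers the bit because
# <cover, seq> is close to zero after orthogonalisation
detected = int(np.sign(marked @ seq))
```

With the host correlation cancelled, the detector output is dominated by the term alpha * bit * (seq @ seq), which is why longer sequences (or overlapped embedding, as in the Hadamard-based method) trade off against capacity and fidelity.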

    Recent Advances in Signal Processing

    The signal processing task is a critical issue in the majority of new technological inventions and challenges in a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand, ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach to edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
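As a toy illustration of the lifting construction mentioned above (not taken from the book), a one-level 2-D Haar DWT can be built from a predict step and an update step, applied first to rows and then to columns:

```python
import numpy as np

def haar_lifting_1d(x):
    """One lifting step of the (unnormalised) Haar wavelet:
    predict the odd samples from the even ones, then update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even                 # predict step: high-pass band
    approx = even + detail / 2          # update step: low-pass band (pair mean)
    return approx, detail

def haar_lifting_2d(img):
    """One DWT level: filter rows, then columns -> LL, LH, HL, HH subbands."""
    lo_r = np.stack([haar_lifting_1d(r)[0] for r in img])
    hi_r = np.stack([haar_lifting_1d(r)[1] for r in img])
    ll = np.stack([haar_lifting_1d(c)[0] for c in lo_r.T]).T
    lh = np.stack([haar_lifting_1d(c)[1] for c in lo_r.T]).T
    hl = np.stack([haar_lifting_1d(c)[0] for c in hi_r.T]).T
    hh = np.stack([haar_lifting_1d(c)[1] for c in hi_r.T]).T
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_lifting_2d(img)
# ll holds the 2x2 block means; lh/hl/hh hold the directional details
```

The in-place, integer-friendly nature of lifting steps like these is what makes them attractive for the VLSI and FPGA implementations discussed in Part I.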

    A study and some experimental work of digital image and video watermarking

    The rapid growth of digitized media and the emergence of digital networks have created a pressing need for copyright protection and anonymous communication schemes. Digital watermarking (or data hiding, in more general terms) is a steganography technique that adds information into a digital data stream. Several of the most important watermarking schemes applied to multilevel and binary still images and digital videos were studied, including schemes based on the DCT (Discrete Cosine Transform), DWT (Discrete Wavelet Transform), and fractal transforms. The question of whether these invisible watermarking techniques can resolve the issue of rightful ownership of intellectual property was discussed. The watermarking schemes were further studied from a malicious-attack point of view, which is considered an effective way to advance watermarking techniques. In particular, the StirMark robustness tests based on geometrical distortion were carried out. A binary watermarking scheme applied in the DCT domain is presented in this research project. The effect of the binarization procedure necessarily encountered when dealing with binary document images is found to be so strong that most conventional embedding schemes fail at watermarking binary document images, and particular measures have to be taken. The initial simulation results indicate that the proposed technique is promising, though further efforts need to be made.
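To make the idea of DCT-domain embedding concrete, here is a hedged sketch using quantisation-index modulation of a single mid-frequency coefficient in an 8x8 block. This is a generic textbook construction, not the binary-document scheme this project proposes; the coefficient position (3, 4) and step size are arbitrary illustrative choices.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix (rows = frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

C = dct_matrix(8)

def embed_bit(block, bit, delta=20.0):
    """Embed one bit by nudging a mid-frequency DCT coefficient to the
    upper or lower quarter of its quantisation cell (QIM)."""
    d = C @ block @ C.T
    q = np.round(d[3, 4] / delta) * delta
    d[3, 4] = q + (delta / 4 if bit else -delta / 4)
    return C.T @ d @ C                       # inverse DCT (C is orthonormal)

def extract_bit(block, delta=20.0):
    """Recover the bit from the sign of the quantisation residual."""
    d = C @ block @ C.T
    return int((d[3, 4] - np.round(d[3, 4] / delta) * delta) > 0)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)
marked = embed_bit(block, 1)
```

The step size delta controls the usual fidelity/robustness trade-off: a larger cell survives stronger distortion but perturbs the pixels more, which is exactly the tension the binarization procedure aggravates for binary document images.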

    Research on digital image watermark encryption based on hyperchaos

    The digital watermarking technique embeds meaningful information, in the form of one or more hidden watermark images, into a host image known as the secret carrier. It is difficult for a hacker to extract or remove any hidden watermark from an image, and especially to crack the so-called digital watermark. Combining the digital watermarking technique with traditional image encryption techniques can greatly improve anti-hacking capability, which suggests it is a good method for preserving the integrity of the original image. The research work contained in this thesis includes: (1) a literature review, from which the hyperchaotic watermarking technique emerged as relatively advantageous and became the main subject of this programme; (2) the theoretical foundation of watermarking technologies, including the human visual system (HVS), the colour space transform, the discrete wavelet transform (DWT), the main watermark embedding algorithms, and the mainstream methods for improving watermark robustness and for evaluating watermark embedding performance; (3) a devised hyperchaotic scrambling technique, applied to colour image watermarks, that helps to improve image encryption and anti-cracking capabilities. The experiments in this research demonstrate the robustness and other advantages of the proposed technique. This thesis focuses on combining chaotic scrambling and wavelet watermark embedding to achieve a hyperchaotic digital watermark for encrypting digital products, with the human visual system (HVS) and other factors taken into account. This research is of significant importance and has industrial application value.
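The scrambling step can be illustrated with an ordinary one-dimensional chaotic map (the logistic map). The thesis itself employs higher-dimensional hyperchaotic systems, but the key-dependent permutation principle is the same; all parameter values here are illustrative.

```python
import numpy as np

def logistic_permutation(n, x0, r=3.9999):
    """Derive a pixel permutation from a logistic-map orbit: iterate the
    map from the secret initial value x0 and use the sorting order of the
    orbit as a key-dependent shuffle."""
    x = np.empty(n)
    xi = x0
    for i in range(n):
        xi = r * xi * (1 - xi)          # logistic map iteration, stays in (0, 1)
        x[i] = xi
    return np.argsort(x)

def scramble(img, key):
    perm = logistic_permutation(img.size, x0=key)
    return img.reshape(-1)[perm].reshape(img.shape)

def unscramble(img, key):
    perm = logistic_permutation(img.size, x0=key)
    out = np.empty(img.size, dtype=img.dtype)
    out[perm] = img.reshape(-1)         # invert the shuffle
    return out.reshape(img.shape)

wm = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy watermark image
key = 0.735                                        # secret key: initial condition
scrambled = scramble(wm, key)
restored = unscramble(scrambled, key)
```

Because chaotic orbits depend sensitively on the initial condition, even a slightly wrong key yields a completely different permutation, which is the anti-cracking property the thesis exploits (hyperchaotic systems enlarge the key space further).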

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Statistical image modeling in the sparse domain has recently attracted a great deal of research interest. The contourlet transform, a two-dimensional transform with multiscale and multi-directional properties, is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and to develop statistically-based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian with heavy tails, and that they are best described by heavy-tailed statistical distributions such as the alpha-stable family. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on the modeling results, a new method for image denoising in the contourlet domain is proposed. Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and the mean structural similarity index, as well as in terms of the visual quality of the denoised images. The alpha-stable model is also used in developing new multiplicative watermark schemes for grayscale and color images.
Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed image watermarking schemes in terms of the invisibility of the watermark, but also the superiority of the watermark detectors in providing detection rates higher than those of state-of-the-art schemes, even for watermarked images that have undergone various kinds of attacks.
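A greatly simplified, univariate version of such a detector can be sketched as follows: the host subband coefficients are modeled as zero-location Cauchy, the watermark is embedded multiplicatively, and the detector computes a log-likelihood ratio between the marked and unmarked hypotheses. This illustrates the detection principle only, not the thesis's closed-form bivariate or multichannel detectors; every parameter value below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def cauchy_logpdf(x, gamma):
    """Log-density of a zero-location Cauchy with scale gamma."""
    return np.log(gamma / np.pi) - np.log(gamma**2 + x**2)

def llr_detect(y, w, strength, gamma):
    """Log-likelihood ratio for multiplicative embedding y = x * (1 + a*w)
    under a Cauchy host model: density of y/s rescaled by the Jacobian 1/s
    versus the density of y itself."""
    s = 1.0 + strength * w               # per-coefficient scaling (s > 0)
    h1 = cauchy_logpdf(y / s, gamma) - np.log(s)   # log-density if marked
    h0 = cauchy_logpdf(y, gamma)                   # log-density if unmarked
    return float(np.sum(h1 - h0))

gamma, a, n = 1.0, 0.2, 4096
x = rng.standard_cauchy(n) * gamma       # heavy-tailed "subband" coefficients
w = rng.choice([-1.0, 1.0], n)           # bipolar watermark sequence
marked = x * (1 + a * w)

score_marked = llr_detect(marked, w, a, gamma)
score_unmarked = llr_detect(x, w, a, gamma)
# the marked signal should score higher than the unmarked one; a threshold
# on the score then trades false alarms against missed detections
```

The heavy-tailed model matters here: a Gaussian-based correlator is easily dominated by the few huge Cauchy-distributed coefficients, whereas the Cauchy log-likelihood terms stay bounded.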

    Anti-Collusion Fingerprinting for Multimedia

    Digital fingerprinting is a technique for identifying users who might try to use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost-effective attack against such digital fingerprints is collusion, where several differently marked copies of the same content are combined to disrupt the underlying fingerprints. In this paper, we investigate the problem of designing fingerprints that can withstand collusion and allow for the identification of colluders. We begin by introducing the collusion problem for additive embedding. We then study the effect that averaging collusion has upon orthogonal modulation. We introduce an efficient detection algorithm for identifying the fingerprints associated with K colluders that requires O(K log(n/K)) correlations for a group of n users. We next develop a fingerprinting scheme based upon code modulation that does not require as many basis signals as orthogonal modulation. We propose a new class of codes, called anti-collusion codes (ACC), which have the property that the composition of any subset of K or fewer codevectors is unique. Using this property, we can therefore identify groups of K or fewer colluders. We present a construction of binary-valued ACC under the logical AND operation that uses the theory of combinatorial designs and is suitable for both the on-off keying and antipodal form of binary code modulation. In order to accommodate n users, our code construction requires only O(sqrt{n}) orthogonal signals for a given number of colluders. We introduce four different detection strategies that can be used with our ACC for identifying a suspect set of colluders. We demonstrate the performance of our ACC for fingerprinting multimedia and identifying colluders through experiments using Gaussian signals and real images. This paper has been submitted to IEEE Transactions on Signal Processing.
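A minimal numerical illustration of the averaging-collusion setting described above (orthogonal modulation with a non-blind correlation detector; the paper's efficient O(K log(n/K)) detection algorithm and the ACC construction are not reproduced here, and all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, dim = 8, 1024

# exactly orthonormal fingerprints via QR (orthogonal modulation)
q, _ = np.linalg.qr(rng.normal(size=(dim, n_users)))
F = q.T                                    # rows = per-user fingerprints

host = rng.normal(0.0, 5.0, dim)           # host signal
copies = host + 2.0 * F                    # each user's marked copy

colluders = [1, 4, 6]
forged = copies[colluders].mean(axis=0)    # averaging collusion: K copies averaged

# non-blind correlation detector: averaging leaves each colluder with 1/K
# of their fingerprint, so colluders score 2/K while innocents score ~0
scores = F @ (forged - host)
suspects = sorted(np.argsort(scores)[-len(colluders):].tolist())
```

The sketch shows why averaging is "cost-effective": each colluder's correlation score shrinks as 1/K, so with enough colluders the scores sink toward the noise floor, which is precisely the regime the paper's ACC codes are designed to handle with far fewer basis signals.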

    Steganography and steganalysis: data hiding in Vorbis audio streams

    The goal of the current work is to introduce ourselves to the world of steganography and steganalysis, centering our efforts on acoustic signals, a branch of steganography and steganalysis that has received much less attention than its counterpart for images. With this purpose in mind, it is essential first to gain a basic understanding of signal theory and the properties of the Human Auditory System, and we dedicate the first part of this work to that aim. Once those foundations are established, in the second part we obtain a precise picture of the state of the art in the steganographic and steganalytic sciences, from which we are able to establish or deduce some good-practice guides. With both previous subjects in mind, we design and implement a stego-system over the Vorbis audio codec and, finally, as a conclusion, analyze it using the principles studied in the first and second parts.