Study and Implementation of Watermarking Algorithms
Watermarking is the process of embedding data, called a watermark, into a multimedia object such that the watermark can be detected or extracted later to make an assertion about the object. The object may be audio, an image or video. A copy of a digital image is identical to the original. This has, in many instances, led to the use of digital content with malicious intent. One way to protect multimedia data against illegal recording and retransmission is to embed a signal, called a digital signature, copyright label or watermark, that authenticates the owner of the data. Data hiding schemes, which embed secondary data in digital media, have made considerable progress in recent years and attracted attention from both academia and industry. Techniques have been proposed for a variety of applications, including ownership protection, authentication and access control. Imperceptibility, robustness against moderate processing such as compression, and the ability to hide many bits are the basic but rat..
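As a concrete illustration of the embedding idea described above, here is a minimal sketch of additive spread-spectrum watermarking with informed (non-blind) detection. The strength `alpha`, the seed and the sample values are illustrative assumptions, not the scheme of any particular work surveyed here.

```python
import random

def embed(cover, bit, alpha=2.0, seed=42):
    # Keyed pseudo-random +/-1 spreading sequence.
    rng = random.Random(seed)
    pn = [rng.choice((-1, 1)) for _ in cover]
    sign = 1 if bit else -1
    # Add the sequence, signed by the message bit and scaled by alpha.
    marked = [c + sign * alpha * p for c, p in zip(cover, pn)]
    return marked, pn

def detect(marked, cover, pn):
    # Informed detection: subtract the known cover, then correlate
    # with the spreading sequence; the sign recovers the bit.
    corr = sum((m - c) * p for m, c, p in zip(marked, cover, pn))
    return corr > 0

cover = [10.0, 12.5, 9.0, 11.0, 10.5, 9.5, 12.0, 10.0]
marked, pn = embed(cover, bit=1)
print(detect(marked, cover, pn))  # True
```

Blind detectors correlate the marked signal directly with the sequence, which is why (as the second abstract below discusses) making the sequence orthogonal to the cover improves performance.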
Digital watermark technology in security applications
With the rising emphasis on security and the number of fraud-related crimes
around the world, authorities are looking for new technologies to tighten
security of identity. Among many modern electronic technologies, digital
watermarking has unique advantages to enhance the document authenticity.
At the current stage of development, digital watermarking technologies
are not as mature as other competing technologies to support identity authentication
systems. This work presents improvements in performance of
two classes of digital watermarking techniques and investigates the issue of
watermark synchronisation.
Optimal performance can be obtained if the spreading sequences are designed
to be orthogonal to the cover vector. In this thesis, two classes of
orthogonalisation methods that generate binary sequences quasi-orthogonal
to the cover vector are presented. One method, "Sorting and Cancelling",
generates sequences with a high level of orthogonality to the cover vector.
The other, the Hadamard-matrix-based "Hadamard Matrix Search", realises
overlapped embedding, so watermarking capacity and image fidelity are
improved compared with using short watermark sequences. The results are
compared with traditional pseudo-randomly generated binary sequences. The
advantages of both classes of orthogonalisation methods are significant.
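The thesis's "Sorting and Cancelling" procedure is not spelled out in this abstract, but the underlying idea can be illustrated by a hypothetical greedy sketch: visit the cover components in decreasing magnitude and pick each sign so that the running inner product is driven toward zero, yielding a binary sequence quasi-orthogonal to the cover vector.

```python
def quasi_orthogonal_sequence(cover):
    # Process components from largest to smallest magnitude so early,
    # large contributions can be cancelled by later, smaller ones.
    order = sorted(range(len(cover)), key=lambda i: -abs(cover[i]))
    seq = [0] * len(cover)
    dot = 0.0
    for i in order:
        # Choose the sign that moves the running inner product toward zero.
        seq[i] = -1 if dot * cover[i] > 0 else 1
        dot += seq[i] * cover[i]
    return seq

cover = [3.0, -1.0, 4.0, 1.5, -5.0, 2.0, -0.5, 0.25]
s = quasi_orthogonal_sequence(cover)
print(sum(si * ci for si, ci in zip(s, cover)))  # near zero
```

A pseudo-random +/-1 sequence would typically leave a much larger residual correlation with the cover, which is the noise term a blind correlation detector must overcome.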
Another watermarking method that is introduced in the thesis is based
on writing-on-dirty-paper theory. The method is presented with biorthogonal
codes that have the best robustness. The advantages and trade-offs of
using biorthogonal codes with this watermark coding method are analysed
comprehensively. The comparisons between orthogonal and non-orthogonal
codes that are used in this watermarking method are also made. It is found
that fidelity and robustness are contradictory and it is not possible to optimise
them simultaneously.
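A biorthogonal code of the kind mentioned above can be built in the standard way from a Sylvester Hadamard matrix: the n mutually orthogonal rows together with their negations give 2n codewords of length n. The sketch below shows this construction; its use as a watermark message alphabet is an illustration, not the thesis's exact coding scheme.

```python
def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + \
            [row + [-x for x in row] for row in H]
    return H

def biorthogonal_code(n):
    # Rows of H_n plus their negations: distinct non-antipodal
    # codewords are orthogonal, antipodal pairs correlate at -n.
    H = hadamard(n)
    return H + [[-x for x in row] for row in H]

code = biorthogonal_code(4)
print(len(code), len(code[0]))  # 8 4
```

Doubling the alphabet from n to 2n codewords at the same length is the capacity advantage of biorthogonal over orthogonal codes, at the cost of a reduced minimum distance between antipodal-adjacent decisions.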
Comparisons are also made between all proposed methods. The comparisons
are focused on three major performance criteria, fidelity, capacity and
robustness. From two different viewpoints, the conclusions are not the
same. From the fidelity-centric viewpoint, the dirty-paper coding method
using biorthogonal codes has a very strong advantage in preserving image
fidelity, and its advantage in capacity performance is also significant.
However, from the power
ratio point of view, the orthogonalisation methods demonstrate significant
advantage on capacity and robustness. The conclusions are contradictory
but together, they summarise the performance generated by different design
considerations.
The synchronisation of the watermark is first provided by high-contrast
frames around the watermarked image. Edge detection filters are used
to detect the high-contrast borders of the captured image. By scanning
the pixels from the border to the centre, the locations of detected edges
are stored. The optimal linear regression algorithm is used to estimate the
watermarked image frames. Estimation of the regression function provides
rotation angle as the slope of the rotated frames. The scaling is corrected by
re-sampling the upright image to the original size. A theoretically studied
method that can synchronise the captured image to sub-pixel accuracy
is also presented. By using invariant transforms and the "symmetric
phase only matched filter" the captured image can be corrected accurately
to its original geometric size. The method uses repeating watermarks to
form an array in the spatial domain of the watermarked image; after two
filtering processes, the locations of the array's elements reveal the
rotation, translation and scaling.
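The regression step described above can be sketched concretely: fit a least-squares line through the detected edge points of the frame, and take the arctangent of the fitted slope as the rotation angle. The synthetic edge points below are an illustrative assumption standing in for real edge-detector output.

```python
import math

def rotation_angle(points):
    # Ordinary least-squares fit y = a + b*x through the edge points;
    # the slope b of the frame edge gives the rotation angle.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx
    return math.degrees(math.atan(slope))

# Synthetic edge of a frame rotated by 5 degrees: y = tan(5 deg) * x.
pts = [(x, math.tan(math.radians(5.0)) * x) for x in range(0, 50, 5)]
print(round(rotation_angle(pts), 3))  # 5.0
```

With noisy edge locations the least-squares fit averages out localisation error, which is why regression over many scanned border pixels is preferred to measuring the angle from a single edge pair.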
Digital watermarking, information embedding, and data hiding systems
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 139-142).

Digital watermarking, information embedding, and data hiding systems embed information, sometimes called a digital watermark, inside a host signal, which is typically an image, audio signal, or video signal. The host signal is not degraded unacceptably in the process, and one can recover the watermark even if the composite host and watermark signal undergo a variety of corruptions and attacks as long as these corruptions do not unacceptably degrade the host signal. These systems play an important role in meeting at least three major challenges that result from the widespread use of digital communication networks to disseminate multimedia content: (1) the relative ease with which one can generate perfect copies of digital signals creates a need for copyright protection mechanisms, (2) the relative ease with which one can alter digital signals creates a need for authentication and tamper-detection methods, and (3) the increase in sheer volume of transmitted data creates a demand for bandwidth-efficient methods to either backwards-compatibly increase capacities of existing legacy networks or deploy new networks backwards-compatibly with legacy networks. We introduce a framework within which to design and analyze digital watermarking and information embedding systems. In this framework performance is characterized by achievable rate-distortion-robustness trade-offs, and this framework leads quite naturally to a new class of embedding methods called quantization index modulation (QIM).
These QIM methods, especially when combined with postprocessing called distortion compensation, achieve provably better rate-distortion-robustness performance than previously proposed classes of methods such as spread spectrum methods and generalized low-bit modulation methods in a number of different scenarios, which include both intentional and unintentional attacks. Indeed, we show that distortion-compensated QIM methods can achieve capacity, the information-theoretically best possible rate-distortion-robustness performance, against both additive Gaussian noise attacks and arbitrary squared error distortion-constrained attacks. These results also have implications for the problem of communicating over broadcast channels. We also present practical implementations of QIM methods called dither modulation and demonstrate their performance both analytically and through empirical simulations.

by Brian Chen. Ph.D.
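The core of scalar QIM with dither modulation, as described in the abstract above, can be sketched in a few lines: two scalar quantiser lattices offset by half a step, with the message bit selecting the lattice, and decoding by nearest lattice point. The step size `delta` and the sample values are illustrative; this is a minimal sketch, not the thesis's full implementation.

```python
def qim_embed(x, bit, delta=4.0):
    # Two interleaved scalar lattices, offset by delta/2; the bit
    # selects which lattice the host sample is quantised onto.
    d = 0.0 if bit == 0 else delta / 2.0
    return round((x - d) / delta) * delta + d

def qim_decode(y, delta=4.0):
    # Re-quantise onto both lattices and pick the lattice whose
    # point is closer to the received sample.
    return min((abs(y - qim_embed(y, b, delta)), b) for b in (0, 1))[1]

y = qim_embed(10.3, bit=1)   # host sample quantised to 10.0
print(qim_decode(y))         # 1
print(qim_decode(y + 0.9))   # still 1: robust to noise below delta/4
```

Larger `delta` buys robustness at the cost of embedding distortion; distortion-compensated QIM softens this trade-off by adding back a fraction of the quantisation error.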