20 research outputs found
Blind Image Watermarking using Normalized STDM robust against Fixed Gain Attack
Spread Transform Dither Modulation (STDM), an extension of Quantization Index Modulation (QIM), is a blind watermarking scheme that achieves high robustness against random noise and re-quantization attacks but remains vulnerable to the Fixed Gain Attack (FGA). In this paper, we improve the STDM watermarking scheme by making the quantization step size dependent on the watermarked content so that it resists the FGA. Simulations on real images show that our approach achieves strong robustness against the FGA, Additive White Gaussian Noise (AWGN) and JPEG compression attacks while preserving a higher level of transparency.
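The idea of a content-dependent quantization step can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's exact scheme: the spreading direction, the projection, and the step factor `alpha` are all assumptions made for the example. Because the step is proportional to the host vector's norm, a fixed gain attack y = g·x scales the signal and the step together, leaving the quantizer's decision unchanged.

```python
import numpy as np

def embed_bit(x, bit, alpha=0.05):
    """Embed one bit by quantizing the spread-transform projection of x.

    The step size is tied to the host norm (content-dependent, in the
    spirit of the paper), so fixed-gain scaling cancels out at the
    detector. The direction u and the value of alpha are illustrative.
    """
    u = np.ones_like(x) / np.sqrt(len(x))      # spreading direction
    p = x @ u                                  # spread-transform projection
    delta = alpha * np.linalg.norm(x)          # content-dependent step
    d = bit * delta / 2.0                      # dither offset selects the bit
    q = delta * np.round((p - d) / delta) + d  # snap onto the bit's lattice
    return x + (q - p) * u

def detect_bit(y, alpha=0.05):
    """Recover the bit whose dithered lattice lies closest to the projection."""
    u = np.ones_like(y) / np.sqrt(len(y))
    p = y @ u
    delta = alpha * np.linalg.norm(y)
    errs = []
    for bit in (0, 1):
        d = bit * delta / 2.0
        q = delta * np.round((p - d) / delta) + d
        errs.append(abs(p - q))
    return int(np.argmin(errs))
```

Under a plain fixed-gain attack, both the projection and the step scale by the same factor, so detection is exactly gain-invariant.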
Oblivious data hiding : a practical approach
This dissertation presents an in-depth study of oblivious data hiding with the emphasis on quantization based schemes. Three main issues are specifically addressed:
1. Theoretical and practical aspects of embedder-detector design.
2. Performance evaluation, and analysis of performance vs. complexity tradeoffs.
3. Some application specific implementations.
A communications framework based on channel-adaptive encoding and channel-independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding rate performance merits, assuming AWGN attack scenarios and using the mean squared error distortion measure.
The performance-complexity tradeoffs available for large and small embedding signal sizes (availability of high bandwidth and limitation of low bandwidth) are examined and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than the hiding rate or payload. In particular, cropping-resampling and lossy compression types of non-invertible attacks are considered in this dissertation work.
High capacity data embedding schemes for digital media
High capacity image data hiding methods and robust high capacity digital audio watermarking algorithms are studied in this thesis. The main results of this work are the development of novel algorithms with state-of-the-art performance: high capacity and transparency for image data hiding, and robustness, high capacity and low distortion for audio watermarking.
Optimization of a Blind Speech Watermarking Technique against Amplitude Scaling
This paper presents a gain invariant speech watermarking technique based on quantization of the Lp-norm. In this scheme, first, the original speech signal is divided into frames. Second, each frame is divided into two vectors based on odd and even indices. Third, quantization index modulation (QIM) is used to embed the watermark bits into the ratio of the Lp-norms of the odd- and even-indexed vectors. Finally, the Lagrange optimization technique is applied to minimize the embedding distortion. By applying a statistical analytical approach, the embedding distortion and error probability are estimated. Experimental results not only confirm the accuracy of the derived statistical analysis but also demonstrate the robustness of the proposed technique against common signal processing attacks.
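The framing and odd/even ratio-quantization steps can be sketched as follows. This is a hedged illustration of the general idea only: the paper's Lagrange-optimal distortion allocation is replaced here by a simple rescaling of the two sub-vectors, and the step size `delta` and norm order `p` are assumed values. Since the norm ratio is unchanged by amplitude scaling, the embedded bit survives a gain attack.

```python
import numpy as np

def lp_norm(v, p=2.0):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

def embed_frame_bit(frame, bit, delta=0.05, p=2.0):
    """Embed one bit into the Lp-norm ratio of odd/even samples.

    Simplified sketch: the ratio r = ||odd|| / (||odd|| + ||even||)
    is quantized with a per-bit dither, then both sub-vectors are
    rescaled (not Lagrange-optimally) to realise the quantized ratio.
    """
    odd, even = frame[1::2], frame[0::2]
    a, b = lp_norm(odd, p), lp_norm(even, p)
    r = a / (a + b)                           # amplitude-scale invariant ratio
    d = bit * delta / 2.0                     # per-bit dither
    rq = delta * np.round((r - d) / delta) + d
    out = frame.copy()
    out[1::2] = odd * (rq * (a + b) / a)      # new odd norm: rq * (a + b)
    out[0::2] = even * ((1 - rq) * (a + b) / b)
    return out

def detect_frame_bit(frame, delta=0.05, p=2.0):
    """Pick the bit whose dithered quantizer lies closest to the ratio."""
    odd, even = frame[1::2], frame[0::2]
    a, b = lp_norm(odd, p), lp_norm(even, p)
    r = a / (a + b)
    errs = [abs(r - (delta * np.round((r - bit * delta / 2) / delta)
                     + bit * delta / 2)) for bit in (0, 1)]
    return int(np.argmin(errs))
```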
HDR Image Watermarking
In this chapter we survey available solutions for HDR image watermarking. First, we briefly discuss watermarking in general terms, with particular emphasis on its requirements, which primarily include security, robustness, imperceptibility, capacity and the availability of the original image during recovery. Compared with traditional image watermarking, however, HDR images possess a unique set of features, such as an extended range of luminance values to work with and tone-mapping (TM) operators against which robustness is essential. These clearly affect the HDR watermarking algorithms proposed in the literature, which we extensively review next, including a thorough analysis of the reported experimental results. As a working example, we also describe the HDR watermarking system that we recently proposed, which combines imperceptibility, security and robustness to TM operators at the expense of capacity. We conclude the chapter with a critical analysis of the current state and future directions of watermarking applications in the HDR domain.
Sensor Data Integrity Verification for Real-time and Resource Constrained Systems
Sensors are used in multiple applications that touch our lives and have become an integral part of modern life. They are used to build intelligent control systems in industries such as healthcare, transportation, consumer electronics and the military. Many mission-critical applications require sensor data to be secure and authentic. Sensor data security can be achieved using traditional solutions like cryptography and digital signatures, but these techniques are computationally intensive and cannot easily be applied to resource constrained systems. Low complexity data hiding techniques, on the contrary, are easy to implement and do not need substantial processing power or memory. In this applied research, we use and configure established low complexity data hiding techniques from the multimedia forensics domain. These techniques are used to secure sensor data transmissions in resource constrained and real-time environments such as an autonomous vehicle. We identify the areas in an autonomous vehicle that require sensor data integrity, propose suitable watermarking techniques to verify the integrity of the data, and evaluate the performance of the proposed method against different attack vectors. In our proposed method, sensor data is embedded with application specific metadata, and this process introduces some distortion. We analyze this embedding-induced distortion and its impact on overall sensor data quality to conclude that watermarking techniques, when properly configured, can solve sensor data integrity verification problems in an autonomous vehicle. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167387/3/Raghavendar Changalvala Final Dissertation.pdf
Robust digital image watermarking
This research presents a novel rank-based image watermarking method and improved moment-based and histogram-based image watermarking methods. A high-frequency component modification step is also proposed to compensate for the side effect of commonly used Gaussian pre-filtering. The proposed methods outperform the latest image watermarking methods.
Digital watermark technology in security applications
With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten identity security. Among many modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current stage of development, digital watermarking technologies are not as mature as other competing technologies that support identity authentication systems. This work presents improvements in the performance of two classes of digital watermarking techniques and investigates the issue of watermark synchronisation.
Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences that have a high level of orthogonality to the cover vector. The Hadamard Matrix based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so the watermarking capacity and image fidelity can be improved compared to using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences. The advantages of both classes of orthogonalisation methods are significant.
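A toy sketch in the spirit of the sorting-and-cancelling idea follows. It is an assumption-laden illustration, not the thesis algorithm: cover samples are visited in decreasing magnitude, and each ±1 sequence element takes the sign that pulls the running inner product with the cover back toward zero, so the contributions cancel and the final correlation is bounded by the largest cover magnitude.

```python
import numpy as np

def quasi_orthogonal_sequence(cover):
    """Greedy ±1 sequence with near-zero correlation to the cover vector.

    Hypothetical sketch: sort by |cover| descending, then pick each
    sign to oppose the accumulated inner product. Not the thesis's
    'Sorting and Cancelling' method itself.
    """
    s = np.empty(len(cover))
    running = 0.0                              # running inner product <s, cover>
    for i in np.argsort(-np.abs(cover)):       # largest magnitudes first
        sign = -1.0 if running * cover[i] > 0 else 1.0
        s[i] = sign                            # binary (+1/-1) element
        running += sign * cover[i]
    return s
```

Because each step can only shrink (never grow past) the largest remaining magnitude, the final inner product stays small relative to the vector norms, giving the quasi-orthogonality the text describes.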
Another watermarking method introduced in the thesis is based on the writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which have the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively. Comparisons between orthogonal and non-orthogonal codes used in this watermarking method are also made. It is found that fidelity and robustness are contradictory and cannot be optimised simultaneously.
Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity and robustness. From two different viewpoints, the conclusions are not the same. From the fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity advantage is also significant. However, from the power ratio point of view, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations.
Synchronisation of the watermark is first provided by high-contrast frames around the watermarked image. Edge detection filters are used to detect the high-contrast borders of the captured image. By scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frames; the slope of the estimated regression line gives the rotation angle of the rotated frames. Scaling is corrected by re-sampling the upright image to its original size. A theoretically studied method that is able to synchronise the captured image to sub-pixel accuracy is also presented. By using invariant transforms and the "symmetric phase-only matched filter", the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array's elements reveal rotation, translation and scaling information through two filtering processes.
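The border-regression step of the first synchronisation scheme can be sketched briefly. This is an illustrative assumption-based example, not the thesis implementation: given pixel coordinates detected along one rotated border of the frame, a least-squares line fit yields the rotation angle as the arctangent of the slope.

```python
import numpy as np

def frame_rotation_angle(edge_xy):
    """Estimate the rotation angle from points on one frame border.

    edge_xy: (N, 2) array of (x, y) edge locations found by scanning
    from the image border toward the centre. A least-squares line
    through them gives the rotated border's slope, hence the angle.
    """
    x, y = edge_xy[:, 0], edge_xy[:, 1]
    slope, intercept = np.polyfit(x, y, 1)   # optimal linear regression
    return np.degrees(np.arctan(slope))      # rotation angle in degrees
```

After rotating the image back by this angle, re-sampling to the original frame dimensions corrects the scaling, as described above.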