Combined robust and fragile watermarking algorithms for still images. Design and evaluation of combined blind discrete wavelet transform-based robust watermarking algorithms for copyright protection using mobile phone numbers and fragile watermarking algorithms for content authentication of digital still images using hash functions.
This thesis deals with copyright protection and content authentication for still images. New blind
transform domain block based algorithms using one-level and two-level Discrete Wavelet Transform
(DWT) were developed for copyright protection. A mobile phone number with its international code is used as
the watermarking data. The robust algorithms used the Low-Low frequency coefficients of the DWT to
embed the watermarking information. The watermarking information is embedded in the green channel of
the RGB colour image and Y channel of the YCbCr images. The watermarking information is scrambled
using a secret key to increase the security of the algorithms. Because the watermarking
information is small compared to the host image, the embedding process is repeated several times,
which increases the robustness of the algorithms. A shuffling process is applied during the
multiple embedding passes to avoid spatial correlation between the host image and the watermarking
information. The effects of using one and two levels of DWT on robustness and image quality
have been studied. The Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure
(SSIM), and the Normalized Correlation Coefficient (NCC) are used to evaluate the fidelity of the images.
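Two of these fidelity measures are simple enough to sketch directly (SSIM is more involved and omitted here; the `peak` value assumes 8-bit images, and the NCC form below is one common variant, not necessarily the exact formula used in the thesis):

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a distorted image."""
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

def ncc(w, w_extracted):
    """Normalized correlation between an original and an extracted watermark."""
    w = np.asarray(w, dtype=float)
    w_extracted = np.asarray(w_extracted, dtype=float)
    return float(np.sum(w * w_extracted) /
                 np.sqrt(np.sum(w ** 2) * np.sum(w_extracted ** 2)))
```

An NCC of 1.0 indicates a perfectly recovered watermark; values near 0 indicate no correlation.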
Several greyscale and colour still images were used to test the new robust algorithms. The new
algorithms achieved better robustness than DCT-based algorithms against attacks such as JPEG
compression, scaling, salt-and-pepper noise, Gaussian noise, filtering, and other image processing operations.
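The LL-band embedding with a key-dependent layout described above can be illustrated with a minimal sketch. This is not the thesis's algorithm: it uses a hand-rolled one-level Haar DWT and simple parity (quantization) embedding so that extraction stays blind, and the function names, the `step` size, and the use of key-selected coefficient positions in place of explicit scrambling are all assumptions:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: return the LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2].astype(float)   # even rows, even cols
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

def embed_watermark(img, bits, key, step=16.0):
    """Blind embedding: the secret key selects LL coefficient positions,
    and each chosen coefficient is quantized so its parity encodes one bit."""
    ll, lh, hl, hh = haar_dwt2(img)
    flat = ll.reshape(-1)                      # view into ll
    positions = np.random.default_rng(key).permutation(flat.size)[:len(bits)]
    for pos, bit in zip(positions, bits):
        q = int(np.round(flat[pos] / step))
        if q % 2 != bit:                       # force parity to match the bit
            q += 1
        flat[pos] = q * step
    return haar_idwt2(ll, lh, hl, hh)

def extract_watermark(img, n_bits, key, step=16.0):
    """Blind extraction: only the key is needed, not the original image."""
    ll, _, _, _ = haar_dwt2(img)
    flat = ll.reshape(-1)
    positions = np.random.default_rng(key).permutation(flat.size)[:n_bits]
    return [int(np.round(flat[pos] / step)) % 2 for pos in positions]
```

Repeating the embedding over many positions, with shuffling between passes, would follow the same pattern; the quantization step trades robustness against fidelity.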
The authenticity of the images was assessed with a fragile watermarking algorithm that embeds a
hash function (MD5) digest as watermarking information in the spatial domain. The new algorithm
showed high sensitivity to any tampering with the watermarked images. The combined fragile and
robust watermarking caused minimal distortion to the images, and the combined scheme achieved both
copyright protection and content authentication.
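The fragile half of the scheme, embedding an MD5 digest in the spatial domain, can be sketched along the same lines. This is an illustration, not the thesis's algorithm: it clears every pixel's least-significant bit, hashes the result, and hides the 128-bit digest in the first 128 LSBs; all names and layout choices are assumptions:

```python
import hashlib
import numpy as np

def embed_fragile(img):
    """Embed an MD5 digest of the image content into pixel LSBs.
    The digest is computed over the image with all LSBs zeroed, so the
    verifier can recompute the same digest from the marked image."""
    work = img.astype(np.uint8) & 0xFE             # clear every LSB
    digest = hashlib.md5(work.tobytes()).digest()  # 128-bit content hash
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    out = work.reshape(-1).copy()
    out[:bits.size] |= bits                        # hide digest in first 128 LSBs
    return out.reshape(img.shape)

def verify_fragile(img):
    """Return True iff the embedded digest matches the image content."""
    flat = img.astype(np.uint8).reshape(-1)
    stored = np.packbits(flat[:128] & 1).tobytes() # recover the hidden digest
    work = img.astype(np.uint8) & 0xFE             # same masking as embedding
    return hashlib.md5(work.tobytes()).digest() == stored
```

Because any change to a masked pixel value alters the recomputed digest, even a single-bit modification outside the LSB plane is detected.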
Comparative evaluation of video watermarking techniques in the uncompressed domain
Thesis (MScEng) -- Stellenbosch University, 2012. ENGLISH ABSTRACT: Electronic watermarking is a method whereby information can be imperceptibly
embedded into electronic media, while ideally being robust against common signal
manipulations and intentional attacks to remove the embedded watermark. This
study evaluates the characteristics of uncompressed video watermarking techniques
in terms of visual characteristics, computational complexity and robustness against
attacks and signal manipulations.
The foundations of video watermarking are reviewed, followed by a survey of
existing video watermarking techniques. Representative techniques from different
watermarking categories are identified, implemented and evaluated.
Existing image quality metrics are reviewed and extended to improve their performance
when comparing these video watermarking techniques. A new metric for
the evaluation of inter-frame flicker in video sequences is then developed.
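The thesis's flicker metric is not reproduced here, but one simple, hypothetical proxy for inter-frame flicker measures how much the embedding residual changes from frame to frame; a residual that is static over time causes no flicker:

```python
import numpy as np

def flicker_proxy(original, watermarked):
    """Mean absolute change, between consecutive frames, of the embedding
    residual (watermarked minus original). Frames are stacked as a
    (T, H, W) array. A residual that is identical in every frame scores
    0; a residual that varies over time scores higher."""
    residual = watermarked.astype(float) - original.astype(float)
    return float(np.mean(np.abs(np.diff(residual, axis=0))))
```

A metric like this separates temporal artefacts from purely spatial distortion, which per-frame measures such as PSNR cannot do.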
A technique for possibly improving the robustness of the implemented discrete
Fourier transform technique against rotation is then proposed. It is also shown that
it is possible to reduce the computational complexity of watermarking techniques
without affecting the quality of the original content, through a modified watermark
embedding method.
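The rotation-related behaviour of DFT-domain schemes can be illustrated with a minimal, hypothetical embedding sketch: watermark energy is added to magnitudes on a mid-frequency ring (and to their conjugate-symmetric partners, to keep the image real), since rotating an image rotates its magnitude spectrum by the same angle. This is not the technique proposed in the thesis; the ring radius, strength, and point layout are assumptions:

```python
import numpy as np

def embed_dft(img, bits, radius=10.0, strength=50.0):
    """Boost DFT magnitudes at bit positions spread over a mid-frequency
    ring; each point's conjugate-symmetric partner is adjusted identically
    so the inverse transform stays (numerically) real."""
    F = np.fft.fft2(img.astype(float))
    h, w = F.shape
    for k, bit in enumerate(bits):
        theta = np.pi * k / len(bits)              # half the ring suffices:
        u = int(round(radius * np.cos(theta))) % h # the other half holds the
        v = int(round(radius * np.sin(theta))) % w # conjugate partners
        if bit:                                    # a 1-bit boosts the magnitude
            F[u, v] += strength * np.exp(1j * np.angle(F[u, v]))
            F[-u, -v] += strength * np.exp(1j * np.angle(F[-u, -v]))
    return np.real(np.fft.ifft2(F))
```

A detector would compare the magnitudes at these ring positions against their local neighbourhood; that part, and any resynchronisation search over rotation angles, is omitted here.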
Possible future studies are then recommended with regard to further improving
watermarking techniques against rotation.
Recent Advances in Signal Processing
The signal processing task is a critical issue in the majority of new technological inventions and challenges, in a variety of applications in both science and engineering fields. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
Perceptual Video Quality Assessment and Enhancement
With the rapid development of network visual communication technologies, digital video has become ubiquitous and indispensable in our everyday lives. Video acquisition, communication, and processing systems introduce various types of distortions, which may have a major impact on the video quality perceived by human observers. Effective and efficient objective video quality assessment (VQA) methods that can predict perceptual video quality are highly desirable in modern visual communication systems for performance evaluation, quality control, and resource allocation purposes. Moreover, perceptual VQA measures may also be employed to optimize a wide variety of video processing algorithms and systems for best perceptual quality.
This thesis develops several novel ideas in the areas of video quality assessment and enhancement. Firstly, by considering a video signal as a 3D volume image, we propose a 3D structural similarity (SSIM) based full-reference (FR) VQA approach, which also incorporates local information content and local distortion-based pooling methods. Secondly, a reduced-reference (RR) VQA scheme is developed by tracing the evolution of local phase structures over time in the complex wavelet domain. Furthermore, we propose a quality-aware video system which combines spatial and temporal quality measures with a robust video watermarking technique, such that RR-VQA can be performed without transmitting RR features via an ancillary lossless channel. Finally, a novel strategy for enhancing video denoising algorithms, namely poly-view fusion, is developed by examining a video sequence as a 3D volume image from multiple (front, side, top) views. This leads to significant and consistent gains in both peak signal-to-noise ratio (PSNR) and SSIM performance, especially at high noise levels.
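The SSIM index underlying the 3D extension above can be sketched in its global (single-window) form; practical SSIM is computed over local windows and pooled, and the constants below follow the commonly used defaults for 8-bit images:

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Global (single-window) SSIM: compares luminance (means), contrast
    (variances), and structure (covariance) of two images. Production SSIM
    averages this quantity over local sliding windows instead."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # stabilising constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

The 3D variant described in the thesis applies the same comparison over spatio-temporal volumes rather than 2D windows.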
Establishing the digital chain of evidence in biometric systems
Traditionally, a chain of evidence or chain of custody refers to the chronological documentation, or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, physical or electronic. Whether in the criminal justice system, military applications, or natural disasters, ensuring the accuracy and integrity of such chains is of paramount importance. Intentional or unintentional alteration, tampering, or fabrication of digital evidence can lead to undesirable effects. We find that, despite the consequences at stake, historically no unique protocol or standardized procedure exists for establishing such chains. Current practices rely on traditional paper trails and handwritten signatures as the foundation of chains of evidence. Copying, fabricating, or deleting electronic data is easier than ever, and establishing equivalent digital chains of evidence has become both necessary and desirable. We propose to consider a chain of digital evidence as a multi-component validation problem. It ensures the security of access control, confidentiality, integrity, and non-repudiation of origin. Our framework includes techniques from cryptography, keystroke analysis, digital watermarking, and hardware source identification. The work offers contributions to many of the fields used in the formation of the framework. Related to biometric watermarking, we provide a means for watermarking iris images without significantly impacting biometric performance. Specific to hardware fingerprinting, we establish the ability to verify the source of an image captured by biometric sensing devices such as fingerprint sensors and iris cameras. Related to keystroke dynamics, we establish that user stimulus familiarity is a driver of classification performance.
Finally, example applications of the framework are demonstrated with data collected in crime scene investigations, people screening activities at ports of entry, naval maritime interdiction operations, and mass fatality incident disaster responses.
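The integrity requirement at the heart of a digital chain of evidence can be illustrated with a minimal, hypothetical hash-chained log: each entry commits to the hash of its predecessor, so altering any earlier record invalidates every later link. This is an illustration only, not the framework proposed in the work, and the field names are made up:

```python
import hashlib
import json

def append_entry(chain, actor, action, evidence_id):
    """Append a tamper-evident record carrying the hash of its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "action": action,
              "evidence_id": evidence_id, "prev": prev}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain):
    """Recompute every hash and check each link to its predecessor."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True
```

In practice each record would also be signed (for non-repudiation of origin) and could embed watermarking or sensor-fingerprint metadata alongside the custody fields.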
Discrete Wavelet Transforms
The discrete wavelet transform (DWT) algorithms have a firm position in the processing of signals in several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of the DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended to be a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
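The lifting construction mentioned above can be illustrated with the simplest case, a one-level Haar DWT written as split/predict/update steps (a generic textbook sketch, not tied to any particular chapter of the book):

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar DWT via lifting: split the signal into even and
    odd samples, predict the odds from the evens (detail), then update the
    evens (approximation). This is the unnormalized, integer-friendly form."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even              # predict step
    approx = even + detail / 2       # update step -> pairwise means
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert by undoing the update and predict steps in reverse order."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```

The lifting form computes the transform in place with half the arithmetic of the convolution form, which is why it is favored in the VLSI and FPGA implementations surveyed in Part I.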
Multimedia Forensics
This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred communication means for most users, it has also become an integral part of most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment, to journalism, to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape powered by innovative imaging technologies and sophisticated tools, based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensics capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.