
    Perceptual Quality Assessment for Video Watermarking

    Reliably evaluating the performance of watermarking algorithms is difficult, and an important aspect of this process is assessing the visibility of the watermark. In this paper, we address this issue and propose a methodology for evaluating the visual quality of watermarked video. Using a software tool that measures different types of perceptual video artifacts, we determine the most relevant impairments and design corresponding objective metrics. We demonstrate their performance through subjective experiments on several watermarking algorithms and video sequences.
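
    The paper's concrete metrics are not reproduced in this abstract, so the following is only a minimal sketch of the underlying idea: compare watermarked frames against the originals and score watermark-induced impairments. The names psnr and temporal_flicker, and all parameter values, are illustrative assumptions rather than the metrics designed in the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two frames, in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def temporal_flicker(ref_frames, wm_frames):
    """Mean absolute difference between the frame-to-frame changes of the
    reference and watermarked sequences; scene motion cancels out, so large
    values suggest temporal flicker introduced by the watermark."""
    diffs = []
    for t in range(1, len(ref_frames)):
        d_ref = ref_frames[t].astype(np.float64) - ref_frames[t - 1]
        d_wm = wm_frames[t].astype(np.float64) - wm_frames[t - 1]
        diffs.append(np.mean(np.abs(d_wm - d_ref)))
    return float(np.mean(diffs))

# Tiny demo on synthetic frames (lists of HxW uint8 arrays).
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
wm = [np.clip(f + rng.normal(0, 2, f.shape), 0, 255).astype(np.uint8) for f in ref]
print(psnr(ref[0], wm[0]), temporal_flicker(ref, wm))
```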

    Visual attention-based image watermarking

    Imperceptibility and robustness are two complementary but fundamental requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength schemes achieve good robustness but often introduce distortions that degrade the visual quality of the host media. If the distortion due to high-strength watermarking avoids visually attentive regions, it is unlikely to be noticeable to a viewer. In this paper, we exploit this concept and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower- and higher-strength watermarks in visually salient and non-salient regions, respectively. A new low-complexity wavelet-domain visual attention model is proposed that allows us to design new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art method in both saliency detection and computational complexity. In the watermarking evaluation, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% against JPEG2000 compression and 40% against common filtering attacks are reported over existing algorithms that do not use a visual attention model.
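
    The paper's low-complexity wavelet-domain attention model is not given in the abstract, so the sketch below substitutes a crude local-contrast proxy for saliency and simply switches the additive embedding strength between a low value in salient regions and a high value elsewhere. All names and thresholds (toy_saliency, embed, alpha_low, alpha_high) are hypothetical stand-ins for the proposed algorithms.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def toy_saliency(img):
    """Crude saliency proxy: deviation from a local mean, normalized to [0, 1].
    (A stand-in for the paper's wavelet-domain visual attention model.)"""
    blur = uniform_filter(img.astype(np.float64), size=9)
    s = np.abs(img - blur)
    return s / (s.max() + 1e-12)

def embed(host, pattern, alpha_low=1.0, alpha_high=4.0, thresh=0.3):
    """Additive embedding whose strength is switched by saliency: low strength
    where attention is likely to fall, high strength elsewhere."""
    alpha = np.where(toy_saliency(host) > thresh, alpha_low, alpha_high)
    return np.clip(host + alpha * pattern, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
host = rng.integers(0, 256, (128, 128)).astype(np.float64)
pattern = rng.choice([-1.0, 1.0], size=host.shape)  # pseudo-random bipolar mark
marked = embed(host, pattern)
```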

    Audio Signal Processing Using Time-Frequency Approaches: Coding, Classification, Fingerprinting, and Watermarking

    Audio signals are information-rich non-stationary signals that play an important role in our day-to-day communication, perception of the environment, and entertainment. Due to their non-stationary nature, time-only or frequency-only approaches are inadequate for analyzing these signals; a joint time-frequency (TF) approach is a better choice for processing them efficiently. In this digital era, compression, intelligent indexing for content-based retrieval, classification, and protection of digital audio content are a few of the areas that encapsulate the majority of audio signal processing applications. In this paper, we present a comprehensive array of TF methodologies that successfully address applications in all of the above areas. A TF-based audio coding scheme with a novel psychoacoustic model, music classification, audio classification of environmental sounds, audio fingerprinting, and audio watermarking are presented to demonstrate the advantages of using time-frequency approaches in analyzing and extracting information from audio signals.
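
    As a concrete illustration of one TF application from this list, the sketch below computes a toy audio fingerprint from short-time Fourier magnitudes: one bit per frequency band per frame, set when the band energy rises between consecutive frames. This is a simplified energy-difference fingerprint of the general kind found in the literature, not the scheme presented in the paper; all names and window sizes are assumptions.

```python
import numpy as np

def stft_mag(x, win=1024, hop=512):
    """Magnitude short-time Fourier transform with a Hann window."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def fingerprint(x, bands=16):
    """One bit per band per frame, set when the band energy rises between
    consecutive frames (a simplified energy-difference fingerprint)."""
    S = stft_mag(x)
    edges = np.linspace(0, S.shape[1], bands + 1, dtype=int)
    E = np.stack([S[:, edges[b]:edges[b + 1]].sum(axis=1)
                  for b in range(bands)], axis=1)
    return (np.diff(E, axis=0) > 0).astype(np.uint8)

sr = 8000
t = np.arange(sr) / sr
fp = fingerprint(np.sin(2 * np.pi * 440 * t))  # one second of a 440 Hz tone
print(fp.shape)  # (frames - 1, bands) bits
```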

    Automatic evaluation of watermarking schemes

    Many watermarking schemes are now well defined, but it is still very difficult to compare them and thus find the one that fits a given need: since both the media and the attacks used for evaluation differ from one article to the next, comparing schemes is almost impossible. In this article, we introduce StirMark Benchmark 4, a new automatic tool for evaluating watermarking schemes. It is written in C++ following an object-oriented model, which allows it to work on both images and audio files. Because there are many different applications for watermarking, we use profiles to define the tests to apply according to the parameters a method requires and the purposes it serves. We also propose different levels of quality on the usual criteria (perceptibility, robustness, and capacity) to make the performance achieved by each scheme easier to read, and we introduce new tests (audio, key space, false alarms, multiple watermarking).
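
    StirMark Benchmark 4 itself is a C++ tool; purely to illustrate the profile idea (a set of attacks selected to match a scheme's intended application), here is a hypothetical Python analogue. The attack implementations and the evaluate harness below are assumptions for illustration and are not part of StirMark.

```python
import numpy as np

def jpeg_like(img, q=50):
    """Very crude stand-in for JPEG compression: coarse requantization."""
    step = max(1, (100 - q) // 4)
    return (img // step) * step

def add_noise(img, sigma=5.0, seed=0):
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 255)

# A 'profile' in the StirMark sense: the tests chosen for one application class.
ATTACK_PROFILE = {
    "jpeg_q50": lambda im: jpeg_like(im, 50),
    "noise_s5": add_noise,
}

def evaluate(embed, detect, host, payload):
    """Run one scheme (an embed/detect pair) through the attack profile and
    report whether the mark survives each attack."""
    marked = embed(host, payload)
    return {name: bool(detect(attack(marked), payload))
            for name, attack in ATTACK_PROFILE.items()}

# Toy LSB scheme, just to exercise the harness:
emb = lambda im, bit: (im & 0xFE) | bit
det = lambda im, bit: ((im.astype(np.uint8) & 1).mean() > 0.5) == bool(bit)
host = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
print(evaluate(emb, det, host, 1))  # LSB marks rarely survive these attacks
```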

    Establishing the digital chain of evidence in biometric systems

    Traditionally, a chain of evidence or chain of custody refers to the chronological documentation, or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, whether physical or electronic. Whether in the criminal justice system, military applications, or natural disasters, ensuring the accuracy and integrity of such chains is of paramount importance: intentional or unintentional alteration, tampering, or fabrication of digital evidence can have serious consequences. We find that, despite the stakes, no unique protocol or standardized procedure has historically existed for establishing such chains; current practice relies on traditional paper trails and handwritten signatures as the foundation of chains of evidence. Copying, fabricating, or deleting electronic data is easier than ever, and establishing equivalent digital chains of evidence has become both necessary and desirable. We propose to treat a chain of digital evidence as a multi-component validation problem that ensures access control, confidentiality, integrity, and non-repudiation of origin. Our framework includes techniques from cryptography, keystroke analysis, digital watermarking, and hardware source identification, and the work contributes to many of the fields it draws on. For biometric watermarking, we provide a means of watermarking iris images without significantly impacting biometric performance. For hardware fingerprinting, we establish the ability to verify the source of an image captured by biometric sensing devices such as fingerprint sensors and iris cameras. For keystroke dynamics, we establish that user familiarity with the stimulus is a driver of classification performance. Finally, example applications of the framework are demonstrated with data collected in crime scene investigations, people-screening activities at ports of entry, naval maritime interdiction operations, and mass fatality incident disaster responses.
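
    The framework spans several disciplines, but its integrity and non-repudiation requirements can be sketched with a hash-chained, HMAC-signed custody log: each record binds a digest of the evidence, the acting party, and the digest of the previous record, so later alteration of any link is detectable. This is an illustrative construction under assumed record fields, not the protocol developed in this work.

```python
import hashlib, hmac, json, time

def append_record(chain, actor_key, actor, action, evidence_bytes):
    """Append a custody event whose HMAC covers the previous record's digest,
    linking the log into a tamper-evident chain."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {
        "actor": actor, "action": action, "time": time.time(),
        "evidence_sha256": hashlib.sha256(evidence_bytes).hexdigest(),
        "prev": prev,
    }
    msg = json.dumps(body, sort_keys=True).encode()
    body["digest"] = hmac.new(actor_key, msg, hashlib.sha256).hexdigest()
    chain.append(body)
    return chain

def verify(chain, keys):
    """Recompute every link; any altered or reordered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "digest"}
        if body["prev"] != prev:
            return False
        msg = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(keys[rec["actor"]], msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["digest"], expected):
            return False
        prev = rec["digest"]
    return True

keys = {"examiner": b"examiner-secret"}
chain = append_record([], keys["examiner"], "examiner", "seize", b"<iris image bytes>")
chain = append_record(chain, keys["examiner"], "examiner", "transfer", b"<iris image bytes>")
print(verify(chain, keys))  # True; flipping any byte in any record gives False
```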