14 research outputs found

    Watermarking for multimedia security using complex wavelets

    This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing watermark distortions to be better adapted to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised that combines the robustness of spread spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
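The spread transform idea above can be pictured as spread-transform dither modulation: project the host block onto a secret spreading direction and quantize only that projection. The following numpy sketch illustrates the general technique, not the paper's exact algorithm; the step size, block length, and spreading vector are illustrative assumptions.

```python
import numpy as np

def st_dm_embed(x, p, bit, delta=4.0):
    """Spread-transform dither modulation: project the host block x onto a
    unit spreading vector p, quantize the projection to the lattice for the
    bit, and adjust x along p only."""
    p = p / np.linalg.norm(p)
    proj = float(x @ p)
    d = 0.0 if bit == 0 else delta / 2.0   # dither interleaves the two bit lattices
    q = np.round((proj - d) / delta) * delta + d
    return x + (q - proj) * p

def st_dm_extract(y, p, delta=4.0):
    p = p / np.linalg.norm(p)
    proj = float(y @ p)
    # The nearer bit lattice decides the decoded bit.
    d0 = abs(proj - np.round(proj / delta) * delta)
    d1 = abs(proj - (np.round((proj - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

rng = np.random.default_rng(1)
x = rng.normal(size=64)    # stand-in for a block of wavelet coefficients
p = rng.normal(size=64)    # spreading vector (shared secret)
y = st_dm_embed(x, p, bit=1)
```

Robustness comes from spreading the quantization change over the whole block, while capacity scales with the number of blocks, one bit each.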

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
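The vector-space view of color the survey opens with can be made concrete: a trichromatic device is modeled as a linear map on 3-vectors. Below, the published linear-sRGB to CIE XYZ matrix (D65 white point, rounded to four digits) serves as an example of such a device model.

```python
import numpy as np

# Linear-sRGB to CIE XYZ (D65): each color is a 3-vector and the device
# model is a 3x3 linear map, matching the survey's vector-space framing.
M_RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear):
    """Map a linear-RGB 3-vector to CIE XYZ tristimulus values."""
    return M_RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

white = rgb_to_xyz([1.0, 1.0, 1.0])
# The Y (luminance) row sums to ~1, so the white point has luminance ~1.
```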

    High-fidelity imaging : the computational models of the human visual system in high dynamic range video compression, visible difference prediction and image processing

    As new displays and cameras offer enhanced color capabilities, there is a need to extend the precision of digital content. High Dynamic Range (HDR) imaging encodes images and video with higher than normal bit-depth precision, enabling representation of the complete color gamut and the full visible range of luminance. This thesis addresses three problems of HDR imaging: the measurement of visible distortions in HDR images, lossy compression for HDR video, and artifact-free image processing. To measure distortions in HDR images, we develop a visual difference predictor for HDR images that is based on a computational model of the human visual system. To address the problem of HDR image encoding and compression, we derive a perceptually motivated color space for HDR pixels that can efficiently encode all perceivable colors and distinguishable shades of brightness. We use the derived color space to extend the MPEG-4 video compression standard for encoding HDR movie sequences. We also propose a backward-compatible HDR MPEG compression algorithm that encodes both a low-dynamic-range and an HDR video sequence into a single MPEG stream. Finally, we propose a framework for image processing in the contrast domain. The framework transforms an image into multi-resolution physical contrast maps, which are then rescaled in just-noticeable-difference (JND) units. The application of the framework is demonstrated with a contrast-enhancing tone mapping and a color-to-gray conversion that preserves color saliency.
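The thesis's derived color space is not reproduced here; as an illustrative stand-in, a simple logarithmic luminance encoding shows the underlying idea of mapping the full visible luminance range into a limited number of integer codes with roughly uniform perceptual steps. The range bounds and bit depth below are assumptions.

```python
import numpy as np

def encode_luma(L, l_min=1e-4, l_max=1e8, bits=12):
    """Map absolute luminance (cd/m^2) to integer codes: log spacing gives
    roughly constant relative steps, a crude proxy for constant JND steps
    (illustrative only, not the thesis's derived color space)."""
    L = np.clip(L, l_min, l_max)
    t = (np.log10(L) - np.log10(l_min)) / (np.log10(l_max) - np.log10(l_min))
    return np.round(t * (2 ** bits - 1)).astype(int)

def decode_luma(code, l_min=1e-4, l_max=1e8, bits=12):
    """Invert encode_luma back to absolute luminance."""
    t = code / (2 ** bits - 1)
    return 10 ** (np.log10(l_min) + t * (np.log10(l_max) - np.log10(l_min)))

L = np.array([0.01, 100.0, 1e6])
codes = encode_luma(L)
L_hat = decode_luma(codes)
# Twelve bits over twelve decades keeps the round-trip error below ~1% relative.
```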

    Data hiding in images based on fractal modulation and diversity combining

    The current work provides a new data-embedding infrastructure based on fractal modulation. The embedding problem is tackled from a communications point of view: the data to be embedded becomes the signal to be transmitted through a watermark channel, which may be the image itself or some manipulation of the image. The image self-noise and noise due to attacks are the two sources of noise in this paradigm. At the receiver, the image self-noise has to be suppressed, while noise due to attacks may sometimes be predicted and inverted. The concepts of fractal modulation and deterministic self-similar signals are extended to 2-dimensional images. These techniques are used to build a deterministic bi-homogeneous watermark signal that embodies the binary data to be embedded. The binary data is repeated and scaled with different amplitudes at each level to form the wavelet decomposition pyramid, and is appended with special marking data that is used during demodulation to identify and correct unreliable or distorted blocks of wavelet coefficients. This specially constructed pyramid is inverted using the inverse discrete wavelet transform to obtain the self-similar watermark signal. In the embedding stage, the well-established linear additive technique is used to add the watermark signal to the cover image, generating the watermarked (stego) image. Data extraction from a potential stego image is done using diversity combining; neither the original image nor the original binary sequence (or watermark signal) is required. A prediction of the original image, obtained using a cross-shaped window, is used to suppress the image self-noise in the potential stego image. The resulting signal is then decomposed using the discrete wavelet transform, with the same number of levels and the same wavelet as in the watermark generation stage. A thresholding process similar to wavelet de-noising identifies whether a particular coefficient is reliable. Based on the marking data present in each block, a decision is made as to whether the block is reliable, and corrections are sometimes applied. Finally, the selected blocks are combined according to the diversity combining strategy to extract the embedded binary data.
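The two core ingredients of the extraction, linear additive embedding of repeated data and diversity combining against an imperfect host prediction, can be reduced to a small numpy sketch. The repetition factor, amplitude, and the quality of the host prediction are assumptions; the fractal pyramid and wavelet machinery are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def embed(cover, bits, reps=64, amp=2.0):
    """Linear additive embedding: each +/-1 bit is repeated across `reps`
    samples, mimicking the repeated, rescaled copies in the pyramid."""
    w = np.repeat(np.where(np.asarray(bits) > 0, 1.0, -1.0), reps) * amp
    stego = cover.copy()
    stego[: w.size] += w
    return stego

def extract(stego, cover_pred, n_bits, reps=64):
    """Diversity combining: subtract the host prediction, average the
    residual over each bit's repetitions, and take the sign."""
    resid = (stego - cover_pred)[: n_bits * reps].reshape(n_bits, reps)
    return np.where(resid.mean(axis=1) >= 0, 1, -1)

cover = rng.normal(scale=10.0, size=1024)   # image "self noise"
bits = np.array([1, -1, 1, -1, 1, -1, 1, -1])
stego = embed(cover, bits)
# An imperfect host prediction suffices: averaging over the repetitions
# suppresses the residual prediction error.
pred = cover + rng.normal(scale=1.0, size=cover.size)
recovered = extract(stego, pred, n_bits=bits.size)
```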

    Contribution des filtres LPTV et des techniques d'interpolation au tatouage numérique (Contribution of LPTV filters and interpolation techniques to digital watermarking)

    Periodic Clock Changes (PCC) and Linear Periodically Time Varying (LPTV) filters have previously been applied to multi-user telecommunications in the Signal and Communications group of the IRIT laboratory. In this thesis, we show that in any digital watermarking scheme involving spread spectrum, they can be substituted for modulation by a pseudo-noise sequence. The additional steps of optimal decoding, resynchronization, interference pre-cancellation, and spread-transform quantization also apply to PCCs and LPTV filters. For white Gaussian stationary signals, these techniques offer performance similar to classical Direct Sequence (DS) spreading. However, we show that for locally correlated signals, notably the luminance of a natural image, the periodicity of PCCs and LPTV filters combined with a Peano-Hilbert scan of the image leads to better performance. Moreover, LPTV filters are a more powerful tool than simple DS modulation. We use them to perform spectrum masking simultaneously with spreading, as well as image interference cancellation in the spectral domain; the latter technique offers very good decoding performance. The second axis of this thesis is the study of the links between interpolation and digital watermarking. We first stress the role of interpolation in attacks on watermark robustness. We then construct watermarking techniques that benefit from the perceptual properties of interpolation. The first consists of perceptual masks built from the interpolation error. In the second, an informed watermarking scheme, which can be related to random-binning techniques, is built around interpolation, using original insertion and decoding rules that include an intrinsic perceptual mask. Besides these good perceptual properties, it exhibits host-interference rejection and robustness to various attacks such as valumetric transforms. Its security level is assessed with practical attack algorithms.
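A Periodic Clock Change can be pictured as a secret, periodically repeated permutation of sample order taking the place of pseudo-noise modulation. The numpy sketch below is a loose illustration under that reading; the period, block count, chip layout, and decision rule are assumptions, not the thesis's design.

```python
import numpy as np

rng = np.random.default_rng(3)

def pcc_spread(symbol, perm, period=16, blocks=8):
    """Place the symbol on a fixed chip of each period, then apply the same
    secret permutation (the 'clock change') within every period."""
    chip = np.zeros(period)
    chip[0] = symbol
    sig = np.tile(chip, blocks).reshape(blocks, period)
    return sig[:, perm].ravel()

def pcc_despread(received, perm, period=16, blocks=8):
    """Undo the periodic permutation and combine the periods by averaging."""
    inv = np.argsort(perm)
    sig = received.reshape(blocks, period)[:, inv]
    return sig[:, 0].mean()

perm = rng.permutation(16)              # shared secret clock change
tx = pcc_spread(+1.0, perm)
rx = tx + rng.normal(scale=0.2, size=tx.size)   # channel noise
estimate = pcc_despread(rx, perm)
# Averaging over the 8 periods suppresses the noise, so estimate is near +1.
```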

    Digital watermark technology in security applications

    With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity documents. Among many modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current state of development, digital watermarking technologies are not as mature as competing technologies for supporting identity authentication systems. This work presents performance improvements for two classes of digital watermarking techniques and investigates the issue of watermark synchronisation. Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, "Sorting and Cancelling", generates sequences with a high level of orthogonality to the cover vector. The Hadamard-matrix-based orthogonalisation method, "Hadamard Matrix Search", is able to realise overlapped embedding, so watermarking capacity and image fidelity can be improved compared with using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences, and the advantages of both classes of orthogonalisation methods are significant. Another watermarking method introduced in the thesis is based on writing-on-dirty-paper theory; it is presented with biorthogonal codes, which give the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively, and comparisons are made between the orthogonal and non-orthogonal codes used in the method. It is found that fidelity and robustness are contradictory and cannot be optimised simultaneously. Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity, and robustness. From two different viewpoints, the conclusions differ. From a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity advantage is also significant. From the power-ratio point of view, however, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations. Watermark synchronisation is first provided by high-contrast frames around the watermarked image. Edge detection filters detect the high-contrast borders of the captured image; scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm estimates the watermarked image frame, the slope of the regression line giving the rotation angle of the rotated frame. Scaling is corrected by re-sampling the upright image to the original size. A theoretically studied method that can synchronise the captured image to sub-pixel accuracy is also presented: using invariant transforms and the symmetric phase-only matched filter, the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; with two filtering processes, the locations of the array elements reveal the rotation, translation, and scaling.
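The idea of generating a binary sequence quasi-orthogonal to the cover vector can be sketched with a greedy pass in the spirit of a sorting-and-cancelling construction. The thesis's exact procedure is not reproduced here; this is an assumed, illustrative variant: visit cover positions from largest magnitude down and pick each +/-1 chip so the running correlation is driven toward zero.

```python
import numpy as np

rng = np.random.default_rng(5)

def quasi_orthogonal_sequence(cover):
    """Greedy sketch (assumed, not the thesis's algorithm): choose each
    +/-1 chip, in order of decreasing |cover| value, to cancel the running
    correlation with the cover vector as far as possible."""
    s = np.zeros(cover.size)
    corr = 0.0
    for i in np.argsort(-np.abs(cover)):
        s[i] = -1.0 if corr * cover[i] > 0 else 1.0
        corr += s[i] * cover[i]
    return s, corr

cover = rng.normal(scale=50.0, size=256)   # stand-in for cover pixel values
s, corr = quasi_orthogonal_sequence(cover)
# A random +/-1 sequence would correlate with the cover on the order of
# 50 * sqrt(256) = 800; the greedy choice leaves a far smaller residual.
```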

    DCT-Based Image Feature Extraction and Its Application in Image Self-Recovery and Image Watermarking

    Feature extraction is a critical element in the design of image self-recovery and watermarking algorithms, and its quality can strongly influence the performance of these processes. The objective of the work presented in this thesis is to develop an effective methodology for feature extraction in the discrete cosine transform (DCT) domain and to apply it in the design of adaptive image self-recovery and image watermarking algorithms. The methodology uses the most significant DCT coefficients, which can be at any frequency range, to detect and classify gray-level patterns. In this way, gray-level variations with a wider range of spatial frequencies can be examined without increasing computational complexity, and the methodology can distinguish gray-level patterns rather than only the orientations of simple edges, as in many existing DCT-based methods. The proposed image self-recovery algorithm uses the developed feature extraction methodology to detect and classify blocks that contain significant gray-level variations. According to the profile of each block, the critical frequency components representing the block's specific gray-level pattern are chosen for encoding. The code lengths are variable, depending on the importance of these components in defining the block's features, which makes the encoding of critical frequency components more precise while keeping the total reference code short. The proposed image self-recovery algorithm produces remarkably shorter reference codes, only 1/5 to 3/5 of those produced by existing methods, and consequently superior visual quality in the embedded images. As the shorter codes contain the critical image information, the algorithm also achieves above-average reconstruction quality for various tampering rates. The proposed image watermarking algorithm is computationally simple and designed for blind extraction of the watermark. The principle of the algorithm is to embed the watermark where image data alterations are least visible. To this end, properties of the HVS are used to identify the gray-level image features of such locations, and the characteristics of the frequency components representing these features are identified by applying the DCT-based feature extraction methodology developed in this thesis. The strength with which the watermark is embedded is adaptive to the local gray-level characteristics. Simulation results show that the proposed watermarking algorithm yields significantly higher visual quality in the watermarked images than reported methods, with a PSNR difference of about 2.7 dB, while the embedded watermark is highly robust against JPEG compression, even at low quality factors, and against some other common image processes. The good performance of the proposed image self-recovery and watermarking algorithms indicates the effectiveness of the developed feature extraction methodology, which can be applied in a wide range of applications and is suitable for any process where DCT data is available.
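The core step, picking the most significant DCT coefficients of a block at any frequency as its feature, can be shown in a few lines. This is a minimal sketch of the general idea; the block size and the number of coefficients kept are assumptions, and the thesis's block classification logic is omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis; C @ block @ C.T is the 2-D DCT of a block."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_feature(block, k=3):
    """Feature: indices of the k largest-magnitude AC coefficients, at any
    frequency, echoing the idea of classifying gray-level patterns by their
    most significant DCT components (k and the block size are assumptions)."""
    C = dct_matrix(block.shape[0])
    coefs = C @ block @ C.T
    coefs[0, 0] = 0.0                      # ignore the DC (mean) term
    top = np.argsort(-np.abs(coefs), axis=None)[:k]
    rows, cols = np.unravel_index(top, coefs.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# A block containing a vertical edge varies only horizontally, so its
# significant coefficients all sit in DCT row 0 (zero vertical frequency).
edge = np.zeros((8, 8))
edge[:, 4:] = 255.0
feat = block_feature(edge)
```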

    Tatouage robuste d’images imprimĂ©es (Robust watermarking of printed images)

    Invisible watermarking of ID images printed on plastic card supports is a challenging problem of interest to industry. In this study, we developed a watermarking algorithm robust to the various attacks present in this case. These attacks are mainly related to the print/scan process on the plastic support and to the degradations an ID card can encounter during its lifetime. The watermarking scheme operates in the Fourier domain, as this transform has invariance properties against global geometric transformations. A preventive method pre-processes the host image before embedding to reduce the variance of the embeddable vector. A curative method comprises two counterattacks dealing with blurring and color variations. For a false alarm probability of 10⁻⁎, we obtained an average improvement of 22% over the reference method when only the preventive method is used. The combination of the preventive and curative methods leads to a detection rate greater than 99%. The detection algorithm takes less than 1 second for a 512×512 image on a conventional computer, which is compatible with the industrial application in question.
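Why the Fourier domain helps can be illustrated with a toy magnitude-domain embedder: bits written onto a mid-frequency ring survive circular translation, because the magnitude spectrum ignores spatial shifts. This is a hedged sketch of the general principle, not the thesis's algorithm; the ring radius, embedding strengths, and bit count are assumptions, and a real print/scan channel adds far more distortion than a shift.

```python
import numpy as np

rng = np.random.default_rng(11)

def ring_points(shape, n, r=60):
    """Pixel positions on a half-ring of radius r around the spectrum centre."""
    cy, cx = shape[0] // 2, shape[1] // 2
    angles = np.linspace(0, np.pi, n, endpoint=False)
    return [(int(round(cy + r * np.sin(a))), int(round(cx + r * np.cos(a))))
            for a in angles]

def embed(img, marks, base=20000.0, strength=5000.0):
    """Set the Fourier magnitude at ring positions to encode one bit each,
    keeping each point's own phase and its Hermitian partner consistent."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    for (y, x), m in zip(ring_points(F.shape, marks.size), marks):
        for yy, xx in ((y, x), (2 * cy - y, 2 * cx - x)):   # Hermitian pair
            F[yy, xx] = (base + m * strength) * np.exp(1j * np.angle(F[yy, xx]))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def detect(img, n_marks, base=20000.0):
    """Read the ring magnitudes and threshold them against the base level."""
    F = np.fft.fftshift(np.fft.fft2(img))
    mags = np.array([abs(F[y, x]) for y, x in ring_points(F.shape, n_marks)])
    return np.where(mags > base, 1.0, -1.0)

img = rng.normal(loc=128.0, scale=20.0, size=(256, 256))
marks = rng.choice([-1.0, 1.0], size=32)
stego = embed(img, marks)
shifted = np.roll(stego, (17, 23), axis=(0, 1))   # translation leaves magnitudes intact
recovered = detect(shifted, marks.size)
```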

    Synchronized Illumination Modulation for Digital Video Compositing

    The exchange of information is one of humanity's basic needs. Where wall paintings, handwriting, printing, and painting once served this purpose, people later began to create image sequences that, as so-called flip books, conveyed the impression of animation. These were soon automated using rotating picture discs on which an animation became visible through slit shutters, mirrors, or optics, in devices known as phenakistoscopes, zoetropes, or praxinoscopes. With the invention of photography in the second half of the 19th century, the first scientists, such as Eadweard Muybridge, Etienne-Jules Marey, and Ottomar AnschĂŒtz, began to take serial photographs and play them back in rapid succession as film. With the beginning of film production, the first attempts were also made to use this new technique to generate special visual effects and thereby further increase the immersion of moving-image productions. While such effects remained quite limited during the analogue era of film production, up to the 1980s, and had to be created laboriously with enormous manual effort, they gained ever more importance with the rapidly accelerating development of semiconductor technology and the simplified digital post-processing it made possible. The enormous possibilities opened up by lossless post-processing combined with photorealistic three-dimensional renderings have led to almost every film produced today containing a variety of digital video compositing effects. ... Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies.
Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis, we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video compositing techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied to a scene-recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach, we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. A third, temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
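The temporal keying idea, with illumination strobed in sync with the camera, reduces at its simplest to differencing consecutive frames: only regions struck by the modulated light change between a flash-on and a flash-off frame, so a difference threshold yields a matte without any chroma backdrop. A minimal synthetic sketch (the threshold and image values are assumptions):

```python
import numpy as np

def temporal_key(frame_on, frame_off, threshold=10.0):
    """Temporal keying sketch: with the light source strobed in sync with
    the camera, consecutive frames differ only where the modulated light
    falls, so thresholding the difference yields the foreground matte."""
    diff = frame_on.astype(float) - frame_off.astype(float)
    return (diff > threshold).astype(np.uint8)   # 1 = keyed foreground

# Synthetic example: an ambient scene plus a flash-lit subject region.
ambient = np.full((120, 160), 80.0)
subject = np.zeros((120, 160))
subject[40:80, 60:100] = 60.0            # the flash adds light only here
frame_off = ambient
frame_on = ambient + subject
matte = temporal_key(frame_on, frame_off)
```

Unlike chroma keying, the matte here is independent of foreground colors, which is the constraint the temporal keying system above aims to remove.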