
    Robust image watermarking using Turbo TCQ

    Robust watermarking is the art of embedding secret data within a host document. The watermark must be as transparent as possible, in order to preserve fidelity between the host and the marked document. It must also be robust: even if the marked document is attacked (by common multimedia transforms as well as malicious modifications), it should still be possible to read the embedded message. A watermarking scheme is therefore a compromise between transparency, robustness and capacity, i.e. the length of the embedded message. It is widely accepted that watermarking is a communication problem, and recent literature has drawn on digital communications tools to improve this compromise. This led to the rediscovery of channels with side information available at the encoder (see Fig. 2), and of the corresponding seminal paper by Costa [1], who demonstrated how channel capacity can be dramatically improved. Unfortunately, his demonstration is impossible to implement in practice. This article focuses on putting Costa's ideal scheme into practice for watermarking multimedia content. After recalling the theory, we repurpose quantization techniques to build an error-correcting code suited to channels with side information. The rest of the article is devoted to applying this code to the robust watermarking of greyscale images. Experimental results show encouraging performance, on a par with the reference papers in the field, with a simple and efficient implementation.
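    The quantization-based construction mentioned above can be illustrated with a much simpler relative, scalar quantization index modulation (QIM). The sketch below is only a toy illustration of Costa-style informed embedding, not the Turbo TCQ code of the paper; the function names, the `step` and `alpha` parameters and the toy data are assumptions, and NumPy is assumed available.

```python
import numpy as np

def qim_embed(host, bits, step=8.0, alpha=1.0):
    """Embed bits into host coefficients by dithered scalar quantization (QIM).

    Simplified illustration of informed embedding (side information = the host),
    not the paper's Turbo TCQ construction. `step` sets the robustness vs.
    transparency trade-off; `alpha` is a distortion-compensation factor
    (alpha=1 gives plain QIM).
    """
    host = np.asarray(host, dtype=float)
    dither = np.asarray(bits, dtype=float) * step / 2.0   # one sub-lattice per bit value
    quantized = np.round((host - dither) / step) * step + dither
    return host + alpha * (quantized - host)

def qim_decode(received, step=8.0):
    """Decode by checking which dithered lattice each coefficient is closest to."""
    received = np.asarray(received, dtype=float)
    dist0 = np.abs(received - np.round(received / step) * step)
    shifted = received - step / 2.0
    dist1 = np.abs(shifted - np.round(shifted / step) * step)
    return (dist1 < dist0).astype(int)

# Toy example: embed 8 bits, add mild noise (an "attack"), then decode.
rng = np.random.default_rng(0)
host = rng.normal(128, 20, size=8)
bits = rng.integers(0, 2, size=8)
attacked = qim_embed(host, bits) + rng.normal(0, 1.0, size=8)
print(bits, qim_decode(attacked))
```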

    Study and Implementation of Watermarking Algorithms

    Watermarking is the process of embedding data, called a watermark, into a multimedia object such that the watermark can be detected or extracted later to make an assertion about the object. The object may be audio, an image or video. A copy of a digital image is identical to the original, which has in many instances led to the use of digital content with malicious intent. One way to protect multimedia data against illegal recording and retransmission is to embed a signal, called a digital signature, copyright label or watermark, that authenticates the owner of the data. Data hiding, the embedding of secondary data in digital media, has made considerable progress in recent years and attracted attention from both academia and industry. Techniques have been proposed for a variety of applications, including ownership protection, authentication and access control. Imperceptibility, robustness against moderate processing such as compression, and the ability to hide many bits are the basic but rat..
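    As a concrete, minimal illustration of embedding and extracting a watermark, the sketch below hides bits in the least-significant bit plane of a grayscale image array. This is a hypothetical example with names and parameters of our choosing; unlike the robust schemes discussed above, plain LSB embedding does not survive compression or other processing.

```python
import numpy as np

def embed_lsb(image, watermark_bits):
    """Hide watermark bits in the least-significant bits of the first pixels.

    Minimal (fragile) watermark illustration only; robust algorithms embed in
    transform domains and add redundancy rather than touching raw LSBs.
    """
    flat = image.flatten()
    bits = np.asarray(watermark_bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Read the hidden bits back out of the least-significant bit plane."""
    return (image.flatten()[:n_bits] & 1).astype(np.uint8)

# Toy grayscale "image" and a short watermark.
img = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
wm = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
marked = embed_lsb(img, wm)
assert np.array_equal(extract_lsb(marked, wm.size), wm)
```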

    Internet of Things data contextualisation for scalable information processing, security, and privacy

    Get PDF
    The Internet of Things (IoT) interconnects billions of sensors and other devices (i.e., things) via the internet, enabling novel services and products that are becoming increasingly important for industry, government, education and society in general. It is estimated that by 2025 the number of IoT devices will exceed 50 billion, which is seven times the estimated human population at that time. With such a tremendous increase in the number of IoT devices, the data they generate is also increasing exponentially and needs to be analysed and secured more efficiently. This gives rise to what appears to be the most significant challenge for the IoT: novel, scalable solutions are required to analyse and secure the extraordinary amount of data generated by tens of billions of IoT devices. Currently, no solutions exist in the literature that provide scalable and secure IoT-scale data processing. In this thesis, a novel scalable approach is proposed for processing and securing IoT-scale data, which we refer to as contextualisation. The contextualisation solution aims to exclude irrelevant IoT data from processing and to address data analysis and security considerations via the use of contextual information. More specifically, contextualisation can effectively reduce the volume, velocity and variety of data that needs to be processed and secured in IoT applications. This contextualisation-based data reduction can subsequently provide IoT applications with the scalability needed for IoT-scale knowledge extraction and information security. IoT-scale applications, such as smart parking or smart healthcare systems, can benefit from the proposed method, which improves the scalability of data processing as well as the security and privacy of data.
    The main contributions of this thesis are: 1) an introduction to context and contextualisation for IoT applications; 2) a contextualisation methodology for IoT-based applications that is modelled around observation, orientation, decision and action loops; 3) a collection of contextualisation techniques and a corresponding software platform for IoT data processing (referred to as contextualisation-as-a-service or ConTaaS) that enables highly scalable data analysis, security and privacy solutions; and 4) an evaluation of ConTaaS in several IoT applications, demonstrating that our contextualisation techniques permit data analysis, security and privacy solutions to remain linear even in situations where the number of IoT data points increases exponentially.
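    To make the idea of contextualisation as data reduction concrete, here is a small, hypothetical sketch (not the ConTaaS platform itself) that drops IoT readings whose context is irrelevant to the application before any further analysis or security processing; the `Reading` fields and the filter criteria are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import Iterable, List, Set

@dataclass
class Reading:
    sensor_id: str
    kind: str        # e.g. "occupancy", "temperature"
    value: float
    zone: str        # spatial context of the sensor

def contextual_filter(readings: Iterable[Reading],
                      relevant_kinds: Set[str],
                      active_zones: Set[str]) -> List[Reading]:
    """Keep only readings whose context (kind and zone) matters to the application.

    Illustration of contextualisation as data reduction: irrelevant readings are
    discarded up front, so downstream analysis and security workloads grow with
    the relevant subset rather than with the raw stream.
    """
    return [r for r in readings
            if r.kind in relevant_kinds and r.zone in active_zones]

# A smart-parking application only cares about occupancy sensors in open zones.
stream = [
    Reading("s1", "occupancy", 1.0, "lot-A"),
    Reading("s2", "temperature", 21.5, "lot-A"),
    Reading("s3", "occupancy", 0.0, "lot-B"),
]
print(contextual_filter(stream, {"occupancy"}, {"lot-A"}))
```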

    Robust digital image watermarking algorithms for copyright protection

    Digital watermarking has been proposed as a solution to the problem of resolving copyright ownership of multimedia data (image, audio, video). The work presented in this thesis is concerned with the design of robust digital image watermarking algorithms for copyright protection. Firstly, an overview of the watermarking system and applications of watermarks is given, together with a survey of current watermarking algorithms and attacks. Further, the use of feature point detectors in the field of watermarking is introduced. A new class of scale-invariant feature point detectors is investigated and it is shown that they have the excellent performance required for watermarking. Robustness of the watermark to geometric distortions is a very important issue in watermarking. In order to detect the parameters of an applied affine transformation, we propose an image registration technique based on the use of a scale-invariant feature point detector. Another proposed technique, for watermark synchronization, is also based on a scale-invariant feature point detector. This technique does not use the original image to determine the parameters of the affine transformation, which include rotation and scaling. It is experimentally confirmed that this technique gives excellent results under the tested geometric distortions. In the thesis, two different watermarking algorithms are proposed in the wavelet domain. The first belongs to the class of additive watermarking algorithms, which require the presence of the original image for watermark detection. Using this algorithm, the influence of different error-correction codes on watermark robustness is investigated. The second algorithm does not require the original image for watermark detection. Its robustness is tested against various filtering and compression attacks, and it is successfully combined with the aforementioned synchronization technique in order to achieve robustness to geometric attacks. The last watermarking algorithm presented in the thesis is developed in the complex wavelet domain. The complex wavelet transform is described and its advantages over the conventional discrete wavelet transform are highlighted. The robustness of the proposed algorithm was tested against different classes of attacks. Finally, conclusions are drawn and the main future research directions are suggested.
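    As a rough illustration of the first class of algorithms mentioned above (additive, non-blind watermarking in the wavelet domain), the sketch below embeds a key-dependent pseudo-random pattern into the detail coefficients of a one-level DWT and detects it by correlation against the original image. It assumes the PyWavelets package is available; the perceptual weighting, error-correction coding and synchronisation steps of the thesis algorithms are omitted, and all names and parameters here are our own.

```python
import numpy as np
import pywt  # PyWavelets

def embed_additive_dwt(image, key, strength=2.0):
    """Additive watermark in the diagonal detail coefficients of a one-level DWT.

    Simplified sketch of a non-blind additive scheme (the original image is
    needed at the detector); not the exact algorithms of the thesis.
    """
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    rng = np.random.default_rng(key)          # key-dependent pseudo-random pattern
    w = rng.standard_normal(cD.shape)
    marked = pywt.idwt2((cA, (cH, cV, cD + strength * w)), 'haar')
    return marked, w

def detect_additive_dwt(received, original, w, strength=2.0):
    """Correlate the coefficient difference with the watermark pattern."""
    _, (_, _, cD_r) = pywt.dwt2(received.astype(float), 'haar')
    _, (_, _, cD_o) = pywt.dwt2(original.astype(float), 'haar')
    diff = cD_r - cD_o
    return np.sum(diff * w) / (strength * np.sum(w * w))  # near 1.0 if watermark present
```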

    Contribution des filtres LPTV et des techniques d'interpolation au tatouage numérique (Contribution of LPTV filters and interpolation techniques to digital watermarking)

    Periodic Clock Changes (PCC) and Linear Periodically Time Varying (LPTV) filters have previously been applied to multi-user telecommunications in the Signal and Communications group of the IRIT laboratory. In this thesis, we show that in any spread-spectrum digital watermarking scheme they can be substituted for modulation by a pseudo-random sequence. The additional steps of optimal decoding, resynchronization, interference pre-cancellation and spread-transform quantization also apply to PCCs and LPTV filters. For white Gaussian stationary signals, these techniques offer the same performance as classical Direct Sequence (DS) spreading. However, we show that in the case of locally correlated signals, such as the luminance of a natural image, the periodicity of PCCs and LPTV filters combined with a Peano-Hilbert scan of the image leads to better performance. LPTV filters are, moreover, a more powerful tool than simple DS modulation: we use them to perform spectral masking simultaneously with spreading, as well as rejection of image interference in the spectral domain, the latter technique giving very good decoding performance.
    The second axis of this thesis is the study of the links between interpolation and digital watermarking. We first highlight the role of interpolation in attacks on watermark robustness. We then build watermarking techniques that benefit from the perceptual properties of interpolation. The first consists of perceptual masks based on the interpolation error. In the second, an informed watermarking scheme is built around interpolation. This algorithm, which can be related to random-binning techniques, uses original embedding and decoding rules that include an intrinsic perceptual mask. Besides these good perceptual properties, it exhibits host-interference rejection and robustness to various attacks such as valumetric transforms. Its security level is assessed with practical attack algorithms.
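    For reference, the baseline against which PCCs and LPTV filters are compared, classical direct-sequence spread-spectrum modulation by a pseudo-random carrier, can be sketched as below. This is a generic illustration under our own naming and parameter choices, not the thesis implementation, and it embeds a single bit for simplicity.

```python
import numpy as np

def ds_spread_embed(host, bit, key, strength=1.0):
    """Direct-sequence spread-spectrum embedding of one bit into a host vector.

    Baseline DS modulation by a pseudo-noise carrier; the thesis shows that a
    PCC or an LPTV filter can take the place of this pseudo-random sequence.
    """
    rng = np.random.default_rng(key)
    carrier = rng.choice([-1.0, 1.0], size=len(host))   # pseudo-noise sequence
    symbol = 1.0 if bit else -1.0
    return np.asarray(host, dtype=float) + strength * symbol * carrier

def ds_spread_decode(received, key):
    """Correlate with the same key-generated carrier; the sign gives the bit."""
    rng = np.random.default_rng(key)
    carrier = rng.choice([-1.0, 1.0], size=len(received))
    return int(np.dot(received, carrier) > 0)

# Toy example: spread one bit over 256 samples and recover it.
rng = np.random.default_rng(2)
host = rng.normal(0, 1, size=256)
marked = ds_spread_embed(host, bit=1, key=42, strength=0.5)
print(ds_spread_decode(marked, key=42))
```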