
    Robust Watermarking in Multiresolution Walsh-Hadamard Transform

    In this paper, a new variant of the Walsh-Hadamard Transform, the multiresolution Walsh-Hadamard Transform (MR-WHT), is proposed for images. Building on it, a robust watermarking scheme for copyright protection is proposed that combines MR-WHT with singular value decomposition. The core idea of the proposed scheme is to decompose an image using MR-WHT and then modify the middle singular values of the high-frequency sub-bands at the coarsest and the finest levels with the singular values of the watermark. Finally, a reliable watermark extraction scheme is developed to extract the watermark from the distorted image. The experimental results show good visual imperceptibility and resilience of the proposed scheme against a variety of intentional and unintentional attacks.
    Comment: 6 pages, 16 figures, 2 tables
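    As a rough illustration of the singular-value step described above (the MR-WHT decomposition itself is not reproduced), the NumPy sketch below perturbs the middle singular values of a high-frequency sub-band with those of the watermark. The 25%/75% band cut-offs and the strength alpha are illustrative assumptions, not values from the paper.

```python
import numpy as np

def embed_middle_singular_values(subband, watermark, alpha=0.05,
                                 lo=0.25, hi=0.75):
    """Perturb the 'middle' singular values of a sub-band with the
    watermark's singular values (hypothetical parameters)."""
    U, s, Vt = np.linalg.svd(subband, full_matrices=False)
    sw = np.linalg.svd(watermark, compute_uv=False)
    a, b = int(lo * len(s)), int(hi * len(s))  # middle band of the spectrum
    n = min(b - a, len(sw))
    s = s.copy()
    s[a:a + n] += alpha * sw[:n]               # additive modification
    return (U * s) @ Vt                        # recompose the sub-band
```

    Extraction would invert this step: subtract the host's original middle singular values and divide by alpha to recover an estimate of the watermark's spectrum.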

    A Study on Invisible Digital Image and Video Watermarking Techniques

    Digital watermarking emerged with the rapid advancement of networked multimedia systems, and was developed as a copyright-enforcement technology for proving copyright ownership. It was first applied to still images but has since been extended to other multimedia objects such as audio and video. Watermarking, which belongs to the information hiding field, has attracted considerable research interest, with work being conducted across many branches of the field. Image watermarking techniques can be classified by domain (spatial or transform) or by their use of wavelets. Copyright protection, capacity, security, and robustness are some of the important factors taken into account when a watermarking system is designed. This paper aims to provide a detailed survey of watermarking techniques, with a particular focus on the types of image watermarking and their applications in today's world.

    Increasing the Stability of Digital Watermarks Embedded in Still Images under JPEG Compression

    Subject of Research. The paper presents the design and evaluation of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented that embeds the watermark by modifying frequency coefficients of the discrete Hadamard transform. The frequency coefficients used for embedding are chosen because their values change sharply after modification under maximum JPEG compression. The pixel blocks used for embedding are chosen by the value of their entropy. The new algorithm was analyzed for resistance to image compression, noise, filtering, resizing, color change, and histogram equalization. The Elham algorithm, which has good resistance to JPEG compression, was chosen for comparison. Nine gray-scale images were selected as cover objects. Imperceptibility of the embedded distortions was assessed by the peak signal-to-noise ratio, which should be no lower than 43 dB for the distortions to remain invisible. Robustness of the embedded watermark was measured by the Pearson correlation coefficient, whose value should not fall below 0.5 for minimally acceptable robustness. The computing experiment comprises: embedding the watermark into each test image with the new algorithm and with the Elham algorithm; introducing distortions into the protected image; and extracting the embedded information and comparing it with the original. The parameters of the algorithms were chosen to introduce approximately the same level of distortion into the images. Main Results. The watermark pre-processing method presented in the paper significantly reduces the volume of information embedded in the still image. The numerical experiments show that the proposed algorithm is more resistant to JPEG compression, noise, Wiener filtering, and brightness change. Practical Relevance. The proposed algorithm is applicable to copyright protection of still images.
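    The abstract names two ingredients, entropy-based block selection and embedding in Hadamard-transform coefficients; the sketch below illustrates both under stated assumptions. The 8x8 block size, the coefficient position, the quantisation step, and the parity rule are all hypothetical, since the paper's exact embedding rule is not given here.

```python
import numpy as np
from scipy.linalg import hadamard

H8 = hadamard(8) / np.sqrt(8)  # orthonormal 8x8 Hadamard matrix

def block_entropy(block):
    """Shannon entropy of an 8-bit grayscale block, used to rank
    candidate blocks for embedding (higher entropy hides changes better)."""
    hist = np.bincount(block.astype(np.uint8).ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def embed_bit(block, bit, coeff=(4, 4), delta=8.0):
    """Carry one bit in the parity of a quantised mid-frequency
    Hadamard coefficient (a hypothetical rule, not the paper's)."""
    C = H8 @ block @ H8.T            # forward 2-D Hadamard transform
    q = int(np.round(C[coeff] / delta))
    if q % 2 != bit:                 # force the parity to encode the bit
        q += 1
    C[coeff] = q * delta
    return H8.T @ C @ H8             # inverse transform (H8 is orthonormal)
```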

    Data hiding in images based on fractal modulation and diversity combining

    The current work provides a new data-embedding infrastructure based on fractal modulation. The embedding problem is tackled from a communications point of view: the data to be embedded becomes the signal to be transmitted through a watermark channel, where the channel is the image itself or some manipulation of it. The image self-noise and the noise due to attacks are the two sources of noise in this paradigm. At the receiver, the image self-noise has to be suppressed, while noise due to attacks can sometimes be predicted and inverted. The concepts of fractal modulation and deterministic self-similar signals are extended to two-dimensional images. These techniques are used to build a deterministic bi-homogeneous watermark signal that embodies the binary data to be embedded. The binary data is repeated and scaled with different amplitudes at each level and used as the wavelet decomposition pyramid. The binary data is appended with special marking data, used during demodulation to identify and correct unreliable or distorted blocks of wavelet coefficients. This specially constructed pyramid is inverted using the inverse discrete wavelet transform to obtain the self-similar watermark signal. In the embedding stage, the well-established linear additive technique is used to add the watermark signal to the cover image, generating the watermarked (stego) image. Data extraction from a potential stego image is done using diversity combining; neither the original image nor the original binary sequence (or watermark signal) is required during extraction. A prediction of the original image, obtained using a cross-shaped window, is used to suppress the image self-noise in the potential stego image. The resulting signal is then decomposed using the discrete wavelet transform, with the same number of levels and the same wavelet as in the watermark generation stage. A thresholding process similar to wavelet de-noising identifies whether a particular coefficient is reliable. A decision is made as to whether a block is reliable based on the marking data present in each block, and corrections are sometimes applied. Finally, the selected blocks are combined using the diversity-combining strategy to extract the embedded binary data.
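    A minimal PyWavelets sketch of the watermark-construction idea, assuming a dyadic image size: the bit pattern is tiled into every detail band of an otherwise empty wavelet pyramid, with the amplitude halved at each finer level, and the pyramid is inverted to obtain the self-similar additive watermark signal. The marking data, the thresholding, and the diversity-combining receiver are not reproduced; the Haar wavelet, the level count, and the per-level gain are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def selfsimilar_watermark(bits, shape, levels=3, wavelet='haar', gain=4.0):
    """Tile the data into every detail band of a wavelet pyramid and
    invert the transform (hypothetical parameters)."""
    pattern = 2.0 * np.asarray(bits, dtype=float) - 1.0  # {0,1} -> {-1,+1}
    coeffs = pywt.wavedec2(np.zeros(shape), wavelet, level=levels)
    new = [coeffs[0]]                            # approximation band left empty
    for lvl, details in enumerate(coeffs[1:]):   # coarsest level comes first
        amp = gain / 2 ** lvl                    # amplitude halves per level
        new.append(tuple(amp * np.resize(pattern, d.shape) for d in details))
    return pywt.waverec2(new, wavelet)

# Linear additive embedding, as described in the text:
# stego = cover_image + selfsimilar_watermark(bits, cover_image.shape)
```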

    Digital Image Watermarking in the Wavelet Domain Based on Singular Value Decomposition and Multi-Objective Optimization

    Since the extraordinary technical revolution from analog to digital at the end of the 20th century, digital documents have become ever more widely used because of their inexpensive and extremely fast distribution. This shift from analog to digital, however, has raised concerns about copyright. Because digital content can easily be copied, modified, and distributed without degradation, unauthorized parties can appropriate digital documents and profit at the expense of the legitimate rights holders. In this context, a new technique inspired mainly by cryptography and steganography was introduced in the early 1990s: embedding a mark in a digital document. This technique is called digital watermarking. This thesis presents five contributions to the fields of digital watermarking and image processing. The first contribution is the proposal of two solutions to the false-positive watermark detection problem observed in some digital watermarking algorithms based on singular value decomposition; one solution is based on hash functions and the other on image encryption. The second contribution is an image encryption algorithm based on the principle of the Rubik's cube. The third contribution is a digital watermarking algorithm based on the lifting wavelet transform (LWT) and singular value decomposition (SVD). A single scaling factor is used to control the embedding strength of the mark, allowing the best trade-off between robustness and imperceptibility of the watermark to be found. Using multiple scaling factors instead of a single one is more attractive [CKLS97], but determining optimal values for multiple scaling factors is a very difficult and complex problem. To find these optimal values, multi-objective optimization by genetic algorithm (MOGAO) and multi-objective optimization by ant colony algorithm (MOACO) were used separately; these constitute the fourth and fifth contributions of this thesis.
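    A minimal sketch of the third contribution's embedding step, with a standard DWT (pywt) standing in for the lifting-scheme transform and with assumed parameters: the LL band's singular values are shifted by the watermark's, controlled by the single scaling factor alpha. The MOGAO/MOACO search for multiple optimal scaling factors is not shown.

```python
import numpy as np
import pywt

def lwt_svd_embed(host, watermark, alpha=0.05, wavelet='haar'):
    """Single-scaling-factor SVD embedding in a wavelet sub-band
    (a sketch; the thesis uses the lifting wavelet transform)."""
    LL, (LH, HL, HH) = pywt.dwt2(host, wavelet)
    U, s, Vt = np.linalg.svd(LL, full_matrices=False)
    sw = np.linalg.svd(np.resize(watermark, LL.shape), compute_uv=False)
    s_marked = s + alpha * sw        # alpha trades robustness for invisibility
    LL_marked = (U * s_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)
```

    A larger alpha strengthens the mark against attacks at the cost of visible distortion, which is exactly the trade-off the multi-objective optimizers are used to balance when one factor per singular value is allowed.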

    Watermarking scheme using slantlet transform and enhanced knight tour algorithm for medical images

    Digital watermarking has been employed as an alternative solution for protecting medical healthcare systems, applying a layer of protection directly on top of the stored data. Medical images, which are highly sensitive to image processing and cannot tolerate visual degradation, have become a focus of digital watermarking. However, since watermarking introduces changes to medical images, maintaining high imperceptibility and robustness at the same time is a challenge. Research to date has tended to focus on the embedding method rather than on the embedding sequence of the watermarking itself. Also, although watermarking has been introduced into medical images as a layer of protection, it still cannot prevent a knowledgeable attacker from retrieving the watermark. Therefore, this research proposes a robust watermarking scheme with high imperceptibility for medical images that improves the watermarking scheme's imperceptibility, embedding technique, embedding region, and embedding sequence. To increase the imperceptibility of the watermark, this research introduces the Dynamic Visibility Threshold (DVT), a new parameter that increases visual quality: a unique number, derived from descriptive statistics, that differs for each host image. In addition, two new embedding-region concepts, the Embeddable zone (EBD) and the Non-Embeddable zone (NEBD), are proposed to act as a non-parametric decision region that complicates estimation of the detection function. The embedding sequence is shuffled using an enhanced Knight Tour algorithm based on the Slantlet transform to increase the complexity of the watermarking scheme. Peak Signal-to-Noise Ratio (PSNR) evaluation yielded approximately 270 dB, suggesting that the proposed medical image watermarking technique outperforms other contemporary techniques in the same working domain. On the standard dataset, all host images are resilient to salt-and-pepper noise, speckle noise, Poisson noise, rotation, and sharpening, with a minimum Bit Error Rate (BER) of 0.0426 and Normalized Cross-Correlation (NCC) values as high as 1. Since quartile theory is used, the experiments also show that, among the three quartiles, the third quartile performs best as the DVT, achieving a BER of 0 and an NCC of 1.
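    To make the idea of shuffling the embedding sequence concrete, here is a plain knight's-tour generator over an 8x8 grid of blocks using Warnsdorff's rule; the visit order then dictates the order in which blocks are embedded. The thesis's enhanced tour and its combination with the Slantlet transform are not reproduced.

```python
def knight_tour(n=8, start=(0, 0)):
    """Warnsdorff's-rule knight's tour over an n x n block grid; the
    returned visit order can serve as an embedding sequence."""
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    visited = {start}

    def options(p):
        return [(p[0] + dr, p[1] + dc) for dr, dc in moves
                if 0 <= p[0] + dr < n and 0 <= p[1] + dc < n
                and (p[0] + dr, p[1] + dc) not in visited]

    tour, pos = [start], start
    while len(tour) < n * n:
        # Warnsdorff: move to the reachable square with fewest onward moves.
        nxt = min(options(pos), key=lambda q: len(options(q)), default=None)
        if nxt is None:       # heuristic dead-ended; return the partial tour
            break
        visited.add(nxt)
        tour.append(nxt)
        pos = nxt
    return tour
```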

    Robust data protection and high efficiency for IoTs streams in the cloud

    Remotely generated streams of Internet of Things (IoT) data have become a vital category on which many applications rely. Smart meters collect readings for household activities such as power and gas consumption every second; the readings are transmitted wirelessly through various channels and public hops to the operation centres. Because of the unusually large stream sizes, the operation centres use cloud servers, where various entities process the data in real time for billing and power management. Similarly, in smart pipe projects (where oil pipes are continuously monitored using sensors), the collected streams are sent to the public cloud for real-time flaw detection. Many other similar applications can make the world a more convenient place, with climate change mitigation and transportation improvement among the results. Despite the obvious advantages of these applications, unique challenges arise concerning a suitable balance between guaranteeing the streams' security (privacy, authenticity, and integrity) and not hindering direct operations on those streams, while also handling data management issues such as the volume of protected streams during transmission and storage. These challenges become more complicated when the streams reside on third-party cloud servers. In this thesis, several novel techniques are introduced to address these problems. We begin by protecting the privacy and authenticity of transmitted readings without disrupting direct operations. We propose two steganography techniques that rely on different mathematical security models. The results look promising. Security: only an approved party holding the required security tokens can retrieve the hidden secret. Distortion: the difference between the original and protected readings is almost zero, which means the streams can be used in their protected form at intermediate hops or on third-party servers. We then improve the integrity of the transmitted protected streams, which are prone to intentional or unintentional noise, by proposing a steganographic technique based on secure error detection and correction. It allows legitimate recipients to (1) detect and recover any noise loss in the hidden sensitive information without disclosing privacy, and (2) remedy the received protected readings using the corrected version of the secret hidden data. The experiments show that our technique has robust recovery capabilities (Root Mean Square (RMS) < 0.01%, Bit Error Rate (BER) = 0, and PRD < 1%). To address the volume of transmitted protected streams, two lossless compression algorithms for IoT readings are introduced to reduce the volume of protected readings at intermediate hops without revealing the hidden secrets. The first uses a Gaussian approximation function to represent IoT streams with a few parameters regardless of the roughness of the signal. The second reduces the randomness of the IoT streams by splitting them into a smaller finite field, enhancing repetition and avoiding floating-point rounding errors. Under the same conditions, both of our techniques were superior to existing models mathematically (the entropy was halved) and empirically (compression ratios of 3.8:1 to 4.5:1 were achieved).
    We were driven by the question 'Can the size of multiple incoming compressed protected streams be further reduced on the cloud without decompression?' to overcome the issue of vast quantities of compressed and protected IoT streams on the cloud. A novel lossless size-reduction algorithm is introduced to prove that already compressed IoT protected readings can be reduced further. This is achieved by employing similarity measurements to classify the compressed streams into subsets, reducing the effect of uncorrelated compressed streams; the values of every subset are treated independently for further reduction. Both mathematical and empirical experiments confirm the improvement in entropy (reduced by almost 50%) and the resulting size reduction (up to 2:1).
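    As a sketch of the 'split into a smaller finite field' idea, assuming 16-bit integer readings: each reading is split into its high and low bytes, so each resulting stream ranges over a smaller alphabet with more repetition, and the streams are compressed independently (zlib stands in for the thesis's coder). The entropy helper makes the 'entropy was halved' claim checkable on one's own data.

```python
import zlib
import numpy as np

def entropy_bits(buf):
    """Shannon entropy in bits/symbol of a byte string."""
    counts = np.bincount(np.frombuffer(buf, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def split_and_compress(readings):
    """Split 16-bit readings into high/low byte streams and compress
    each independently (a sketch, not the thesis's exact coder)."""
    r = np.asarray(readings, dtype=np.uint16)
    hi, lo = np.divmod(r, 256)           # two smaller-alphabet streams
    return (zlib.compress(hi.astype(np.uint8).tobytes()),
            zlib.compress(lo.astype(np.uint8).tobytes()))
```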