8 research outputs found

    Detection of Hidden Object In Speech Based on Fast Fourier Transform Algorithm

    In this paper, a steganalysis technique is proposed based on spectral-domain analysis using the Discrete Fourier Transform implemented with the Fast Fourier Transform algorithm (DFT_FFTA). The aim of using this algorithm is to provide robust evidence for the presence of a hidden object in a speech segment. The Discrete Wavelet Transform (DWT) is used to decompose a speech segment 20 seconds in length; the speech is decomposed to the third level. An image of 512x512 pixels is embedded in the third-level coefficients of the speech. The reverse Discrete Wavelet Transform (RDWT) is then applied to obtain speech containing the hidden object (image), called stego-speech. DFT_FFTA is used to analyze the stego-speech and uncover evidence of the hidden object. Experimental results show that the proposed algorithm is comparable to previously existing techniques and gives a very clear and strong indication of the existence of the stego-object.
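
    A minimal sketch of the embed-then-analyse pipeline described above, assuming PyWavelets for the wavelet decomposition and NumPy's FFT for the spectral analysis; the Haar wavelet, the additive embedding rule and the spectral-deviation score are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
import pywt

def embed_image_in_speech(speech, image_bits, alpha=0.01, wavelet="haar"):
    """Decompose speech to level 3, additively embed bits into the
    third-level approximation coefficients, then reconstruct."""
    coeffs = pywt.wavedec(speech, wavelet, level=3)
    cA3 = coeffs[0].copy()
    n = min(len(image_bits), len(cA3))
    # Map bits {0,1} to {-1,+1} and add a small perturbation.
    cA3[:n] += alpha * (2.0 * image_bits[:n] - 1.0)
    coeffs[0] = cA3
    return pywt.waverec(coeffs, wavelet)

def fft_spectral_evidence(clean, stego):
    """Compare magnitude spectra of clean and stego speech; a large
    relative deviation is taken as evidence of a hidden object."""
    ref = np.abs(np.fft.rfft(clean))
    diff = np.abs(np.fft.rfft(stego)) - ref
    return np.linalg.norm(diff) / np.linalg.norm(ref)

# Illustrative usage with synthetic data (20 s at 8 kHz, 512x512 binary image).
fs = 8000
speech = np.random.randn(20 * fs)
image_bits = np.random.randint(0, 2, 512 * 512).astype(float)
stego = embed_image_in_speech(speech, image_bits)
print("relative spectral deviation:", fft_spectral_evidence(speech, stego[:len(speech)]))
```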

    Image watermarking based on the space/spatial-frequency analysis and Hermite functions expansion

    An image watermarking scheme that combines Hermite functions expansion and space/spatial-frequency analysis is proposed. In the first step, the Hermite functions expansion is employed to select busy regions for watermark embedding. In the second step, the space/spatial-frequency representation and the Hermite functions expansion are combined to design an imperceptible watermark using the host's local frequency content. The Hermite expansion is performed using the fast Hermite projection method; a recursive realization of the Hermite functions significantly speeds up the algorithms for region selection and watermark design. Watermark detection is performed within the space/spatial-frequency domain. Detection performance is increased due to the high information redundancy in that domain compared with the space or frequency domains, respectively. The performance of the proposed procedure has been tested experimentally for different watermark strengths, i.e., for different values of the peak signal-to-noise ratio (PSNR). The proposed approach provides high detection performance even for high PSNR values and offers a good compromise between detection performance (including robustness to a wide variety of common attacks) and imperceptibility.
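
    The abstract does not detail the region-selection criterion, but one plausible illustration is to project image rows onto orthonormal Hermite functions and treat regions whose energy concentrates in high-order coefficients as "busier". The sketch below is a hypothetical reading rather than the authors' method; it computes the Hermite functions by their standard three-term recurrence.

```python
import numpy as np

def hermite_functions(n_max, x):
    """Orthonormal Hermite functions psi_0..psi_{n_max-1} on the grid x,
    computed with the stable three-term recurrence."""
    psi = np.zeros((n_max, len(x)))
    psi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n_max > 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(1, n_max - 1):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                      - np.sqrt(n / (n + 1)) * psi[n - 1])
    return psi

def hermite_coefficients(signal, n_max=16):
    """Project a 1-D signal (e.g. an image row) onto Hermite functions
    by numerical quadrature on a symmetric grid."""
    x = np.linspace(-4, 4, len(signal))
    psi = hermite_functions(n_max, x)
    dx = x[1] - x[0]
    return psi @ signal * dx

# Rows whose energy concentrates in high-order coefficients are "busier":
row = np.random.randn(256)
c = hermite_coefficients(row)
busyness = np.sum(c[8:] ** 2) / (np.sum(c ** 2) + 1e-12)
print("high-order energy ratio:", busyness)
```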

    Watermarking scheme using slantlet transform and enhanced knight tour algorithm for medical images

    Digital watermarking has been employed as an alternative solution to protect the medical healthcare system, with a layer of protection applied directly on top of the stored data. Medical images, which are highly sensitive to image processing and cannot tolerate any visual degradation, have become the focus of digital watermarking. However, since watermarking introduces some changes to medical images, it is a challenge for medical image watermarking to maintain high imperceptibility and robustness at the same time. Research to date has tended to focus on the embedding method rather than on the embedding sequence of the watermark itself. Also, although watermarking has been introduced into medical images as a layer of protection, it still cannot prevent a knowledgeable attacker from retrieving the watermark. Therefore, this research proposes a robust watermarking scheme with high imperceptibility for medical images to increase the effectiveness of the medical healthcare system in terms of imperceptibility, embedding technique, embedding region and embedding sequence of the watermarking scheme. To increase the imperceptibility of a watermark, this research introduces the Dynamic Visibility Threshold (DVT), a new parameter that improves visual quality in terms of imperceptibility; it is a unique number, derived from descriptive statistics, that differs for each host image. In addition, two new concepts of embedding region, namely the Embeddable zone (EBD) and the Non-Embeddable zone (NEBD), are proposed to function as non-parametric decision regions that complicate estimation of the detection function. The embedding sequence is shuffled using an enhanced Knight Tour algorithm based on the Slantlet Transform to increase the complexity of the watermarking scheme (a sketch of the knight's tour idea follows below). A significant result of approximately 270 dB was obtained in the Peak Signal-to-Noise Ratio (PSNR) evaluation, suggesting that the proposed medical image watermarking technique outperforms other contemporary techniques in the same working domain. Based on the experimental results using the standard dataset, all host images are resilient to Salt and Pepper Noise, Speckle Noise, Poisson Noise, Rotation and the Sharpen Filter, with a minimum Bit Error Rate (BER) of 0.0426 and a Normalized Cross-Correlation (NCC) value as high as 1. Since quartile theory is used, the experiments show that among the three quartiles, the Third Quartile performs best as the Dynamic Visibility Threshold, with a BER of 0 and an NCC of 1.
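
    The enhanced Knight Tour variant and the Slantlet Transform themselves are not shown here; the sketch below only illustrates the underlying idea of using a knight's tour to shuffle the order in which coefficients are watermarked. Warnsdorff's heuristic and the 8x8 block size are assumptions made for illustration.

```python
def knight_tour(size=8, start=(0, 0)):
    """Generate a knight's tour over a size x size block using
    Warnsdorff's heuristic; the visiting order can be used to shuffle
    the sequence in which coefficients are watermarked."""
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
    visited = [[False] * size for _ in range(size)]

    def candidates(r, c):
        return [(r + dr, c + dc) for dr, dc in moves
                if 0 <= r + dr < size and 0 <= c + dc < size
                and not visited[r + dr][c + dc]]

    tour = [start]
    visited[start[0]][start[1]] = True
    for _ in range(size * size - 1):
        r, c = tour[-1]
        nxt = candidates(r, c)
        if not nxt:
            break  # the heuristic occasionally dead-ends; restart elsewhere in practice
        # Warnsdorff's rule: move to the square with the fewest onward moves.
        r2, c2 = min(nxt, key=lambda p: len(candidates(*p)))
        visited[r2][c2] = True
        tour.append((r2, c2))
    return tour

order = knight_tour()
print(len(order), "positions, first five:", order[:5])
```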

    Wavelet Domain Watermark Detection and Extraction using the Vector-based Hidden Markov Model

    Multimedia data piracy is a growing problem in view of the ease and simplicity provided by the internet in transmitting and receiving such data. A possible solution to preclude unauthorized duplication or distribution of digital data is watermarking: an identifiable piece of information that provides security against multimedia piracy. This thesis is concerned with the investigation of various image watermarking schemes in the wavelet domain using the statistical properties of the wavelet coefficients. The wavelet subband coefficients of natural images have significantly non-Gaussian and heavy-tailed features that are best described by heavy-tailed distributions. Moreover, the wavelet coefficients of images have strong inter-scale and inter-orientation dependencies. In view of this, the vector-based hidden Markov model is found to be best suited to characterize the wavelet coefficients. In this thesis, this model is used to develop new digital image watermarking schemes. Additive and multiplicative watermarking schemes in the wavelet domain are developed in order to provide improved detection and extraction of the watermark. Blind watermark detectors using the log-likelihood ratio test, and watermark decoders using the maximum likelihood criterion to blindly extract the embedded watermark bits from the observation data, are designed. Extensive experiments are conducted throughout this thesis using a number of databases selected from a wide variety of natural images. Simulation results are presented to demonstrate the effectiveness of the proposed image watermarking schemes and their superiority over some of the state-of-the-art techniques. It is shown that, in view of the use of the hidden Markov model to characterize the distributions of the wavelet coefficients of images, the proposed watermarking algorithms result in higher detection and decoding rates both before and after subjecting the watermarked image to various kinds of attacks.
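
    The exact vector-HMM likelihoods are specific to the thesis, but the blind detector's decision rule can be illustrated with the generic log-likelihood ratio test for an additive watermark; the density p_X, watermark strength gamma and threshold tau below are placeholders, not the thesis's closed-form expressions.

```latex
% Generic blind log-likelihood ratio test for watermark presence:
%   H_0: y_i = x_i                (no watermark)
%   H_1: y_i = x_i + \gamma w_i   (additive watermark of strength \gamma)
\Lambda(\mathbf{y}) \;=\; \ln \frac{p(\mathbf{y}\mid H_1)}{p(\mathbf{y}\mid H_0)}
\;=\; \sum_{i} \ln \frac{p_X\!\left(y_i - \gamma w_i\right)}{p_X\!\left(y_i\right)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau
```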

    Secure covert communications over streaming media using dynamic steganography

    Streaming technologies such as VoIP are widely embedded into commercial and industrial applications, so it is imperative to address data security issues before the problems become serious. This thesis describes a theoretical and experimental investigation of secure covert communications over streaming media using dynamic steganography. A covert VoIP communications system was developed in C++ to enable the implementation of the work carried out. A new information-theoretical model of secure covert communications over streaming media was constructed to depict the security scenarios in streaming-media-based steganographic systems under passive attacks. The model involves a stochastic process that models an information source for covert VoIP communications, and the theory of hypothesis testing that analyses the adversary's detection performance. The potential of hardware-based true random key generation and chaotic interval selection for innovative applications in covert VoIP communications was explored. The CPU's read time stamp counter was used as an entropy source to generate true random numbers serving as secret keys for streaming media steganography. A novel interval selection algorithm was devised to randomly choose data embedding locations in VoIP streams using random sequences generated from a chaotic process. A steganographic algorithm based on dynamic key updating and transmission, which integrates a one-way cryptographic accumulator into dynamic key exchange, was devised to provide secure key exchange for covert communications over streaming media. Analysis based on the discrete logarithm problem and steganalysis using the t-test showed that the algorithm has the advantage of being the most solid method of key distribution over a public channel. The effectiveness of the new steganographic algorithm for covert communications over streaming media was examined by means of security analysis, steganalysis using the non-parametric Mann-Whitney-Wilcoxon statistical test, and performance and robustness measurements. The algorithm achieved an average data embedding rate of 800 bps, comparable to other related algorithms. The results indicated that the algorithm has little or no impact on real-time VoIP communications in terms of speech quality (< 5% change in PESQ with hidden data), signal distortion (6% change in SNR after steganography) and imperceptibility, and that it is more secure and effective in addressing the security problems than other related algorithms.
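
    Python exposes no direct RDTSC instruction, so the sketch below uses the low bits of a high-resolution timer as a stand-in entropy source, and a logistic map as one example of a chaotic process for interval selection; the whitening hash, map parameter and embedding rule are illustrative assumptions, not the thesis's exact design.

```python
import time
import hashlib

def tsc_entropy_key(n_bytes=16):
    """Harvest timing jitter (a stand-in for the CPU time stamp counter)
    and whiten it with a hash to form a secret key."""
    samples = bytearray()
    for _ in range(n_bytes * 64):
        samples += (time.perf_counter_ns() & 0xFF).to_bytes(1, "little")
    return hashlib.sha256(bytes(samples)).digest()[:n_bytes]

def chaotic_intervals(key, n_packets, r=3.99):
    """Select which VoIP packets carry hidden bits by iterating a
    logistic map seeded from the key (one example of a chaotic process)."""
    x = (int.from_bytes(key[:8], "big") % 10**8) / 10**8 or 0.5
    chosen = []
    for i in range(n_packets):
        x = r * x * (1.0 - x)
        if x > 0.5:  # embed only in packets where the chaotic orbit exceeds 0.5
            chosen.append(i)
    return chosen

key = tsc_entropy_key()
print("packets selected for embedding:", chaotic_intervals(key, 50)[:10])
```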

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Statistical image modeling in the sparse domain has recently attracted a great deal of research interest. The contourlet transform, a two-dimensional transform with multiscale and multi-directional properties, is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and to develop statistically-based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian with heavy tails, and that they can be best described by heavy-tailed statistical distributions such as the alpha-stable family. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on the modeling results, a new method for image denoising in the contourlet domain is proposed. Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and mean structural similarity index, as well as in the visual quality of the denoised images. The alpha-stable model is also used to develop new multiplicative watermarking schemes for grayscale and color images. Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed image watermarking schemes in terms of the invisibility of the watermark, but also the superiority of the watermark detectors in providing detection rates higher than those of state-of-the-art schemes, even for watermarked images that have undergone various kinds of attacks.
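
    SciPy ships no contourlet transform, so the sketch below only illustrates the modeling step: fitting an alpha-stable law to a heavy-tailed coefficient array (here synthetic) and comparing it against a Gaussian fit. In the thesis the data would be an actual contourlet subband of a natural image; note also that levy_stable.fit can be slow for large samples.

```python
import numpy as np
from scipy.stats import levy_stable, norm, kstest

# Stand-in "subband": heavy-tailed synthetic coefficients; in the thesis this
# would be a contourlet subband of a natural image.
coeffs = levy_stable.rvs(alpha=1.5, beta=0.0, size=2000, random_state=0)

# Fit an alpha-stable model and a Gaussian model to the same data.
a, b, loc, scale = levy_stable.fit(coeffs)
mu, sigma = norm.fit(coeffs)

# Compare goodness of fit: the stable law should track the heavy tails better
# (smaller Kolmogorov-Smirnov statistic).
print("alpha-stable KS:", kstest(coeffs, levy_stable.cdf, args=(a, b, loc, scale)).statistic)
print("Gaussian     KS:", kstest(coeffs, norm.cdf, args=(mu, sigma)).statistic)
```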

    Robust watermarking of printed images

    Invisible watermarking of ID images printed on plastic card supports is a challenging problem of interest to industry. In this study, we developed a watermarking algorithm robust to the various attacks present in this setting. These attacks are mainly related to the print/scan process on the plastic support and to the degradations that an ID card can encounter over its lifetime. The watermarking scheme operates in the Fourier domain, as this transform has invariance properties against global geometric transformations. A preventive method consists of pre-processing the host image before the embedding process to reduce the variance of the embeddable vector. A curative method comprises two counterattacks dealing with blurring and color variations. For a false alarm probability of 10⁻⁴, we obtained an average improvement of 22% over the reference method when only the preventive method is used. The combination of the preventive and curative methods leads to a detection rate greater than 99%. The detection algorithm takes less than 1 second for a 512×512 image on a conventional computer, which is compatible with the targeted industrial application.
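
    The preventive and curative steps of the paper are not reproduced here; the sketch below only illustrates the general idea of Fourier-domain embedding, adding a pseudo-random, bit-modulated pattern to the DFT magnitude on a mid-frequency annulus (magnitude-based embedding is what gives robustness to spatial shifts). Ring radii, strength and the carrier sequence are illustrative assumptions.

```python
import numpy as np

def embed_fourier_ring(image, bits, r_in=60, r_out=70, alpha=2.0, seed=1):
    """Additively embed a pseudo-random sequence modulated by the watermark
    bits into the DFT magnitude on a mid-frequency annulus, then invert."""
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    mag, phase = np.abs(F), np.angle(F)
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    ring = np.flatnonzero((radius >= r_in) & (radius < r_out))
    rng = np.random.default_rng(seed)
    carrier = rng.standard_normal(ring.size)
    signs = np.repeat(2 * bits - 1, int(np.ceil(ring.size / len(bits))))[:ring.size]
    mag.flat[ring] += alpha * carrier * signs
    F_marked = mag * np.exp(1j * phase)
    # The modification is not conjugate-symmetric, so keep the real part only.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_marked)))

img = np.random.rand(512, 512) * 255
bits = np.random.randint(0, 2, 64)
marked = embed_fourier_ring(img, bits)
print("PSNR of marked image:", 10 * np.log10(255**2 / np.mean((marked - img) ** 2)))
```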

    Digital image watermarking in the wavelet domain based on singular value decomposition and multi-objective optimization

    Since the extraordinary technical shift from analog to digital at the end of the 20th century, digital documents have become increasingly widely used because of their inexpensive and extremely fast distribution. However, this transition from analog to digital has not occurred without raising copyright concerns. Unauthorized persons can appropriate digital documents to make a profit at the expense of the legitimate owners holding the original rights, since digital content can easily be copied, modified and distributed without risk of deterioration. In this context, a new technique inspired mainly by cryptography and steganography was introduced in the early 1990s: it consists of embedding a mark in a digital document. This technique is called digital watermarking. This thesis presents five different contributions to the fields of digital watermarking and image processing. The first contribution is the proposal of two solutions to the false-positive watermark detection problem observed in certain digital watermarking algorithms based on singular value decomposition; one solution is based on hash functions and the other on image encryption. The second contribution is the proposal of an image encryption algorithm based on the principle of the Rubik's cube. The third contribution is the design of a digital watermarking algorithm based on the lifting-scheme wavelet transform (LWT) and singular value decomposition (SVD). A single scaling factor is used to control the strength of the watermark embedding, making it possible to find the best trade-off between robustness and imperceptibility of the watermark. However, using multiple scaling factors instead of a single scaling factor is more attractive [CKLS97]; determining the optimal values of these multiple scaling factors is nevertheless a very difficult and complex problem. To find these optimal values, multi-objective optimization by genetic algorithm (MOGAO) and multi-objective optimization by ant colony algorithm (MOACO) were used separately, constituting the fourth and fifth contributions of this thesis.
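
    A minimal sketch of the LWT-SVD embedding idea from the third contribution, assuming PyWavelets: an ordinary DWT stands in for the lifting-based transform, and a single scaling factor alpha controls the embedding strength (in the multi-factor case optimized by MOGAO/MOACO, alpha would become a vector with one value per singular value). All names and parameter values are illustrative.

```python
import numpy as np
import pywt

def svd_watermark_embed(host, watermark, alpha=0.05, wavelet="haar"):
    """Embed a watermark into the singular values of the LL subband:
    S' = S + alpha * S_w.  An ordinary DWT stands in for a lifting-based
    wavelet transform (LWT)."""
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), wavelet)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = S + alpha * Sw[:len(S)]
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)

host = np.random.rand(256, 256) * 255
wm = np.random.rand(128, 128) * 255   # watermark sized like the LL subband
marked = svd_watermark_embed(host, wm)
print("max absolute pixel change:", np.max(np.abs(marked - host)))
```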