91 research outputs found

    Combining Haar Wavelet and Karhunen Loeve Transforms for Medical Images Watermarking

    No full text
    This paper presents a novel watermarking method for the medical imaging domain, used to embed the patient's data into the corresponding image or set of images used for diagnosis. The main objective of the proposed technique is to watermark medical images in such a way that the three main attributes of the hidden information (imperceptibility, robustness, and integration rate) are jointly improved as much as possible; together these attributes determine the effectiveness of the watermark and its resistance to external attacks. To improve robustness, a combination of the characteristics of the Discrete Wavelet and Karhunen-Loeve Transforms is proposed. The Karhunen-Loeve Transform is applied to the 8x8 sub-blocks of the different wavelet coefficients (in the HL2, LH2 and HH2 subbands). In this manner, the watermark is adapted according to the energy values of each of the Karhunen-Loeve components, with the aim of ensuring better watermark extraction under various types of attacks. For the correct identification of the inserted data, an Error Correcting Code (ECC) mechanism is used to check and, where possible, correct errors introduced into the inserted data. Concerning the imperceptibility factor, the main goal is to determine the optimal value of the visibility factor, which depends on several parameters of the DWT and KLT transforms. As a first step, a Fuzzy Inference System (FIS) is set up and applied to determine an initial visibility factor. Several features extracted from the co-occurrence matrix are used as inputs to the FIS to determine an initial visibility factor for each block; these values are subsequently re-weighted as a function of the eigenvalues extracted from each sub-block. Regarding the integration rate, previous works insert one bit per coefficient. In our proposal, three bits are hidden per coefficient, increasing the integration rate by a factor of 3
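The DWT side of a scheme like the one above can be illustrated with a one-level 2D Haar transform, the simplest wavelet building block. This is a minimal pure-Python sketch, not the paper's implementation; the actual scheme iterates to decomposition level 2 and then applies the KLT to 8x8 blocks of the HL2/LH2/HH2 subbands.

```python
def haar2d(img):
    """One-level 2D Haar transform of an even-sized image.
    Returns the four subbands (LL, LH, HL, HH)."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2*i][2*j], img[2*i][2*j+1]
            c, d = img[2*i+1][2*j], img[2*i+1][2*j+1]
            LL[i][j] = (a + b + c + d) / 4.0  # coarse approximation
            # the three detail subbands below are typical embedding sites
            LH[i][j] = (a + b - c - d) / 4.0
            HL[i][j] = (a - b + c - d) / 4.0
            HH[i][j] = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d: perfect reconstruction of the original image."""
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2*i][2*j]     = ll + lh + hl + hh
            img[2*i][2*j+1]   = ll + lh - hl - hh
            img[2*i+1][2*j]   = ll - lh + hl - hh
            img[2*i+1][2*j+1] = ll - lh - hl + hh
    return img
```

Perfect reconstruction is what makes the detail subbands safe embedding sites: any controlled perturbation of LH/HL/HH survives the inverse transform exactly.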

    Adaptive Blind Watermarking Using Psychovisual Image Features

    Full text link
    With the growth of editing and sharing images over the internet, the importance of protecting images' authorship has increased. Robust watermarking is a well-known approach to maintaining copyright protection. Robustness and imperceptibility are the two factors a watermarking scheme tries to maximize, and there is usually a trade-off between them: increasing robustness lessens the imperceptibility of the watermark. This paper proposes an adaptive method that determines the strength of the watermark embedding in different parts of the cover image according to its texture and brightness. Adaptive embedding increases robustness while preserving the quality of the watermarked image. Experimental results also show that the proposed method can effectively reconstruct the embedded payload under different kinds of common watermarking attacks. Our proposed method shows good performance compared to a recent technique. Comment: 5 pages, 3 figures
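The adaptive idea of picking a stronger embedding factor in textured, mid-brightness regions can be sketched as follows. The normalization constants, the base strength, and the weighting formula are illustrative assumptions, not the paper's actual psychovisual model.

```python
def block_strength(block, base_alpha=0.05):
    """Return an embedding strength for one flat list of 0-255 pixel values.
    Textured (high-variance) and mid-brightness blocks tolerate a stronger
    watermark; flat or very dark/bright blocks get a weaker one."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    texture = min(var / 1000.0, 1.0)          # normalize variance, cap at 1
    brightness = 1.0 - abs(mean - 128) / 128  # peaks at mid-gray
    return base_alpha * (0.5 + texture) * (0.5 + brightness)
```

A flat gray block then receives a noticeably smaller strength than a checkerboard-like textured block, which is the behavior the trade-off calls for.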

    A High Payload Steganography Mechanism Based on Wavelet Packet Transformation and Neutrosophic Set

    Get PDF
    In this paper a steganographic method is proposed to improve the capacity of the hidden secret data and to provide an imperceptible stego-image quality. The proposed steganography algorithm is based on wavelet packet decomposition (WPD) and the neutrosophic set. First, an original image is decomposed into wavelet packet coefficients. Second, the generalized parent-child relationships of spatial orientation trees for wavelet packet decomposition are established among the wavelet packet subbands. An edge detector based on the neutrosophic set (NSED) is then introduced and applied to a number of subbands. Each wavelet packet tree is thereby classified as an edge or non-edge tree, so that more secret bits are embedded into the coefficients of edge trees than into those of non-edge trees. The embedding is done using the least significant bit substitution scheme. Experimental results demonstrate that the proposed method achieves higher embedding capacity with better imperceptibility compared to published steganographic methods
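The LSB-substitution step can be sketched as below; in a scheme like the one described, a coefficient in an edge tree would receive more bits (say k = 3) than one in a non-edge tree (k = 1). The helper names are illustrative, not from the paper.

```python
def embed_lsb(coeff, bits):
    """Replace the len(bits) least-significant bits of a non-negative
    integer coefficient with the given bit string."""
    k = len(bits)
    return (coeff >> k << k) | int(bits, 2)

def extract_lsb(coeff, k):
    """Read back the k least-significant bits as a bit string."""
    return format(coeff & ((1 << k) - 1), '0{}b'.format(k))
```

Because only the low bits change, the distortion per coefficient is bounded by 2^k - 1, which is why edge regions, where such changes are visually masked, can absorb larger k.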

    Fingerprint Recognition in Biometric Security -A State of the Art

    Get PDF
    Today, because of the vulnerability of standard authentication systems, crime has increased in the past few years. Identity authentication that relies on biometric features such as the face, iris, voice, hand geometry, handwriting, retina, and fingerprints can considerably decrease fraud, which is why standard systems are being replaced by biometric identity verification mechanisms. Among biometrics, fingerprint systems are among the most widely researched and used, popular because of their easy accessibility. In this paper we present a detailed study of various existing implementation methods, together with their comparative measures and result analysis, in order to arrive at a new constructive technique for fingerprint recognition

    Experimental Approach On Thresholding Using Reverse Biorthogonal Wavelet Decomposition For Eye Image

    Get PDF
    This study focuses on compression in wavelet decomposition for the security of biometric data. The objectives of this research are twofold: a) to investigate whether a compressed human eye image differs from the original, and b) to obtain compression ratio values using the proposed methods. Experiments were conducted to explore the application of sparsity-norm balance and sparsity-norm balance square root techniques in wavelet decomposition. An eye image of dimension 320x280 is used through the Wavelet 2-D tool of Matlab. The results showed that before compression the percentage of retained energy was 99.65% and the percentage of zero coefficients was 97.99%; after compression the percentage of energy increased to 99.97% while the number of zeros stayed the same. Based on our findings, the compression produces different ratios with minimal loss. Future work should apply artificial intelligence techniques to protecting biometric data
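The retained-energy and zero-coefficient percentages quoted above can be computed with a simple hard-thresholding sketch. This is illustrative only, not the exact procedure of Matlab's Wavelet 2-D tool.

```python
def compress_stats(coeffs, thresh):
    """Zero out coefficients smaller than thresh in magnitude; report the
    kept coefficients plus the retained-energy and zero percentages
    (the two figures quoted in the abstract)."""
    kept = [c if abs(c) >= thresh else 0.0 for c in coeffs]
    e_total = sum(c * c for c in coeffs)
    e_kept = sum(c * c for c in kept)
    zeros = sum(1 for c in kept if c == 0.0)
    return kept, 100.0 * e_kept / e_total, 100.0 * zeros / len(kept)
```

Because wavelet coefficients of natural images are sparse, discarding many small coefficients barely reduces the energy, which is exactly the 99.65% vs 99.97% pattern reported.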

    Contourlet Domain Image Modeling and its Applications in Watermarking and Denoising

    Get PDF
    Statistical image modeling in sparse domain has recently attracted a great deal of research interest. Contourlet transform as a two-dimensional transform with multiscale and multi-directional properties is known to effectively capture the smooth contours and geometrical structures in images. The objective of this thesis is to study the statistical properties of the contourlet coefficients of images and develop statistically-based image denoising and watermarking schemes. Through an experimental investigation, it is first established that the distributions of the contourlet subband coefficients of natural images are significantly non-Gaussian with heavy-tails and they can be best described by the heavy-tailed statistical distributions, such as the alpha-stable family of distributions. It is shown that the univariate members of this family are capable of accurately fitting the marginal distributions of the empirical data and that the bivariate members can accurately characterize the inter-scale dependencies of the contourlet coefficients of an image. Based on the modeling results, a new method in image denoising in the contourlet domain is proposed. The Bayesian maximum a posteriori and minimum mean absolute error estimators are developed to determine the noise-free contourlet coefficients of grayscale and color images. Extensive experiments are conducted using a wide variety of images from a number of databases to evaluate the performance of the proposed image denoising scheme and to compare it with that of other existing schemes. It is shown that the proposed denoising scheme based on the alpha-stable distributions outperforms these other methods in terms of the peak signal-to-noise ratio and mean structural similarity index, as well as in terms of visual quality of the denoised images. The alpha-stable model is also used in developing new multiplicative watermark schemes for grayscale and color images. 
Closed-form expressions are derived for the log-likelihood-based multiplicative watermark detection algorithm for grayscale images using the univariate and bivariate Cauchy members of the alpha-stable family. A multiplicative multichannel watermark detector is also designed for color images using the multivariate Cauchy distribution. Simulation results demonstrate not only the effectiveness of the proposed image watermarking schemes in terms of the invisibility of the watermark, but also the superiority of the watermark detectors in providing detection rates higher than those of state-of-the-art schemes, even for watermarked images that have undergone various kinds of attacks
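The flavor of such a detector can be sketched for the univariate Cauchy case: under a multiplicative embedding y = x(1 + alpha*w), the log-likelihood ratio compares how well the data fit the model with and without the candidate watermark. This is a simplified illustration of the general approach, not the thesis's closed-form detector.

```python
import math

def cauchy_logpdf(x, gamma=1.0):
    """Log-density of the zero-location Cauchy distribution."""
    return math.log(gamma / math.pi) - math.log(gamma * gamma + x * x)

def llr_detector(y, w, alpha=0.1, gamma=1.0):
    """Log-likelihood ratio for a multiplicative watermark y = x*(1+alpha*w),
    w[i] in {-1, +1}, under a univariate Cauchy host model.  The log(scale)
    term is the Jacobian of the change of variables x = y/(1+alpha*w)."""
    s = 0.0
    for yi, wi in zip(y, w):
        scale = 1.0 + alpha * wi
        s += (cauchy_logpdf(yi / scale, gamma) - math.log(scale)
              - cauchy_logpdf(yi, gamma))
    return s
```

Detection then reduces to comparing the statistic against a threshold: the correct watermark yields a clearly larger value than a wrong (e.g. sign-flipped) one.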

    Image forgery detection using textural features and deep learning

    Full text link
    The exponential growth and advancement of technology have made it quite convenient for people to share visual data, imagery, and video data through a vast preponderance of available platforms. With the rapid development of Internet and multimedia technologies, efficient storage and management, fast transmission and sharing, and real-time analysis and processing of digital media resources have gradually become an indispensable part of many people's work and life. Undoubtedly, such technological growth has made forging visual data relatively easy and realistic without leaving any obvious visual clues. Abuse of such tampered data can deceive the public and spread misinformation among the masses. Considering the facts mentioned above, image forensics must be used to authenticate and maintain the integrity of visual data. For this purpose, we propose a passive image forgery detection technique based on textural and noise inconsistencies introduced into an image by the tampering operation. The proposed Image Forgery Detection Network (IFD-Net) uses a Convolutional Neural Network (CNN) based architecture to classify images as forged or pristine. The textural and noise residual patterns are extracted from the images using the Local Binary Pattern (LBP) and the Noiseprint model. The images classified as forged are then used in experiments that analyze the difficulties of localizing the forged parts in these images with different deep learning segmentation models. Experimental results show that the IFD-Net performs comparably to other image forgery detection methods on the CASIA v2.0 dataset. The results also discuss the reasons behind the difficulties in segmenting the forged regions in the images of the CASIA v2.0 dataset
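The LBP texture descriptor mentioned above can be sketched in a few lines. This is the basic 8-neighbor operator on a grayscale image, not the exact LBP variant used by IFD-Net.

```python
def lbp_code(img, i, j):
    """8-neighbor local binary pattern code of pixel (i, j): each neighbor
    greater than or equal to the center contributes one bit."""
    center = img[i][j]
    # neighbors in clockwise order starting from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (di, dj) in enumerate(offs):
        if img[i + di][j + dj] >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over an image patch gives the textural feature map; splicing typically disturbs that histogram at the forged boundary, which is what the detector exploits.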
