45 research outputs found

    A Novel Watermarking Technique Based on Hybrid Transforms

    This paper proposes a novel watermarking scheme using a hybrid of the Dual Tree Complex Wavelet Transform (DT-CWT) and Singular Value Decomposition (SVD). Image watermarking embeds copyright data into image bit streams. The proposed technique demonstrates the effectiveness and robustness of image watermarking using a hybrid of two strong mathematical transforms: the 2-level DT-CWT and SVD. The technique shows a high level of security and robustness against attacks. The algorithm was tested for imperceptibility and robustness, and the results were compared with a DWT-SVD-based technique; the proposed watermarking scheme is shown to be considerably more robust and effective.
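
    The SVD half of such a hybrid scheme can be sketched in a few lines: the watermark's singular values are added, scaled by a strength factor, to those of a cover sub-band. This is a minimal sketch assuming the DT-CWT decomposition has already produced the sub-band (a plain NumPy array stands in for it here); all names and the strength factor are illustrative, not the paper's exact method.

```python
import numpy as np

def embed_svd(cover, watermark, alpha=0.05):
    # Sketch of the SVD embedding step: `cover` stands in for one
    # low-frequency sub-band produced by a DT-CWT (omitted here).
    U, S, Vt = np.linalg.svd(cover, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark, full_matrices=False)
    S_marked = S + alpha * Sw              # additively embed watermark singular values
    marked = U @ np.diag(S_marked) @ Vt
    return marked, (S, Uw, Vwt)            # side information needed for extraction

def extract_svd(marked, side_info, alpha=0.05):
    S, Uw, Vwt = side_info
    _, S_marked, _ = np.linalg.svd(marked, full_matrices=False)
    Sw_rec = (S_marked - S) / alpha        # recover watermark singular values
    return Uw @ np.diag(Sw_rec) @ Vwt
```

    Note that this variant is non-blind: the extractor needs the cover's singular values and the watermark's singular vectors as side information.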

    Robust watermarking for magnetic resonance images with automatic region of interest detection

    Medical image watermarking requires special considerations compared to ordinary watermarking methods. The first issue is the detection of an important area of the image, called the Region of Interest (ROI), prior to starting the watermarking process. Most existing ROI detection procedures are manual, while automated methods have not been evaluated extensively for robustness against intentional or unintentional attacks. The second issue is the robustness of the embedded watermark against different attacks. A common drawback of existing watermarking methods is their weakness against salt-and-pepper noise. The research carried out in this thesis addresses these issues: automatic ROI detection for magnetic resonance images that is robust against attacks, particularly salt-and-pepper noise, and a new watermarking method that can withstand high-density salt-and-pepper noise. In the ROI detection part, a combination of several algorithms, such as morphological reconstruction, adaptive thresholding and labelling, is utilized. A noise-filtering algorithm and a window-size correction block are then introduced for further enhancement. The performance of the proposed ROI detection is evaluated by computing the Comparative Accuracy (CA). In the watermarking part, a combination of a spatial method, channel coding and noise-filtering schemes is used to increase the robustness against salt-and-pepper noise. The quality of the watermarked image is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), and the accuracy of the extracted watermark is assessed in terms of the Bit Error Rate (BER). Based on experiments, the CA under eight different attacks (speckle noise, average filter, median filter, Wiener filter, Gaussian filter, sharpening filter, motion, and salt-and-pepper noise) is between 97.8% and 100%. The CA under different densities of salt-and-pepper noise (10%-90%) is in the range of 75.13% to 98.99%.
In the watermarking part, the performance of the proposed method under different densities of salt-and-pepper noise, measured by total PSNR, ROI PSNR, total SSIM and ROI SSIM, has improved from the ranges of 3.48-23.03 dB, 3.5-23.05 dB, 0-0.4620 and 0-0.5335 to 21.75-42.08 dB, 20.55-40.83 dB, 0.5775-0.8874 and 0.4104-0.9742, respectively. In addition, the BER is reduced to the range of 0.02% to 41.7%. To conclude, the proposed method significantly improves on the performance of existing medical image watermarking methods.
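
    Two of the metrics reported above are simple to compute; a minimal sketch of PSNR and BER follows (SSIM is more involved and omitted here). Function names and the 8-bit peak value are illustrative.

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB, assuming 8-bit images by default.
    mse = np.mean((np.asarray(original, float) - np.asarray(distorted, float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ber(bits_sent, bits_received):
    # Bit Error Rate as a percentage of mismatched watermark bits.
    return np.mean(np.asarray(bits_sent) != np.asarray(bits_received)) * 100
```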

    An Improved Imperceptibility and Robustness of 4×4 DCT-SVD Image Watermarking with a Modified Entropy

    Digital protection against unauthorized distribution of digital multimedia is in high demand. Digital watermarking is a defence in multimedia protection for authorized ownership. This paper proposes an improved watermarking scheme based on 4×4 DCT-SVD blocks using a modified entropy. The modified entropy is used to select unnoticeable blocks: the proposed scheme utilizes the lowest entropy values to determine unnoticeable regions of the watermarked image. This paper also investigates the relationship between the U(2,1) and U(3,1) coefficients of the U matrix in the 4×4 DCT-SVD. The proposed scheme achieves a high level of robustness and imperceptibility of the watermarked image against different attacks, and shows improvements in terms of the structural similarity index and normalized correlation of the watermarked image.
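
    The block-selection idea can be sketched as follows: score every 4×4 block by its entropy and keep the lowest-scoring blocks as embedding sites. The paper's modified entropy is not specified here, so plain Shannon entropy over a 16-bin histogram stands in as an assumption.

```python
import numpy as np

def block_entropy(block, bins=16):
    # Shannon entropy of a block's intensity histogram (8-bit range assumed).
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def lowest_entropy_blocks(image, k, size=4):
    # Return the top-left coordinates of the k lowest-entropy size×size blocks.
    h, w = image.shape
    scores = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            scores.append((block_entropy(image[i:i + size, j:j + size]), (i, j)))
    scores.sort(key=lambda t: t[0])
    return [pos for _, pos in scores[:k]]
```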

    Watermarking techniques for genuine fingerprint authentication.

    Fingerprints have been used to authenticate people remotely and grant them access to a system. However, the fingerprint-capture sensor is easily cracked using false fingerprint features constructed from a glass surface. Fake fingerprints, which can be easily obtained by attackers, can cheat the system, and this issue remains a challenge in fingerprint-based authentication systems. Thus, a mechanism that can validate the originality of fingerprint samples is desired. Watermarking techniques have been used to enhance the fingerprint-based authentication process; however, none of them has been found to satisfy genuine person verification requirements. This thesis focuses on improving the verification of the genuine fingerprint owner using watermarking techniques. Four research issues are addressed to achieve the main aim of this thesis. The first research task was to embed watermarks into fingerprint images collected from different angles. In verification systems, an acquired fingerprint image is compared with another image, which was stored in the database at the time of enrolment. The displacements and rotations of fingerprint images collected from different angles lead to different sets of minutiae. In this case, the fingerprint-based authentication system operates on the ‘close enough’ matching principle between samples and template. A rejection of genuine samples can occur erroneously in such cases. The process of embedding watermarks into fingerprint samples could make this worse by adding spurious minutiae or corrupting correct minutiae. Therefore, a watermarking method for fingerprint images collected from different angles is proposed. Second, embedding a high-payload watermark into a fingerprint image while preserving the fingerprint features is challenging. In this scenario, embedding multiple watermarks that can be used with the fingerprint to authenticate the person is proposed.
In the developed multi-watermark scheme, two watermark images of high payloads are embedded into fingerprints without significantly affecting the minutiae. Third, the robustness of the watermarking approach against image processing operations is important. The implemented fingerprint watermarking algorithms were proposed to verify the origin of the fingerprint image; however, they are vulnerable to several modes of image operations that can affect the security level of the authentication system. The embedded watermarks, and the fingerprint features that are subsequently used for authentication purposes, can be damaged. Therefore, the current study has evaluated in detail the robustness of the proposed watermarking methods to the most common image operations. Fourth, mobile biometrics are expected to link the genuine user to a claimed identity in ubiquitous applications, which is a great challenge. Touch-based sensors for capturing fingerprints have been incorporated into mobile phones for user identity authentication. However, a fake fingerprint cracking the sensor on the iPhone 5S is a warning that biometrics are only a representation of a person, and are not secure. To make things worse, the ubiquity of mobile devices leaves much room for adversaries to clone, impersonate or fabricate fake biometric identities and/or mobile devices to defraud systems. Therefore, the integration of multiple identifiers for both the capturing device and its owner into one unique entity is proposed.

    Robust Logo Watermarking

    Digital image watermarking is used to protect the copyright of digital images. In this thesis, a novel blind logo image watermarking technique for RGB images is proposed. The proposed technique exploits the error correction capabilities of the Human Visual System (HVS). It embeds two different watermarks in the wavelet/multiwavelet domains. The two watermarks are embedded in different sub-bands, are orthogonal, and serve different purposes. One is a high-capacity multi-bit watermark used to embed the logo, and the other is a 1-bit watermark which is used for the detection and reversal of geometrical attacks. The two watermarks are both embedded using a spread spectrum approach, based on a pseudo-random noise (PN) sequence and a unique secret key. Robustness against geometric attacks such as Rotation, Scaling, and Translation (RST) is achieved by embedding the 1-bit watermark in the Wavelet Transform Modulus Maxima (WTMM) coefficients of the wavelet transform. Unlike normal wavelet coefficients, WTMM coefficients are shift invariant, and this important property is used to facilitate the detection and reversal of RST attacks. The experimental results show that the proposed watermarking technique has better distortion parameter detection capabilities, and compares favourably against existing techniques in terms of robustness against geometrical attacks such as rotation, scaling, and translation.
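
    The spread-spectrum, key-based embedding of a single bit can be sketched as below: a pseudo-random ±1 sequence seeded by the secret key is added to (or subtracted from) a coefficient vector, and detection correlates the marked coefficients with the same sequence. The wavelet/WTMM machinery is omitted; a plain array stands in for the chosen coefficients, and the strength factor is an illustrative assumption.

```python
import numpy as np

def pn_sequence(key, n):
    # The secret key seeds the PN generator, so only the key holder
    # can regenerate the same ±1 sequence.
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n)

def embed_bit(coeffs, bit, key, alpha=2.0):
    # Add the PN sequence with sign chosen by the bit (spread spectrum).
    pn = pn_sequence(key, coeffs.size)
    return coeffs + alpha * (1 if bit else -1) * pn

def detect_bit(marked, key):
    # Correlation with the keyed PN sequence recovers the bit's sign.
    pn = pn_sequence(key, marked.size)
    return int(np.dot(marked, pn) > 0)
```

    A multi-bit logo watermark is then just this operation repeated over disjoint coefficient sets, one bit each.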

    Triple scheme based on image steganography to improve imperceptibility and security

    A foremost priority in the information technology and communication era is achieving an effective and secure steganography scheme for information hiding. Commonly, digital images are used as the cover for steganography owing to their representational redundancy, which helps conceal the embedded data from intruders. Nevertheless, any steganography system launched over the internet can be attacked once the stego cover is recognized. Presently, the design and development of an effective image steganography system face several challenging issues, including low capacity, poor security, and poor imperceptibility. To overcome these issues, a new decomposition scheme was proposed for image steganography with a new approach known as the Triple Number Approach (TNA). In this study, three main stages were used to achieve the objectives and overcome the issues of image steganography, beginning with image and text preparation, followed by embedding, and culminating in extraction; a final evaluation stage employed several evaluations in order to benchmark the results. This study presents several contributions. The first was a Triple Text Coding Method (TTCM), related to the preparation of secret messages prior to the embedding process. The second was a Triple Embedding Method (TEM), related to the embedding process. The third was related to security criteria based on a new partitioning of an image known as the Image Partitioning Method (IPM). The IPM proposes random pixel selection based on partitioning the image into three phases with three iterations of the Hénon map function. An enhanced Huffman coding algorithm is utilized to compress the secret message before the TTCM process. A standard dataset from the Signal and Image Processing Institute (SIPI) containing color and grayscale images of 512 × 512 pixels was utilised in this study.
Different parameters were used to test the performance of the proposed scheme in terms of security and imperceptibility (image quality). For image quality, four measurements were used: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Square Error (MSE) and histogram analysis. For security, two measurements were used: Human Visual System (HVS) and Chi-square (X²) attacks. In terms of PSNR and SSIM, the results obtained for the Lena grayscale image were 78.09 dB and 1, respectively. Meanwhile, the scheme obtained strong results under HVS and X² attacks when compared to the existing schemes in the literature. Based on the findings, the proposed scheme gives evidence of increased capacity, imperceptibility, and security, overcoming the existing issues.
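
    Chaotic pixel selection with the Hénon map can be sketched as below: iterating the map produces a deterministic but unpredictable sequence of pixel positions that both embedder and extractor can reproduce from the same seed. The classic map parameters, the seed point and the coordinate mapping are illustrative assumptions, not the thesis's exact IPM.

```python
def henon_positions(n, width, height, a=1.4, b=0.3, x=0.1, y=0.3):
    # Classic Hénon map: x' = 1 - a*x^2 + y, y' = b*x.
    # The chaotic orbit is mapped onto pixel coordinates; duplicates
    # are skipped so n distinct positions are returned.
    positions, seen = [], set()
    while len(positions) < n:
        x, y = 1 - a * x * x + y, b * x    # tuple assignment uses the old x for y'
        pos = (int(abs(y) * 1e6) % height, int(abs(x) * 1e6) % width)
        if pos not in seen:
            seen.add(pos)
            positions.append(pos)
    return positions
```

    Because the map is deterministic, the same seed point acts like a shared secret: re-running it regenerates exactly the same embedding path.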

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities, and show through experiments over two well-known datasets (Weizmann, MuHAVi) a remarkable improvement in classification accuracy. © 2011 IEEE
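
    The robustness argument can be illustrated with an EM-style location estimate under a Student's t model: each point gets weight w = (ν+1)/(ν+z²), so gross outliers are down-weighted, unlike the Gaussian maximum-likelihood estimate (the plain mean), which weights all points equally. This is a minimal one-dimensional sketch, not the paper's HMM observation model.

```python
import numpy as np

def t_location(data, nu=3.0, iters=50):
    # Iteratively reweighted location estimate under a Student's t model.
    mu, scale = np.median(data), np.std(data) + 1e-9
    for _ in range(iters):
        z = (data - mu) / scale
        w = (nu + 1) / (nu + z ** 2)            # outliers -> small weights
        mu = np.sum(w * data) / np.sum(w)        # weighted mean update
        scale = np.sqrt(np.sum(w * (data - mu) ** 2) / len(data)) + 1e-9
    return mu
```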

    Layer-based Privacy and Security Architecture for Cloud Data Sharing

    Managing data while maintaining its utility and security is a matter of concern for the cloud owner. To minimize the overhead at the cloud service provider of applying security to each document before sending it to the client, we propose a layered architecture. This approach maintains the security of sensitive documents and the privacy of their data. To balance data security and utility, the proposed approach categorizes the data according to its sensitivity. Preserving the various categories requires different algorithmic schemes. We set up a cloud distributed environment where data is categorized into four levels of sensitivity: public, confidential, secret and top secret, and a different approach is used to preserve security at each level. At the most sensitive layers, i.e. secret and top secret data, we provide a mechanism to detect the faulty node responsible for data leakage. Finally, experimental analysis is carried out to analyze the performance of the layered approach. The experimental results show that the time taken to process 200 documents of size 20 MB is 437, 2239, 3142 and 3900 ms for public, confidential, secret and top secret data, respectively, when the documents are distributed among distinct users, which demonstrates the practicality of the proposed approach.
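
    The level-dependent dispatch can be sketched as below. The paper's actual algorithms per level are not specified here, so stand-ins are used: nothing for public data, a keyed SHA-256 tag for confidential data, and a keyed keystream XOR (in place of a real cipher) plus the tag for secret/top secret data; every name and scheme choice in this sketch is an illustrative assumption.

```python
import hashlib

LEVELS = ("public", "confidential", "secret", "top secret")

def protect(document: bytes, level: str, key: bytes) -> bytes:
    # Apply a stronger (stand-in) scheme the more sensitive the level is.
    assert level in LEVELS
    if level == "public":
        return document                              # stored as-is
    tag = hashlib.sha256(key + document).digest()    # keyed integrity/traceability tag
    if level == "confidential":
        return tag + document                        # tag only (sketch)
    # secret / top secret: XOR with a keyed keystream as a cipher stand-in;
    # the tag also supports tracing a faulty node that leaks data (sketch).
    stream = hashlib.sha256(key + tag).digest()
    stream = (stream * (len(document) // len(stream) + 1))[:len(document)]
    return tag + bytes(a ^ b for a, b in zip(document, stream))
```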

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand; these five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.