    Haar Transformation for Compressed Speech Hiding

    Steganography is one of the most popular fields in information security. In this paper, an algorithm is presented to embed compressed speech inside a grayscale image using the discrete Haar wavelet transform. First, the speech data are compressed to half their original size by applying a Daubechies wavelet; the compressed data are then converted from decimal to binary code and embedded into the Haar coefficients of the cover image, using the four subbands (cA: low-low, cH: high-low, cV: low-high, cD: high-high) obtained by applying the wavelet transform to the cover image, where cA holds the low-frequency content and cH, cV, cD the high-frequency content. The efficiency of the algorithm was evaluated with the hiding-quality measures MSE, PSNR, SNR and correlation, and the results show that it is difficult for an observer to detect that the cover image carries embedded secret data. The proposed technique is able to hide speech (audio) data in the cover image and later extract it, with a storage rate of 1 bit per pixel, so the hiding capacity is proportional to the size of the cover image. The high-frequency coefficients also proved to be better for data hiding in terms of imperceptibility: intruders cannot recognise that the carrier medium (the stego image) contains secret data.
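
    A minimal sketch of the embedding stage described above, using PyWavelets: the cover image is split into the four Haar subbands and one secret bit is carried per coefficient (hence roughly 1 bit per pixel). The speech-compression step is omitted and the bitstream is assumed to be already prepared; the parity-quantisation rule below is an illustrative stand-in for the paper's coefficient-level embedding, not the authors' exact implementation.

```python
# Illustrative sketch only: hide a prepared bitstream in the four Haar
# subbands of a grayscale cover image, one bit per coefficient (~1 bpp).
# The speech-compression stage described above is omitted.
import numpy as np
import pywt

def embed_bits_haar(cover, secret_bits, step=4.0):
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), 'haar')
    bands = [cA, cH, cV, cD]
    flat = np.concatenate([b.ravel() for b in bands])
    n = min(len(secret_bits), flat.size)
    # Quantise each used coefficient and force its parity to the secret bit,
    # so the bit can be recovered after the inverse transform.
    q = np.round(flat[:n] / step)
    flat[:n] = (q - (q % 2) + np.asarray(secret_bits[:n])) * step
    shaped, pos = [], 0
    for b in bands:
        shaped.append(flat[pos:pos + b.size].reshape(b.shape))
        pos += b.size
    stego = pywt.idwt2((shaped[0], tuple(shaped[1:])), 'haar')
    return stego  # float image; a real system must handle rounding/clipping

def extract_bits_haar(stego, n_bits, step=4.0):
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(stego, dtype=float), 'haar')
    flat = np.concatenate([b.ravel() for b in (cA, cH, cV, cD)])
    return np.round(flat[:n_bits] / step).astype(int) % 2
```

    With a quantisation step of a few gray levels the bits survive the inverse/forward Haar round trip, and PSNR, MSE and correlation between cover and stego image then quantify imperceptibility, as measured in the paper.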

    Entropy Based Robust Watermarking Algorithm

    With the growth of digital media distributed over the Internet, concerns about security and piracy have emerged. The scale of digital media reproduction and tampering has created a need for content watermarking. In this work, multiple robust watermarking algorithms are introduced. They embed a watermark image into the singular values of host-image blocks with low entropy. In the proposed algorithms, the host image is divided into blocks and the entropy of each block is calculated. The average of all block entropies serves as the threshold for selecting the blocks in which the watermark image is embedded. All blocks with entropy lower than this threshold are decomposed into frequency subbands using the discrete wavelet transform (DWT). Subsequently, the chirp z-transform (CZT) is applied to the low-frequency subband, followed by a matrix decomposition such as lower-upper decomposition (LUD) or orthogonal-triangular decomposition (QR decomposition). Applying singular value decomposition (SVD) to the diagonal matrices obtained from these decompositions yields the singular values of each block. The watermark is embedded by adding the singular values of the watermark image to the singular values of the low-entropy blocks. The proposed algorithms are tested on many host and watermark images and compared with conventional and state-of-the-art watermarking techniques. The quantitative and qualitative experimental results indicate that the proposed algorithms are imperceptible and robust against many signal processing attacks.
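
    As a rough sketch of the block-selection and embedding chain described above (the CZT and LU/QR stages are omitted for brevity, and all names and parameters are illustrative rather than the authors' implementation), the low-entropy blocks could be picked and marked along these lines:

```python
# Rough sketch: entropy-guided block selection plus additive singular-value
# embedding in the low-frequency subband of each selected block.
import numpy as np
import pywt

def block_entropy(block, bins=256):
    hist, _ = np.histogram(block, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def embed_watermark(host, watermark, block=32, alpha=0.05):
    # Assumes the watermark is at least block/2 pixels in each dimension.
    host = host.astype(float).copy()
    h, w = host.shape
    coords = [(r, c) for r in range(0, h - block + 1, block)
                     for c in range(0, w - block + 1, block)]
    entropies = {rc: block_entropy(host[rc[0]:rc[0] + block,
                                        rc[1]:rc[1] + block]) for rc in coords}
    threshold = np.mean(list(entropies.values()))   # average entropy as threshold
    sw = np.linalg.svd(watermark.astype(float), compute_uv=False)
    for (r, c), e in entropies.items():
        if e >= threshold:                          # use only low-entropy blocks
            continue
        tile = host[r:r + block, c:c + block]
        cA, details = pywt.dwt2(tile, 'haar')       # low-frequency subband
        U, S, Vt = np.linalg.svd(cA)
        S = S + alpha * sw[:S.size]                 # add watermark singular values
        host[r:r + block, c:c + block] = pywt.idwt2((U @ np.diag(S) @ Vt, details),
                                                    'haar')
    return host
```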

    Joint watermarking and encryption of color images in the Fibonacci-Haar domain

    A novel method for watermarking and ciphering color images, based on the joint use of a key-dependent wavelet transform and a secure cryptographic scheme, is presented. The system allows watermarking of encrypted data without requiring knowledge of the original data, and ciphering of watermarked data without damaging the embedded signal. Since different areas of the proposed transform domain are used for encryption and watermarking, the hidden information can be extracted without deciphering the cover data, and watermarked data can be deciphered without removing the watermark. Experimental results show the effectiveness of the proposed scheme.
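
    To illustrate why disjoint transform-domain regions let watermarking and encryption commute, the following conceptual sketch uses a plain Haar DWT and a toy keyed mask as stand-ins for the paper's key-dependent Fibonacci-Haar transform and its cryptographic scheme; it is not the proposed method itself.

```python
# Conceptual sketch only: when encryption acts on one subband (cA) and the
# watermark lives in another (cH), the mark can be read without deciphering.
import numpy as np
import pywt

def protect(image, wm_bits, key, alpha=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    sign = 2.0 * np.asarray(wm_bits, dtype=float) - 1.0
    cH = cH.copy()
    cH.flat[:sign.size] += alpha * sign            # watermark the detail region
    rng = np.random.default_rng(key)
    cA_enc = cA + rng.normal(0.0, 50.0, cA.shape)  # toy "cipher" of the approximation
    return pywt.idwt2((cA_enc, (cH, cV, cD)), 'haar')

def detect(protected, original, wm_bits, alpha=2.0):
    # Non-blind correlation detector; it touches only the cH region, so it
    # works whether or not cA has been deciphered.
    _, (cH_p, _, _) = pywt.dwt2(protected.astype(float), 'haar')
    _, (cH_o, _, _) = pywt.dwt2(original.astype(float), 'haar')
    sign = 2.0 * np.asarray(wm_bits, dtype=float) - 1.0
    diff = (cH_p - cH_o).flat[:sign.size]
    return float(np.dot(diff, sign) / (alpha * sign.size))  # ~1 if mark present
```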

    A survey of digital image watermarking techniques

    Watermarking, which belongs to the information hiding field, has seen a lot of research interest recently, and a great deal of work is being conducted in its different branches. Steganography is used for secret communication, whereas watermarking is used for content protection, copyright management, content authentication and tamper detection. In this paper we present a detailed survey of existing and newly proposed steganographic and watermarking techniques. We classify the techniques based on the different domains in which the data are embedded. We limit the survey to images only.

    A High Payload Steganography Mechanism Based on Wavelet Packet Transformation and Neutrosophic Set

    In this paper a steganographic method is proposed to improve the capacity of the hidden secret data and to provide an imperceptible stego-image quality. The proposed steganography algorithm is based on wavelet packet decomposition (WPD) and the neutrosophic set. First, the original image is decomposed into wavelet packet coefficients. Second, the generalized parent-child relationships of spatial orientation trees for the wavelet packet decomposition are established among the wavelet packet subbands. An edge detector based on the neutrosophic set (NSED) is then introduced and applied to a number of subbands. Each wavelet packet tree is thereby classified as an edge or non-edge tree, so that more secret bits are embedded into the coefficients of edge trees than into those of non-edge trees. The embedding is done using the least-significant-bit substitution scheme. Experimental results demonstrate that the proposed method achieves higher embedding capacity with better imperceptibility than published steganographic methods.
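
    A sketch of the capacity mechanism just described, using PyWavelets' 2-D wavelet packets. A plain subband-energy measure stands in for the neutrosophic edge detector (NSED), and the tree-level classification is reduced to per-subband classification, so the code illustrates the variable-capacity LSB idea rather than the full method.

```python
# Sketch: decompose with a 2-D wavelet packet, classify subbands as
# edge/non-edge (energy measure as a stand-in for NSED), and give
# edge-classified subbands more LSBs than non-edge ones.
import numpy as np
import pywt

def embed_wpd(cover, bits, level=2, edge_lsbs=3, plain_lsbs=1):
    wp = pywt.WaveletPacket2D(data=cover.astype(float), wavelet='haar',
                              mode='symmetric', maxlevel=level)
    nodes = wp.get_level(level)                      # all subbands at this level
    energies = [np.mean(np.abs(n.data)) for n in nodes]
    threshold = np.mean(energies)
    pos = 0
    for node, energy in zip(nodes, energies):
        if node.path == 'a' * level:                 # keep approximation untouched
            continue
        k = edge_lsbs if energy > threshold else plain_lsbs
        coeffs = np.round(node.data).astype(int)
        for idx in np.ndindex(coeffs.shape):
            if pos >= len(bits):
                break
            chunk = bits[pos:pos + k]
            value = int(''.join(map(str, chunk)), 2)
            coeffs[idx] = (coeffs[idx] & ~((1 << len(chunk)) - 1)) | value
            pos += len(chunk)
        node.data = coeffs.astype(float)
    return wp.reconstruct(update=False), pos         # stego image, bits embedded
```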

    Robust digital image watermarking algorithms for copyright protection

    Digital watermarking has been proposed as a solution to the problem of resolving copyright ownership of multimedia data (image, audio, video). The work presented in this thesis is concerned with the design of robust digital image watermarking algorithms for copyright protection. Firstly, an overview of the watermarking system, the applications of watermarks, and a survey of current watermarking algorithms and attacks are given. The use of feature point detectors in watermarking is then introduced. A new class of scale-invariant feature point detectors is investigated, and it is shown that they have the excellent performance required for watermarking. The robustness of the watermark to geometrical distortions is a very important issue in watermarking. In order to estimate the parameters of an affine transformation the image has undergone, we propose an image registration technique based on the scale-invariant feature point detector. Another proposed technique for watermark synchronization is also based on the scale-invariant feature point detector; it does not use the original image to determine the parameters of the affine transformation, which include rotation and scaling. It is experimentally confirmed that this technique gives excellent results under the tested geometrical distortions. In the thesis, two different watermarking algorithms are proposed in the wavelet domain. The first belongs to the class of additive watermarking algorithms, which require the presence of the original image for watermark detection; using this algorithm, the influence of different error correction codes on watermark robustness is investigated. The second algorithm does not require the original image for watermark detection; its robustness is tested against various filtering and compression attacks, and it is successfully combined with the aforementioned synchronization technique in order to achieve robustness to geometrical attacks. The last watermarking algorithm presented in the thesis is developed in the complex wavelet domain. The complex wavelet transform is described and its advantages over the conventional discrete wavelet transform are highlighted. The robustness of the proposed algorithm was tested against different classes of attacks. Finally, conclusions are drawn and the main directions for future research are suggested.
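
    As an illustration of the first (additive, non-blind) class of algorithm mentioned above, the following sketch adds a keyed pseudo-random sequence to coarse detail coefficients and detects it by correlation against the original image; the error-correction coding and the feature-point synchronization stages are not shown, and the wavelet and parameters are illustrative choices, not those of the thesis.

```python
# Minimal sketch of an additive, non-blind wavelet-domain watermark: a keyed
# pseudo-random sequence is added to detail coefficients and detected by
# correlation against the original image.
import numpy as np
import pywt

def embed(cover, key, alpha=2.0, level=2):
    coeffs = pywt.wavedec2(cover.astype(float), 'db4', level=level)
    cH, cV, cD = coeffs[1]                               # coarsest detail subbands
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=cH.shape)
    coeffs[1] = (cH + alpha * w, cV, cD)
    return pywt.waverec2(coeffs, 'db4'), w

def detect(received, original, w, alpha=2.0, level=2):
    """Non-blind correlation detector; ~1 when the watermark is present."""
    rH = pywt.wavedec2(received.astype(float), 'db4', level=level)[1][0]
    oH = pywt.wavedec2(original.astype(float), 'db4', level=level)[1][0]
    diff = rH - oH
    return float(np.sum(diff * w) / (alpha * w.size))
```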

    Discrete Wavelet Transforms

    The discrete wavelet transform (DWT) algorithms have a firm position in the processing of signals in several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach to edge detection, low-bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on DWT-based watermarking algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. The book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.

    Techniques for enhancing digital images

    The images obtained from either research studies or optical instruments are often corrupted with noise. Image denoising involves the manipulation of image data to produce a visually high-quality image. This thesis reviews the existing denoising algorithms and the filtering approaches available for enhancing images and/or data transmission. Spatial-domain and transform-domain digital image filtering algorithms have been used in the past to suppress different noise models, which can be either additive or multiplicative. Selection of the denoising algorithm is application dependent: it is necessary to have knowledge about the noise present in the image in order to select the appropriate denoising algorithm. Noise models include Gaussian noise, salt-and-pepper noise, speckle noise and Brownian noise. The wavelet transform is similar to the Fourier transform but with a completely different merit function: in the wavelet transform, the basis functions (wavelets) are localized in both time and frequency, whereas in the standard Fourier transform the basis functions (sinusoids) are localized only in frequency. Wavelet analysis consists of breaking up the signal into shifted and scaled versions of the original (or mother) wavelet. The Wiener filter, which minimizes the mean squared estimation error, can be implemented as an LMS (least mean squares) filter, an RLS (recursive least squares) filter, or a Kalman filter. A quantitative comparison of the denoising algorithms is provided by calculating the Peak Signal-to-Noise Ratio (PSNR), the Mean Square Error (MSE) and the Mean Absolute Error (MAE) evaluation factors; a combination of these metrics is often required to clearly assess model performance.
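
    The evaluation metrics named above can be computed directly, and a small wavelet soft-thresholding denoiser (using the common VisuShrink-style universal threshold, an illustrative choice not prescribed by the text) shows the transform-domain filtering idea:

```python
# PSNR, MSE and MAE metrics, plus a small wavelet soft-thresholding denoiser
# (VisuShrink-style universal threshold, for illustration).
import numpy as np
import pywt

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def mae(a, b):
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

def wavelet_denoise(noisy, wavelet='db8', level=3):
    coeffs = pywt.wavedec2(noisy.astype(float), wavelet, level=level)
    # Estimate the noise level from the finest diagonal subband (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode='soft') for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```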

    Robust steganographic techniques for secure biometric-based remote authentication

    Biometrics are widely accepted as the most reliable proof of identity, entitlement to services, and crime-related forensics. Using biometrics for remote authentication is becoming an essential requirement for the development of the knowledge-based economy in the digital age. Ensuring the security and integrity of biometric data or templates is critical to the success of deployment, especially because once the data is compromised the whole authentication system is compromised, with serious consequences in terms of identity theft, fraud and loss of privacy. Protecting biometric data, whether stored in databases or transmitted over an open network channel, is a serious challenge, and cryptography may not be the answer. The main premise of this thesis is that digital steganography can provide alternative security solutions that can be exploited to deal with the biometric transmission problem. The main objective of the thesis is to design, develop and test steganographic tools to support remote biometric authentication. We focus on investigating the selection of biometric feature representations suitable for hiding in natural cover images, and on designing steganography systems that are specific to hiding such biometric data rather than being general purpose. The embedding schemes are expected to have high security characteristics, resistant to several types of steganalysis tools, and to maintain recognition accuracy post-embedding. We limit our investigations to embedding face biometrics, but the same challenges and approaches should help in developing similar embedding schemes for other biometrics. To achieve this, our investigations and proposals proceed in the different directions explained in the rest of this section. Reviewing the literature on the state of the art in steganography revealed a rich source of theoretical work and creative approaches that have helped generate a variety of embedding schemes as well as steganalysis tools, but almost all focused on embedding random-looking secrets. The review greatly helped in identifying the main challenges in the field and the main criteria for success, in terms of the difficult-to-reconcile requirements of embedding capacity, embedding efficiency, robustness against steganalysis attacks, and stego image quality. On the biometrics front, the review revealed another rich source of different face biometric feature vectors. The review helped shape our primary objectives as: (1) identifying a binarised face feature vector with high discriminating power that is susceptible to embedding in images; (2) developing special-purpose, content-based steganography schemes that can benefit from the well-defined structure of the face biometric data in the embedding procedure while preserving accuracy and without leaking information about the source biometric data; and (3) conducting sufficient sets of experiments to test the performance of the developed schemes, highlighting the advantages as well as any limitations of the developed system with regard to the above criteria. We argue that the well-known LBP histogram face biometric scheme satisfies the desired properties, and we demonstrate that our new, more efficient wavelet-based version, called LBPH patterns, is much more compact and has improved accuracy. In fact, the wavelet-based schemes reduce the number of features by 22% to 72% relative to the original LBP scheme, guaranteeing better invisibility post-embedding. We then develop two steganographic schemes. The first, LSB-witness, is a general-purpose scheme that avoids changing the LSB plane, guaranteeing robustness against targeted steganalysis tools, and establishes the viability of using steganography for remote biometric-based recognition; however, it may modify the 2nd LSB of cover pixels as a witness for the presence of the secret bits in the 1st LSB, and thereby has some disadvantages with regard to stego image quality. Our search for a scheme that exploits the structure of the secret face LBPH patterns for improved stego image quality led to the development of the first content-based steganography scheme, in which embedding is guided by searching for similarities between the LBPH patterns and the structure of the cover image LSB bit-planes partitioned into 8-bit or 4-bit patterns. We demonstrate the benefits of content-based embedding in terms of improved stego image quality, greatly reduced payload, a reduced lower bound on optimal embedding efficiency, and robustness against all targeted steganalysis tools. Our scheme was not robust against the blind (universal) SRM steganalysis tool; however, we demonstrated robustness against SRM at low payload when the scheme was modified by restricting embedding to edge and textured pixels, and the low payload in this case is sufficient to embed a full set of secret face LBPH patterns. Our work opens new exciting opportunities to build successful real applications of content-based steganography and presents plenty of research challenges.
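
    The abstract only outlines the LSB-witness idea: the 1st LSB plane is left unchanged while the 2nd LSB acts as a witness for the secret bit. One plausible reading, given here purely as a hedged sketch and not as the thesis' actual rule, is to set the 2nd LSB so that XOR-ing it with the untouched 1st LSB yields the secret bit.

```python
# One plausible reading of the LSB-witness idea (a guess, not the thesis'
# exact rule): leave the 1st LSB plane untouched and set the 2nd LSB so that
# (1st LSB XOR 2nd LSB) gives the secret bit.
import numpy as np

def embed_lsb_witness(cover, secret_bits):
    stego = cover.astype(np.uint8).copy().ravel()
    n = min(len(secret_bits), stego.size)
    lsb = stego[:n] & 1                        # first LSB plane stays as-is
    witness = lsb ^ np.asarray(secret_bits[:n], dtype=np.uint8)
    stego[:n] = (stego[:n] & ~np.uint8(2)) | (witness << 1)   # write 2nd LSB
    return stego.reshape(cover.shape)

def extract_lsb_witness(stego, n_bits):
    flat = stego.astype(np.uint8).ravel()[:n_bits]
    return (flat & 1) ^ ((flat >> 1) & 1)      # secret = LSB XOR witness
```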