8 research outputs found

    Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal processing distortions. The trade-off between payload, robustness, and imperceptibility is an important aspect that has to be considered when a watermarking algorithm is designed. In our proposed scheme, before being embedded into the image, the watermark signal is encoded using a convolutional encoder, which provides forward error correction and thus better robustness. The embedding process is then carried out in the discrete cosine transform (DCT) domain of the image, using image normalization to achieve robustness against geometric and signal processing distortions. The embedded watermark code bits are extracted and decoded using the Viterbi algorithm. To determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequences. The quality of the watermarked image is measured using the well-known indices Peak Signal-to-Noise Ratio (PSNR), Visual Information Fidelity (VIF), and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is measured using the Normalized Color Difference (NCD). The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.
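The encode-then-decode stage of such a pipeline can be sketched in miniature. The following is a hypothetical illustration only, assuming a standard rate-1/2, constraint-length-3 convolutional code with generator polynomials 7 and 5 (octal) and hard-decision Viterbi decoding; the paper's actual code parameters and the DCT-domain embedding step are not reproduced here.

```python
# Toy sketch: convolutional encoding of watermark bits, Viterbi decoding,
# and the BER check used to decide watermark presence.
G = (0b111, 0b101)  # generator polynomials (octal 7, 5), constraint length 3

def conv_encode(bits):
    """Rate-1/2 encoding; two zero tail bits flush the encoder state."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state           # 3-bit register: newest bit at MSB
        for g in G:
            out.append(bin(reg & g).count("1") & 1)  # parity of tapped bits
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]         # start in the all-zero state
    paths = [[] for _ in range(4)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * 4
        new_paths = [[] for _ in range(4)]
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + sum(x != y for x, y in zip(r, expect))
                ns = reg >> 1
                if m < new_metrics[ns]:  # keep the survivor path per state
                    new_metrics[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best][:-2]              # drop the two tail bits

def ber(a, b):
    """Bit error rate between recovered and original watermark sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)
```

With free distance 5, this toy code corrects up to two channel errors, which is the robustness gain the forward error correction contributes to the scheme.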

    Evolutionary multi-objective optimization of trace transform for invariant feature extraction

    The Trace transform is an image representation obtained by applying different functionals to the image function. When the functional is the integral, it becomes identical to the well-known Radon transform, which is a useful tool in computed tomography medical imaging. The key question in the Trace transform is how to select the best combination of Trace functionals to produce the optimal triple feature, which is a challenging task. In this paper, we adopt a multi-objective evolutionary algorithm adapted from the elitist non-dominated sorting genetic algorithm (NSGA-II), which has been shown to be very efficient for multi-objective optimization, to select the best functionals as well as the optimal number of projections used in the Trace transform to achieve invariant image identification. This is achieved by minimizing the within-class variance and maximizing the between-class variance. To enhance computational efficiency, the Trace parameters are calculated offline and stored, and are then used to calculate the triple features during the evolutionary optimization. The proposed Evolutionary Trace Transform (ETT) is empirically evaluated on various images from a fish database. It is shown that the proposed algorithm is very promising in that it is computationally efficient and considerably outperforms existing methods in the literature.
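The two optimization objectives named above can be written down directly. The sketch below is a minimal illustration, not the authors' implementation: it computes within-class and between-class variance of scalar triple features and a Pareto-dominance test of the kind NSGA-II uses to rank candidate functional combinations.

```python
def objectives(features):
    """features: dict mapping class label -> list of scalar triple features.
    Returns (within-class variance, between-class variance);
    the first is minimized, the second maximized."""
    means = {c: sum(v) / len(v) for c, v in features.items()}
    n = sum(len(v) for v in features.values())
    grand = sum(sum(v) for v in features.values()) / n
    within = sum(sum((x - means[c]) ** 2 for x in v)
                 for c, v in features.items())
    between = sum(len(v) * (means[c] - grand) ** 2
                  for c, v in features.items())
    return within, between

def dominates(a, b):
    """Pareto dominance for (within, between): lower within is better,
    higher between is better."""
    return a[0] <= b[0] and a[1] >= b[1] and a != b
```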

    A Localized Geometric-Distortion Resilient Digital Watermarking Scheme Using Two Kinds of Complementary Feature Points

    With the rapid development of digital multimedia and internet technologies in the last few years, more and more digital images are being distributed to an ever-growing number of people for sharing, studying, or other purposes. Sharing images digitally is fast and cost-efficient, and thus highly desirable. However, most of these digital products are exposed without any protection, so without authorization such information can be easily transferred, copied, and tampered with by using digital multimedia editing software. Watermarking is a popular solution to the strong need for copyright protection of digital multimedia. In the image forensics scenario, a digital watermark can be used as a tool to discriminate whether original content has been tampered with or not. It is embedded in digital images as an invisible message and serves as proof of ownership. In this thesis, we propose a novel localized geometric-distortion resilient digital watermarking scheme to embed two invisible messages into images. Our proposed scheme utilizes two complementary watermarking techniques, namely, local circular region (LCR)-based and block discrete cosine transform (DCT)-based techniques, to hide two pseudo-random binary sequences in two kinds of regions and to extract these two sequences from their individual embedding regions. To this end, we use the histogram and mean, which are statistically independent of pixel position, to embed one watermark in the LCRs, whose centers are the scale-invariant feature transform (SIFT) feature points themselves, which are robust against various affine transformations and common image processing attacks. This watermarking technique combines the advantages of SIFT feature point extraction, local histogram computation, and blind watermark embedding and extraction in the spatial domain to resist geometric distortions. We also use Watson's DCT-based visual model to embed the other watermark in several richly textured 80×80 regions not covered by any embedding LCR. This watermarking technique combines the advantages of Harris feature point extraction, triangle tessellation and matching, the human visual system (HVS), and spread-spectrum-based blind watermark embedding and extraction. The proposed technique then uses these combined features in the DCT domain to resist common image processing attacks and to reduce the watermark synchronization problem at the same time. The two techniques complement each other and can therefore robustly resist geometric and common image processing attacks. Our proposed watermarking approach is a robust watermarking technique capable of resisting geometric attacks, i.e., affine transformation (rotation, scaling, and translation) attacks, and other common image processing attacks (e.g., JPEG compression and filtering operations). It demonstrates more robustness and better performance compared with some peer systems in the literature.
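The histogram-based spatial embedding idea can be illustrated with a toy sketch. This is not the thesis's actual LCR construction; the choice of bins and the pixel-moving rule below are invented for illustration. One bit is encoded as the ordering of the populations of two adjacent gray-level bins, which survives pixel-position changes such as rotation.

```python
def embed_bit(pixels, a, b, bit):
    """Encode one bit as the ordering of the counts of gray levels a and b
    (hypothetical bins): bit 1 means count(a) > count(b), bit 0 the reverse.
    pixels is a flat list of gray values; position is irrelevant."""
    px = list(pixels)
    src, dst = (b, a) if bit == 1 else (a, b)
    need = px.count(src) - px.count(dst) + 1  # pixels to move to flip order
    if need > 0:
        moved = 0
        for i, v in enumerate(px):
            if v == src:
                px[i] = dst               # shift pixel into the other bin
                moved += 1
                if moved == need:
                    break
    return px

def extract_bit(pixels, a, b):
    """Blind extraction: only the two bin counts are needed."""
    return 1 if pixels.count(a) > pixels.count(b) else 0
```

Because extraction uses only histogram counts, no original image is needed, which is what makes the embedding blind.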

    A hybrid scheme for authenticating scalable video codestreams


    Trustworthy authentication on scalable surveillance video with background model support

    H.264/SVC (Scalable Video Coding) codestreams, which consist of a single base layer and multiple enhancement layers, are designed for quality, spatial, and temporal scalability. They can be transmitted over networks of different bandwidths and seamlessly accessed by various terminal devices. With huge amounts of surveillance video and various devices becoming an integral part of the security infrastructure, industry has begun to use the SVC standard to process digital video for surveillance applications, so that clients with different network bandwidths and display capabilities can seamlessly access various SVC surveillance (sub)codestreams. To guarantee the trustworthiness and integrity of received SVC codestreams, engineers and researchers have proposed several authentication schemes to protect video data. However, existing algorithms cannot simultaneously satisfy both efficiency and robustness for SVC surveillance codestreams. Hence, in this article, a highly efficient and robust authentication scheme, named TrustSSV (Trust Scalable Surveillance Video), is proposed. Based on the quality/spatial scalability of SVC codestreams, TrustSSV combines cryptographic and content-based authentication techniques to authenticate the base layer and enhancement layers, respectively. Based on the temporal scalability of surveillance codestreams, TrustSSV extracts, updates, and authenticates foreground features for each access unit dynamically, with background model support. Using SVC test sequences, our experimental results indicate that the scheme is able to distinguish between content-preserving and content-changing manipulations and to pinpoint tampered locations. Compared with existing schemes, the proposed scheme incurs very small computation and communication costs.
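The hybrid idea of exact cryptographic authentication for the base layer plus tolerant content-based checking for enhancement layers can be sketched as follows. This is a schematic stand-in, not TrustSSV itself: the HMAC usage is standard, but the block-mean "content feature" and its tolerance are invented placeholders for the scheme's foreground features.

```python
import hashlib
import hmac

def sign_base_layer(key, base_bytes):
    """Exact (cryptographic) authentication: any bit flip breaks the tag."""
    return hmac.new(key, base_bytes, hashlib.sha256).digest()

def verify_base_layer(key, base_bytes, tag):
    return hmac.compare_digest(sign_base_layer(key, base_bytes), tag)

def content_feature(samples, block=4):
    """Crude content feature (hypothetical): per-block means of sample
    values, tolerant to small content-preserving changes."""
    return [sum(samples[i:i + block]) // block
            for i in range(0, len(samples), block)]

def verify_enhancement(samples, ref_feature, tol=2):
    """Content-based authentication: pass if every block mean stays
    within a tolerance of the reference feature."""
    feat = content_feature(samples)
    return all(abs(a - b) <= tol for a, b in zip(feat, ref_feature))
```

The base layer thus rejects any modification, while enhancement layers tolerate content-preserving processing but flag content-changing tampering, block by block.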

    A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud

    Healthcare institutions are adopting cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT) and Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data, encoded into a Quick Response (QR) code, serve as the watermark. In the proposed scheme, the medical image is not subjected to any degradation due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with the Master Share constructed from invariant features of the medical image. Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks.
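The share logic that lets zero-watermarking leave the image untouched is a simple XOR construction. The sketch below shows only this generic idea under the assumption that the Master Share has already been binarized from invariant features (here just a bit list); the CT-SVD feature extraction and Hu-moment computation are not reproduced.

```python
def make_secret_share(master_share_bits, watermark_bits):
    """Zero-watermarking: nothing is embedded in the image. The Secret
    Share, registered with a trusted party, is the XOR of the Master Share
    (derived from invariant image features) and the watermark (QR bits)."""
    return [m ^ w for m, w in zip(master_share_bits, watermark_bits)]

def recover_watermark(master_share_bits, secret_share):
    """At verification time, recompute the Master Share from the (possibly
    attacked) image and XOR with the Secret Share to reveal the watermark."""
    return [m ^ s for m, s in zip(master_share_bits, secret_share)]
```

As long as the feature extraction is invariant under an attack, the recomputed Master Share matches and the QR watermark is recovered exactly, which is why robust invariants such as Hu moments are used.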

    Digital image watermarking methods based on moment-based normalization

    The full text has been made openly accessible in accordance with the law published in Official Gazette no. 30352 of 06.03.2018 and the directive of 18.06.2018 on the electronic collection, organization, and dissemination of graduate theses. In this study, robust digital image watermarking algorithms in the two-dimensional discrete wavelet and complex wavelet domains were developed using moment-based image normalization. In the proposed algorithms, normalization provides robustness against geometric distortions, while embedding the watermark in the wavelet domain increases its robustness against attacks such as noise, linear and non-linear filtering, and JPEG compression. By taking the properties of the human visual system into account, the embedded watermark satisfies both the perceptual transparency and the robustness requirements simultaneously. The proposed method was compared to two methods commonly used in the literature. Simulation results showed that the proposed method outperforms both methods under JPEG and JPEG2000 compression, various geometric transformations, and several image processing attacks. The effect of normalization on watermarking capacity was then investigated in the discrete cosine and wavelet domains using the information-theoretic capacity estimation method proposed by Moulin and Mıhçak. The capacity analysis showed that the number of zero-valued coefficients in an image's transform determines the capacity. Since normalization increases the number of zero-valued coefficients in an image's transform, it yields better capacity estimates when used as a preprocessing step in watermarking algorithms. As the number of zero-valued coefficients is larger in an image's wavelet transform than in its DCT, the wavelet transform should be preferred when capacity is the main concern.
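The sparsity argument above can be made concrete with a toy one-dimensional example. This is an illustration only, using a single-level Haar transform (the simplest wavelet) rather than the thesis's 2-D discrete or complex wavelet transforms: piecewise-constant signals produce zero-valued detail coefficients, and the fraction of such zeros is the sparsity that drives the capacity estimate.

```python
def haar1d(x):
    """Single-level 1-D Haar transform (assumes even length):
    pairwise averages followed by pairwise differences."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + dif

def sparsity(coeffs, eps=1e-9):
    """Fraction of near-zero transform coefficients; higher sparsity
    implies higher estimated watermarking capacity."""
    return sum(1 for c in coeffs if abs(c) <= eps) / len(coeffs)
```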