
    Enhancement of digital grayscale image watermarking using sparse matrix

    Watermarking is a form of steganography that has proved its worth in protecting copyright information. It is the process of embedding data inside an audio, video, or image signal such that the embedded data can be detected or extracted later. The core focus of watermarking techniques is their performance, which is determined by imperceptibility, robustness, and capacity. These properties often conflict, which forces trade-offs between them. Despite the successes recorded in digital watermarking, several challenges persist, particularly in balancing these factors. This research aims to enhance the watermarking process so as to achieve imperceptibility with an acceptable balance and improved security. It proposes a new scheme that uses a sparse matrix to improve the effectiveness of the watermarked image, employing the discrete wavelet transform and its inverse to locate the best place and level in the image at which to embed the watermark. The sparse matrix enhances the embedding process by selecting the proper coefficients. For more secure watermarking, an additional encryption layer is applied to hinder unauthorized extraction. The proposed technique generates the proper message size for each sub-image based on the PSNR, which serves as an indicator both for selecting the suitable embedding level and for detecting possible attacks. Experiments showed that the proposed scheme improves quality (PSNR) by 2.8479 dB, equivalent to a 5.3% improvement, and achieves better PSNR than comparable schemes in the literature.
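
    The abstract describes the pipeline only in words; as a rough illustration, the sketch below combines a one-level DWT, a sparse selection matrix, and a PSNR check. It is not the paper's scheme: the wavelet ("haar"), the choice of the diagonal detail band, and the strength and thresh parameters are all assumptions, and the paper's encryption layer is omitted.

```python
# Rough sketch of DWT-domain embedding with a sparse coefficient mask.
# NOT the paper's scheme: wavelet choice, band choice, and the strength
# and thresh parameters are assumptions for illustration only.
import numpy as np
import pywt
from scipy.sparse import csr_matrix

def psnr(original, modified, peak=255.0):
    # Peak signal-to-noise ratio in dB, the quality metric the paper reports.
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def embed(cover, bits, strength=8.0, thresh=20.0):
    # One-level DWT; embed into the diagonal detail band (assumed choice).
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")
    mask = csr_matrix(np.abs(cD) > thresh)   # sparse matrix marking usable coefficients
    rows, cols = mask.nonzero()
    n = min(len(bits), len(rows))
    for k in range(n):                       # shift each chosen coefficient per bit
        cD[rows[k], cols[k]] += strength if bits[k] else -strength
    marked = np.clip(pywt.idwt2((cA, (cH, cV, cD)), "haar"), 0, 255)
    return marked, psnr(cover, marked)
```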

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Information hiding, which embeds a watermark/message within a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It is widely considered an appealing technology to complement conventional cryptographic processes in the field of multimedia security by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking tries to emphasize the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium has a hidden message in it and, if possible, recovering that hidden message. It can be used to measure the security of information hiding techniques: a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to Human Vision Systems (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers.

    This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking this trade-off into account; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models and to introduce a universal 3D steganalytic method under this framework.

    The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis studies. Chapter 2 surveys previous information hiding techniques for digital images, 3D models and other media, as well as image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of the mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals: Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information; motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model, which makes the watermarking method robust against common 3D operations such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of hidden messages embedded by one well-known watermarking method. By contrast, Chapter 7 focuses on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework that has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect messages hidden in 3D models by existing steganographic and watermarking algorithms; its efficiency has been evaluated against five state-of-the-art 3D watermarking/steganographic methods. Moreover, being universal, it can serve as a benchmark for measuring the anti-steganalysis performance of other existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes the thesis and suggests some potential directions for future work.
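
    Chapter 6's distance-histogram idea can be illustrated with a short sketch. The fragment below computes only the feature the chapter modifies: the histogram of distances from the mesh vertices to a PCA-derived reference point. Using the centroid as that point and a plain eigendecomposition for the principal axes are assumptions; the actual embedding rule is omitted.

```python
# Illustrative sketch of the Chapter 6 feature: the histogram of
# vertex-to-reference-point distances, with the frame obtained by PCA.
# Centroid-as-reference-point and the bin count are assumptions; the
# embedding rule that modifies the histogram is not shown.
import numpy as np

def pca_frame(vertices):
    # vertices: (N, 3) array. The centroid and principal axes give a frame
    # that is invariant to rotation, translation and vertex reordering.
    center = vertices.mean(axis=0)
    cov = np.cov((vertices - center).T)
    _, axes = np.linalg.eigh(cov)            # principal axes (ascending eigenvalues)
    return center, axes

def distance_histogram(vertices, bins=64):
    center, _ = pca_frame(vertices)
    d = np.linalg.norm(vertices - center, axis=1)
    hist, edges = np.histogram(d, bins=bins)
    return hist, edges                       # a watermark would shift bin populations
```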

    Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography

    Data hiding is the process of embedding information into a noise-tolerant signal such as a piece of audio, video, or image. Digital watermarking is a form of data hiding in which identifying data is robustly embedded so that it can resist tampering and be used to identify the original owners of the media. Steganography, another form of data hiding, embeds data for the purpose of secure and secret communication. This survey summarises recent developments in deep learning techniques for data hiding aimed at watermarking and steganography, categorising them by model architecture and noise injection method. The objective functions, evaluation metrics, and datasets used for training these data hiding models are comprehensively summarised. Finally, we propose and discuss possible future directions for research into deep data hiding techniques.
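
    As a companion to the survey's taxonomy, the sketch below shows the encoder / noise-layer / decoder pattern that most deep data-hiding models share, in PyTorch. Every architectural detail here (layer sizes, residual embedding, additive noise) is an assumption chosen for brevity, not a description of any surveyed model.

```python
# Minimal sketch of the encoder / noise-layer / decoder pattern shared by
# deep data-hiding models; all layer sizes and choices are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):                     # hides a message in an image
    def __init__(self, msg_len=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + msg_len, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))
    def forward(self, img, msg):
        # img: (B, 3, H, W) in [0, 1]; msg: (B, msg_len) of 0/1 floats
        b, _, h, w = img.shape
        m = msg[:, :, None, None].expand(b, msg.shape[1], h, w)
        return img + self.net(torch.cat([img, m], dim=1))  # residual embedding

def noise_layer(img):
    # Differentiable stand-in for channel distortion (here: additive noise).
    return img + 0.05 * torch.randn_like(img)

class Decoder(nn.Module):                     # recovers the message
    def __init__(self, msg_len=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, msg_len))
    def forward(self, img):
        return self.net(img)
```

    During training, the encoder's output would pass through the noise layer before the decoder, with a loss balancing image distortion against message recovery accuracy; this is the objective-function structure the survey summarises.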

    Response, Models, Race, and the Law

    Capitalizing on recent advances in algorithmic sampling, The Race-Blind Future of Voting Rights explores the implications of the long-standing conservative dream of certified race neutrality in redistricting. Computers seem promising because they are excellent at not taking race into account, but computers only do what you tell them to do, and the rest of the authors' apparatus for measuring minority electoral opportunity failed every check of robustness and numerical stability that we applied. How many opportunity districts are there in the current Texas state House plan? Their methods can give any answer from thirty-four to fifty-one, depending on invisible settings. But if we focus only on major technical flaws, we might miss the fundamental fact that race-blind districting would devastate minority political opportunity no matter how it is deployed, simply due to the mathematics of single-member districts. In the end, the Article develops an extreme interpretation of a dubious idea proposed by Judge Easterbrook through an empirical study that is unsupported by its methods.

    Lossless and low-cost integer-based lifting wavelet transform

    The discrete wavelet transform (DWT) is a powerful tool for analyzing real-time signals, including aperiodic, irregular, noisy, and transient data, because of its capability to explore signals in both the frequency and time domains at different resolutions. For this reason, it is used extensively in a wide range of image and signal processing applications. Despite this wide usage, implementations of the wavelet transform are usually lossy or computationally complex, and they require expensive hardware. However, in many applications, such as medical diagnosis, reversible data hiding, and critical satellite data, a lossless implementation of the wavelet transform is desirable. More hardware-friendly implementations are also important, given the transform's recent inclusion in signal processing modules in systems-on-chip (SoCs). To address this need, this research provides a generalized implementation of the wavelet transform using an integer-based lifting method, producing lossless and low-cost architectures while maintaining performance close to that of the original wavelets. To arrive at a general implementation method for all orthogonal and biorthogonal wavelets, the Daubechies wavelet family is used first, since it is one of the most widely used families and is based on a systematic method for constructing compact-support orthogonal wavelets. Though the first two phases of this work address Daubechies wavelets, they can be generalized to other wavelets as well. Subsequently, techniques from the earlier phases are adopted and the critical issues in achieving a general lossless implementation are solved, yielding a general lossless method.

    The research presented here can be divided into several phases. In the first phase, low-cost architectures of the Daubechies-4 (D4) and Daubechies-6 (D6) wavelets are derived by applying integer-polynomial mapping (IPM). A lifting architecture is used, which halves the cost compared to the conventional convolution-based approach. Applying IPM to the floating-point polynomial filter coefficients further decreases the complexity and reduces the loss in signal reconstruction, and resource sharing between lifting steps yields a further reduction in implementation cost with near-lossless data reconstruction.

    In the second phase, a completely lossless, error-free architecture is proposed for the Daubechies-8 (D8) wavelet. Several lifting variants are derived for the same wavelet, integer mapping is applied, and the best variant is determined in terms of performance, using entropy and transform coding gain. A theory is then derived regarding the impact of scaling steps on the transform coding gain (G_T). The approach results in the lowest-cost lossless architecture of the D8 in the literature, to the best of our knowledge, and may be applied to other orthogonal as well as biorthogonal wavelets to achieve higher performance.

    In the final phase, a general algorithm is proposed to convert original filter coefficients, expressed as a polyphase matrix, into a more efficient lifting structure. This is done using a modified factorization, so that the factorized polyphase matrix does not include the lossy scaling step of the conventional lifting method. This general technique is applied to several widely used orthogonal and biorthogonal wavelets and its advantages are discussed. Since the discrete wavelet transform is used in a vast number of applications, the proposed algorithms can be used in those cases to achieve lossless, low-cost, and hardware-friendly architectures.
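
    The key idea, rounding inside each lifting step so that the integer transform remains exactly invertible, is easiest to see on a small example. The sketch below uses the standard LeGall 5/3 integer lifting (as in lossless JPEG 2000) rather than the thesis's D4/D6/D8 architectures, whose factorizations are longer; the boundary handling by index clamping is also an assumption.

```python
# Illustrative integer-to-integer lifting: the standard LeGall 5/3 scheme,
# shown only to demonstrate why rounding inside each lifting step keeps
# the transform exactly invertible. NOT the thesis's D4/D6/D8 designs.
def lift_53(x):
    # x: list of ints with even length; returns (approximation, detail)
    s, d = x[0::2], x[1::2]
    n = len(d)
    # Predict: detail = odd - floor(average of neighbouring evens)
    d = [d[i] - ((s[i] + s[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # Update: approximation = even + rounded average of neighbouring details
    s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def unlift_53(s, d):
    # Undo the steps in reverse order with identical rounded quantities.
    n = len(d)
    s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    d = [d[i] + ((s[i] + s[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = s, d
    return x
```

    Because the inverse subtracts exactly the same rounded quantities the forward pass added, no precision is ever lost, which is the property the lossless architectures above rely on.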
