262 research outputs found

    Topology-preserving watermarking of vector graphics

    Get PDF
    Watermarking techniques for vector graphics displace vertices in order to embed imperceptible, yet detectable, statistical features into the input data. The embedding process may change the topology of the input data, e.g., by introducing self-intersections, which is undesirable or even disastrous for many applications. In this paper we present a watermarking framework for two-dimensional vector graphics that employs conventional watermarking techniques but still guarantees that the topology of the input data is preserved. The geometric part of this framework computes so-called maximum perturbation regions (MPRs) of vertices. We propose two efficient algorithms to compute MPRs based on Voronoi diagrams and constrained triangulations. Furthermore, we present two algorithms to conditionally correct the watermarked data in order to increase the watermark embedding capacity while still guaranteeing topological correctness. While we focus on input formed by straight-line segments, one of our approaches can also be extended to circular arcs. We conclude the paper by demonstrating and analyzing the applicability of our framework in conjunction with two well-known watermarking techniques.
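
    The abstract's guarantee rests on clearance reasoning: a vertex may only move within a region that keeps it safely away from all non-incident geometry. The sketch below is a deliberately simplified stand-in for the paper's Voronoi- and triangulation-based MPR algorithms: it derives one global safe radius from the smallest clearance between non-adjacent segments, since moving every vertex by less than half that clearance cannot make two non-adjacent segments intersect. All names are hypothetical.

        # Simplified stand-in, not the paper's algorithm: one global safe
        # radius from the minimum clearance between non-adjacent segments.
        import numpy as np
        from itertools import combinations

        def seg_seg_distance(a, b, c, d, samples=64):
            """Approximate min distance between segments ab and cd by
            sampling (a production version would use the exact test)."""
            ts = np.linspace(0.0, 1.0, samples)
            p = a + np.outer(ts, b - a)      # points along ab
            q = c + np.outer(ts, d - c)      # points along cd
            return np.min(np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2))

        def safe_radius(vertices, segments):
            """Half the minimum clearance between non-adjacent segments.
            Moving each endpoint by at most r moves every point of a
            segment by at most r (convexity), so if r is below half the
            clearance, two non-adjacent segments stay strictly apart."""
            clearance = np.inf
            for (i, j), (k, l) in combinations(segments, 2):
                if {i, j} & {k, l}:          # skip segments sharing a vertex
                    continue
                clearance = min(clearance, seg_seg_distance(
                    vertices[i], vertices[j], vertices[k], vertices[l]))
            return 0.5 * clearance

        verts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
        segs = [(0, 1), (1, 2), (2, 3), (3, 0)]
        print(safe_radius(verts, segs))      # 0.5 for the unit square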

    A shape-preserving data embedding algorithm for NURBS curves and surfaces

    Full text link

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Get PDF
    Information hiding, which embeds a watermark/message into a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered an appealing technology to complement conventional cryptographic processes in the field of multimedia security by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking emphasizes the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium has a hidden message in it and, if possible, recovering that hidden message. It can be used to measure the security of information hiding techniques, meaning that a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to the Human Visual System (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking into account this trade-off; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models and to introduce a universal 3D steganalytic method under this framework. The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis studies. Chapter 2 surveys previous information hiding techniques for digital images, 3D models and other media, as well as image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of the mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals. Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information. Motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space.
That point is determined by applying Principal Component Analysis (PCA) to the cover model. The use of PCA makes the watermarking method robust against common 3D operations, such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of hidden messages embedded by one well-known watermarking method. By contrast, the focus of Chapter 7 is on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework that has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models by existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated against five state-of-the-art 3D watermarking/steganographic methods. Moreover, being universal, this steganalytic algorithm can be used as a benchmark for measuring the anti-steganalysis performance of other existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes the thesis and suggests some potential directions for future work.
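
    The abstract gives enough of the Chapter 6 construction (a PCA-derived reference point plus a histogram of vertex distances) for a rough sketch. The version below is an assumed variant, not the thesis's algorithm: it uses the centroid as the reference point, embeds one bit per histogram bin by moving vertices radially into the lower or upper half of their bin, and passes the bin edges along as side information (a real scheme would derive them from quantities invariant under embedding).

        # Hedged sketch in the spirit of the Chapter 6 description; the
        # embedding rule and the use of the centroid are assumptions.
        import numpy as np

        def distances(vertices):
            center = vertices.mean(axis=0)   # stand-in for the PCA reference point
            return center, np.linalg.norm(vertices - center, axis=1)

        def embed_bits(vertices, bits, edges):
            """One bit per histogram bin: move each vertex radially so its
            normalized position u inside the bin lands in the upper half
            (bit 1) or lower half (bit 0); the bin mean of u encodes the bit."""
            center, d = distances(vertices)
            out = vertices.copy()
            for b, bit in enumerate(bits):
                mask = (d >= edges[b]) & (d < edges[b + 1])
                if not mask.any():
                    continue
                lo, hi = edges[b], edges[b + 1]
                u = (d[mask] - lo) / (hi - lo)
                u = 0.5 + u / 2 if bit else u / 2
                new_d = lo + u * (hi - lo)
                out[mask] = center + (vertices[mask] - center) * (new_d / d[mask])[:, None]
            return out

        def extract_bits(vertices, edges):
            center, d = distances(vertices)
            return [int(((d[(d >= edges[b]) & (d < edges[b + 1])] - edges[b])
                         / (edges[b + 1] - edges[b])).mean() > 0.5)
                    for b in range(len(edges) - 1)]

        rng = np.random.default_rng(0)
        mesh = rng.normal(size=(2000, 3))    # stand-in for mesh vertices
        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        _, d0 = distances(mesh)
        edges = np.linspace(d0.min(), d0.max() + 1e-9, len(msg) + 1)
        print(extract_bits(embed_bits(mesh, msg, edges), edges))  # -> msg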

    Simplification Resilient LDPC-Coded Sparse-QIM Watermarking for 3D-Meshes

    Full text link
    We propose a blind watermarking scheme for 3-D meshes which combines sparse quantization index modulation (QIM) with deletion correction codes. The QIM operates on the vertices in rough concave regions of the surface, thus ensuring imperceptibility, while the deletion correction code recovers the data hidden in vertices that are removed by mesh optimization and/or simplification. The proposed scheme offers two orders of magnitude better performance in terms of recovered watermark bit error rate compared to existing schemes with similar payloads and fidelity constraints.
    Comment: Submitted, revised and copyright transferred to IEEE Transactions on Multimedia, October 9th 201
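
    The deletion-correction layer aside, the QIM component is standard: each selected scalar is quantized with one of two interleaved (dithered) quantizers, and the decoder reads the bit off whichever lattice lies closer. A minimal sketch of plain scalar QIM, without the paper's sparse vertex selection or LDPC coding:

        # Minimal scalar QIM: embed one bit per value by quantizing with
        # one of two dithered quantizers, decode by nearest lattice.
        import numpy as np

        def qim_embed(x, bits, step):
            """Quantize each value with the lattice shifted by 0 or step/2."""
            dither = np.asarray(bits) * (step / 2.0)
            return np.round((x - dither) / step) * step + dither

        def qim_extract(y, step):
            """Pick the quantizer (bit) whose lattice is nearer to each value."""
            r = np.mod(y, step)
            return (np.abs(r - step / 2.0) < np.minimum(r, step - r)).astype(int)

        rng = np.random.default_rng(1)
        coords = rng.normal(scale=10.0, size=8)  # e.g. selected vertex coordinates
        msg = rng.integers(0, 2, size=8)
        marked = qim_embed(coords, msg, step=0.5)
        assert (qim_extract(marked, step=0.5) == msg).all()
        print(msg, qim_extract(marked, step=0.5))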

    Texture Synthesis for Mobile Data Communications

    Get PDF
    A digital camera mounted on a mobile phone is utilized as a data input device to obtain embedded data by analyzing the pattern of an image code such as a 2D bar code. This article proposes a new type of image coding method using texture image synthesis. A regularly arranged dotted pattern is first painted with colors picked from a texture sample, so that its features correspond to the embedded data. Our texture synthesis technique then camouflages the dotted pattern using the same texture sample while preserving quality comparable to that of existing synthesis techniques. The textured code gives the conventional bar code an aesthetic appeal and can be used for tagging data onto real textured objects, which can form a basis for ubiquitous mobile data communications. This technical approach has the potential to open up new application fields for example-based, computer-generated texture images.
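
    As a toy illustration of the dot-coding step (the camouflaging texture synthesis is the article's actual contribution and is not reproduced here), one can color a regular cell grid with two colors sampled from the texture and decode by thresholding each cell's luminance. Everything below is hypothetical:

        # Toy "textured code": a cell grid colored with the texture's
        # brightest/darkest mean colors, one payload bit per cell.
        import numpy as np

        def make_code(bits, texture, cell=8):
            lum = texture.mean(axis=2)
            bright = texture[np.unravel_index(lum.argmax(), lum.shape)]
            dark = texture[np.unravel_index(lum.argmin(), lum.shape)]
            side = int(np.ceil(np.sqrt(len(bits))))
            img = np.zeros((side * cell, side * cell, 3))
            for i, b in enumerate(bits):
                r, c = divmod(i, side)
                img[r*cell:(r+1)*cell, c*cell:(c+1)*cell] = bright if b else dark
            return img

        def read_code(img, n_bits, cell=8):
            """Threshold each cell's mean luminance against the global mean."""
            side = img.shape[0] // cell
            lum = img.mean(axis=2)
            thr = lum.mean()
            bits = []
            for i in range(n_bits):
                r, c = divmod(i, side)
                bits.append(int(lum[r*cell:(r+1)*cell, c*cell:(c+1)*cell].mean() > thr))
            return bits

        rng = np.random.default_rng(2)
        texture = rng.random((32, 32, 3))    # stand-in for a texture sample
        payload = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
        print(read_code(make_code(payload, texture), 16) == payload)  # True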

    Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA)

    Full text link
    We propose an algorithm for identifying the vertices of three-dimensional (3D) meshes that are most important to the creation of the geometric shape. Extracting such a set of vertices from a 3D mesh is important in applications such as digital watermarking, but also as a component of optimization and triangulation. In the first step, the Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA) precisely estimates the local curvature and the most important topological features of the mesh geometry. Using this ranking of geometric vertex importance, the algorithm traces and extracts a vector of vertices ordered by decreasing index of importance.
    Comment: Accepted for publication, with copyright transferred to Advances in Electrical and Computer Engineering, November 23rd 201
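
    A rough sketch of the ranking step: score each vertex by the magnitude of its angle-deficit (discrete Gaussian) curvature and sort by decreasing importance. OSVETA's precise curvature estimation and its tracing of topological features are not reproduced here.

        # Simplified stand-in for OSVETA's importance ranking, based only
        # on angle-deficit curvature over a closed triangle mesh.
        import numpy as np

        def rank_vertices(vertices, faces):
            deficit = np.full(len(vertices), 2.0 * np.pi)
            for (i, j, k) in faces:
                for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
                    u = vertices[b] - vertices[a]
                    v = vertices[c] - vertices[a]
                    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                    deficit[a] -= np.arccos(np.clip(cosang, -1.0, 1.0))
            # Flat vertices have deficit ~0; large |deficit| marks sharp,
            # shape-defining vertices. (Assumes a closed mesh; boundary
            # vertices would need a different reference angle.)
            return np.argsort(-np.abs(deficit))

        # Toy mesh: a tall square pyramid; the apex should rank first.
        verts = np.array([[0., 0., 2.],                  # apex
                          [1., 1., 0.], [-1., 1., 0.],
                          [-1., -1., 0.], [1., -1., 0.]])
        faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
                 (1, 4, 3), (1, 3, 2)]                   # base split in two
        print(rank_vertices(verts, faces))               # index 0 first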

    Selection of robust features for the Cover Source Mismatch problem in 3D steganalysis

    Get PDF
    This paper introduces a novel method for extracting sets of features from 3D objects that characterise a robust steganalyzer. Specifically, the proposed steganalyzer should mitigate the Cover Source Mismatch (CSM) problem. A steganalyzer is a classifier that aims to separate cover objects from stego objects. During the training stage, it takes as input a set of features extracted from cover-stego pairs of 3D objects. During the testing stage, however, the steganalyzer has to identify whether specific information was hidden in a set of 3D objects which can be different from those used during training. Addressing the CSM problem corresponds to testing the generalization ability of the steganalyzer when distortions are introduced into the cover objects before information is hidden in them through steganography. Our method aims to select those 3D features that best model the changes introduced into objects by steganography or information hiding and that, moreover, generalize to objects not present in the training set. The proposed robust steganalysis approach is tested against changes in 3D objects such as those produced by mesh simplification and additive noise. The results of this study show that steganalyzers trained with the selected set of robust features achieve better detection accuracy of the changes embedded in the objects than those trained with other sets of features.
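
    The selection principle can be sketched independently of the actual 3D feature set: score each feature's cover/stego separability under every distortion variant and rank by the worst case, so a feature survives only if it discriminates under all of them. Feature extraction is assumed done; the names and the Fisher-ratio criterion below are illustrative assumptions, not the paper's method.

        # Hedged sketch of worst-case feature selection across distortions.
        import numpy as np

        def fisher_score(cover, stego):
            """Per-feature two-class Fisher ratio: squared mean gap over
            pooled variance (higher = more separable)."""
            gap = (cover.mean(0) - stego.mean(0)) ** 2
            return gap / (cover.var(0) + stego.var(0) + 1e-12)

        def select_robust(cover_variants, stego_variants, k):
            """Score features under every variant (original, noisy,
            simplified, ...) and rank by the worst-case score."""
            scores = np.array([fisher_score(c, s)
                               for c, s in zip(cover_variants, stego_variants)])
            return np.argsort(-scores.min(axis=0))[:k]

        rng = np.random.default_rng(3)
        n, f = 200, 40
        def fake_features(shift, noise):   # stand-in for real 3D features
            c = rng.normal(size=(n, f)) + noise * rng.normal(size=(n, f))
            return c, c + shift            # stego = cover + embedding shift
        robust = np.zeros(f); robust[:5] = 1.0     # 5 features react robustly
        fragile = np.zeros(f); fragile[5:10] = 1.0 # 5 react only undistorted
        covers, stegos = [], []
        for noise in (0.0, 0.5, 1.0):              # original + two distortions
            c, s = fake_features(robust + (fragile if noise == 0.0 else 0.0), noise)
            covers.append(c); stegos.append(s)
        print(select_robust(covers, stegos, k=5))  # expect indices 0..4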