149 research outputs found

    Study and Comparison of Surface Roughness Measurements

    No full text
    This survey paper focuses on recent research aimed at optimizing treatments of 3D meshes through a study of their surface features, more precisely their roughness and saliency. Applications such as watermarking or lossy compression can benefit from precise roughness detection, to better hide watermarks or quantize rough areas coarsely without visually altering the shape. Despite investigations of scale dependence leading to multi-scale approaches, an accurate roughness or pattern characterization is still lacking, yet remains challenging for these treatments. We think there is still room for investigations that could benefit from the power of wavelet analysis or fractal models. Furthermore, only a few works are currently able to differentiate roughness from saliency, though this is essential for faithfully simplifying or denoising a 3D mesh. Hence we have investigated roughness quantification methods for analog surfaces in several domains of physics. Some roughness parameters used in these fields, and the additional information they bring, are finally studied, since we think an adaptation to 3D meshes could be beneficial.
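The profilometry-style roughness parameters the abstract alludes to (such as the arithmetic mean deviation Ra and the RMS deviation Rq) can be adapted to a mesh vertex neighbourhood. A minimal sketch, assuming a patch is given as an array of points and using its least-squares plane as the reference surface (the function name and patch format are illustrative, not taken from the paper):

```python
import numpy as np

def roughness_parameters(points):
    """Ra and Rq of a point patch, measured against its least-squares plane.

    Classic profilometry parameters adapted to a 3D vertex neighbourhood:
    Ra is the mean absolute deviation, Rq the RMS deviation, of the points
    from the plane that best fits the patch.
    """
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value (total-least-squares plane fit).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    heights = centred @ normal          # signed distances to the plane
    ra = np.abs(heights).mean()
    rq = np.sqrt((heights ** 2).mean())
    return ra, rq

# A planar patch has zero roughness; lifting one vertex raises Ra and Rq.
flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
bumpy = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.4)]
```

Since Rq squares the deviations, it always dominates Ra, which is one reason several parameters together carry more information than any single one.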

    Copyright Protection of 3D Digitized Artistic Sculptures by Adding Unique Local Inconspicuous Errors by Sculptors

    Get PDF
    In recent years, the digitization of cultural heritage objects for the purpose of creating virtual museums has become increasingly popular. Moreover, cultural institutions use modern digitization methods to create three-dimensional (3D) models of objects of historical significance to form digital libraries and archives. This research suggests a method for protecting these 3D models from abuse while making them available on the Internet. The proposed method was applied to a sculpture, an object of cultural heritage. It is based on digitizing the sculpture after it has been altered by adding local clay details proposed by the sculptor, and on sharing on the Internet the 3D model obtained by digitizing the sculpture with this built-in error. The clay details embedded in the sculpture are asymmetrical and discreet, so as to be unnoticeable to an average observer. The original sculpture was also digitized and its 3D model created. The two 3D models were compared and the geometry deviation was measured to confirm that the embedded error is invisible to an average observer and that the watermark can be extracted. The proposed method protects the digitized image of the artwork while preserving its visual experience, which other methods cannot guarantee.
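The geometry-deviation comparison between the original and the altered model can be approximated by a symmetric nearest-neighbour distance between the two vertex clouds. A minimal sketch under that assumption (the function and its Hausdorff-style summary are illustrative, not the authors' measurement pipeline):

```python
import numpy as np

def geometry_deviation(verts_a, verts_b):
    """Symmetric nearest-neighbour deviation between two vertex clouds.

    For every vertex of one model, take the distance to the closest vertex
    of the other; report the maximum (Hausdorff-style) and mean deviation.
    """
    a = np.asarray(verts_a, float)
    b = np.asarray(verts_b, float)
    # Dense pairwise distances: fine for small meshes, use a k-d tree
    # for models with many vertices.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    both = np.concatenate([a_to_b, b_to_a])
    return both.max(), both.mean()

original = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
altered  = [(0, 0, 0), (1, 0, 0), (0, 1, 0.01)]   # one tiny local detail
max_dev, mean_dev = geometry_deviation(original, altered)
```

A small maximum deviation relative to the model size is what keeps the embedded detail invisible to an average observer while still being measurable.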

    Graph spectral domain blind watermarking

    Get PDF
    This paper proposes the first ever graph spectral domain blind watermarking algorithm. We explore the recently developed graph signal processing framework for spread-spectrum watermarking to authenticate data recorded on non-Cartesian grids, such as sensor data, 3D point clouds, Lidar scans and mesh data. The choice of coefficients for embedding the watermark is driven by an embedding-distortion minimisation model and a robustness model. The distortion minimisation model reduces the watermarking distortion by establishing the relationship between the mean-square-error distortion and the Graph Fourier coefficients selected to embed the watermark. The robustness model improves the watermarking robustness against attacks by establishing the relationship between watermark extraction and the effect of the attacks, namely additive noise and node data deletion. The proposed models were verified by the experimental results.
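The spectral embedding step can be sketched with a graph Fourier transform built from the combinatorial Laplacian: the watermark bits additively perturb a selected band of coefficients. Note the hedges: the paper's detector is blind, whereas for brevity this sketch extracts by comparing against the cover signal, and the band choice and strength are arbitrary, not the paper's optimised selection:

```python
import numpy as np

def gft_basis(adjacency):
    """Graph Fourier basis: eigenvectors of the combinatorial Laplacian."""
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(L)           # columns ordered by increasing frequency
    return U

def embed(signal, adjacency, bits, band, strength=0.05):
    """Spread-spectrum embedding of +/-1 bits into selected GFT coefficients."""
    U = gft_basis(adjacency)
    coeffs = U.T @ np.asarray(signal, float)
    coeffs[band] += strength * np.asarray(bits, float)
    return U @ coeffs

def extract(watermarked, original, adjacency, band):
    """Non-blind extraction for illustration: sign of the coefficient change."""
    U = gft_basis(adjacency)
    diff = U.T @ np.asarray(watermarked, float) - U.T @ np.asarray(original, float)
    return np.sign(diff[band]).astype(int)

# 4-node path graph carrying a scalar signal per node.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([0.2, 0.5, 0.1, 0.9])
bits = [1, -1]
band = [1, 2]                          # mid-frequency coefficients
y = embed(x, A, bits, band)
```

Because the GFT is orthonormal, the vertex-domain distortion introduced by the embedding equals the coefficient-domain perturbation, which is what makes a mean-square-error distortion model tractable.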

    A new mesh visual quality metric using saliency weighting-based pooling strategy

    Full text link
    © 2018 Elsevier Inc. Several metrics have been proposed over the last decade to assess the visual quality of 3D triangular meshes. In this paper, we propose a mesh visual quality metric that integrates mesh saliency into mesh visual quality assessment. We use the Tensor-based Perceptual Distance Measure metric to estimate the local distortions of the mesh, and pool the local distortions into a quality score using a saliency weighting-based pooling strategy. Three well-known mesh saliency detection methods are used to demonstrate the superiority and effectiveness of our metric. Experimental results show that our metric, with any of the three saliency maps, performs better than state-of-the-art metrics on the LIRIS/EPFL general-purpose database. We also generate a synthetic saliency map by assembling salient regions from the individual saliency maps. Experimental results reveal that the synthetic saliency map achieves better performance than the individual maps, and that the performance gain is closely correlated with the similarity between the individual saliency maps.
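The pooling step alone is easy to illustrate: per-vertex distortions are combined into one score with saliency values as weights, so errors in regions the eye is drawn to count for more. A minimal sketch of that step only (not of TPDM or the saliency detectors themselves; names are illustrative):

```python
import numpy as np

def saliency_weighted_pooling(local_distortions, saliency, eps=1e-12):
    """Pool per-vertex distortions into one score, weighting by saliency.

    High-saliency vertices contribute more to the final quality score;
    eps guards against an all-zero saliency map.
    """
    d = np.asarray(local_distortions, float)
    w = np.asarray(saliency, float)
    return float((w * d).sum() / (w.sum() + eps))

d = [0.1, 0.1, 0.9]        # large distortion on the third vertex
uniform = [1.0, 1.0, 1.0]  # plain averaging
salient = [0.0, 0.0, 1.0]  # the distorted vertex is the salient one
```

With uniform weights the large local error is diluted by the average; with the saliency weights it dominates the score, matching the intuition that visible errors in salient regions hurt perceived quality most.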

    A Method for Determining the Shape Similarity of Complex Three-Dimensional Structures to Aid Decay Restoration and Digitization Error Correction

    Get PDF
    This paper introduces a new method for determining the shape similarity of complex three-dimensional (3D) mesh structures, based on extracting a vector of important vertices ordered according to a matrix of their most important geometrical and topological features. The correlation of ordered matrix vectors is combined with a perceptual definition of salient regions to aid the detection, distinction, measurement and restoration of real degradation and digitization errors. The case study is the digital 3D structure of the Camino Degli Angeli in Urbino's Ducal Palace, acquired by the structure-from-motion (SfM) technique. To obtain an accurate, featured representation of the matching shape, intensive mesh-processing computations are performed over the mesh surface while preserving the real shape and geometric structure. In addition to perceptually based feature ranking, a new theoretical approach for ranking the evaluation criteria using neural networks (NNs) is proposed to reduce the probability of deleting shape points subject to optimization. Numerical analysis and simulations, in combination with the developed virtual reality (VR) application, give restoration specialists a visual and feature-based comparison of damaged parts with correct similar examples. The procedure also distinguishes mesh irregularities resulting from the photogrammetry process. Vasic, I.; Quattrini, R.; Pierdicca, R.; Frontoni, E.; Vasic, B.

    Perceptual Quality Evaluation of 3D Triangle Mesh: A Technical Review

    Full text link
    © 2018 IEEE. During mesh processing operations (e.g. simplification, compression, and watermarking), a 3D triangle mesh is subject to various visible distortions of the mesh surface, which creates a need to estimate visual quality. The necessity of perceptual quality evaluation is well established, as in most cases human beings are the end users of 3D meshes. Metrics that measure such distortions using geometric measures integrated with the human visual system (HVS) are called perceptual quality metrics. In this paper, we conduct an extensive study of 3D mesh quality evaluation, focusing mostly on recently proposed perceptual metrics. We limit our study to greyscale static mesh evaluation and attempt to identify the most workable method for real-time evaluation by making a quantitative comparison. The paper also discusses in detail how to evaluate an objective metric's performance against existing subjective databases. We also investigate the use of the psychometric function to remove the non-linearity between subjective and objective values. Finally, we draw a comparison among selected quality metrics, which shows that curvature-tensor-based quality metrics predict the most consistent results in terms of correlation.
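The role of the psychometric function can be shown in a few lines: a monotone non-linear relation between objective metric values and subjective scores depresses the Pearson correlation, and mapping the objective values through the (fitted) psychometric function restores it. In practice the function's parameters are fitted to the subjective data; here a logistic with known parameters is assumed to keep the sketch dependency-free:

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length sequences."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def logistic(x, a=1.0, b=0.0, c=1.0):
    """A simple psychometric mapping (a logistic is one common choice)."""
    return c / (1.0 + np.exp(-a * (np.asarray(x, float) - b)))

objective = np.linspace(-3, 3, 13)
subjective = logistic(objective)        # non-linearly related to the metric
raw = pearson(objective, subjective)               # depressed by the saturation
mapped = pearson(logistic(objective), subjective)  # after the psychometric map
```

A rank correlation such as Spearman's would be unaffected by the mapping, which is why subjective-database evaluations typically report both kinds of correlation.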

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Get PDF
    Information hiding, which embeds a watermark/message in a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered an appealing technology to complement conventional cryptographic processes in the field of multimedia security, by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking tries to emphasize the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium has a hidden message in it and, if possible, recovering that hidden message. It can be used to measure the security performance of information hiding techniques, meaning that a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to the Human Vision System (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking this trade-off into account; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models and to introduce a universal 3D steganalytic method under this framework.
The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis studies. Chapter 2 surveys previous information hiding techniques for digital images, 3D models and other media, as well as image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of the mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals. Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information. Motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model. The use of PCA makes the watermarking method robust against common 3D operations, such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of hidden messages embedded by one well-known watermarking method. 
By contrast, the focus of Chapter 7 is on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework that has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models with existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated on five state-of-the-art 3D watermarking/steganographic methods. Moreover, being universal, the steganalytic algorithm can be used as a benchmark for measuring the anti-steganalysis performance of other existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes the thesis and suggests some potential directions for future work.
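The invariance that the Chapter 6 scheme relies on is easy to demonstrate: distances from the vertices to a reference point derived from the model itself are unchanged by rotation, translation and vertex reordering, so a histogram of those distances is a stable embedding domain. A minimal sketch, using the centroid (the centre of the PCA) as the reference point for brevity rather than the thesis's exact PCA-derived point:

```python
import numpy as np

def distance_histogram(vertices, bins=8, r_max=10.0):
    """Histogram of vertex distances to the model centroid.

    Rotation, translation and vertex reordering leave these distances
    (and hence the histogram) unchanged, which is what makes
    histogram-domain embedding robust to those operations.
    """
    v = np.asarray(vertices, float)
    centre = v.mean(axis=0)                 # centre of the PCA of the cloud
    dist = np.linalg.norm(v - centre, axis=1)
    hist, _ = np.histogram(dist, bins=bins, range=(0.0, r_max))
    return hist

rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))

# Rotate about z, translate, and reorder the vertices.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
moved = (cloud @ R.T + np.array([5.0, -2.0, 1.0]))[rng.permutation(50)]
```

The watermark itself would then be embedded by shifting vertices radially to reshape selected histogram bins, a step omitted here.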

    Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA)

    Full text link
    We propose an algorithm for identifying the vertices of three-dimensional (3D) meshes that are most important for geometric shape creation. Extracting such a set of vertices from a 3D mesh is important in applications such as digital watermarking, but also as a component of optimization and triangulation. In the first step, the Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA) precisely estimates the local curvature and the most important topological features of the mesh geometry. Using this ranking of geometric vertex importance, the algorithm traces and extracts a vector of vertices ordered by decreasing index of importance. Comment: accepted for publication and copyright transferred to Advances in Electrical and Computer Engineering, November 23rd 201
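A crude stand-in for the curvature-estimation step is the angle deficit (discrete Gaussian curvature): 2π minus the sum of the triangle angles incident to a vertex, with larger deficits marking geometrically salient vertices. A minimal sketch, assuming a vertex/face list mesh; this is only an illustration of curvature-based ranking, not OSVETA's full importance index, and the deficit is only meaningful at interior vertices of an open mesh:

```python
import numpy as np

def angle_deficit(vertices, faces):
    """Discrete Gaussian curvature per vertex: 2*pi minus the sum of the
    incident triangle angles (the angle deficit)."""
    v = np.asarray(vertices, float)
    deficit = np.full(len(v), 2.0 * np.pi)
    for f in faces:
        for k in range(3):
            a, b, c = v[f[k]], v[f[(k + 1) % 3]], v[f[(k + 2) % 3]]
            u1, u2 = b - a, c - a
            cosang = u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2))
            deficit[f[k]] -= np.arccos(np.clip(cosang, -1.0, 1.0))
    return deficit

# A square fan: four corners and a centre vertex. Raising the centre turns
# a zero-deficit interior vertex into a positively curved peak.
faces = [(4, 0, 1), (4, 1, 2), (4, 2, 3), (4, 3, 0)]
flat_v   = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 0.0)]
raised_v = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 0.5)]

# On a closed mesh, sorting by decreasing |deficit| approximates an
# importance-ordered vertex vector: np.argsort(-np.abs(angle_deficit(...)))
```

On the flat fan the centre vertex has zero deficit; once raised, its deficit becomes large, which is the kind of signal a curvature-based importance ranking picks up.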