
    On Micromechanical Parameter Identification With Integrated DIC and the Role of Accuracy in Kinematic Boundary Conditions

    Integrated Digital Image Correlation (IDIC) is nowadays a well-established full-field experimental procedure for reliable and accurate identification of material parameters. It is based on the correlation of a series of images captured during a mechanical experiment, which are matched by displacement fields derived from an underlying mechanical model. Recent studies have shown that when the applied boundary conditions lie outside the employed field of view, IDIC suffers from inaccuracies. A typical example is micromechanical parameter identification inside a Microstructural Volume Element (MVE), in which images are usually obtained by electron microscopy or other microscopy techniques while the loads are applied at a much larger scale. For any IDIC model, MVE boundary conditions still need to be specified, and any deviation or fluctuation in these boundary conditions may significantly influence the quality of the identification. Prescribing proper boundary conditions is generally a challenging task, because the MVE has no free boundary and the boundary displacements are typically highly heterogeneous due to the underlying microstructure. The aim of this paper is therefore first to quantify, in a systematic way, the effects of errors in the prescribed boundary conditions on the accuracy of the identification. To this end, three kinds of mechanical tests, each for various levels of material contrast ratios and levels of image noise, are carried out by means of virtual experiments. For simplicity, an elastic compressible Neo-Hookean constitutive model under the plane strain assumption is adopted. It is shown that a high level of detail is required in the applied boundary conditions. This motivates an improved boundary condition application approach, which considers constitutive material parameters as well as kinematic variables at the boundary of the entire MVE as degrees of freedom in...
    Comment: 37 pages, 25 figures, 2 tables, 2 algorithms
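    The abstract does not spell out the exact functional or constitutive form; as orientation only, the following is a minimal sketch of the standard IDIC least-squares residual and one common compressible Neo-Hookean strain energy density, with mu the shear modulus and kappa a bulk-like modulus (the paper's own discretisation, regularisation and boundary parameterisation may differ).

    % Standard IDIC residual (illustrative sketch):
    \Phi(\boldsymbol{\lambda}) = \int_{\mathrm{ROI}}
      \left[ f(\mathbf{x}) - g\!\left(\mathbf{x} + \mathbf{u}(\mathbf{x};\boldsymbol{\lambda})\right) \right]^{2} \mathrm{d}\mathbf{x}
    % f: reference image, g: deformed image, u: displacement field predicted by the
    % mechanical model for the unknowns \lambda (constitutive parameters and, in the
    % improved approach, kinematic degrees of freedom on the MVE boundary).

    % One common compressible Neo-Hookean strain energy density; under plane strain
    % the out-of-plane stretch is unity, so I_1 = tr(C_{2D}) + 1:
    W(\mathbf{F}) = \tfrac{\mu}{2}\left(I_{1} - 3\right) - \mu \ln J + \tfrac{\kappa}{2}\,(\ln J)^{2},
    \qquad J = \det \mathbf{F}, \quad I_{1} = \operatorname{tr}\!\left(\mathbf{F}^{\mathsf{T}}\mathbf{F}\right)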

    An Unstructured Mesh Convergent Reaction-Diffusion Master Equation for Reversible Reactions

    The convergent reaction-diffusion master equation (CRDME) was recently developed to provide a lattice particle-based stochastic reaction-diffusion model that is a convergent approximation, in the lattice spacing, to an underlying spatially continuous particle dynamics model. The CRDME was designed to be identical to the popular lattice reaction-diffusion master equation (RDME) model for systems with only linear reactions, while overcoming the RDME's loss of bimolecular reaction effects as the lattice spacing is taken to zero. In our original work we developed the CRDME to handle bimolecular association reactions on Cartesian grids. In this work we develop several extensions to the CRDME to facilitate the modeling of cellular processes within realistic biological domains. Foremost, we extend the CRDME to handle reversible bimolecular reactions on unstructured grids. Here we develop a generalized CRDME through discretization of the spatially continuous volume reactivity model, extending the CRDME to encompass a larger variety of particle-particle interactions. Finally, we conclude by examining several numerical examples to demonstrate the convergence and accuracy of the CRDME in approximating the volume reactivity model.
    Comment: 35 pages, 9 figures. Accepted, J. Comp. Phys. (2018)
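    For context on the limit the abstract refers to, the conventional RDME propensities on a uniform mesh of voxel width h are sketched below; the specific CRDME transition rates, derived by discretising the volume reactivity model, are given in the paper itself and are not reproduced here.

    % Conventional RDME propensities on a uniform mesh of voxel width h
    % (d = spatial dimension), shown only to illustrate the scaling issue:
    a^{\mathrm{diff}}_{i \to j} = \frac{D}{h^{2}}\, n_{i}
    % hop of one molecule from voxel i to an adjacent voxel j (D: diffusion constant)

    a^{\mathrm{bi}}_{i} = \frac{k}{h^{d}}\, n^{A}_{i}\, n^{B}_{i}
    % association A + B -> C restricted to a single voxel i; because the reaction
    % requires co-location in one shrinking voxel, bimolecular reaction effects are
    % lost as h -> 0 in two and three dimensions, the deficiency the CRDME removes
    % by replacing the same-voxel rule with a nonlocal reaction kernel.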

    Information Analysis for Steganography and Steganalysis in 3D Polygonal Meshes

    Information hiding, which embeds a watermark/message into a cover signal, has recently found extensive applications in, for example, copyright protection, content authentication and covert communication. It has been widely considered an appealing technology to complement conventional cryptographic processes in the field of multimedia security by embedding information into the signal being protected. Generally, information hiding can be classified into two categories: steganography and watermarking. While steganography attempts to embed as much information as possible into a cover signal, watermarking emphasizes the robustness of the embedded information at the expense of embedding capacity. In contrast to information hiding, steganalysis aims at detecting whether a given medium has a hidden message in it and, if possible, at recovering that hidden message. It can be used to measure the security performance of information hiding techniques, meaning that a steganalysis-resistant steganographic/watermarking method should be imperceptible not only to Human Vision Systems (HVS), but also to intelligent analysis. As yet, 3D information hiding and steganalysis have received relatively little attention compared to image information hiding, despite the proliferation of 3D computer graphics models, which are fairly promising information carriers. This thesis focuses on this relatively neglected research area and has the following primary objectives: 1) to investigate the trade-off between embedding capacity and distortion by considering the correlation between spatial and normal/curvature noise in triangle meshes; 2) to design satisfactory 3D steganographic algorithms, taking into account this trade-off; 3) to design robust 3D watermarking algorithms; 4) to propose a steganalysis framework for detecting the existence of hidden information in 3D models and to introduce a universal 3D steganalytic method under this framework.
    The thesis is organized as follows. Chapter 1 describes in detail the background relating to information hiding and steganalysis, as well as the research problems this thesis will be studying. Chapter 2 conducts a survey of previous information hiding techniques for digital images, 3D models and other media, and also of image steganalysis algorithms. Motivated by the observation that knowledge of the spatial accuracy of the mesh vertices does not easily translate into information about the accuracy of other visually important mesh attributes such as normals, Chapters 3 and 4 investigate the impact of modifying the vertex coordinates of 3D triangle models on the mesh normals. Chapter 3 presents the results of an empirical investigation, whereas Chapter 4 presents the results of a theoretical study. Based on these results, a high-capacity 3D steganographic algorithm capable of controlling embedding distortion is also presented in Chapter 4. In addition to normal information, several mesh interrogation, processing and rendering algorithms make direct or indirect use of curvature information. Motivated by this, Chapter 5 studies the relation between Discrete Gaussian Curvature (DGC) degradation and vertex coordinate modifications. Chapter 6 proposes a robust watermarking algorithm for 3D polygonal models, based on modifying the histogram of the distances from the model vertices to a point in 3D space. That point is determined by applying Principal Component Analysis (PCA) to the cover model. The use of PCA makes the watermarking method robust against common 3D operations, such as rotation, translation and vertex reordering. In addition, Chapter 6 develops a 3D-specific steganalytic algorithm to detect the existence of hidden messages embedded by one well-known watermarking method. By contrast, the focus of Chapter 7 is on developing a 3D watermarking algorithm that is resistant to mesh editing or deformation attacks that change the global shape of the mesh. By adopting a framework that has been successfully developed for image steganalysis, Chapter 8 designs a 3D steganalysis method to detect the existence of messages hidden in 3D models by existing steganographic and watermarking algorithms. The efficiency of this steganalytic algorithm has been evaluated on five state-of-the-art 3D watermarking/steganographic methods. Moreover, being a universal steganalytic algorithm, it can be used as a benchmark for measuring the anti-steganalysis performance of other existing and, most importantly, future watermarking/steganographic algorithms. Chapter 9 concludes this thesis and suggests some potential directions for future work.
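    As a rough illustration of the Chapter 6 construction, the sketch below histograms vertex-to-reference-point distances. The thesis derives the reference point from a PCA of the cover model; here the vertex centroid is used as a hypothetical stand-in, and the embedding rule that modifies the histogram is not reproduced.

    import numpy as np

    def distance_histogram(vertices, n_bins=64):
        """Histogram of vertex-to-reference-point distances (illustrative sketch).

        `vertices` is an (N, 3) array of mesh vertex coordinates. Distances to
        the centroid (the stand-in for the PCA-derived point) are unaffected by
        rotation, translation and vertex reordering, which is the robustness
        property the PCA-based point is chosen to provide.
        """
        reference_point = vertices.mean(axis=0)          # stand-in for the PCA-derived point
        dists = np.linalg.norm(vertices - reference_point, axis=1)
        hist, edges = np.histogram(dists, bins=n_bins)   # histogram the watermark would modify
        return hist, edges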

    Perceptual Quality Evaluation of 3D Triangle Mesh: A Technical Review

    © 2018 IEEE. During mesh processing operations (e.g. simplification, compression, and watermarking), a 3D triangle mesh is subject to various visible distortions of its surface, which creates a need to estimate its visual quality. The necessity of perceptual quality evaluation is already established since, in most cases, human beings are the end users of 3D meshes. Metrics that measure such distortions by combining geometric measures with properties of the human visual system (HVS) are called perceptual quality metrics. In this paper, we present an extensive study of 3D mesh quality evaluation, focusing mostly on recently proposed perceptually based metrics. We limit our study to greyscale static mesh evaluation and attempt to identify the most workable method for real-time evaluation by making a quantitative comparison. This paper also discusses in detail how to evaluate an objective metric's performance with existing subjective databases. In this work, we likewise investigate the use of the psychometric function to remove the non-linearity between subjective and objective values. Finally, we draw a comparison among selected quality metrics, which shows that curvature-tensor-based quality metrics give consistent results in terms of correlation.
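    The evaluation procedure described above (psychometric mapping followed by correlation against a subjective database) can be sketched as follows; the function names, the 4-parameter logistic form and the initial guesses are illustrative assumptions, not the paper's exact protocol.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import pearsonr, spearmanr

    def logistic(x, a, b, c, d):
        """4-parameter logistic psychometric function (one common parameterisation)."""
        return a + (b - a) / (1.0 + np.exp(-(x - c) / d))

    def evaluate_metric(objective_scores, subjective_mos):
        """Fit the psychometric function to remove the non-linearity between an
        objective metric and subjective mean opinion scores, then report the
        Pearson (after mapping) and Spearman (rank-based) correlations."""
        objective_scores = np.asarray(objective_scores, dtype=float)
        subjective_mos = np.asarray(subjective_mos, dtype=float)
        p0 = [subjective_mos.min(), subjective_mos.max(),
              np.median(objective_scores), 1.0]          # rough initial guess
        params, _ = curve_fit(logistic, objective_scores, subjective_mos,
                              p0=p0, maxfev=10000)
        mapped = logistic(objective_scores, *params)
        plcc, _ = pearsonr(mapped, subjective_mos)
        srocc, _ = spearmanr(objective_scores, subjective_mos)
        return plcc, srocc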

    Quality Measurements on Quantised Meshes

    In computer graphics, the triangle mesh has emerged as the ubiquitous shape representation for 3D modelling and visualisation applications. Triangle meshes often undergo compression by specialised algorithms for the purposes of storage and transmission. During the compression process, the coordinates of the vertices of the triangle meshes are quantised using fixed-point arithmetic. Potentially, that can alter the visual quality of the 3D model. Indeed, if the number of bits per vertex coordinate is too low, the mesh will be deemed by the user as visually too coarse, as quantisation artifacts will become perceptible. Therefore, there is a need for the development of quality metrics that will enable us to predict the visual appearance of a triangle mesh at a given level of vertex coordinate quantisation.
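    A minimal sketch of the kind of uniform vertex quantisation the abstract refers to is given below, assuming a numpy array of vertex coordinates; the function name and bit depth are illustrative, and real mesh codecs may use different normalisation conventions.

    import numpy as np

    def quantise_vertices(vertices, bits=12):
        """Uniformly quantise vertex coordinates to `bits` bits per coordinate.

        Coordinates are normalised to the axis-aligned bounding box, rounded
        onto a (2**bits - 1) grid, and mapped back, which reproduces the
        rounding error a fixed-point representation would introduce.
        """
        v_min = vertices.min(axis=0)
        v_max = vertices.max(axis=0)
        scale = np.where(v_max > v_min, v_max - v_min, 1.0)  # avoid divide-by-zero on flat axes
        levels = (1 << bits) - 1
        q = np.round((vertices - v_min) / scale * levels)    # integer grid indices
        return q / levels * scale + v_min                    # dequantised coordinates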