
    Perceptual Quality Evaluation of 3D Triangle Mesh: A Technical Review

    © 2018 IEEE. During mesh processing operations (e.g. simplification, compression, and watermarking), a 3D triangle mesh is subject to various visible distortions on its surface, which creates a need to estimate visual quality. The necessity of perceptual quality evaluation is well established, since in most cases human beings are the end users of 3D meshes. Metrics that combine geometric measures with properties of the human visual system (HVS) to measure such distortions are called perceptual quality metrics. In this paper, we present an extensive study of 3D mesh quality evaluation, focusing mostly on recently proposed perceptual metrics. We limit our study to greyscale static mesh evaluation and attempt to identify the most practical method for real-time evaluation through a quantitative comparison. The paper also discusses in detail how to evaluate an objective metric's performance with existing subjective databases. We likewise investigate the use of the psychometric function to remove non-linearity between subjective and objective values. Finally, we draw a comparison among selected quality metrics, which shows that curvature-tensor-based quality metrics predict consistent results in terms of correlation.
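    The psychometric mapping step mentioned above is commonly implemented by fitting a logistic function between objective metric values and subjective scores before computing correlation. The sketch below illustrates that idea only; the data arrays, initial parameters, and four-parameter logistic form are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: fit a logistic psychometric function from objective metric
# values to mean opinion scores (MOS), then compute correlation on the mapped
# values. All numeric data here is hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic(x, a, b, c, d):
    # 4-parameter logistic psychometric function
    return a / (1.0 + np.exp(-b * (x - c))) + d

objective = np.array([0.12, 0.35, 0.48, 0.61, 0.80])  # metric scores (hypothetical)
mos       = np.array([4.6, 3.9, 3.1, 2.4, 1.5])       # subjective scores (hypothetical)

params, _ = curve_fit(
    logistic, objective, mos,
    p0=[mos.max() - mos.min(), -1.0, objective.mean(), mos.min()],
    maxfev=10000,
)
mapped = logistic(objective, *params)

print("PLCC after psychometric mapping:", pearsonr(mapped, mos)[0])
print("SROCC (rank-based, unaffected by the mapping):", spearmanr(objective, mos)[0])
```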

    A perceptual quality metric for 3D triangle meshes based on spatial pooling

    © 2018, Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature. In computer graphics, various processing operations are applied to 3D triangle meshes, and these operations often introduce distortions that affect the visual quality of the surface geometry. In this context, perceptual quality assessment of 3D triangle meshes has become a crucial issue. In this paper, we propose a new objective quality metric for assessing the visual difference between a reference mesh and a corresponding distorted mesh. Our analysis indicates that the overall quality of a distorted mesh is sensitive to the distortion distribution. The proposed metric is based on a spatial pooling strategy and statistical descriptors of the distortion distribution. We generate a perceptual distortion map for the vertices of the reference mesh while taking into account the visual masking effect of the human visual system. The metric extracts statistical descriptors from the distortion map as a feature vector representing the overall mesh quality. With this feature vector as input, we adopt a support vector regression model to predict the mesh quality score. We validate the performance of our method on three publicly available databases, and a comparison with state-of-the-art metrics demonstrates the superiority of our method. Experimental results show that the proposed method achieves a high correlation between objective assessment and subjective scores.
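    A minimal sketch of the pooling idea described in this abstract, under assumed inputs: per-vertex distortion values are summarised by simple statistical descriptors and fed to a support vector regressor that predicts a quality score. The descriptor set, training data, and SVR parameters are illustrative, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVR

def descriptors(distortion_map):
    """Statistical descriptors of a per-vertex distortion map."""
    d = np.asarray(distortion_map, dtype=float)
    return np.array([d.mean(), d.std(),
                     np.percentile(d, 50), np.percentile(d, 90), d.max()])

# Hypothetical training data: one distortion map and one subjective score per mesh.
rng = np.random.default_rng(0)
train_maps   = [rng.gamma(2.0, 0.1, size=5000) for _ in range(40)]
train_scores = [m.mean() * 8 + rng.normal(0, 0.1) for m in train_maps]

X = np.stack([descriptors(m) for m in train_maps])
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, train_scores)

# Predict a quality score for a new (hypothetical) distortion map.
test_map = rng.gamma(2.0, 0.1, size=5000)
print("predicted quality score:", model.predict(descriptors(test_map)[None, :])[0])
```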

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are highly tied to crowd rendering performance, and then analyse the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
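    A toy sketch of one runtime LoD-selection criterion of the kind this survey reviews: choosing a character representation from its distance to the camera. The thresholds and level names are hypothetical; real systems also weigh screen-space size, rendering budget, and perceptual factors.

```python
from dataclasses import dataclass

@dataclass
class Character:
    position: tuple  # world-space position (x, y, z)

LOD_THRESHOLDS = [  # (max distance, representation), hypothetical values
    (10.0, "full-resolution skinned mesh"),
    (40.0, "simplified mesh with palette skinning"),
    (120.0, "point-based / low-poly proxy"),
]

def select_lod(character, camera_pos):
    dx, dy, dz = (c - p for c, p in zip(character.position, camera_pos))
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    for max_dist, representation in LOD_THRESHOLDS:
        if dist <= max_dist:
            return representation
    return "image-based impostor"

print(select_lod(Character(position=(50.0, 0.0, 30.0)), camera_pos=(0.0, 0.0, 0.0)))
```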

    Quality Measurements on Quantised Meshes

    In computer graphics, the triangle mesh has emerged as the ubiquitous shape representation for 3D modelling and visualisation applications. Triangle meshes often undergo compression by specialised algorithms for the purposes of storage and transmission. During compression, the coordinates of the vertices of the triangle mesh are quantised using fixed-point arithmetic, which can potentially alter the visual quality of the 3D model. Indeed, if the number of bits per vertex coordinate is too low, the user will perceive the mesh as visually too coarse, since quantisation artifacts become perceptible. There is therefore a need to develop quality metrics that can predict the visual appearance of a triangle mesh at a given level of vertex coordinate quantisation.
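    A minimal sketch of the uniform vertex-coordinate quantisation described above: coordinates are normalised to the mesh bounding box and stored as b-bit integers, then de-quantised for display. The vertex data is hypothetical, and real codecs add prediction and entropy coding on top of this step.

```python
import numpy as np

def quantise_vertices(vertices, bits):
    v = np.asarray(vertices, dtype=float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    scale = (2 ** bits - 1) / np.where(hi > lo, hi - lo, 1.0)
    q = np.round((v - lo) * scale).astype(np.uint32)  # fixed-point codes
    return q, lo, scale

def dequantise_vertices(q, lo, scale):
    return q.astype(float) / scale + lo

vertices = np.random.default_rng(1).uniform(-1, 1, size=(1000, 3))  # hypothetical mesh
for bits in (8, 10, 12):
    q, lo, scale = quantise_vertices(vertices, bits)
    err = np.abs(dequantise_vertices(q, lo, scale) - vertices).max()
    print(f"{bits} bits per coordinate -> max reconstruction error {err:.5f}")
```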

    Geo-Metric: A Perceptual Dataset of Distortions on Faces


    Interactive inspection of complex multi-object industrial assemblies

    The final publication is available at Springer via http://dx.doi.org/10.1016/j.cad.2016.06.005. The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications like the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during interactive inspection sessions. This is, however, not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of rendering fronts in multiresolution trees, their properties, and the algorithms that construct the hierarchy and efficiently render it, applied to very complex CAD models so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models which uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on an analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models, and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.
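    A schematic sketch of budget-limited, view-dependent refinement of a multiresolution front, in the spirit of the rendering-budget algorithm described above: nodes are refined in priority order until the triangle budget is exhausted. The node fields and the priority heuristic are hypothetical stand-ins, not the paper's Hybrid Multiresolution Tree.

```python
import heapq
from dataclasses import dataclass, field

@dataclass
class Node:
    triangles: int   # cost of rendering this node
    error: float     # geometric error of this approximation
    distance: float  # distance to the viewer
    children: list = field(default_factory=list)

    def priority(self):
        # screen-space-error-like heuristic: larger error, closer node -> refine first
        return self.error / max(self.distance, 1e-6)

def build_front(root, triangle_budget):
    """Refine the front greedily while the replacement fits in the budget."""
    front, used = [], root.triangles
    heap = [(-root.priority(), id(root), root)]
    while heap:
        _, _, node = heapq.heappop(heap)
        extra = sum(c.triangles for c in node.children) - node.triangles
        if node.children and used + extra <= triangle_budget:
            used += extra
            for c in node.children:
                heapq.heappush(heap, (-c.priority(), id(c), c))
        else:
            front.append(node)  # rendered at this level of detail
    return front, used

# Hypothetical three-level hierarchy.
root = Node(triangles=200, error=5.0, distance=10.0, children=[
    Node(600, 1.5, 8.0, children=[Node(2000, 0.3, 8.0), Node(2000, 0.4, 9.0)]),
    Node(600, 2.0, 30.0),
])
front, used = build_front(root, triangle_budget=3500)
print("triangles used:", used, "| front node sizes:", [n.triangles for n in front])
```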

    A new mesh visual quality metric using saliency weighting-based pooling strategy

    © 2018 Elsevier Inc. Several metrics have been proposed to assess the visual quality of 3D triangular meshes during the last decade. In this paper, we propose a mesh visual quality metric that integrates mesh saliency into mesh visual quality assessment. We use the Tensor-based Perceptual Distance Measure metric to estimate the local distortions of the mesh, and pool the local distortions into a quality score using a saliency weighting-based pooling strategy. Three well-known mesh saliency detection methods are used to demonstrate the superiority and effectiveness of our metric. Experimental results show that our metric with any of the three saliency maps performs better than state-of-the-art metrics on the LIRIS/EPFL general-purpose database. We also generate a synthetic saliency map by assembling salient regions from the individual saliency maps. Experimental results reveal that the synthetic saliency map achieves better performance than the individual saliency maps, and that the performance gain is closely correlated with the similarity between the individual saliency maps.
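    A minimal sketch of saliency-weighted pooling as described above: per-vertex distortions are averaged with weights taken from a mesh saliency map, so distortions in salient regions contribute more to the final score. The arrays are hypothetical placeholders for a TPDM-style distortion map and a saliency detector's output.

```python
import numpy as np

def saliency_weighted_score(local_distortion, saliency, eps=1e-12):
    d = np.asarray(local_distortion, dtype=float)
    w = np.asarray(saliency, dtype=float)
    w = (w - w.min()) / max(w.max() - w.min(), eps)  # normalise saliency to [0, 1]
    return float((w * d).sum() / max(w.sum(), eps))  # saliency-weighted mean distortion

rng = np.random.default_rng(2)
distortion = rng.random(10000)  # per-vertex local distortion (hypothetical)
saliency   = rng.random(10000)  # per-vertex saliency (hypothetical)

print("uniform pooling:   ", distortion.mean())
print("saliency-weighted: ", saliency_weighted_score(distortion, saliency))
```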

    Visual saliency guided textured model simplification

    Mesh geometry can be used to model both object shape and details. If texture maps are involved, it is common to let the mesh geometry mainly model the object shape and let the texture maps model most of the object details, optimising the data size and complexity of the object. To support efficient object rendering and transmission, model simplification can be applied to reduce the modelling data. However, existing methods do not properly consider how object features are jointly represented by the mesh geometry and the texture maps, and so have problems identifying and preserving important features in the simplified objects. To address this, we propose a visual saliency detection method for simplifying textured 3D models. We produce good simplification results by jointly processing the mesh geometry and texture map to produce a unified saliency map that identifies visually important object features. Results show that our method offers better object rendering quality than existing methods.
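    A hypothetical sketch of how a unified per-vertex saliency map could be assembled from separate geometry and texture cues, as the abstract describes at a high level. The combination rule (a simple weighted maximum) is an assumption for illustration, not the paper's formulation.

```python
import numpy as np

def unified_saliency(geometric_saliency, texture_saliency, alpha=0.5, eps=1e-12):
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / max(x.max() - x.min(), eps)
    g, t = norm(geometric_saliency), norm(texture_saliency)
    # keep a feature if it is salient in either the shape or the texture
    return np.maximum(alpha * g, (1.0 - alpha) * t)

rng = np.random.default_rng(3)
geo = rng.random(5000)  # e.g. curvature-based per-vertex saliency (hypothetical)
tex = rng.random(5000)  # e.g. texture-gradient saliency mapped to vertices (hypothetical)

saliency = unified_saliency(geo, tex)
print("vertices flagged as visually important:", int((saliency > 0.4).sum()))
```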