
    A new mesh visual quality metric using saliency weighting-based pooling strategy

    © 2018 Elsevier Inc. Several metrics have been proposed over the last decade to assess the visual quality of 3D triangular meshes. In this paper, we propose a mesh visual quality metric that integrates mesh saliency into mesh visual quality assessment. We use the Tensor-based Perceptual Distance Measure metric to estimate local distortions on the mesh, and pool the local distortions into a quality score using a saliency weighting-based pooling strategy. Three well-known mesh saliency detection methods are used to demonstrate the superiority and effectiveness of our metric. Experimental results show that our metric with any of the three saliency maps performs better than state-of-the-art metrics on the LIRIS/EPFL general-purpose database. We also generate a synthetic saliency map by assembling salient regions from the individual saliency maps. Experimental results reveal that the synthetic saliency map achieves better performance than the individual saliency maps, and that the performance gain is closely correlated with the similarity between the individual saliency maps.
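    The core pooling step described above can be illustrated with a minimal sketch: given a per-vertex local distortion value and a per-vertex saliency value, the saliency map is normalized into weights and used to compute a weighted average distortion. The function and variable names below are illustrative assumptions, not taken from the paper, and the sketch ignores the Tensor-based Perceptual Distance Measure itself.

```python
import numpy as np

def saliency_weighted_pooling(local_distortions, saliency, eps=1e-12):
    """Pool per-vertex distortion values into a single quality score,
    weighting each vertex by its normalized saliency (illustrative sketch)."""
    d = np.asarray(local_distortions, dtype=float)
    s = np.asarray(saliency, dtype=float)
    weights = s / (s.sum() + eps)        # normalize saliency so weights sum to 1
    return float(np.sum(weights * d))    # saliency-weighted average distortion

# Toy example: 5 vertices; the high-saliency vertex dominates the pooled score
distortions = [0.10, 0.05, 0.80, 0.20, 0.10]
saliency    = [0.20, 0.10, 0.90, 0.30, 0.10]
print(saliency_weighted_pooling(distortions, saliency))
```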

    Perceptual Quality Evaluation of 3D Triangle Mesh: A Technical Review

    © 2018 IEEE. During mesh processing operations (e.g. simplification, compression, and watermarking), a 3D triangle mesh is subject to various visible distortions on its surface, which creates a need to estimate visual quality. The necessity of perceptual quality evaluation is well established, since in most cases human beings are the end users of 3D meshes. Metrics that measure such distortions by combining geometric measures with characteristics of the human visual system (HVS) are called perceptual quality metrics. In this paper, we present an extensive study of 3D mesh quality evaluation, focusing mostly on recently proposed perceptually based metrics. We limit our study to greyscale static mesh evaluation and attempt to identify the most practical method for real-time evaluation by making a quantitative comparison. This paper also discusses in detail how to evaluate an objective metric's performance with existing subjective databases. We likewise investigate the use of the psychometric function to remove non-linearity between subjective and objective values. Finally, we draw a comparison among selected quality metrics, which shows that curvature tensor-based quality metrics predict the most consistent results in terms of correlation.
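    The evaluation procedure mentioned above (mapping objective scores through a psychometric function before correlating them with subjective scores) is commonly implemented with a logistic fit. The sketch below is one plausible version under that assumption; it is not the paper's exact protocol, and the 4-parameter logistic, the function names, and the toy data are all illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic(x, a, b, c, d):
    """A common 4-parameter logistic psychometric mapping."""
    return a / (1.0 + np.exp(-c * (x - d))) + b

def evaluate_metric(objective, subjective):
    """Fit the psychometric function to remove non-linearity, then report
    Pearson correlation of the mapped scores and Spearman rank correlation."""
    objective = np.asarray(objective, dtype=float)
    subjective = np.asarray(subjective, dtype=float)
    p0 = [subjective.max() - subjective.min(), subjective.min(),
          4.0 / (objective.std() + 1e-12), objective.mean()]
    params, _ = curve_fit(logistic, objective, subjective, p0=p0, maxfev=10000)
    mapped = logistic(objective, *params)
    plcc = pearsonr(mapped, subjective)[0]      # linearity after the fit
    srocc = spearmanr(objective, subjective)[0] # rank correlation (fit-invariant)
    return plcc, srocc

# Toy example: objective metric scores vs. mean opinion scores (MOS)
obj = [0.05, 0.15, 0.30, 0.45, 0.55, 0.70, 0.85, 0.95]
mos = [1.1, 1.6, 2.3, 3.0, 3.4, 4.0, 4.4, 4.6]
plcc, srocc = evaluate_metric(obj, mos)
print(f"PLCC after fitting: {plcc:.3f}, SROCC: {srocc:.3f}")
```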

    Generating detailed saliency maps using model-agnostic methods

    The emerging field of Explainable Artificial Intelligence focuses on researching methods of explaining the decision-making processes of complex machine learning models. In explainability for Computer Vision, explanations are provided as saliency maps, which visualize the importance of individual input pixels with respect to the model's prediction. In this work we focus on a perturbation-based, model-agnostic explainability method called RISE, elaborate on observed shortcomings of its grid-based approach, and propose two modifications: replacing square occlusions with convex polygonal occlusions based on cells of a Voronoi mesh, and adding an informativeness guarantee to the occlusion mask generator. These modifications, collectively called VRISE (Voronoi-RISE), are meant to, respectively, improve the accuracy of maps generated using large occlusions and accelerate the convergence of saliency maps in cases where sampling density is either very low or very high. We perform a quantitative comparison of the accuracy of saliency maps produced by VRISE and RISE on the validation split of ILSVRC2012, using a saliency-guided content insertion/deletion metric and a localization metric based on bounding boxes. Additionally, we explore the space of configurable occlusion pattern parameters to better understand their influence on saliency maps produced by RISE and VRISE. We also describe and demonstrate two effects observed over the course of experimentation, arising from the random sampling approach of RISE: "feature slicing" and "saliency misattribution". Our results show that convex polygonal occlusions yield more accurate maps for coarse occlusion meshes and multi-object images, but improvement is not guaranteed in other cases. The informativeness guarantee is shown to increase the convergence rate without incurring a significant computational overhead. Comment: 85 pages, 70 figures, Master's thesis, defended on 2021-12-23 (Gdansk University of Technology)
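    The grid-based RISE approach that this thesis modifies can be illustrated with a minimal sketch: random coarse binary masks occlude parts of the image, and each mask is weighted by the model's score for the target class; the weighted average of the masks forms the saliency map. This is a simplified, assumed version of the RISE idea (nearest-neighbour upsampling instead of RISE's shifted bilinear upsampling), not the VRISE implementation; `predict` is an assumed callable returning the target-class score for a masked image.

```python
import numpy as np

def rise_saliency(image, predict, num_masks=1000, grid=7, p_keep=0.5, rng=None):
    """Perturbation-based saliency in the spirit of RISE: occlude the image with
    random low-resolution binary masks and weight each mask by the model's score."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=float)
    for _ in range(num_masks):
        # Coarse binary grid, upsampled to image size (nearest neighbour here;
        # RISE itself uses bilinear upsampling with a random shift).
        cells = (rng.random((grid, grid)) < p_keep).astype(float)
        mask = np.kron(cells, np.ones((h // grid + 1, w // grid + 1)))[:h, :w]
        score = predict(image * mask[..., None])  # model score for the target class
        saliency += score * mask                  # accumulate score-weighted masks
    return saliency / num_masks

# Toy usage with a dummy "model" that responds to the image centre
img = np.random.rand(224, 224, 3)
predict = lambda x: float(x[96:128, 96:128].mean())
sal = rise_saliency(img, predict, num_masks=200)
```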