
    Screen-based 3D Subjective Experiment Software

    Recently, widespread 3D graphics (e.g., point clouds and meshes) have drawn considerable effort from academia and industry to assess their perceptual quality through subjective experiments. However, the lack of handy software for 3D subjective experiments complicates the construction of 3D graphics quality assessment datasets, thus hindering the prosperity of relevant fields. In this paper, we develop a powerful platform with which users can flexibly design their 3D subjective methodologies and build high-quality datasets, easing a broad spectrum of 3D graphics subjective quality studies. To accurately illustrate the perceptual quality differences of 3D stimuli, our software can simultaneously render the source stimulus and the impaired stimulus, and allows both stimuli to respond synchronously to viewer interactions. Compared with schemes based on amateur 3D visualization tools or on image/video rendering, our approach embodies typical 3D applications while minimizing cognitive overload during subjective experiments. We organized a subjective experiment involving 40 participants to verify the validity of the proposed software. Experimental analyses demonstrate that subjective tests on our software can produce reasonable subjective quality scores of 3D models. All resources in this paper can be found at https://openi.pcl.ac.cn/OpenDatasets/3DQA (Comment: Accepted to ACM Multimedia 202)
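    The abstract does not detail the rendering implementation; below is a minimal Python sketch of the synchronized dual-stimulus idea it describes, where the class names (SharedCamera, Viewport) and the camera math are hypothetical stand-ins for a real 3D engine.

```python
import numpy as np

class SharedCamera:
    """Single camera state shared by both viewports, so that one user
    interaction (rotate/zoom) moves reference and impaired stimulus
    identically."""
    def __init__(self):
        self.rotation = np.eye(3)   # camera orientation
        self.zoom = 1.0

    def rotate(self, delta):
        # delta: 3x3 incremental rotation from the input handler
        self.rotation = delta @ self.rotation

class Viewport:
    def __init__(self, stimulus, camera):
        self.stimulus = stimulus    # (N, 3) point positions
        self.camera = camera        # shared reference, never copied

    def render(self):
        # A real renderer (e.g., OpenGL) would rasterize here; we only
        # return camera-space coordinates to show the shared transform.
        return (self.camera.rotation @ self.stimulus.T).T * self.camera.zoom

camera = SharedCamera()
reference = Viewport(np.random.rand(1000, 3), camera)  # source stimulus
distorted = Viewport(np.random.rand(1000, 3), camera)  # impaired stimulus
# Any interaction routed through `camera` now affects both renders.
```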

    Reduced-reference quality assessment of point clouds via content-oriented saliency projection

    Many dense 3D point clouds are now used to represent visual objects in place of traditional images or videos. To evaluate the perceptual quality of various point clouds, in this letter we propose a novel and efficient Reduced-Reference quality metric for point clouds based on Content-oriented sAliency Projection (RR-CAP). Specifically, we make the first attempt to simplify reference and distorted point clouds into projected saliency maps with a downsampling operation. Through this process, we sidestep the need to transmit large-volume original point clouds to end-users for quality assessment. Then, motivated by the characteristics of the human visual system (HVS), the objective quality scores of distorted point clouds are produced by combining content-oriented similarity and statistical correlation measurements. Finally, extensive experiments are conducted on the SJTU-PCQA and WPC databases. The results demonstrate that our proposed algorithm outperforms existing reduced-reference and no-reference quality metrics, and significantly narrows the performance gap to state-of-the-art full-reference quality assessment methods. In addition, we show the performance impact of each proposed technical component through ablation tests.
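    As a rough illustration of the pipeline described above, here is a hedged Python sketch; the projection plane, the luminance proxy for saliency, and the equal pooling of the similarity and correlation terms are assumptions, not the paper's exact design.

```python
import numpy as np

def project_to_map(points, colors, grid=64):
    """Orthographically project a (downsampled) point cloud onto the
    XY plane and accumulate mean luminance per cell -- a stand-in for
    the paper's content-oriented saliency projection."""
    xy = points[:, :2]
    xy = (xy - xy.min(0)) / (np.ptp(xy, 0) + 1e-9)   # normalize to [0, 1]
    idx = np.minimum((xy * grid).astype(int), grid - 1)
    lum = colors.mean(1)                             # crude luminance proxy
    acc = np.zeros((grid, grid)); cnt = np.zeros((grid, grid))
    np.add.at(acc, (idx[:, 0], idx[:, 1]), lum)
    np.add.at(cnt, (idx[:, 0], idx[:, 1]), 1)
    return acc / np.maximum(cnt, 1)

def rr_score(ref_map, dist_map, eps=1e-9):
    # A similarity term plus a global statistical correlation term,
    # echoing the combination named in the abstract; the 50/50 pooling
    # weight is an assumption.
    sim = (2 * ref_map * dist_map + eps) / (ref_map**2 + dist_map**2 + eps)
    corr = np.corrcoef(ref_map.ravel(), dist_map.ravel())[0, 1]
    return 0.5 * sim.mean() + 0.5 * corr
```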

    Dynamic hypergraph convolutional network for no-reference point cloud quality assessment

    With the rapid advancement of three-dimensional (3D) sensing technology, the point cloud has emerged as one of the most important representations of 3D data. However, quality degradation inevitably occurs during the acquisition, transmission, and processing of point clouds. Therefore, point cloud quality assessment (PCQA) with automatic visual quality perception is particularly critical. In the literature, graph convolutional networks (GCNs) have achieved promising performance in point cloud-related tasks, but they cannot fully characterize the nonlinear high-order relationships of such complex data. In this paper, we propose a novel no-reference (NR) PCQA method based on hypergraph learning. Specifically, we devise a dynamic hypergraph convolutional network (DHCN) composed of a projected image encoder, a point group encoder, a dynamic hypergraph generator, and a perceptual quality predictor. First, the projected image encoder and the point group encoder extract feature representations from projected images and point groups, respectively. Then, using the feature representations obtained by the two encoders, dynamic hypergraphs are generated at each iteration, constantly updating the interactive information between the vertices of the hypergraphs. Finally, we design the perceptual quality predictor to conduct quality reasoning on the generated hypergraphs. By leveraging the interactive information among hypergraph vertices, feature representations are well aggregated, resulting in a notable improvement in the accuracy of quality prediction. Experimental results on several point cloud quality assessment databases demonstrate that our proposed DHCN achieves state-of-the-art performance. The code will be available at: https://github.com/chenwuwq/DHCN
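    A minimal sketch of how a dynamic hypergraph could be generated and convolved, assuming k-NN grouping in feature space and an unnormalised two-step aggregation; the paper's actual generator and predictor may differ.

```python
import torch

def build_hypergraph(features, k=8):
    """Build a vertex-by-hyperedge incidence matrix H by k-NN in
    feature space: each vertex spawns one hyperedge containing its k
    nearest neighbours (including itself, at distance zero). Re-running
    this on updated features each iteration makes the hypergraph
    'dynamic' in the sense of the abstract."""
    dist = torch.cdist(features, features)       # (N, N) pairwise distances
    knn = dist.topk(k, largest=False).indices    # (N, k) neighbour ids
    n = features.size(0)
    H = torch.zeros(n, n)
    H.scatter_(0, knn.T, 1.0)                    # neighbours join hyperedge j
    return H

def hgnn_conv(X, H, W):
    """One hypergraph convolution step: average vertex features within
    each hyperedge, average incident hyperedges back onto vertices,
    then project with weight W."""
    Dv = H.sum(1, keepdim=True).clamp(min=1)     # vertex degrees
    De = H.sum(0, keepdim=True).clamp(min=1)     # hyperedge degrees
    return (H / Dv) @ ((H.T / De.T) @ X) @ W

X = torch.randn(64, 16)                          # 64 patch/group features
H = build_hypergraph(X, k=8)
X_new = torch.relu(hgnn_conv(X, H, torch.randn(16, 16)))
```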

    Subjective quality evaluation of Point Clouds using remote testing

    Subjective quality assessment serves as a method to evaluate the perceptual quality of 3D point clouds. Such evaluations can be conducted as lab-based tests or as remote/crowdsourced tests. Lab-based tests are time-consuming and less cost-effective; remote or crowd tests offer a time- and cost-friendly alternative and enable larger and more diverse participant pools. However, variability in participants' display devices and viewing environments raises the question of whether remote testing is applicable to point cloud evaluation. In this paper, we investigate the applicability of remote testing by using the Absolute Category Rating (ACR) method to assess the subjective quality of point clouds in two tests, comparing the results of lab and remote runs by replicating lab-based tests. In the first test, we assess the subjective quality of static point cloud geometry under two types of geometric degradation, namely Gaussian noise and octree pruning. In the second test, we compare two compression methods (G-PCC and V-PCC) in terms of the subjective quality of coloured point cloud videos. Based on the results of correlation and Standard deviation of Opinion Scores (SOS) analyses, the remote testing paradigm can be used for evaluating point clouds.
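    One standard tool for the SOS analysis mentioned above is the SOS hypothesis of Hossfeld et al., which models SOS^2 = a * (MOS - 1) * (5 - MOS) on a 5-point ACR scale; comparable fitted values of a across test runs support comparability of the setups. A sketch with hypothetical per-stimulus values:

```python
import numpy as np

def fit_sos_parameter(mos, sos):
    """Least-squares fit of a in SOS^2 = a * (MOS - 1) * (5 - MOS),
    the SOS hypothesis for a 5-point ACR scale."""
    x = (mos - 1.0) * (5.0 - mos)
    return float(np.dot(x, sos**2) / np.dot(x, x))

# Hypothetical per-stimulus MOS/SOS values from a lab and a remote run.
mos_lab = np.array([1.8, 2.6, 3.4, 4.2]); sos_lab = np.array([0.7, 0.9, 0.8, 0.6])
mos_rem = np.array([1.9, 2.5, 3.5, 4.1]); sos_rem = np.array([0.8, 1.0, 0.9, 0.7])
print(fit_sos_parameter(mos_lab, sos_lab))   # a for the lab test
print(fit_sos_parameter(mos_rem, sos_rem))   # a for the remote test
# A Pearson correlation between mos_lab and mos_rem is the usual
# companion check for agreement between the two paradigms.
```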

    Augmented Reality-based Feedback for Technician-in-the-loop C-arm Repositioning

    Interventional C-arm imaging is crucial to percutaneous orthopedic procedures, as it enables the surgeon to monitor the progress of surgery at the anatomical level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation dose to both patient and staff. This work proposes a marker-free "technician-in-the-loop" Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For repositioning to a particular target view, the recorded C-arm pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. We conduct experiments in a setting simulating orthopedic trauma surgery. Our proof-of-principle findings indicate that the proposed system can reduce the average of 2.76 X-ray images required per desired view down to zero, suggesting substantial reductions of radiation dose during C-arm repositioning. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for the surgery rooms of the future. The technician-in-the-loop design will become relevant to various interventions given the expected advances in sensing and wearable computing in the near future.
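    The abstract describes restoring a recorded C-arm pose as an AR reference. A minimal sketch of the underlying pose comparison that such guidance implies, assuming both poses are available as 4x4 homogeneous transforms in a common tracking frame (the function name and interface are hypothetical):

```python
import numpy as np

def pose_delta(T_current, T_target):
    """Rotation (degrees) and translation (same units as the transforms)
    offsets between the current C-arm pose and a recorded target pose,
    both 4x4 homogeneous matrices in the head-mounted display's tracking
    frame. Offsets like these could drive an AR overlay that tells the
    technician how far the C-arm is from the recorded view."""
    d = np.linalg.inv(T_current) @ T_target          # relative transform
    cos_angle = (np.trace(d[:3, :3]) - 1.0) / 2.0    # geodesic rotation angle
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    trans = np.linalg.norm(d[:3, 3])                 # translation magnitude
    return angle, trans

# Repositioning is complete when both offsets fall below a tolerance.
angle, trans = pose_delta(np.eye(4), np.eye(4))
print(angle, trans)   # 0.0 0.0 for identical poses
```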

    Evaluating Point Cloud Quality via Transformational Complexity

    Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds when references are available. Merging research in cognitive science with intuitions about the human visual system (HVS), the difference between the expected perceptual result and the actual perceptual reproduction in the visual center of the cerebral cortex indicates the subjective quality degradation. Therefore, in this paper we derive point cloud quality by measuring the complexity of transforming the distorted point cloud back to its reference, which in practice can be approximated by the code length of one point cloud when the other is given. For this purpose, we first segment the reference and the distorted point cloud into a series of local patch pairs based on a 3D Voronoi diagram. Next, motivated by predictive coding theory, we use a space-aware vector autoregressive (SA-VAR) model to encode the geometry and color channels of each reference patch, both with and without the distorted patch. Specifically, assuming that the residual errors follow multivariate Gaussian distributions, we calculate the self-complexity of the reference and the transformational complexity between the reference and the distorted sample via covariance matrices. Besides the complexity terms, the prediction terms generated by SA-VAR are introduced as an auxiliary feature to promote the final quality prediction. Extensive experiments on five public point cloud quality databases demonstrate that the transformational-complexity-based distortion metric (TCDM) produces state-of-the-art (SOTA) results, and ablation studies on its key modules and parameters further show that our metric generalizes to various scenarios with consistent performance.
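    Under the Gaussian-residual assumption stated above, the code length of a patch can be approximated by the differential entropy of its residual covariance, 0.5 * log2 det(2*pi*e*Sigma) bits per sample. A sketch of that step (the SA-VAR fitting itself is omitted, and the function name is hypothetical):

```python
import numpy as np

def gaussian_code_length(residuals, eps=1e-9):
    """Approximate per-sample code length (bits) of prediction residuals
    under a multivariate Gaussian model: 0.5 * log2 det(2*pi*e*Sigma).
    residuals: (T, d) array of VAR residual vectors. A larger value
    means the patch is harder to predict, i.e., more complex."""
    sigma = np.cov(residuals, rowvar=False)
    sigma += eps * np.eye(residuals.shape[1])        # numerical stability
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * sigma)
    return 0.5 * logdet / np.log(2)

# Self-complexity: residuals from predicting the reference patch alone.
# Transformational complexity: residuals from predicting the reference
# conditioned on the distorted patch; the gap between the two reflects
# how much the distortion disrupts predictability.
```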

    GMS-3DQA: Projection-based Grid Mini-patch Sampling for 3D Model Quality Assessment

    Nowadays, most 3D model quality assessment (3DQA) methods have been aimed at improving performance, while little attention has been paid to the computational cost and inference time required in practical applications. Model-based 3DQA methods extract features directly from 3D models, which are characterized by a high degree of complexity; as a result, many researchers incline towards projection-based 3DQA methods. Nevertheless, previous projection-based methods extract features directly from multiple projections to ensure prediction accuracy, which incurs higher resource consumption and inevitably leads to inefficiency. Thus, in this paper, we address this challenge by proposing a no-reference (NR) projection-based Grid Mini-patch Sampling 3D model Quality Assessment (GMS-3DQA) method. The projection images are rendered from six perpendicular viewpoints of the 3D model to cover sufficient quality information. To reduce redundancy and inference resources, we propose a multi-projection grid mini-patch sampling strategy (MP-GMS), which samples grid mini-patches from the multiple projections and composes them into one quality mini-patch map (QMM). A Swin-Transformer tiny backbone is then used to extract quality-aware features from the QMMs. Experimental results show that the proposed GMS-3DQA outperforms existing state-of-the-art NR-3DQA methods on point cloud quality assessment databases. An efficiency analysis reveals that GMS-3DQA requires far fewer computational resources and less inference time than other 3DQA competitors. The code will be available at https://github.com/zzc-1998/GMS-3DQA
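    A sketch of the grid mini-patch sampling idea, assuming 224x224 projections, a 7x7 grid, and 32x32 mini-patches; the exact cell-to-patch sampling rules here are an assumption, not the paper's published procedure.

```python
import numpy as np

def quality_mini_patch_map(projections, grid=7, patch=32, seed=0):
    """Assemble one quality mini-patch map (QMM): divide each projection
    into a grid x grid layout, crop one patch-sized mini-patch from each
    cell of a randomly chosen projection, and tile the crops into a
    single (grid*patch, grid*patch) image, preserving rough spatial
    layout. projections: list of six (H, W, 3) renders."""
    rng = np.random.default_rng(seed)
    h, w = projections[0].shape[:2]
    ch, cw = h // grid, w // grid                    # source cell size
    qmm = np.zeros((grid * patch, grid * patch, 3), projections[0].dtype)
    for gy in range(grid):
        for gx in range(grid):
            img = projections[rng.integers(len(projections))]
            y = gy * ch + rng.integers(max(ch - patch, 1))
            x = gx * cw + rng.integers(max(cw - patch, 1))
            qmm[gy*patch:(gy+1)*patch, gx*patch:(gx+1)*patch] = \
                img[y:y+patch, x:x+patch]
    return qmm

# The six projections would come from rendering the 3D model along
# perpendicular viewpoints; the QMM then feeds a Swin-T style backbone.
views = [np.random.rand(224, 224, 3) for _ in range(6)]
print(quality_mini_patch_map(views).shape)           # (224, 224, 3)
```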