
    Study of Subjective and Objective Quality Evaluation of 3D Point Cloud Data by the JPEG Committee

    The SC29/WG1 (JPEG) Committee within ISO/IEC is currently working on developing standards for the storage, compression and transmission of 3D point cloud information. To support the creation of these standards, the committee has created a database of 3D point clouds representing various quality levels and use cases, and has examined a range of 2D and 3D objective quality measures. The examined quality measures are correlated with subjective judgments for a number of compression levels. In this paper we describe the database created, the tests performed and key observations on the problems of 3D point cloud quality assessment.
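
    As a rough illustration of how objective measures are correlated with subjective judgments in such studies, the Python sketch below computes Pearson (PLCC) and Spearman (SROCC) correlations; the objective scores and MOS values are hypothetical placeholders, not data from the JPEG committee's database.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        # Hypothetical objective scores (e.g. a point-to-point PSNR in dB) and
        # subjective mean opinion scores (MOS) for the same compressed point clouds.
        objective = np.array([28.1, 31.4, 34.9, 38.2, 41.0, 44.3])
        mos = np.array([1.8, 2.5, 3.1, 3.9, 4.4, 4.7])

        plcc, _ = pearsonr(objective, mos)    # linear correlation
        srocc, _ = spearmanr(objective, mos)  # rank-order correlation
        print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")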

    A Novel Point Cloud Compression Algorithm for Vehicle Recognition Using Boundary Extraction

    Research on hardware systems for generating point cloud data through 3D LiDAR scanning has advanced rapidly in recent years, with important applications in autonomous driving and 3D reconstruction. However, point cloud data may contain defects such as duplicate points, redundant points, and large numbers of unordered points, which place higher demands on the hardware systems that process the data. Simplifying and compressing point cloud data can improve recognition speed in subsequent processing. This paper studies a novel algorithm for identifying vehicles in the environment using 3D LiDAR to obtain point cloud data. A point cloud compression method based on nearest-neighbor points and boundary extraction from octree voxel center points is applied to the point cloud data, followed by a vehicle point cloud identification algorithm based on image mapping for vehicle recognition. The proposed algorithm is tested on the KITTI dataset, and the results show improved accuracy compared to other methods.
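
    The Python sketch below shows only one generic building block behind such octree/voxel-based simplification, replacing all points that fall in the same voxel with the voxel centre; it is not the paper's algorithm, and the voxel size and random input cloud are assumptions made for illustration.

        import numpy as np

        def voxel_downsample(points, voxel_size):
            """Keep one representative point (the voxel centre) per occupied voxel."""
            idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
            unique_idx = np.unique(idx, axis=0)                    # occupied voxels
            return (unique_idx + 0.5) * voxel_size                 # voxel centres

        # Hypothetical usage on a random cloud standing in for LiDAR data.
        cloud = np.random.rand(10000, 3) * 20.0
        simplified = voxel_downsample(cloud, voxel_size=0.5)
        print(cloud.shape, "->", simplified.shape)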

    Point cloud data compression

    The rapid growth in the popularity of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) experiences has resulted in an exponential surge of three-dimensional data. Point clouds have emerged as a commonly employed representation for capturing and visualizing three-dimensional data in these environments. Consequently, there has been a substantial research effort dedicated to developing efficient compression algorithms for point cloud data. This Master's thesis aims to investigate the current state-of-the-art lossless point cloud geometry compression techniques, explore some of these techniques in more detail, and then propose improvements and/or extensions to enhance them and provide directions for future work on this topic.

    Aggressive saliency-aware point cloud compression

    The increasing demand for accurate representations of 3D scenes, combined with immersive technologies, has made point clouds extensively popular. However, high-quality point clouds require large amounts of data, so effective compression methods are imperative. In this paper, we present a novel, geometry-based, end-to-end compression scheme that combines information on the geometrical features of the point cloud and the user's position, achieving remarkable results for aggressive compression schemes demanding very small bit rates. After separating visible and non-visible points, four saliency maps are calculated, utilizing the point cloud's geometry and distance from the user, the visibility information, and the user's focus point. A combination of these maps results in a final saliency map indicating the overall significance of each point, so that different regions are quantized with different numbers of bits during the encoding process. The decoder reconstructs the point cloud by making use of delta coordinates and solving a sparse linear system. Evaluation studies and comparisons with the geometry-based point cloud compression (G-PCC) algorithm by the Moving Picture Experts Group (MPEG), carried out for a variety of point clouds, demonstrate that the proposed method achieves significantly better results at small bit rates.
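
    A minimal sketch of the general idea of spending more bits on salient regions is given below; the per-point saliency values are assumed to be already computed and normalised to [0, 1], and the bit-depth mapping is an illustrative choice rather than the scheme used in the paper.

        import numpy as np

        def saliency_quantize(points, saliency, min_bits=4, max_bits=10):
            """Quantize coordinates with more bits where saliency is high."""
            lo, hi = points.min(axis=0), points.max(axis=0)
            norm = (points - lo) / (hi - lo)                       # normalise to [0, 1]
            bits = np.round(min_bits + saliency * (max_bits - min_bits)).astype(int)
            levels = (2 ** bits - 1)[:, None]                      # levels per point
            quantized = np.round(norm * levels) / levels
            return quantized * (hi - lo) + lo                      # back to original range

        # Hypothetical usage with random points and random saliency values.
        pts = np.random.rand(500, 3) * 10.0
        rec = saliency_quantize(pts, np.random.rand(500))
        print("mean abs error:", np.abs(rec - pts).mean())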

    Evaluating Point Cloud Quality via Transformational Complexity

    Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds when references are available. Drawing on cognitive science research and the intuition of the human visual system (HVS), the difference between the expected perceptual result and the actual perceptual reproduction in the visual center of the cerebral cortex indicates subjective quality degradation. In this paper, we therefore derive point cloud quality by measuring the complexity of transforming the distorted point cloud back to its reference, which in practice can be approximated by the code length of one point cloud when the other is given. For this purpose, we first segment the reference and the distorted point cloud into a series of local patch pairs based on a 3D Voronoi diagram. Next, motivated by predictive coding theory, we utilize a space-aware vector autoregressive (SA-VAR) model to encode the geometry and color channels of each reference patch, both with and without the distorted patch. Specifically, supposing that the residual errors follow multivariate Gaussian distributions, we calculate the self-complexity of the reference and the transformational complexity between the reference and the distorted sample via covariance matrices. Besides the complexity terms, the prediction terms generated by SA-VAR are introduced as an auxiliary feature to promote the final quality prediction. Extensive experiments on five public point cloud quality databases demonstrate that the transformational-complexity-based distortion metric (TCDM) produces state-of-the-art (SOTA) results, and ablation studies on its key modules and parameters further show that the metric generalizes to various scenarios with consistent performance.
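
    Assuming Gaussian residuals as the paper does, the code length of such residuals can be approximated from the log-determinant of their covariance matrix; the Python sketch below only illustrates this complexity term, with hypothetical residuals standing in for the SA-VAR prediction errors.

        import numpy as np

        def gaussian_code_length(residuals):
            """Approximate bits per sample for Gaussian-distributed residuals."""
            k = residuals.shape[1]
            sigma = np.cov(residuals, rowvar=False) + 1e-9 * np.eye(k)  # keep positive definite
            _, logdet = np.linalg.slogdet(sigma)
            entropy_nats = 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)
            return entropy_nats / np.log(2)   # nats -> bits

        # Hypothetical residuals: a tighter fit (smaller covariance) costs fewer bits.
        good_fit = np.random.randn(1000, 3) * 0.1
        poor_fit = np.random.randn(1000, 3) * 1.0
        print(gaussian_code_length(good_fit), "<", gaussian_code_length(poor_fit))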

    Characterization of the response of quasi-periodic masonry: geometrical investigation, homogenization and application to the Guimarães castle, Portugal

    In many countries, historical buildings were built with masonry walls constituted by random assemblages of stones of variable dimensions and shapes. The analysis of historic masonry structures often requires complex and expensive computational tools that in many cases are difficult to handle, given this large variability of masonry. The present paper validates a methodology for the characterization of the ultimate response of quasi-periodic masonry. For this purpose, the behaviour at collapse of a wall at the Guimarães castle in Portugal is investigated by means of a rigid-plastic homogenization procedure, accounting for the actual disposition of the blocks constituting the walls and the texture irregularity given by the variability of the block dimensions. A detailed geometric survey is conducted by means of laser scanning, allowing for a precise characterization of the dimensions and disposition of the blocks. After a simplification of the geometry and assuming mortar joints reduced to interfaces, homogenized masonry in- and out-of-plane strength domains are evaluated on a number of Representative Volume Elements (RVEs) of different sizes sampled on the walls of the castle. Strength domains are obtained using a Finite Element (FE) limit analysis approach with a heterogeneous discretization of the RVEs with triangular elements representing units and interfaces (mortar joints), at different orientations of the principal actions with respect to the horizontal direction. The role played by vertical compression is also investigated, considering the cases of masonry with weak and with strong mortar. Finally, a series of limit analyses are carried out at the structural level, using two different FE numerical models of the so-called Alcaçova wall, a representative perimeter wall of the castle. The first model is built with a heterogeneous material and the second model is built with a homogeneous material obtained through the homogenization procedure performed previously. The purpose is to determine the reliability of the results, in terms of limit load and failure mechanism, for the homogenized model and to compare these results to the ones obtained with the heterogeneous model.