
    Poster: Making Edge-assisted LiDAR Perceptions Robust to Lossy Point Cloud Compression

    Real-time light detection and ranging (LiDAR) perception tasks, e.g., 3D object detection and simultaneous localization and mapping, are computationally intensive for resource-limited mobile devices and are often offloaded to the edge. Offloading LiDAR perception requires compressing the raw sensor data, and lossy compression is used to reduce the data volume efficiently. Lossy compression, however, degrades the quality of LiDAR point clouds and consequently lowers perception performance. In this work, we present an interpolation algorithm that improves the quality of a LiDAR point cloud to mitigate the perception performance loss caused by lossy compression. The algorithm targets the range image (RI) representation of a point cloud and interpolates points in the RI based on depth gradients. Compared to existing image interpolation algorithms, our algorithm shows a better qualitative result when the point cloud is reconstructed from the interpolated RI. Along with these preliminary results, we also describe the next steps of the current work. Comment: extended abstract of 2 pages, 2 figures, 1 table.
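    The poster's abstract does not spell out the interpolation rule, so the following is only a minimal sketch of gradient-aware range-image interpolation: it vertically upsamples a dense H x W range image of depths, averaging vertical neighbours where the depth gradient is small (likely the same surface) and copying the nearer neighbour across large gradients (likely an occlusion boundary). The upsampling pattern, the threshold, and the fallback rule are assumptions for illustration, not the authors' method.

        import numpy as np

        def interpolate_range_image(ri, grad_thresh=0.5):
            # Hypothetical sketch: insert one interpolated row between every
            # pair of original rows of a range image (H x W array of depths).
            h, w = ri.shape
            out = np.zeros((2 * h - 1, w), dtype=ri.dtype)
            out[::2] = ri                                   # keep original rows
            upper, lower = ri[:-1], ri[1:]                  # vertical neighbours
            grad = np.abs(upper - lower)                    # vertical depth gradient
            mean = 0.5 * (upper + lower)                    # smooth-surface estimate
            nearer = np.where(upper < lower, upper, lower)  # nearer point at a jump
            out[1::2] = np.where(grad < grad_thresh, mean, nearer)
            return out

    The rationale for the gradient test is that averaging across a large depth jump would place interpolated points in free space between two separate surfaces, which is exactly the kind of artefact a perception model should not see.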

    Data-Importance-Aware Bandwidth-Allocation Scheme for Point-Cloud Transmission in Multiple LIDAR Sensors

    This paper addresses bandwidth allocation to multiple light detection and ranging (LIDAR) sensors for smart monitoring, in which only a limited communication capacity is available to transmit a large volume of point-cloud data from the sensors to an edge server in real time. To deal with the limited capacity of the communication channel, we propose a bandwidth-allocation scheme that assigns one of multiple point-cloud compression formats to each LIDAR sensor in accordance with the spatial importance of the point-cloud data transmitted by the sensor. Spatial importance is determined by estimating how likely objects such as cars, trucks, bikes, and pedestrians are to exist in a region, since regions where objects are more likely to exist are more useful for smart monitoring. A numerical study using a real point-cloud dataset obtained at an intersection indicates that the proposed scheme is superior to the benchmarks in terms of the distribution of data volumes among LIDAR sensors and the quality of the point-cloud data received by the edge server.
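    The abstract does not give the allocation rule itself; the hypothetical greedy sketch below only illustrates the idea of importance-aware format assignment. Every sensor starts at the coarsest (cheapest) compression format, and the remaining channel capacity is spent upgrading sensors in order of decreasing spatial importance. The format bitrates, the capacity units, and the greedy strategy are assumptions, not the paper's scheme.

        def allocate_formats(importance, format_rates, capacity):
            # importance   : dict sensor_id -> spatial importance (higher = more useful)
            # format_rates : per-format bitrates, sorted coarse -> fine, e.g. [5, 15, 40]
            # capacity     : total uplink capacity, in the same units as format_rates
            level = {s: 0 for s in importance}          # every sensor starts coarsest
            used = len(importance) * format_rates[0]
            # upgrade the most important sensors first, while capacity allows
            for s in sorted(importance, key=importance.get, reverse=True):
                while level[s] + 1 < len(format_rates):
                    extra = format_rates[level[s] + 1] - format_rates[level[s]]
                    if used + extra > capacity:
                        break
                    level[s] += 1
                    used += extra
            return level, used

    For example, allocate_formats({'gate': 0.9, 'lot': 0.2}, [5, 15, 40], capacity=60) would give the 'gate' sensor the finest format and leave 'lot' one step above the coarsest, using 55 of the 60 available units.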

    Point Cloud Compression and Low Latency Streaming

    Title from PDF of title page, viewed May 16, 2018. Thesis advisor: Zhu Li. Vita. Includes bibliographical references (pages 25-26). Thesis (M.S.)--School of Computing and Engineering, University of Missouri--Kansas City, 2017.
    With the commoditization of 3D depth sensors, we can now easily model real objects and scenes in the digital domain, which can then be used for a variety of applications in gaming, animation, virtual reality, immersive communication, etc. Modern sensors are capable of capturing objects in very high detail and scenes covering large areas, and a capture may thus include millions of points. These point data usually occupy a large amount of storage space or require high bandwidth in the case of real-time transmission. Thus, efficient compression of these huge point clouds becomes necessary. Point clouds are often organized and compressed with octree-based structures. The octree subdivision sequence is typically serialized into a sequence of bytes that is subsequently entropy encoded using range coding, arithmetic coding, or other methods. Such octree-based algorithms are efficient only up to a certain level of detail, as their run time is exponential in the number of subdivision levels. In addition, the compression efficiency diminishes as the number of subdivision levels increases. In this work we present an alternative way to partition the point cloud data: the point cloud is divided using kd-tree binary partitioning of the data, instead of the octree's space-partitioning method, to form a base layer. Within each base-layer leaf node, the distribution of points is considered and the points are projected onto a 2D plane based on the flatness of the node's points. Octree- and quadtree-based partitioning is then used to convert the data into bitstreams. These point cloud bitstreams are scalable, since only a specific number of kd nodes is needed at a time for a specific point of view. The use case is navigation in autonomous vehicles, which requires point cloud information up to a specific distance at different speeds. These scalable bitstreams of kd nodes can be used in real-time transmission with low latency. Results show that compression performance is improved for geometry compression of point clouds, and a scalable low-latency streaming model is demonstrated for the navigation use case.
    Contents: Introduction -- Background -- Experimental and computational details -- Conclusion -- Appendix
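    As a small illustration of the kd-tree data partition that forms the base layer, the sketch below recursively splits the point set at the median of its widest axis until each leaf holds at most a fixed number of points; each leaf could then be projected and coded with a quadtree or octree as described above. The leaf-size bound and the median-split rule are assumptions, not necessarily the thesis's exact construction.

        import numpy as np

        def kd_partition(points, max_leaf_size=1024):
            # points: (N, 3) array of xyz coordinates; returns a list of leaf arrays.
            leaves, stack = [], [points]
            while stack:
                pts = stack.pop()
                if len(pts) <= max_leaf_size:
                    leaves.append(pts)              # base-layer leaf node
                    continue
                axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # widest axis
                order = np.argsort(pts[:, axis])
                mid = len(pts) // 2
                stack.append(pts[order[:mid]])      # lower half of the data
                stack.append(pts[order[mid:]])      # upper half of the data
            return leaves

    Because the split is on the data rather than on space, every leaf carries roughly the same number of points, which is what makes it natural to stream only the kd nodes needed for a given viewpoint or driving distance.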

    Coder Source Code

    Point clouds are representations of three-dimensional (3D) objects in the form of a sample of points on their surface. Point clouds are receiving increased attention from academia and industry due to their potential for many important applications, such as real-time 3D immersive telepresence, automotive and robotic navigation, as well as medical imaging. Compared to traditional video technology, point cloud systems allow free viewpoint rendering, as well as mixing of natural and synthetic objects. However, this improved user experience comes at the cost of increased storage and bandwidth requirements, as point clouds are typically represented by the geometry and colour (texture) of millions up to billions of 3D points. For this reason, major efforts are being made to develop efficient point cloud compression schemes. However, the task is very challenging, especially for dynamic point clouds (sequences of point clouds), due to the irregular structure of point clouds (the number of 3D points may change from frame to frame, and the points within each frame are not uniformly distributed in 3D space).
    To standardize point cloud compression (PCC) technologies, the Moving Picture Experts Group (MPEG) launched a call for proposals in 2017. As a result, three point cloud compression technologies were developed: surface point cloud compression (S-PCC) for static point cloud data, video-based point cloud compression (V-PCC) for dynamic content, and LIDAR point cloud compression (L-PCC) for dynamically acquired point clouds. Later, L-PCC and S-PCC were merged under the name geometry-based point cloud compression (G-PCC).
    The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of V-PCC. The objectives of the project are to:
    1. O1: build analytical models that accurately describe the effect of the geometry and colour quantization of a point cloud on the bit rate and distortion;
    2. O2: use O1 to develop fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information;
    3. O3: implement a compression scheme for dynamic point clouds that exploits O2 to outperform the state-of-the-art in terms of rate-distortion performance. The target is to reduce the bit rate by at least 20% for the same reconstruction quality;
    4. O4: provide multi-disciplinary training to the researcher in algorithm design, metaheuristic optimisation, computer graphics, media production, and leadership and management skills.
    As part of O3, this deliverable gives the source code of the algorithms used in the project to optimize the rate-distortion performance of V-PCC.
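    Objective O2 can be pictured as a constrained search over geometry and colour quantization settings. The toy sketch below is not the project's algorithm: it exhaustively scans candidate quantization-parameter pairs and keeps the pair with the lowest modelled distortion whose modelled rate fits the bit budget. rate_model and dist_model are hypothetical stand-ins for the analytical models of objective O1.

        import itertools

        def allocate_bits(qp_geom_set, qp_col_set, rate_model, dist_model, budget):
            # rate_model(qp_g, qp_c) -> bits and dist_model(qp_g, qp_c) -> distortion
            # are assumed callables (e.g. fitted analytical models).
            best = None
            for qp_g, qp_c in itertools.product(qp_geom_set, qp_col_set):
                r, d = rate_model(qp_g, qp_c), dist_model(qp_g, qp_c)
                if r <= budget and (best is None or d < best[2]):
                    best = (qp_g, qp_c, d, r)
            return best   # (qp_geometry, qp_colour, distortion, rate), or None

    An exhaustive scan like this is only practical for small candidate sets, which is precisely why O2 calls for fast search algorithms driven by the O1 models rather than by repeated encoding.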

    Report on the Bit Allocation Solution

    Point clouds are representations of three-dimensional (3D) objects in the form of a sample of points on their surface. Point clouds are receiving increased attention from academia and industry due to their potential for many important applications, such as real-time 3D immersive telepresence, automotive and robotic navigation, as well as medical imaging. Compared to traditional video technology, point cloud systems allow free viewpoint rendering, as well as mixing of natural and synthetic objects. However, this improved user experience comes at the cost of increased storage and bandwidth requirements, as point clouds are typically represented by the geometry and colour (texture) of millions up to billions of 3D points. For this reason, major efforts are being made to develop efficient point cloud compression schemes. However, the task is very challenging, especially for dynamic point clouds (sequences of point clouds), due to the irregular structure of point clouds (the number of 3D points may change from frame to frame, and the points within each frame are not uniformly distributed in 3D space).
    To standardize point cloud compression (PCC) technologies, the Moving Picture Experts Group (MPEG) launched a call for proposals in 2017. As a result, three point cloud compression technologies were developed: surface point cloud compression (S-PCC) for static point cloud data, video-based point cloud compression (V-PCC) for dynamic content, and LIDAR point cloud compression (L-PCC) for dynamically acquired point clouds. Later, L-PCC and S-PCC were merged under the name geometry-based point cloud compression (G-PCC).
    The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of V-PCC. The objectives of the project are to:
    1. O1: build analytical models that accurately describe the effect of the geometry and colour quantization of a point cloud on the bit rate and distortion;
    2. O2: use O1 to develop fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information;
    3. O3: implement a compression scheme for dynamic point clouds that exploits O2 to outperform the state-of-the-art in terms of rate-distortion performance. The target is to reduce the bit rate by at least 20% for the same reconstruction quality;
    4. O4: provide multi-disciplinary training to the researcher in algorithm design, metaheuristic optimisation, computer graphics, media production, and leadership and management skills.
    This deliverable reports on the work undertaken in this project to achieve objective O2. Section 1 introduces the rate-distortion optimization problem for V-PCC. Section 2 reviews previous work. Section 3 presents our fast search algorithms. Section 4 gives experimental results. Section 5 gives our conclusions.
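    One standard way to make such a bit-allocation search fast is Lagrangian relaxation: with a common multiplier, every coding unit can pick its own (rate, distortion) operating point independently, and the multiplier is bisected until the summed rate meets the budget. Whether the deliverable's fast search works this way is not stated here; the sketch below only illustrates the generic technique, with the per-unit operating points assumed to come from the O1 analytical models.

        def lagrangian_allocation(blocks, budget, iters=40):
            # blocks: list of per-unit operating-point lists, each point a
            # (rate, distortion) pair. Returns one chosen point per unit,
            # or None if the budget is infeasible.
            def pick(lam):
                return [min(pts, key=lambda p: p[1] + lam * p[0]) for pts in blocks]

            lo, hi = 0.0, 1e6
            best = pick(hi)                            # most aggressive compression
            if sum(r for r, _ in best) > budget:
                return None                            # even the cheapest choices overflow
            for _ in range(iters):
                lam = 0.5 * (lo + hi)
                cur = pick(lam)
                if sum(r for r, _ in cur) > budget:
                    lo = lam                           # too many bits: penalise rate more
                else:
                    best, hi = cur, lam                # feasible: try to lower distortion
            return best

    The appeal of this formulation is that the joint search over all units collapses into independent per-unit minimisations, so its cost grows linearly with the number of units rather than exponentially.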

    Proposal to the MPEG 3DG Standardization Committee

    Point clouds are representations of three-dimensional (3D) objects in the form of a sample of points on their surface. Point clouds are receiving increased attention from academia and industry due to their potential for many important applications, such as real-time 3D immersive telepresence, automotive and robotic navigation, as well as medical imaging. Compared to traditional video technology, point cloud systems allow free viewpoint rendering, as well as mixing of natural and synthetic objects. However, this improved user experience comes at the cost of increased storage and bandwidth requirements, as point clouds are typically represented by the geometry and colour (texture) of millions up to billions of 3D points. For this reason, major efforts are being made to develop efficient point cloud compression schemes. However, the task is very challenging, especially for dynamic point clouds (sequences of point clouds), due to the irregular structure of point clouds (the number of 3D points may change from frame to frame, and the points within each frame are not uniformly distributed in 3D space).
    To standardize point cloud compression (PCC) technologies, the Moving Picture Experts Group (MPEG) launched a call for proposals in 2017. As a result, three point cloud compression technologies were developed: surface point cloud compression (S-PCC) for static point cloud data, video-based point cloud compression (V-PCC) for dynamic content, and LIDAR point cloud compression (L-PCC) for dynamically acquired point clouds. Later, L-PCC and S-PCC were merged under the name geometry-based point cloud compression (G-PCC).
    The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance [i.e., minimize the reconstruction error (distortion) for a given bit budget] of V-PCC. The objectives of the project are to:
    1. O1: build analytical models that accurately describe the effect of the geometry and colour quantization of a point cloud on the bit rate and distortion;
    2. O2: use O1 to develop fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information;
    3. O3: implement a compression scheme for dynamic point clouds that exploits O2 to outperform the state-of-the-art in terms of rate-distortion performance. The target is to reduce the bit rate by at least 20% for the same reconstruction quality;
    4. O4: provide multi-disciplinary training to the researcher in algorithm design, metaheuristic optimisation, computer graphics, media production, and leadership and management skills.
    This deliverable is a proposal to the MPEG 3D Graphics Coding standardization committee, which was submitted on 27 June 2021 and presented to the committee on 13 July 2021 at the 4th WG7 Meeting. The proposal presents results from work undertaken as part of objectives O1, O2, and O3.