
    Scalable wavelet-based coding of irregular meshes with interactive region-of-interest support

    This paper proposes a novel functionality in wavelet-based irregular mesh coding: interactive region-of-interest (ROI) support. The proposed approach enables the user to define arbitrary ROIs at the decoder side and to prioritize and decode these regions at arbitrarily high granularity levels. In this context, a novel adaptive wavelet transform for irregular meshes is proposed, which enables: 1) varying the resolution across the surface at arbitrarily fine granularity and 2) dynamic tiling, which adapts the tile sizes to the local sampling densities at each resolution level. The proposed tiling approach enables a rate-distortion-optimal distribution of rate across spatial regions. When limiting the highest-resolution ROI to the visible regions, the fine granularity of the proposed adaptive wavelet transform reduces the required amount of graphics memory by up to 50%. Furthermore, the graphics memory required for an arbitrarily small ROI becomes negligible compared to rendering without ROI support, independent of any tiling decisions. Random access is provided by a novel dynamic tiling approach, which proves particularly beneficial for large models of 10^6 to 10^7 vertices. The experiments show that the dynamic tiling introduces a limited lossless rate penalty compared to an equivalent codec without ROI support. Additionally, rate savings of up to 85% are observed while decoding ROIs of tens of thousands of vertices.
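
    The decoder-side ROI mechanism can be illustrated with a small scheduling sketch. The following C++ fragment is a minimal, hypothetical illustration rather than the authors' codec: it assumes tiles carry axis-aligned bounds and a finest wavelet level, and simply orders the decode queue so that tiles overlapping a user-defined ROI are refined first.

```cpp
// Hypothetical sketch: prioritising wavelet tiles that intersect a user-defined ROI.
// Tile layout and field names are assumptions, not the paper's data structures.
#include <algorithm>
#include <cstdio>
#include <vector>

struct AABB { float min[3], max[3]; };

// Axis-aligned overlap test between a tile's bounds and the ROI.
static bool intersects(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

struct Tile {
    AABB bounds;
    int maxLevel;  // finest wavelet resolution level stored for this tile
};

// Returns tile indices ordered so that ROI tiles are decoded and refined first.
std::vector<size_t> decodeOrder(const std::vector<Tile>& tiles, const AABB& roi) {
    std::vector<size_t> order(tiles.size());
    for (size_t i = 0; i < tiles.size(); ++i) order[i] = i;
    std::stable_sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        return intersects(tiles[a].bounds, roi) > intersects(tiles[b].bounds, roi);
    });
    return order;
}

int main() {
    std::vector<Tile> tiles = {
        {{{0, 0, 0}, {1, 1, 1}}, 5},
        {{{2, 0, 0}, {3, 1, 1}}, 5},
    };
    AABB roi = {{2.5f, 0.5f, 0.5f}, {2.8f, 0.8f, 0.8f}};
    for (size_t idx : decodeOrder(tiles, roi))
        std::printf("decode tile %zu up to level %d\n", idx, tiles[idx].maxLevel);
}
```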

    GPU-based Streaming for Parallel Level of Detail on Massive Model Rendering

    Rendering massive 3D models in real time has long been recognized as a very challenging problem because of the limited computational power and memory space available in a workstation. Most existing rendering techniques, especially level-of-detail (LOD) processing, suffer from their sequential execution nature and do not scale well with the size of the models. We present a GPU-based progressive mesh simplification approach that enables the interactive rendering of large 3D models with hundreds of millions of triangles. Our work contributes to massive-model rendering research in two ways. First, we develop a novel data structure to represent the progressive LOD mesh and design a parallel mesh simplification algorithm tailored to the GPU architecture. Second, we propose a GPU-based streaming approach that adopts a frame-to-frame coherence scheme in order to minimize the high communication cost between the CPU and the GPU. Our results show that the parallel mesh simplification algorithm and the GPU-based streaming approach significantly improve the overall rendering performance.
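
    The frame-to-frame coherence scheme can be sketched as a per-frame delta computation. The C++ below is a hypothetical illustration, not the paper's data structure: it assumes each LOD node has a required detail level per frame and re-sends only nodes whose level changed since the previous frame, which is the general idea behind minimizing CPU-GPU traffic.

```cpp
// Hypothetical sketch: frame-to-frame coherent streaming of LOD refinements.
// Only nodes whose required detail changed since the last frame are re-sent
// to the GPU, keeping the per-frame CPU-GPU transfer small.
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Refinement { int nodeId; int newLevel; };  // placeholder for a refinement batch

std::vector<Refinement> deltaForFrame(
    const std::unordered_map<int, int>& requiredLevel,  // node -> level needed this frame
    std::unordered_map<int, int>& residentLevel)        // node -> level already on the GPU
{
    std::vector<Refinement> uploads;
    for (const auto& [node, level] : requiredLevel) {
        auto it = residentLevel.find(node);
        if (it == residentLevel.end() || it->second != level) {
            uploads.push_back({node, level});  // stream only the changed nodes
            residentLevel[node] = level;
        }
    }
    return uploads;
}

int main() {
    std::unordered_map<int, int> resident;                   // empty GPU cache
    std::unordered_map<int, int> frame1 = {{0, 3}, {1, 2}};
    std::unordered_map<int, int> frame2 = {{0, 3}, {1, 4}};  // only node 1 changes
    std::printf("frame 1 uploads: %zu\n", deltaForFrame(frame1, resident).size());
    std::printf("frame 2 uploads: %zu\n", deltaForFrame(frame2, resident).size());
}
```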

    Coarse-grained Multiresolution Structures for Mobile Exploration of Gigantic Surface Models

    We discuss our experience in creating scalable systems for distributing and rendering gigantic 3D surfaces in web environments and on common handheld devices. Our methods are based on compressed, streamable, coarse-grained multiresolution structures. By combining CPU and GPU compression technology with our multiresolution data representation, we are able to incrementally transfer, locally store, and render extremely detailed 3D mesh models with unprecedented performance on WebGL-enabled browsers as well as on hardware-constrained mobile devices.
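
    As a rough idea of how a coarse-grained multiresolution structure is traversed for rendering, the following hypothetical C++ sketch (not the authors' system) selects patches by descending from coarse to fine levels until a projected screen-space error tolerance is met; the error model and all names are assumptions.

```cpp
// Hypothetical sketch: choosing coarse-grained multiresolution patches so that
// the projected geometric error stays under a pixel tolerance.
#include <cmath>
#include <cstdio>
#include <vector>

struct Patch {
    float geometricError;  // object-space error of this resolution level
    float distance;        // distance from the viewer
    int finerChild;        // index of the refined patch, -1 if already finest
};

// Screen-space error under a simple perspective model (assumed parameters).
static float screenError(const Patch& p, float screenHeightPx, float fovY) {
    return p.geometricError * screenHeightPx / (2.0f * p.distance * std::tan(fovY * 0.5f));
}

// Walk from coarse patches towards finer ones until the tolerance is met.
std::vector<int> selectPatches(const std::vector<Patch>& patches,
                               const std::vector<int>& roots,
                               float tolerancePx, float screenHeightPx, float fovY) {
    std::vector<int> selected;
    std::vector<int> stack(roots);
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        const Patch& p = patches[i];
        if (p.finerChild >= 0 && screenError(p, screenHeightPx, fovY) > tolerancePx)
            stack.push_back(p.finerChild);  // too coarse: descend to the finer level
        else
            selected.push_back(i);          // accurate enough: render this patch
    }
    return selected;
}

int main() {
    std::vector<Patch> patches = {{1.0f, 10.0f, 1}, {0.1f, 10.0f, -1}};
    auto sel = selectPatches(patches, {0}, 1.0f, 1080.0f, 1.0f);
    std::printf("patches selected: %zu\n", sel.size());
}
```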

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments is still one of the major challenges in computer graphics. New data acquisition techniques such as 3D modeling and scanning have drastically increased the complexity of available models and the demand for higher display resolutions in recent years. Most existing acceleration techniques that use a single GPU for rendering suffer from the limited GPU memory budget, time-consuming sequential execution, and the finite display resolution. Recently, people have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided through the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a large tiled-display configuration). However, a multi-GPU workstation may not always deliver the desired rendering performance, due to imbalanced rendering workloads among GPUs and the overhead caused by inter-GPU communication. In this dissertation, I contribute a multi-GPU, multi-display parallel rendering approach for complex 3D environments. The approach supports high-performance, high-quality rendering of static and dynamic 3D environments. A novel parallel load balancing algorithm is developed, based on a screen partitioning strategy, to dynamically balance the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by a novel frame exchanging algorithm that transfers only a small number of image pixels rather than chunks of 3D primitives. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU, multi-display system to accelerate the rendering process.
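
    The screen-partitioning load balancing idea can be sketched as follows. This hypothetical C++ fragment (not the dissertation's algorithm) assumes per-column triangle counts measured in the previous frame and chooses contiguous column spans so that each GPU renders a roughly equal share of triangles.

```cpp
// Hypothetical sketch: rebalancing a screen partition across GPUs using the
// primitive counts observed in the previous frame. Each GPU gets a contiguous
// span of screen columns carrying roughly the same number of triangles.
#include <cstdio>
#include <numeric>
#include <vector>

// Returns the exclusive end column of each GPU's span.
std::vector<int> balanceColumns(const std::vector<long>& trisPerColumn, int numGpus) {
    long total = std::accumulate(trisPerColumn.begin(), trisPerColumn.end(), 0L);
    long target = total / numGpus, acc = 0;
    std::vector<int> splits;
    for (int col = 0; col < static_cast<int>(trisPerColumn.size()); ++col) {
        acc += trisPerColumn[col];
        if (acc >= target * static_cast<long>(splits.size() + 1) &&
            static_cast<int>(splits.size()) + 1 < numGpus)
            splits.push_back(col + 1);  // close this GPU's span at the next column
    }
    splits.push_back(static_cast<int>(trisPerColumn.size()));  // last GPU takes the rest
    return splits;
}

int main() {
    // Skewed workload: most triangles fall in the right half of the screen.
    std::vector<long> tris = {10, 10, 10, 10, 100, 100, 100, 100};
    for (int end : balanceColumns(tris, 2))
        std::printf("partition ends at column %d\n", end);
}
```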

    ViSUS: Visualization Streams for Ultimate Scalability


    Temporal and spatial level of details for dynamic meshes
