
    QuadStack: An Efficient Representation and Direct Rendering of Layered Datasets

    We introduce QuadStack, a novel algorithm for volumetric data compression and direct rendering. Our algorithm exploits the data redundancy often found in layered datasets, which are common in science and engineering fields such as geology, biology, mechanical engineering, and medicine. QuadStack first compresses the volumetric data into vertical stacks, which are then compressed into a quadtree that identifies and represents the layered structures at its internal nodes. The associated data (color, material, density, etc.) and the shape of these layer structures are decoupled and encoded independently, leading to high compression rates (4× to 54× relative to the original voxel model memory footprint in our experiments). We also introduce an algorithm for value retrieval from the QuadStack representation and show that the access has logarithmic complexity. Because of this fast access, QuadStack is suitable for efficient data representation and direct rendering. We show that our GPU implementation performs comparably in speed with state-of-the-art algorithms (18-79 MRays/s in our implementation), while maintaining a significantly smaller memory footprint.
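
    A minimal sketch of the access pattern the abstract describes: descend a quadtree over the (x, y) domain to reach a vertical stack of layers, then binary-search the stack for the height z; both steps are logarithmic. The node layout, the Layer fields, and the child indexing below are illustrative assumptions, not the paper's actual encoding.

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        // Hypothetical layer entry: the height at which the layer ends and its material id.
        struct Layer { float top; std::uint16_t material; };

        // Hypothetical quadtree node over the (x, y) domain; leaves hold a vertical layer stack.
        struct QuadNode {
            bool leaf = false;
            std::vector<Layer> stack;        // sorted by 'top'; used only in leaves
            const QuadNode* child[4] = {};   // indexed by 2 * (y >= cy) + (x >= cx)
        };

        // Descend to the leaf covering (x, y), then binary-search its stack for height z.
        // Both steps are logarithmic, which is where the fast access comes from.
        std::uint16_t sampleMaterial(const QuadNode* node, float x, float y, float z,
                                     float cx, float cy, float half)
        {
            while (!node->leaf) {
                int qx = x >= cx ? 1 : 0;
                int qy = y >= cy ? 1 : 0;
                node = node->child[2 * qy + qx];
                half *= 0.5f;                      // each child covers a quarter of the area
                cx += qx ? half : -half;
                cy += qy ? half : -half;
            }
            auto it = std::lower_bound(node->stack.begin(), node->stack.end(), z,
                                       [](const Layer& l, float h) { return l.top < h; });
            return it != node->stack.end() ? it->material : 0;   // 0 = empty space above the stack
        }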

    Hybrid Sample-based Surface Rendering

    The performance of rasterization-based rendering on current GPUs strongly depends on the ability to avoid overdraw and to prevent rendering triangles smaller than the pixel size. Otherwise, the rates at which high-resolution polygon models can be displayed are affected significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative rendering pipeline implementation that uses rasterization and ray-casting in every frame simultaneously to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure which gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most suitable rendering technique can be selected at run time for each part. For very large triangle meshes, our method can outperform pure rasterization and requires a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to efficiently render isosurfaces in scalar volume fields. The compactness of the data structure allows rendering from GPU memory when alternative techniques already require extensive paging.
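
    A hedged sketch of the run-time choice mentioned above: for each part of the subdivided model, estimate the projected triangle size and fall back to ray-casting when triangles become subpixel. The heuristic and the Part fields are assumptions for illustration; the paper's actual selection criterion may differ.

        #include <cmath>

        // Hypothetical per-part heuristic: rasterize while projected triangles stay above
        // roughly a pixel, otherwise switch to ray-casting the sample-based structure.
        enum class Renderer { Rasterize, RayCast };

        struct Part {
            float avgTriangleArea;   // average world-space triangle area inside this part
        };

        Renderer choose(const Part& p, float distanceToCamera, float pixelsPerUnitAtUnitDistance)
        {
            // Approximate projected edge length of a typical triangle, in pixels.
            float scale = pixelsPerUnitAtUnitDistance / distanceToCamera;
            float projectedEdgePx = std::sqrt(p.avgTriangleArea) * scale;
            return projectedEdgePx >= 1.0f ? Renderer::Rasterize : Renderer::RayCast;
        }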

    A hybrid representation for modeling, interactive editing, and real-time visualization of terrains with volumetric features

    Terrain rendering is a crucial part of many real-time applications. The easiest way to process and visualize terrain data in real time is to constrain the terrain model in several ways. This decreases the amount of data to be processed and the processing power needed, but at the cost of expressivity and the ability to create complex terrains. The most popular terrain representation is a regular 2D grid, where the vertices are displaced in a third dimension by a displacement map, called a heightmap. This is the simplest way to represent terrain, and although it allows fast processing, it cannot model terrains with volumetric features. Volumetric approaches sample the 3D space by subdividing it into a 3D grid and represent the terrain as occupied voxels. They can represent volumetric features, but they require computationally intensive algorithms for rendering and their memory requirements are high. We propose a novel representation that combines the voxel and heightmap approaches, and is expressive enough to allow creating terrains with caves, overhangs, cliffs, and arches, yet efficient enough to allow terrain editing, deformations, and rendering in real time.
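
    A minimal sketch of one way such a hybrid representation could be queried: the heightmap answers most columns, and sparse voxel bricks override it only in cells that contain volumetric features. All names, the brick layout, and the keying scheme are assumptions, not the paper's data structures.

        #include <algorithm>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        // Hypothetical hybrid terrain: a regular heightmap plus sparse voxel bricks that
        // override it only in cells containing volumetric features (caves, overhangs, arches).
        struct VoxelBrick { int res; std::vector<std::uint8_t> occupancy; };   // res^3 voxels

        struct HybridTerrain {
            int width = 0, depth = 0;
            std::vector<float> height;                            // width * depth heightmap
            std::unordered_map<std::uint64_t, VoxelBrick> bricks; // keyed by (cellX, cellY)
            float cellSize = 1.0f, maxHeight = 64.0f;

            // Occupancy query; assumes (x, y) lies inside the terrain domain and z >= 0.
            bool solid(float x, float y, float z) const {
                int cx = int(x / cellSize), cy = int(y / cellSize);
                std::uint64_t key = (std::uint64_t(std::uint32_t(cx)) << 32) | std::uint32_t(cy);
                auto it = bricks.find(key);
                if (it == bricks.end())                   // ordinary column: the heightmap decides
                    return z <= height[cy * width + cx];
                const VoxelBrick& b = it->second;         // feature cell: the voxels decide
                int vx = int((x / cellSize - cx) * b.res);
                int vy = int((y / cellSize - cy) * b.res);
                int vz = int(std::min(z / maxHeight, 0.999f) * b.res);
                return b.occupancy[(vz * b.res + vy) * b.res + vx] != 0;
            }
        };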

    Efficient Hybrid Image Warping for High Frame-Rate Stereoscopic Rendering

    Modern virtual reality simulations require a consistently high frame rate from the rendering engine. They may also require very low latency and stereo images. Previous rendering engines for virtual reality applications have exploited spatial and temporal coherence by using image warping to re-use previous frames or to render a stereo pair at lower cost than running the full render pipeline twice. However, these previous approaches have shown artifacts or have not scaled well with image size. We present a new image-warping algorithm with several novel contributions: an adaptive grid-generation algorithm that builds proxy geometry for image warping; a low-pass hole-filling algorithm to address disocclusion; and support for transparent surfaces by efficiently ray casting transparent fragments stored in the per-pixel linked lists of an A-buffer. We evaluate our algorithm with a variety of challenging test cases. The results show that it achieves better quality image warping than state-of-the-art techniques and that it can support transparent surfaces effectively. Finally, we show that our algorithm can warp images at rates suitable for practical use in a variety of applications on modern virtual reality equipment.
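
    For context, a bare-bones per-pixel reprojection step, the operation image warping builds on: unproject a pixel using its stored depth, then project it with the new camera. This is not the paper's adaptive-grid warp, hole filling, or A-buffer handling; the matrix layout and names are assumptions.

        #include <array>

        // Simplified reprojection of a single pixel; matrices are column-major 4x4.
        struct Vec4 { float x, y, z, w; };
        using Mat4 = std::array<float, 16>;

        Vec4 mul(const Mat4& m, const Vec4& v) {
            return { m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
                     m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
                     m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
                     m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w };
        }

        // Warp pixel (px, py) with depth d in [0, 1] from the old view into the new one.
        // Returns the new pixel position in x/y and the new depth in z.
        Vec4 reproject(float px, float py, float d, int w, int h,
                       const Mat4& invViewProjOld, const Mat4& viewProjNew)
        {
            Vec4 ndc  = { 2.0f * (px + 0.5f) / w - 1.0f,
                          2.0f * (py + 0.5f) / h - 1.0f,
                          2.0f * d - 1.0f, 1.0f };
            Vec4 p    = mul(invViewProjOld, ndc);                     // back to world space
            p         = { p.x / p.w, p.y / p.w, p.z / p.w, 1.0f };
            Vec4 clip = mul(viewProjNew, p);                          // into the new camera
            return { (clip.x / clip.w * 0.5f + 0.5f) * w,
                     (clip.y / clip.w * 0.5f + 0.5f) * h,
                     clip.z / clip.w * 0.5f + 0.5f, 1.0f };
        }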

    Improving Efficiency for CUDA-based Volume Rendering by Combining Segmentation and Modified Sampling Strategies

    The objective of this paper is to present a speed-up method that improves the rendering speed of ray casting while still producing high-quality images. Ray casting is the most commonly used volume rendering algorithm and is well suited to parallel processing. To improve the efficiency of parallel processing, the Compute Unified Device Architecture (CUDA) platform is used. The speed-up method uses improved workload-allocation and sampling strategies tailored to CUDA's characteristics. To implement this method, the optimal number of segments for each ray is dynamically selected based on the change of the corresponding visual angle, and each segment is processed by a distinct thread processor. In addition, each segment is sampled with a different quantity and density of samples according to a distance weight. Rendering-speed results show that our method achieves an average 70% improvement in speed, and up to a 145% increase in some special cases, compared to conventional ray casting on the Graphics Processing Unit (GPU). The speed-up ratios show that this method effectively improves the factors that influence rendering efficiency. This rendering performance makes the method well suited to real-time 3-D reconstruction.
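
    A hedged sketch of the distance-weighted sampling idea: the ray is split into segments, and nearer segments receive more samples than farther ones. In the paper each segment is handled by its own CUDA thread; here only the per-segment sample counts are computed on the host, and the weighting function is an assumption.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // Split a ray into equal-length segments and assign more samples to nearer ones.
        // 'falloff' controls how quickly the sampling density drops with distance.
        std::vector<int> samplesPerSegment(float tNear, float tFar, int segments,
                                           int baseSamples, float falloff)
        {
            std::vector<int> counts(segments);
            float len = (tFar - tNear) / segments;
            for (int i = 0; i < segments; ++i) {
                float mid = tNear + (i + 0.5f) * len;               // segment midpoint distance
                float weight = 1.0f / (1.0f + falloff * mid);       // nearer segments weigh more
                counts[i] = std::max(1, int(std::lround(baseSamples * weight)));
            }
            return counts;
        }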

    Scalable visualization of spatial data in 3D terrain

    Designing visualizations of spatial data in 3D terrain is challenging because various heterogeneous data aspects need to be considered, including the terrain itself, multiple data attributes, and data uncertainty. It is hardly possible to visualize these data at full detail in a single image. Therefore, this thesis devises a scalable visualization approach that emphasizes the relevant information while attenuating less relevant information, so that context is still provided. In this context, a novel concept for visualizing spatial data in 3D terrain and several software and hardware solutions are proposed.

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently supports only static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where the deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example, a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The resulting deformations are demonstrated in massive dynamic volumetric scenes.
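
    A small sketch of the control-skeleton mapping mentioned above, using plain linear blend skinning of volume-element positions. The fixed four-influence layout and all names are illustrative assumptions; the thesis's hierarchical deformation scheme is more involved than this.

        #include <array>

        // Linear blend skinning of one volume element against the current bone poses.
        struct Vec3 { float x, y, z; };
        struct Bone { float m[12]; };                    // 3x4 affine transform, row-major

        struct VoxelSample {
            Vec3 rest;                                   // rest-pose position of the element
            std::array<int, 4>   bone;                   // influencing bones
            std::array<float, 4> weight;                 // skinning weights, summing to 1
        };

        Vec3 transform(const Bone& b, const Vec3& p) {
            return { b.m[0] * p.x + b.m[1] * p.y + b.m[2]  * p.z + b.m[3],
                     b.m[4] * p.x + b.m[5] * p.y + b.m[6]  * p.z + b.m[7],
                     b.m[8] * p.x + b.m[9] * p.y + b.m[10] * p.z + b.m[11] };
        }

        // Blend the bone transforms of one element and return its deformed position.
        Vec3 skin(const VoxelSample& v, const Bone* bones) {
            Vec3 out{0.0f, 0.0f, 0.0f};
            for (int i = 0; i < 4; ++i) {
                Vec3 t = transform(bones[v.bone[i]], v.rest);
                out.x += v.weight[i] * t.x;
                out.y += v.weight[i] * t.y;
                out.z += v.weight[i] * t.z;
            }
            return out;
        }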

    Advances in massive model visualization in the CYBERSAR project

    We provide a survey of the major results obtained within the CYBERSAR project in the area of massive data visualization. Despite the impressive improvements in graphics and computational hardware performance, interactive visualization of massive models remains a challenging problem. To address it, we developed methods that exploit the programmability of latest-generation graphics hardware and combine coarse-grained multiresolution models, chunk-based data management with compression, incremental view-dependent level-of-detail selection, and visibility culling. The models that can be interactively rendered with our methods range from multi-gigabyte datasets for general 3D meshes or scalar volumes to terabyte-sized datasets in the restricted 2.5D case of digital terrain models. Such performance enables novel ways of exploring massive datasets. In particular, we have demonstrated the capability of driving innovative light-field displays capable of giving multiple freely moving naked-eye viewers the illusion of seeing and manipulating massive 3D objects with continuous viewer-independent parallax.
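
    A hedged sketch of view-dependent level-of-detail selection with visibility culling over a coarse-grained multiresolution hierarchy: a node is drawn when its object-space error, projected to screen space, falls below a pixel tolerance, and refined otherwise. The node layout, the error projection, and the stubbed frustum test are assumptions for illustration, not the project's implementation.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Node {
            float center[3];
            float radius;              // bounding-sphere radius
            float error;               // object-space geometric error of this resolution
            std::vector<Node> children;
        };

        // Placeholder visibility test; a real implementation checks the bounding sphere
        // against the view-frustum planes.
        bool outsideFrustum(const Node&) { return false; }

        // Draw a node when its projected error is small enough; otherwise recurse.
        void selectLOD(const Node& n, const float eye[3], float pixelsPerUnitError,
                       float tolerancePx, std::vector<const Node*>& drawList)
        {
            if (outsideFrustum(n)) return;                       // visibility culling
            float dx = n.center[0] - eye[0];
            float dy = n.center[1] - eye[1];
            float dz = n.center[2] - eye[2];
            float dist = std::max(std::sqrt(dx * dx + dy * dy + dz * dz) - n.radius, 1e-3f);
            float screenError = n.error * pixelsPerUnitError / dist;
            if (screenError <= tolerancePx || n.children.empty())
                drawList.push_back(&n);                          // coarse enough: draw this chunk
            else
                for (const Node& c : n.children)
                    selectLOD(c, eye, pixelsPerUnitError, tolerancePx, drawList);
        }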

    CYBERSAR

    The project aims at setting up an advanced cyberinfrastructure based on dedicated optical networks to support collaborative research applications. The aim of the CYBERSAR computational infrastructure is to support innovative computational applications by using leading-edge hardware and technological solutions, and to provide an experimental platform for research on the enabling technologies that will power next-generation cyberinfrastructures. In particular, in the Visual Computing Group, we study techniques for processing and rendering very large-scale 3D datasets on innovative large-scale displays.