20 research outputs found

    Exploiting spatial and temporal coherence in GPU-based volume rendering

    Efficiency is a key aspect in volume rendering: even when powerful graphics hardware is employed, increasing data set sizes and growing demands on visualization techniques outweigh improvements in graphics processor performance. This dissertation examines how spatial and temporal coherence in volume data can be used to optimize volume rendering. Several new approaches for static as well as time-varying data sets are introduced, which exploit different types of coherence in different stages of the volume rendering pipeline. The presented acceleration techniques include empty space skipping using occlusion frustums, a slab-based cache structure for raycasting, and a lossless compression scheme for time-varying data. The algorithms were designed for use with GPU-based volume raycasting and efficiently exploit the features of modern graphics processors, especially stream processing.
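    The empty-space-skipping idea can be illustrated with a generic min/max-brick scheme, in which a ray takes brick-sized steps through regions whose precomputed maximum density lies below the visibility threshold. This is a simplified stand-in for the occlusion-frustum technique described above, not the dissertation's algorithm; all function and parameter names are hypothetical:

```python
import numpy as np

def build_max_blocks(volume, block=8):
    """Precompute the maximum density of each block**3 brick.
    Assumes the volume dimensions are divisible by `block`."""
    z, y, x = volume.shape
    bz, by, bx = z // block, y // block, x // block
    return volume[:bz * block, :by * block, :bx * block] \
        .reshape(bz, block, by, block, bx, block).max(axis=(1, 3, 5))

def first_hit(volume, max_blocks, origin, direction, threshold, block=8, step=0.5):
    """March a ray front to back, skipping bricks whose max density
    is below `threshold`; return the first sample position above it."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    while np.all(pos >= 0) and np.all(pos < volume.shape):
        b = (pos // block).astype(int)
        if max_blocks[tuple(b)] < threshold:
            pos += d * block          # whole brick is empty: take one large step
            continue
        if volume[tuple(pos.astype(int))] >= threshold:
            return pos                # first sample above the iso-threshold
        pos += d * step               # fine-grained sampling inside a full brick
    return None                       # ray left the volume without a hit
```

    The occlusion-frustum approach additionally exploits frame-to-frame (temporal) coherence, which this purely spatial sketch omits.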

    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together in a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending, using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the expert marine-biologist user. We have developed Unwind in collaboration with a team of marine biologists; our system has been deployed in their labs and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
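    The skeleton-extraction step can be sketched in 2-D: solve Laplace's equation inside a fish mask with boundary values 0 at the head and 1 at the tail, then reduce each band of iso-values to its centroid, yielding an ordered piecewise-linear skeleton. This is only a toy of Unwind's 3-D isosurface-averaging approach; the Jacobi solver, the zero value outside the mask, and all names are illustrative assumptions:

```python
import numpy as np

def harmonic_field(mask, head, tail, iters=3000):
    """Jacobi-iterate Laplace's equation inside `mask` with the
    Dirichlet conditions u(head) = 0 and u(tail) = 1."""
    u = np.zeros(mask.shape)
    for _ in range(iters):
        # average of the 4-neighbourhood (6-neighbourhood in 3-D)
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, avg, 0.0)   # clamp outside the mask to 0
        u[head] = 0.0
        u[tail] = 1.0
    return u

def skeleton_from_levels(u, mask, n_joints=8):
    """Approximate each iso-band of u by its centroid; the ordered
    centroids form a piecewise-linear skeleton from head to tail."""
    edges = np.linspace(0.0, 1.0, n_joints + 1)
    joints = []
    for t0, t1 in zip(edges[:-1], edges[1:]):
        sel = mask & (u >= t0) & (u < t1)
        if sel.any():
            joints.append(np.argwhere(sel).mean(axis=0))
    return np.array(joints)
```

    Straightening would then map each cross-section perpendicular to this skeleton onto a straight axis; that resampling step is omitted here.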

    Interactive cutting operations for generating anatomical illustrations from volumetric data sets

    In anatomical illustrations, deformation is often used to increase expressivity, to improve spatial comprehension, and to enable an unobstructed view of otherwise occluded structures. Based on our analysis and classification of deformations frequently found in anatomical textbooks, we introduce a technique for interactively creating such deformations of volumetric data acquired with medical scanners. Our approach exploits the 3D ChainMail algorithm in combination with a GPU-based ray-casting renderer in order to perform deformations. Thus, complex interactive deformations become possible without costly preprocessing or the need to reduce the data set resolution. For cutting operations we provide a template-based interaction technique which supports precise control of the cutting parameters. For commonly used deformation operations we provide adaptable interaction templates, whereas arbitrary deformations can be specified using a point-and-drag interface.
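    The 3D ChainMail algorithm makes such deformations cheap because a displacement propagates only as far as inter-element distance constraints are violated, with no global energy minimization. A minimal 1-D sketch of this propagation (the actual algorithm operates on a 3-D grid with separate stretch and shear bounds; the names and bounds here are illustrative):

```python
from collections import deque

def chainmail_1d(points, moved_idx, new_x, min_d=0.5, max_d=1.5):
    """1-D ChainMail: move one element, then propagate minimal adjustments
    outward so every neighbouring pair stays within [min_d, max_d] apart."""
    pts = list(points)
    pts[moved_idx] = new_x
    queue = deque([moved_idx])
    while queue:
        i = queue.popleft()
        if i + 1 < len(pts):              # right neighbour: clamp the gap
            gap = pts[i + 1] - pts[i]
            if gap > max_d:
                pts[i + 1] = pts[i] + max_d; queue.append(i + 1)
            elif gap < min_d:
                pts[i + 1] = pts[i] + min_d; queue.append(i + 1)
        if i - 1 >= 0:                    # left neighbour: clamp the gap
            gap = pts[i] - pts[i - 1]
            if gap > max_d:
                pts[i - 1] = pts[i] - max_d; queue.append(i - 1)
            elif gap < min_d:
                pts[i - 1] = pts[i] - min_d; queue.append(i - 1)
    return pts

# e.g. pulling the middle element of an evenly spaced chain to the right:
# chainmail_1d([0.0, 1.0, 2.0, 3.0, 4.0], 2, 5.0) → [2.0, 3.5, 5.0, 5.5, 6.0]
```

    Propagation terminates because a just-clamped gap lies exactly on a bound and triggers no further adjustment; this locality is what keeps the method interactive on full-resolution volumes.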

    Slab-Based Raycasting: Efficient Volume Rendering with CUDA

    GPU-based raycasting is the state-of-the-art rendering technique for interactive volume visualization. The ray traversal is usually implemented in a fragment shader, utilizing the hardware in a way that was not originally intended. New programming interfaces for stream processing, such as CUDA, support a more general programming model and the use of additional device features, which are not accessible through traditional shader programming. We propose a slab-based raycasting technique that is modeled specifically to use these features to accelerate volume rendering. This technique is based on experience gained from comparing fragment shader implementations of basic raycasting to implementations directly translated to CUDA kernels. The comparison covers direct volume rendering with a variety of optional features, e.g., gradient and lighting calculations.
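    The slab idea can be sketched in plain Python for an axis-aligned orthographic view: the volume is processed one slab at a time, rays are composited front-to-back within each slab, and saturated rays are retired before the next slab is touched. The actual technique runs as CUDA kernels and caches slab data in fast on-chip memory, which this sketch only mirrors structurally; all names are hypothetical:

```python
import numpy as np

def raycast_slabs(volume, alpha_tf, slab_depth=8):
    """Orthographic front-to-back raycasting processed slab by slab.
    `alpha_tf` maps sample density to opacity (the transfer function)."""
    depth, h, w = volume.shape
    color = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for z0 in range(0, depth, slab_depth):
        active = alpha < 0.95             # early ray termination between slabs
        if not active.any():
            break                         # every ray is saturated: stop fetching
        slab = volume[z0:z0 + slab_depth] # one slab per "kernel launch"
        for z in range(slab.shape[0]):
            s = slab[z]
            a = alpha_tf(s)
            contrib = (1.0 - alpha) * a   # standard front-to-back compositing
            color += np.where(active, contrib * s, 0.0)
            alpha += np.where(active, contrib, 0.0)
    return color, alpha
```

    Retiring rays only at slab boundaries (rather than per sample) is what lets whole slabs be skipped once the accumulated opacity saturates.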

    Slab-Based Raycasting: Efficient Volume Rendering with CUDA, High Performance Graphics

    GPU-based raycasting [Krüger and Westermann 2003] is the state-of-the-art rendering technique for interactive volume visualization. The ray traversal is usually implemented in a fragment shader, utilizing the hardware in a way that was not originally intended. New programming interfaces for stream processing, such as CUDA, support a more general programming model and the use of additional device features, which are not accessible through traditional shader programming. We propose a slab-based raycasting technique that is modeled specifically to use these features to accelerate volume rendering. This technique is based on experience gained from comparing fragment shader implementations of basic raycasting to implementations directly translated to CUDA kernels. The comparison covers direct volume rendering with a variety of optional features, e.g., gradient and lighting calculations.
    Volume Raycasting with CUDA
    • As a preliminary test, we analyzed basic raycasting with full Phong lighting, which requires multiple texture fetches per sample point.
    • GLSL shaders from an existing volume rendering system were ported.
    • Entry and exit points were generated by rendering a proxy geometry.
    • No major speedup was expected from simply translating the raycasting shaders to CUDA kernels, as they use the same hardware.
    • Result: speedups of up to 30 % were reached compared to OpenGL fragment shaders (depending on data set and GPU).

    An Advanced Volume Raycasting Technique using GPU Stream Processing

    GPU-based raycasting is the state-of-the-art rendering technique for interactive volume visualization. The ray traversal is usually implemented in a fragment shader, utilizing the hardware in a way that was not originally intended. New programming interfaces for stream processing, such as CUDA, support a more general programming model and the use of additional device features, which are not accessible through traditional shader programming. In this paper we propose a slab-based raycasting technique that is modeled specifically to use these features to accelerate volume rendering. This technique is based on experience gained from comparing fragment shader implementations of basic raycasting to implementations directly translated to CUDA kernels. The comparison covers direct volume rendering with a variety of optional features, e.g., gradient and lighting calculations. Our findings are supported by benchmarks of typical volume visualization scenarios. We conclude that new stream processing models can only gain a small performance advantage when directly porting the basic raycasting algorithm. However, they can be advantageous through novel acceleration methods which use the hardware features not available to shader implementations.

    The image of the city in the LTV1 broadcast "Adreses" and its audience's perceptions

    The aim of the bachelor thesis "The Image of the City in the LTV1 Broadcast 'Adreses' and Its Audience's Perceptions" is to determine what image of the city the creators of the broadcast "Adreses" have constructed and how the audience perceives it. The research questions are: whether and how the image of the city differs between the broadcast and its audience's perception, and which images are shared and which differ between these two groups. The theoretical part covers Klaus Merten's image theory, environmental communication, environmental journalism, the functions of public service media, and documentary broadcasts on television. The methodological part employs an inductive approach to qualitative content analysis, semiotic analysis, attitude and image measurement with a semantic differential scale, and focus group interviews. The results of the empirical analysis show that although the overall image of the city is positive, the broadcast constructs a negative image of Ķemeri, while the audience perceives both Ķemeri and the new JRT building negatively. Keywords: image of the city, environmental communication, perception, broadcast Adreses, LTV1, audience.

    A GPU-Supported Lossless Compression Scheme for Rendering Time-Varying Volume Data

    Since the size of time-varying volumetric data sets typically exceeds the amount of available GPU and main memory, out-of-core streaming techniques are required to support interactive rendering. To deal with the performance bottlenecks of hard-disk transfer rate and graphics bus bandwidth, we present a hybrid CPU/GPU scheme for lossless compression and data streaming that combines a temporal prediction model, which exploits coherence between time steps, and variable-length coding with a fast block compression algorithm. This combination becomes possible by exploiting the CUDA computing architecture for unpacking and assembling data packets on the GPU. The system allows near-interactive performance even for rendering large real-world data sets with a low signal-to-noise ratio, while not degrading image quality. It uses standard volume raycasting and can be easily combined with existing acceleration methods and advanced visualization techniques.
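    The temporal-prediction idea can be sketched as follows: each time step is encoded as an integer residual against the previous one, and the (mostly zero) residual is then packed by a byte compressor. Here zlib stands in for the variable-length coding and fast block compression of the actual scheme, and the function names are hypothetical; losslessness follows from the exact integer residuals:

```python
import zlib  # stands in for the fast block compressor of the actual scheme
import numpy as np

def compress_timestep(prev, curr):
    """Temporal prediction: encode only the residual against the previous
    time step, then pack it with a general-purpose byte compressor."""
    residual = curr.astype(np.int16) - prev.astype(np.int16)  # exact, no clipping
    return zlib.compress(residual.tobytes())

def decompress_timestep(prev, packet, shape):
    """Invert the prediction: unpack the residual and add it back."""
    residual = np.frombuffer(zlib.decompress(packet), dtype=np.int16).reshape(shape)
    return (prev.astype(np.int16) + residual).astype(np.uint8)
```

    The more coherent two consecutive time steps are, the more zeros the residual contains and the smaller the packet; in the paper's pipeline the unpacking side of this step runs as CUDA kernels on the GPU.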