
    An Interactive Concave Volume Clipping Method Based on GPU Ray Casting with Boolean Operation

    Volume clipping techniques can reveal inner structures and avoid the difficulty of specifying an appropriate transfer function. We present an interactive concave volume clipping method that implements both rendering and Boolean operations on the GPU. Common analytical convex objects, such as polyhedra and spheres, are described by a small set of parameters, so concave volume clipping with Boolean operations consumes very little video memory on the GPU. The intersection, subtraction, and union operations are implemented on the GPU by converting the 3D Boolean operation into a 1D Boolean operation along each ray. To enhance the visual result, a pseudo-color rendering model is proposed and the Phong illumination model is applied to the clipped surfaces. Users can select a color scheme from several pre-defined or user-specified schemes to obtain clear views of inner anatomical structures. Finally, several experiments were performed on a standard PC with a GeForce FX8600 graphics card. The experimental results show that the three basic Boolean operations are performed correctly and that our approach can clip and visualize volumetric datasets freely at interactive frame rates.
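
    As a rough illustration of the reduction described above, the sketch below intersects a view ray with two analytic spheres and applies Boolean operations to the resulting 1D parameter intervals; the sphere test, the interval routines, and the scene values are illustrative assumptions written in plain C++, not the authors' GPU implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <optional>
#include <vector>

// A convex clip object intersects a ray in at most one interval [t_in, t_out].
struct Interval { double t_in, t_out; };

// Ray vs. sphere (center c, radius r); ray(t) = o + t*d with d normalized.
std::optional<Interval> intersectSphere(const double o[3], const double d[3],
                                        const double c[3], double r) {
    double oc[3] = {o[0] - c[0], o[1] - c[1], o[2] - c[2]};
    double b  = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
    double cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    double disc = b*b - cc;
    if (disc < 0.0) return std::nullopt;
    double s = std::sqrt(disc);
    return Interval{-b - s, -b + s};
}

// 1D Boolean operations between two intervals along the same ray.
std::vector<Interval> intersection(Interval a, Interval b) {
    double lo = std::max(a.t_in, b.t_in), hi = std::min(a.t_out, b.t_out);
    if (lo < hi) return {Interval{lo, hi}};
    return {};
}

std::vector<Interval> subtraction(Interval a, Interval b) {   // a minus b
    std::vector<Interval> out;
    if (a.t_in < b.t_in)   out.push_back({a.t_in, std::min(a.t_out, b.t_in)});
    if (b.t_out < a.t_out) out.push_back({std::max(a.t_in, b.t_out), a.t_out});
    std::vector<Interval> keep;                               // drop empty pieces
    for (const auto& iv : out) if (iv.t_in < iv.t_out) keep.push_back(iv);
    return keep;
}

std::vector<Interval> unionOf(Interval a, Interval b) {
    if (a.t_in > b.t_in) std::swap(a, b);
    if (b.t_in <= a.t_out) return {Interval{a.t_in, std::max(a.t_out, b.t_out)}};
    return {a, b};
}

int main() {
    double o[3] = {0, 0, -5}, d[3] = {0, 0, 1};
    double c1[3] = {0, 0, 0}, c2[3] = {0, 0, 1};
    auto a = intersectSphere(o, d, c1, 2.0);
    auto b = intersectSphere(o, d, c2, 1.0);
    if (a && b)
        for (const auto& iv : subtraction(*a, *b))            // sample only inside
            std::printf("keep [%g, %g]\n", iv.t_in, iv.t_out);
}
```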

    MFA-DVR: Direct Volume Rendering of MFA Models

    3D volume rendering is widely used to reveal insightful intrinsic patterns of volumetric datasets across many domains. However, the complex structures and varying scales of volumetric data make efficiently generating high-quality volume renderings a challenging task. Multivariate functional approximation (MFA) is a new data model that addresses several of the critical challenges: high-order evaluation of both value and derivative anywhere in the spatial domain, compact representation of large-scale volumetric data, and uniform representation of both structured and unstructured data. In this paper, we present MFA-DVR, the first direct volume rendering pipeline utilizing the MFA model, for both structured and unstructured volumetric datasets. We demonstrate improved rendering quality with MFA-DVR on both synthetic and real datasets through a comparative study, and show that MFA-DVR not only generates more faithful volume renderings than local filters but is also faster for high-order interpolation on structured and unstructured datasets. MFA-DVR is implemented in the existing volume rendering pipeline of the Visualization Toolkit (VTK) so that it is accessible to the scientific visualization community.
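
    The MFA model itself is not reproduced here; as a hypothetical stand-in, the sketch below uses a Catmull-Rom cubic over 1D samples to illustrate what "high-order evaluation of both value and derivative anywhere in the domain" means in practice.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Catmull-Rom cubic through samples f[i] at integer positions: evaluates the
// reconstructed value and its analytic derivative at an arbitrary position x.
// (A simple stand-in for a functional model that offers the same capability.)
void evalCubic(const std::vector<double>& f, double x,
               double* value, double* deriv) {
    int i = static_cast<int>(std::floor(x));
    double t = x - i;
    // Clamp so that the needed neighbors i-1 .. i+2 stay in range.
    auto s = [&](int k) {
        int j = std::min(std::max(i + k, 0), static_cast<int>(f.size()) - 1);
        return f[j];
    };
    double p0 = s(-1), p1 = s(0), p2 = s(1), p3 = s(2);
    // Catmull-Rom basis, written in Horner form.
    double a = -0.5*p0 + 1.5*p1 - 1.5*p2 + 0.5*p3;
    double b =       p0 - 2.5*p1 + 2.0*p2 - 0.5*p3;
    double c = -0.5*p0            + 0.5*p2;
    double d =                p1;
    *value = ((a*t + b)*t + c)*t + d;
    *deriv = (3.0*a*t + 2.0*b)*t + c;          // derivative of the cubic
}

int main() {
    std::vector<double> f = {0.0, 0.2, 0.9, 1.0, 0.4, 0.1};   // sampled scalar field
    double v, g;
    evalCubic(f, 2.37, &v, &g);                // value and gradient off the grid
    std::printf("value %.4f  derivative %.4f\n", v, g);
}
```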

    High performance computer simulated bronchoscopy with interactive navigation.

    by Ping-Fu Fung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 98-102). Abstract also in Chinese.
    Abstract --- p.iv
    Acknowledgements --- p.vi
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Medical Visualization System --- p.4
    Chapter 1.1.1 --- Data Acquisition --- p.4
    Chapter 1.1.2 --- Computer-aided Medical Visualization --- p.5
    Chapter 1.1.3 --- Existing Systems --- p.6
    Chapter 1.2 --- Research Goal --- p.8
    Chapter 1.2.1 --- System Architecture --- p.9
    Chapter 1.3 --- Organization of this Thesis --- p.10
    Chapter 2 --- Volume Visualization --- p.11
    Chapter 2.1 --- Sampling Grid and Volume Representation --- p.11
    Chapter 2.2 --- Prior Work in Volume Rendering --- p.13
    Chapter 2.2.1 --- Surface vs. Direct --- p.14
    Chapter 2.2.2 --- Image-order vs. Object-order --- p.18
    Chapter 2.2.3 --- Orthogonal vs. Perspective --- p.22
    Chapter 2.2.4 --- Hardware Acceleration vs. Software Acceleration --- p.23
    Chapter 2.3 --- Chapter Summary --- p.29
    Chapter 3 --- IsoRegion Leaping Technique for Perspective Volume Rendering --- p.30
    Chapter 3.1 --- Compositing Projection in Direct Volume Rendering --- p.31
    Chapter 3.2 --- IsoRegion Leaping Acceleration --- p.34
    Chapter 3.2.1 --- IsoRegion Definition --- p.35
    Chapter 3.2.2 --- IsoRegion Construction --- p.37
    Chapter 3.2.3 --- IsoRegion Step Table --- p.38
    Chapter 3.2.4 --- Ray Traversal Scheme --- p.41
    Chapter 3.3 --- Experiment Result --- p.43
    Chapter 3.4 --- Improvement --- p.47
    Chapter 3.5 --- Chapter Summary --- p.48
    Chapter 4 --- Parallel Volume Rendering by Distributed Processing --- p.50
    Chapter 4.1 --- Multi-platform Loosely-coupled Parallel Environment Shell --- p.51
    Chapter 4.2 --- Distributed Rendering Pipeline (DRP) --- p.55
    Chapter 4.2.1 --- Network Architecture of a Loosely-Coupled System --- p.55
    Chapter 4.2.2 --- Data and Task Partitioning --- p.58
    Chapter 4.2.3 --- Communication Pattern and Analysis --- p.59
    Chapter 4.3 --- Load Balancing --- p.69
    Chapter 4.4 --- Heterogeneous Rendering --- p.72
    Chapter 4.5 --- Chapter Summary --- p.73
    Chapter 5 --- User Interface --- p.74
    Chapter 5.1 --- System Design --- p.75
    Chapter 5.2 --- 3D Pen Input Device --- p.76
    Chapter 5.3 --- Visualization Environment Integration --- p.77
    Chapter 5.4 --- User Interaction: Interactive Navigation --- p.78
    Chapter 5.4.1 --- Camera Model --- p.79
    Chapter 5.4.2 --- Zooming --- p.81
    Chapter 5.4.3 --- Image View --- p.82
    Chapter 5.4.4 --- User Control --- p.83
    Chapter 5.5 --- Chapter Summary --- p.87
    Chapter 6 --- Conclusion --- p.88
    Chapter 6.1 --- Final Summary --- p.88
    Chapter 6.2 --- Deficiency and Improvement --- p.89
    Chapter 6.3 --- Future Research Aspect --- p.91
    Appendix --- p.93
    Chapter A --- Common Error in Pre-multiplying Color and Opacity --- p.94
    Chapter B --- Binary Factorization of the Sample Composition Equation --- p.9

    Meshless Mechanics and Point-Based Visualization Methods for Surgical Simulations

    Computer-based modeling and simulation have become an integral part of medical education. For surgical simulation applications, realistic constitutive modeling of soft tissue is considered one of the most challenging aspects of the problem, because biomechanical soft-tissue models need to reflect the correct elastic response, must be efficient enough to run at interactive simulation rates, and have to support operations such as cuts and sutures. Mesh-based solutions, where the connections between the individual degrees of freedom (DoF) are defined explicitly, have been the traditional choice for these problems. However, when the problem under investigation contains a discontinuity that disrupts the connectivity between the DoFs, the underlying mesh structure has to be reconfigured in order to handle the newly introduced discontinuity correctly. For mesh-based techniques this reconfiguration is typically called dynamic remeshing, and it is often the performance bottleneck of the simulation. In this dissertation, the efficiency of point-based meshless methods is investigated for both constitutive modeling of elastic soft tissues and visualization of simulation objects, where arbitrary discontinuities/cuts are applied to the objects in the context of surgical simulation. The point-based deformable object modeling problem is examined in three functional aspects: modeling continuous elastic deformations with, handling discontinuities in, and visualizing a point-based object. Algorithmic and implementation details of the presented techniques are discussed in the dissertation. The presented point-based techniques are implemented as separate components and integrated into the open-source software framework SOFA. The presented meshless continuum mechanics model of elastic tissue was verified by comparing it to Hertzian non-adhesive frictionless contact theory. Virtual experiments were set up with a point-based deformable block and a rigid indenter, and force-displacement curves obtained from the virtual experiments were compared to the theoretical solutions. The meshless mechanics model of soft tissue and the integrated novel discontinuity treatment technique discussed in this dissertation allow cuts of arbitrary shape to be handled. The implemented enrichment technique not only modifies the internal mechanics of the soft-tissue model but also updates the point-based visual representation efficiently, avoiding costly dynamic remeshing operations.
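
    The Hertzian reference used in that verification has a closed form; the sketch below evaluates the theoretical force-displacement curve for a rigid spherical indenter pressed into an elastic half-space, with placeholder material values rather than the dissertation's parameters.

```cpp
#include <cmath>
#include <cstdio>

// Hertzian non-adhesive frictionless contact: force on a rigid spherical
// indenter of radius R pressed to a depth d into an elastic half-space.
//   F(d) = (4/3) * E_star * sqrt(R) * d^(3/2),  with E_star = E / (1 - nu^2)
double hertzForce(double E, double nu, double R, double d) {
    double Estar = E / (1.0 - nu * nu);
    return (4.0 / 3.0) * Estar * std::sqrt(R) * std::pow(d, 1.5);
}

int main() {
    // Placeholder soft-tissue-like values: E = 10 kPa, nu = 0.45, R = 5 mm.
    double E = 10e3, nu = 0.45, R = 0.005;
    for (double d = 0.0; d <= 0.002; d += 0.0005)      // indentation depth [m]
        std::printf("d = %.4f m  F = %.6f N\n", d, hertzForce(E, nu, R, d));
}
```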

    Visually pleasing real-time global illumination rendering for fully-dynamic scenes

    Global illumination (GI) rendering plays a crucial role in the photo-realistic rendering of virtual scenes. With the rapid development of graphics hardware, GI has become increasingly attractive even for real-time applications. However, computing physically correct global illumination is time-consuming and cannot reach real-time, or even interactive, performance. Although real-time GI is possible with precomputation-based solutions, such solutions cannot handle fully dynamic scenes. This dissertation addresses these problems by introducing visually pleasing real-time global illumination rendering for fully dynamic scenes. To this end, we develop a set of novel algorithms and techniques for rendering global illumination effects on graphics hardware. All of these algorithms not only achieve real-time or interactive performance but also produce quality comparable to previous off-line rendering work. First, we present a novel implicit visibility technique that circumvents expensive visibility queries in hierarchical radiosity by evaluating visibility implicitly. Thereafter, we focus on rendering visually plausible soft shadows, the most important GI effect caused by visibility determination. Based on pre-filtering shadow mapping theory, we successively propose two real-time soft shadow mapping methods: "convolution soft shadow mapping" (CSSM) and "variance soft shadow mapping" (VSSM). Furthermore, we successfully apply our CSSM method to computing shadow effects for indirect lighting. Finally, to explore GI rendering in participating media, we investigate a novel technique to interactively render volume caustics in single-scattering participating media.
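
    Variance-based pre-filtered shadow mapping, on which VSSM builds, stores the depth mean and second moment per texel and bounds the lit fraction with Chebyshev's inequality; the sketch below shows only that standard test, not the CSSM/VSSM kernels themselves.

```cpp
#include <algorithm>
#include <cstdio>

// Variance shadow map test: given the filtered depth moments (mean, mean of
// squares) from the shadow map and the receiver depth t, bound the fraction
// of the filter footprint that is lit using Chebyshev's inequality.
double chebyshevVisibility(double mean, double meanSq, double t,
                           double minVariance = 1e-5) {
    if (t <= mean) return 1.0;                        // receiver is in front
    double variance = std::max(meanSq - mean * mean, minVariance);
    double d = t - mean;
    return variance / (variance + d * d);             // upper bound on P(z >= t)
}

int main() {
    // Moments as they might come out of a pre-filtered (blurred) shadow map.
    double mean = 0.40, meanSq = 0.17;                // variance = 0.01
    for (double t = 0.35; t <= 0.65; t += 0.05)
        std::printf("receiver depth %.2f -> visibility %.3f\n",
                    t, chebyshevVisibility(mean, meanSq, t));
}
```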

    A GPU-Speedup Improvement Method for Optimized Volume Rendering

    Thesis (Master's) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, August 2015. Yeong-Gil Shin. This paper presents a speedup improvement method for optimized volume rendering on GPU platforms. First, from a set of experiments, we found that the speedup of volume rendering optimized with transparent voxel skipping decreases depending on the complexity of the target images. In order to evaluate the complexity of volume images, we developed a new algorithm, called EVIC. Next, we present another new algorithm, called RBDV, that reduces branch divergence in transparent voxel skipping by factoring out structurally similar code from the branch paths of GPU programs. We empirically showed that the RBDV algorithm increases the GPU speedup of transparent voxel skipping by at least 14%, improving it from 17.5x to 20.0x or more, on average, for complex target images.
    Chapter 1. Introduction --- 7
    Chapter 2. Background --- 9
    2.1. Volume ray-casting --- 9
    2.2. Optimization of volume rendering --- 11
    2.3. GPU-based parallelization --- 13
    2.4. Branch divergence --- 14
    Chapter 3. Findings on Image Complexity Dependence --- 16
    3.1. The complexity evaluation algorithm --- 16
    3.2. Experimentation on image complexity --- 18
    3.3. Analysis on image complexity --- 20
    Chapter 4. Reducing Branch Divergence --- 24
    4.1. The branch divergence reduction algorithm --- 24
    4.2. Experimentation on branch divergence --- 28
    4.3. Analysis on branch divergence --- 30
    Chapter 5. Conclusion and Future Work --- 32
    References --- 34
    Abstract (in Korean) --- 37
    Master
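
    The restructuring that RBDV performs, factoring structurally similar work out of divergent branch paths, can be illustrated on a simplified transparent-voxel-skipping step; the functions below are a hypothetical plain-C++ sketch of that idea, not the thesis kernels.

```cpp
#include <cstdio>
#include <vector>

struct Sample { float opacity, intensity; };

// Divergent form: each ray either composites or skips, so a warp whose rays
// disagree has to serialize both paths.
void stepDivergent(const Sample& s, float& accumA, float& accumC, float& t,
                   float dt, float skip) {
    if (s.opacity > 0.0f) {              // non-transparent voxel
        float w = (1.0f - accumA) * s.opacity;
        accumC += w * s.intensity;
        accumA += w;
        t += dt;
    } else {                             // transparent voxel: leap ahead
        t += skip;
    }
}

// Restructured form: the structurally similar work (the ray advance) is
// factored out of the branch, and the remaining compositing is written so
// that transparent samples contribute nothing.
void stepFactored(const Sample& s, float& accumA, float& accumC, float& t,
                  float dt, float skip) {
    bool opaque = s.opacity > 0.0f;
    float w = opaque ? (1.0f - accumA) * s.opacity : 0.0f;
    accumC += w * s.intensity;           // no-op when the voxel is transparent
    accumA += w;
    t += opaque ? dt : skip;             // single advance, selected per ray
}

int main() {
    std::vector<Sample> ray = {{0.0f, 0.0f}, {0.0f, 0.0f}, {0.4f, 0.8f}, {0.7f, 1.0f}};
    float a = 0, c = 0, t = 0;
    for (const auto& s : ray) stepFactored(s, a, c, t, 0.5f, 2.0f);
    std::printf("color %.3f  alpha %.3f  t %.2f\n", c, a, t);
}
```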

    A hypergraph-partitioning based remapping model for image-space parallel volume rendering

    Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2000. Thesis (Master's) -- Bilkent University, 2000. Includes bibliographical references (leaves 72-76). Cambazoğlu, Berkant Barla. M.S.

    Direct volume rendering of unstructured grids

    This paper investigates three categories of algorithms for direct volume rendering of unstructured grids: image-space, object-space, and hybrid methods. We propose three new algorithms. The Cell Projection algorithm, which falls into the object-space category, is capable of rendering non-convex meshes through a simple yet efficient sorting scheme that exploits both image- and object-space coherence. Existing hybrid methods use an object-then-image traversal order that forces every cell to be processed; these algorithms therefore perform redundant operations and do not support early ray termination. We propose a hybrid method, called Span-Buffer Ray Casting (SBRC), that supports early ray termination and discards redundant operations by employing an image-then-object traversal order. Another hybrid method, called Koyamada-SBRC (K-SBRC), is proposed with the motivation of refining the image-space and hybrid approaches to extract the best features of each; it is developed by blending the SBRC approach with Koyamada's algorithm, an efficient image-space algorithm. All proposed algorithms are capable of handling acyclic non-convex meshes and generating images of acceptable quality. The SBRC and K-SBRC algorithms have the additional capabilities of rendering cyclic meshes and supporting early ray termination. The proposed algorithms and Koyamada's algorithm are implemented and evaluated in a common framework to analyze their relative performance.
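
    Early ray termination, which the image-then-object traversal order of SBRC and K-SBRC makes possible, stops compositing a ray once its accumulated opacity approaches one; the sketch below shows that front-to-back loop with an assumed opacity threshold, not the paper's implementation.

```cpp
#include <cstdio>
#include <vector>

struct Sample { float opacity, r, g, b; };

// Front-to-back compositing with early ray termination: stop once the ray is
// effectively opaque, so cells behind it never need to be processed.
void compositeRay(const std::vector<Sample>& samples, float threshold = 0.99f) {
    float A = 0.0f, R = 0.0f, G = 0.0f, B = 0.0f;
    size_t used = 0;
    for (const Sample& s : samples) {
        float w = (1.0f - A) * s.opacity;
        R += w * s.r;  G += w * s.g;  B += w * s.b;
        A += w;
        ++used;
        if (A >= threshold) break;       // early ray termination
    }
    std::printf("rgb = (%.3f, %.3f, %.3f), alpha = %.3f, samples used = %zu/%zu\n",
                R, G, B, A, used, samples.size());
}

int main() {
    // Samples as they would be produced along one ray through the mesh.
    std::vector<Sample> ray(50, Sample{0.3f, 0.9f, 0.5f, 0.2f});
    compositeRay(ray);                   // terminates long before 50 samples
}
```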