
    Time-Critical Volume Rendering

    For the past twelve months, we have conducted and completed a joint research project entitled "Time-Critical Volume Rendering" with NASA Ames. As expected, high-performance volume rendering algorithms were developed by exploring several new acceleration techniques, including object presence acceleration, parallel processing, and hierarchical level-of-detail representation. Using these new techniques, initial experiments achieved real-time rendering rates of more than 10 frames per second for various 3D data sets at their highest resolution. Several joint papers and technical reports, as well as an interactive real-time demo, were produced as results of this project.
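
    The report does not describe how the frame-rate target is enforced; as a minimal, hedged sketch of one common time-critical strategy, the code below chooses a level of detail from a hierarchical representation so that an estimated per-frame cost stays within a frame budget. The `LodLevel` structure, the linear cost model, and the 0.1 s budget are illustrative assumptions, not details taken from the project.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical description of one resolution level in a LOD hierarchy.
struct LodLevel {
    int      level;       // 0 = coarsest level
    uint64_t voxelCount;  // number of voxels stored at this level
};

// Very simple cost model: render time grows linearly with voxel count.
// 'secondsPerVoxel' would be calibrated from previously measured frames.
double estimateRenderSeconds(const LodLevel& l, double secondsPerVoxel) {
    return static_cast<double>(l.voxelCount) * secondsPerVoxel;
}

// Pick the finest level whose estimated cost still fits the frame budget;
// fall back to the coarsest level if nothing fits.
int pickLevelForBudget(const std::vector<LodLevel>& pyramid,  // coarse to fine
                       double secondsPerVoxel, double frameBudgetSeconds) {
    int best = pyramid.front().level;
    for (const LodLevel& l : pyramid)
        if (estimateRenderSeconds(l, secondsPerVoxel) <= frameBudgetSeconds)
            best = l.level;
    return best;
}

int main() {
    // A 256^3 volume with three coarser levels (each axis halved per level).
    std::vector<LodLevel> pyramid = {{0, 32ull * 32 * 32},
                                     {1, 64ull * 64 * 64},
                                     {2, 128ull * 128 * 128},
                                     {3, 256ull * 256 * 256}};
    // 10 frames per second (the rate quoted in the abstract) -> 0.1 s budget.
    int level = pickLevelForBudget(pyramid, 5e-9, 0.1);
    std::printf("selected LOD level: %d\n", level);
    return 0;
}
```

    In practice the cost coefficient would be re-calibrated from the measured times of previous frames, so the selection adapts to the current scene and hardware.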

    GPU-based volume deformation.


    Fast Volume Rendering and Deformation Algorithms

    Volume rendering is a technique for the simultaneous visualization of surfaces and inner structures of objects. However, the huge number of volume primitives (voxels) in a volume leads to high computational cost. In this dissertation I developed two algorithms for the acceleration of volume rendering and volume deformation. The first algorithm accelerates ray casting of volumes. Previous ray casting acceleration techniques such as space-leaping and early ray termination are only efficient when most voxels in a volume are either opaque or transparent; when many voxels are semi-transparent, the rendering time increases considerably. Our new algorithm improves the performance of ray casting of semi-transparently mapped volumes by exploiting the opacity coherency in object space, leading to a speedup factor between 1.90 and 3.49 when rendering semi-transparent volumes. The acceleration is realized with the help of pre-computed coherency distances. We developed an efficient algorithm to encode the coherency information, which requires less than 12 seconds for data sets with about 8 million voxels. The second algorithm is for volume deformation. Unlike traditional methods, our method incorporates the two stages of volume deformation, i.e. deformation and rendering, into a unified process. Instead of deforming each voxel to generate an intermediate deformed volume, the algorithm follows inversely deformed rays to generate the desired deformation. The calculations and memory for generating the intermediate volume are thus saved. Deformation continuity is achieved by adaptive ray division, which matches the amplitude of the local deformation. We propose approaches for shading and opacity adjustment that guarantee the visual plausibility of the deformation results. We achieve an additional deformation speedup factor of 2.34 to 6.58 by incorporating early ray termination, space-leaping, and the coherency acceleration technique into the new deformation algorithm.
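
    The abstract does not give the exact encoding of the pre-computed coherency distances, so the following is only a minimal sketch of the general idea, under the assumption that each classified voxel stores its opacity together with a conservative distance over which that opacity stays roughly constant; the ray then covers the whole coherent span in a single opacity-corrected compositing step, and early ray termination stops nearly opaque rays. The names `Voxel` and `castRay` and the 1D volume stand-in are hypothetical.

```cpp
#include <cmath>
#include <vector>

// Hypothetical per-voxel record: classified opacity plus a pre-computed
// "coherency distance" -- a span (in sample steps) over which the opacity
// is assumed to stay nearly constant.
struct Voxel {
    float opacity;            // opacity after transfer-function classification
    float coherencyDistance;  // >= 1.0; larger in more coherent regions
};

// March one ray front to back through a 1D stand-in for the volume and
// return the accumulated opacity (color accumulation omitted for brevity).
float castRay(const std::vector<Voxel>& volume, float baseStep /* > 0 */) {
    float alpha = 0.0f;  // accumulated opacity
    float t = 0.0f;      // current parametric position along the ray
    const float tEnd = static_cast<float>(volume.size()) - 1.0f;

    while (t < tEnd && alpha < 0.95f) {  // 0.95: early-ray-termination cutoff
        const Voxel& v = volume[static_cast<size_t>(t)];

        // Leap over the whole coherent span in one step instead of taking
        // many small steps; the opacity is corrected for the longer step.
        float step = std::fmax(baseStep, v.coherencyDistance);
        float stepAlpha = 1.0f - std::pow(1.0f - v.opacity, step / baseStep);

        alpha += (1.0f - alpha) * stepAlpha;  // front-to-back compositing
        t += step;
    }
    return alpha;
}
```

    As reported in the abstract, the actual method also requires a preprocessing pass to encode the coherency information (under 12 seconds for data sets of about 8 million voxels) and extends the same traversal to follow inversely deformed rays for deformation; neither step is sketched here.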

    Real-time GPU-accelerated Out-of-Core Rendering and Light-field Display Visualization for Improved Massive Volume Understanding

    Huge digital models are becoming increasingly available for a number of applications ranging from CAD and industrial design to medicine and the natural sciences. In the field of medicine in particular, data acquisition devices such as MRI or CT scanners routinely produce huge volumetric datasets. These datasets can easily reach dimensions of 1024^3 voxels, and larger datasets are not uncommon. This thesis focuses on efficient methods for the interactive exploration of such large volumes using direct volume visualization techniques on commodity platforms. To reach this goal, specialized multi-resolution structures and algorithms that are able to directly render volumes of potentially unlimited size are introduced. The developed techniques are output-sensitive: their rendering costs depend only on the complexity of the generated images, not on the complexity of the input datasets. The advanced characteristics of modern GPGPU architectures are exploited and combined with an out-of-core framework in order to provide a more flexible, scalable and efficient implementation of these algorithms and data structures on single GPUs and on GPU clusters. To improve visual perception and understanding, the use of novel 3D display technology based on a light-field approach is introduced. This kind of device allows multiple users to perceive, with the naked eye, virtual objects floating inside the display workspace, exploiting stereo and horizontal parallax. A set of specialized interactive illustrative techniques capable of providing different contextual information in different areas of the display is reported, together with an out-of-core CUDA-based ray-casting engine offering a number of improvements over current GPU volume ray-casters. The possibilities of the system are demonstrated by the multi-user interactive exploration of 64-GVoxel datasets on a 35-MPixel light-field display driven by a cluster of PCs.
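
    The abstract does not spell out how the output-sensitive working set is selected; purely as an illustration, and not as the thesis's actual data structure, the sketch below refines a hypothetical octree of volume bricks only while a node's voxels would project to more than about one pixel, which is one common way to make rendering cost track image complexity rather than dataset size, and records cache misses for asynchronous out-of-core fetching. All names and the one-pixel tolerance are assumptions.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical octree node of a bricked, multi-resolution volume.
struct Node {
    float centerDistance;   // camera-to-node distance (world units)
    float extent;           // world-space edge length of the node's brick
    int   children[8];      // indices into the node array, -1 if none
    bool  residentOnGpu;    // brick already cached in GPU memory?
};

// Refine while a voxel of this node would project to more than ~1 pixel;
// otherwise use the node's brick, fetching it asynchronously on a cache miss.
void selectWorkingSet(const std::vector<Node>& tree, int idx,
                      float pixelsPerUnitAtUnitDistance, int brickResolution,
                      std::vector<int>& renderList, std::vector<int>& fetchList) {
    const Node& n = tree[idx];
    float dist = std::max(n.centerDistance, 1e-3f);
    float projectedVoxelPixels =
        (n.extent / brickResolution) * pixelsPerUnitAtUnitDistance / dist;

    if (projectedVoxelPixels > 1.0f && n.children[0] >= 0) {
        for (int c : n.children)            // node is too coarse: descend
            if (c >= 0)
                selectWorkingSet(tree, c, pixelsPerUnitAtUnitDistance,
                                 brickResolution, renderList, fetchList);
    } else if (n.residentOnGpu) {
        renderList.push_back(idx);          // cached brick: render at this level
    } else {
        fetchList.push_back(idx);           // out-of-core miss: request upload;
                                            // an ancestor's brick stands in
    }
}
```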

    Exploiting spatial and temporal coherence in GPU-based volume rendering

    Efficiency is a key aspect in volume rendering, even if powerful graphics hardware is employed, since increasing data set sizes and growing demands on visualization techniques outweigh improvements in graphics processor performance. This dissertation examines how spatial and temporal coherence in volume data can be used to optimize volume rendering. Several new approaches for static as well as for time-varying data sets are introduced, which exploit different types of coherence in different stages of the volume rendering pipeline. The presented acceleration algorithms include empty space skipping using occlusion frustums, a slab-based cache structure for raycasting, and a lossless compression scheme for time-varying data. The algorithms were designed for use with GPU-based volume raycasting and to efficiently exploit the features of modern graphics processors, especially stream processing.
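
    The abstract mentions a lossless compression scheme for time-varying data without describing it. Purely to illustrate how temporal coherence can be exploited losslessly, and not as the dissertation's actual scheme, the sketch below stores a time step as the set of voxels whose values changed relative to the previous step; the `DeltaStep` structure and function names are assumptions.

```cpp
#include <cstdint>
#include <vector>

// One possible lossless delta encoding for a time-varying volume: a time
// step is stored as the list of voxels whose value changed since the
// previous step.
struct DeltaStep {
    std::vector<uint32_t> indices;  // linear voxel indices that changed
    std::vector<uint16_t> values;   // new values at those indices
};

// Encode 'current' against 'previous' (both volumes have the same size).
DeltaStep encodeDelta(const std::vector<uint16_t>& previous,
                      const std::vector<uint16_t>& current) {
    DeltaStep d;
    for (uint32_t i = 0; i < current.size(); ++i) {
        if (current[i] != previous[i]) {
            d.indices.push_back(i);
            d.values.push_back(current[i]);
        }
    }
    return d;
}

// Decode in place: apply the recorded changes to the previous time step.
void applyDelta(std::vector<uint16_t>& volume, const DeltaStep& d) {
    for (size_t k = 0; k < d.indices.size(); ++k)
        volume[d.indices[k]] = d.values[k];
}
```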

    Fast imaging in non-standard X-ray computed tomography geometries


    High performance computer simulated bronchoscopy with interactive navigation.

    by Ping-Fu Fung. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 98-102). Abstract also in Chinese. Table of contents:
    1. Introduction: Medical Visualization System (Data Acquisition; Computer-aided Medical Visualization; Existing Systems); Research Goal (System Architecture); Organization of this Thesis
    2. Volume Visualization: Sampling Grid and Volume Representation; Priori Work in Volume Rendering (Surface VS Direct; Image-order VS Object-order; Orthogonal VS Perspective; Hardware Acceleration VS Software Acceleration); Chapter Summary
    3. IsoRegion Leaping Technique for Perspective Volume Rendering: Compositing Projection in Direct Volume Rendering; IsoRegion Leaping Acceleration (IsoRegion Definition; IsoRegion Construction; IsoRegion Step Table; Ray Traversal Scheme); Experiment Result; Improvement; Chapter Summary
    4. Parallel Volume Rendering by Distributed Processing: Multi-platform Loosely-coupled Parallel Environment Shell; Distributed Rendering Pipeline (DRP) (Network Architecture of a Loosely-Coupled System; Data and Task Partitioning; Communication Pattern and Analysis); Load Balancing; Heterogeneous Rendering; Chapter Summary
    5. User Interface: System Design; 3D Pen Input Device; Visualization Environment Integration; User Interaction: Interactive Navigation (Camera Model; Zooming; Image View; User Control); Chapter Summary
    6. Conclusion: Final Summary; Deficiency and Improvement; Future Research Aspect
    Appendix: A. Common Error in Pre-multiplying Color and Opacity; B. Binary Factorization of the Sample Composition Equation

    Automation of the Monte Carlo simulation of medical linear accelerators

    The full text of the thesis, including the articles not publicly released for copyright reasons, can be consulted upon request at the UPC Archive. The main result of this thesis is a software system, called PRIMO, which simulates clinical linear accelerators and the resulting dose distributions using the Monte Carlo method. PRIMO has the following features: (i) it is self-contained, that is, it does not require additional software libraries or coding; (ii) it includes a geometry library with most Varian and Elekta linacs; (iii) it is based on the general-purpose Monte Carlo code PENELOPE; (iv) it provides a suite of variance-reduction techniques and distributed parallel computing to enhance the simulation efficiency; (v) it has a graphical user interface; and (vi) it is freely distributed through the website http://www.primoproject.net. In order to endow PRIMO with these features, the following tasks were carried out:
    - PRIMO was conceived with a layered structure. The topmost layer, named the GLASS, was developed in this thesis. The GLASS implements the GUI, drives all the functions of the system and performs the analysis of results. Lower layers generate geometry files, provide input data and execute the Monte Carlo simulation.
    - The geometry of Elekta linacs of the SLi and MLCi series was coded into the PRIMO system.
    - A geometric model of the Varian TrueBeam linear accelerator was developed and validated. This model was created to overcome the limitations of the phase-space files distributed by Varian and the absence of released information about the actual geometry of that machine. The model was incorporated into PRIMO.
    - Two new variance-reduction techniques, named splitting roulette and selective splitting, were developed and validated. In a test made with an Elekta linac it was found that when both techniques are used in conjunction the simulation efficiency improves by a factor of up to 45.
    - A method to automatically distribute the simulation among the available CPU cores of a computer was implemented.
    The following investigations were done using PRIMO as a research tool:
    - The configuration of the condensed-history transport algorithm for charged particles in PENELOPE was optimized for linac simulation. Dose distributions in the patient were found to be particularly sensitive to the values of the transport parameters in the linac target. Use of inadequate values of these parameters may lead to an incorrect determination of the initial beam configuration or to biased dose distributions.
    - PRIMO was used to simulate phase-space files distributed by Varian for the TrueBeam linac. The results were compared with experimental data provided by five European radiotherapy centers. It was concluded that the latent variance and the accuracy of the phase-space files are adequate for routine clinical practice; however, for research purposes where low statistical uncertainties are required, the phase-space files are not large enough.
    To the best of our knowledge, PRIMO is the only fully Monte Carlo-based linac and dose simulation system, aimed at research and dose verification, that does not require coding tasks from end users and is publicly available.
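
    The abstract names the two new variance-reduction techniques, splitting roulette and selective splitting, but does not describe their algorithms. As a hedged illustration only, the sketch below shows the standard particle-splitting and Russian-roulette primitives that such techniques are typically built from; it is not the PRIMO implementation, and the `Particle`, `split` and `russianRoulette` names are assumptions.

```cpp
#include <random>
#include <vector>

// Minimal particle record; only the statistical weight matters here.
struct Particle {
    double weight;     // statistical weight carried by the particle
    double energyMeV;  // kept only to make the record less abstract
};

// Particle splitting: replace one particle by 'n' copies, each carrying 1/n
// of the original weight, to increase sampling in an important region
// (e.g. particles heading toward the scoring plane above the patient).
std::vector<Particle> split(const Particle& p, int n) {
    std::vector<Particle> copies(static_cast<size_t>(n), p);
    for (Particle& c : copies) c.weight = p.weight / n;
    return copies;
}

// Russian roulette: kill a low-weight particle with probability
// 1 - survivalProb; survivors get their weight boosted so that the dose
// estimate stays unbiased. Returns false when the particle is discarded.
bool russianRoulette(Particle& p, double survivalProb, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) < survivalProb) {
        p.weight /= survivalProb;
        return true;
    }
    return false;
}
```

    In a typical arrangement, splitting is applied to particles heading toward the region of interest while roulette removes low-weight particles elsewhere; because weights are rescaled in both operations, the estimator remains unbiased while computation is concentrated where it improves efficiency.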