60 research outputs found

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRIs can be used to capture any anatomical data over time, one of the more common uses of fMRI is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific rather than a general capability. 
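The raycasting approach described above composites samples along each viewing ray, front to back, until the accumulated opacity saturates. A minimal sketch of that per-pixel loop; the transfer function and early-termination threshold here are illustrative assumptions, not the implementation from this work:

```cpp
#include <cstddef>

struct Sample { float color; float alpha; };

// Illustrative transfer function: map a scalar in [0,1] to grey plus opacity.
static Sample transfer(float v) { return { v, v * 0.5f }; }

// Composite n samples along one ray, front to back; stop early once
// the accumulated opacity is nearly 1 (early ray termination).
static float raycast(const float* samples, std::size_t n) {
    float color = 0.0f, alpha = 0.0f;
    for (std::size_t i = 0; i < n && alpha < 0.99f; ++i) {
        Sample s = transfer(samples[i]);
        color += (1.0f - alpha) * s.alpha * s.color;  // weighted emission
        alpha += (1.0f - alpha) * s.alpha;            // accumulated opacity
    }
    return color;  // final intensity for this pixel's ray
}
```

In a GPU implementation this loop runs per fragment over a 3D texture; early termination is what makes raycasting competitive at real-time rates.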
Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals’ interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform. The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting provides unique challenges independent of the platform: specifically, fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device. 
Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges will be addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications were built for the different platforms to test the feasibility of 4D volume raycasting, one each for desktop, mobile, and virtual reality. Although the backend implementations were required to be different between the three platforms, the raycasting functionality and features were identical. Therefore, the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. 
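Reading NIfTI binary data starts with the fixed 348-byte NIfTI-1 header. The sketch below extracts the dimension and datatype fields needed to size a 4D volume; it is not the thesis's actual loader. Offsets follow the published NIfTI-1 layout, and error handling and byte-swapping for big-endian files are omitted:

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Minimal NIfTI-1 header field extraction from a raw 348-byte buffer.
struct NiftiDims {
    int ndim;                    // dim[0]: number of dimensions (4 for fMRI)
    std::array<int16_t, 7> dim;  // dim[1..7]: extent along each axis
    int16_t datatype;            // NIfTI datatype code (e.g. 4 = int16)
};

static NiftiDims parse_dims(const unsigned char* hdr /* >= 348 bytes */) {
    NiftiDims d{};
    int16_t dims[8];
    std::memcpy(dims, hdr + 40, sizeof dims);   // dim[8] at byte offset 40
    d.ndim = dims[0];
    for (int i = 0; i < 7; ++i) d.dim[i] = dims[i + 1];
    std::memcpy(&d.datatype, hdr + 70, sizeof d.datatype);  // offset 70
    return d;
}
```

A real loader would also validate `sizeof_hdr` (348) at offset 0 and the magic string at offset 344 before trusting these fields.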
Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications’ data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics computer cluster built on NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4. The iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets. Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement for 3D mobile volume raycasting, which was previously only able to achieve under one frame per second [2]. Both VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.

    Time-varying volume visualization

    Volume rendering is a very active research field in Computer Graphics because of its wide range of applications in various sciences, from medicine to flow mechanics. In this report, we survey the state of the art in time-varying volume rendering. We state several basic concepts and then establish several criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, involved architectures, use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2D cross-sections of time-varying volume data. All the reviewed papers are classified into several tables based on this classification and, finally, several conclusions are presented.

    Virtual reality assisted fluorescence microscopy data visualisation and analysis for improved understanding of molecular structures implicated in neurodegenerative diseases

    Thesis (PhD)--Stellenbosch University, 2020.ENGLISH ABSTRACT: Confocal microscopy is one of the major imaging tools used in molecular life sciences. It delivers detailed three-dimensional data sets and is instrumental in biological analysis and research where structures of interest are labelled using fluorescent probes. Usually, this three-dimensional data is rendered as a projection onto a two-dimensional display. This can, however, lead to ambiguity in the visual interpretation of the structures of interest in the sample. Furthermore, analysis and region of interest (ROI) selection are also most commonly performed two-dimensionally. This may inadvertently lead to either the exclusion of relevant or the inclusion of irrelevant data points, consequently affecting the accuracy of the analysis. We present a virtual reality (VR) based system that allows firstly, precision region of interest selection and colocalisation analysis, secondly, the spatial visualisation of the correlation of colocalised fluorescence channels and, thirdly, an analysis tool to automatically determine the localisation and presence of mitochondrial fission, fusion and depolarisation. The VR system allows the three-dimensional reconstructed sample data set to be interrogated and analysed in a highly controlled and precise manner, using either fully-immersive hand-tracking or a conventional handheld controller. We apply this system to the specific task of colocalisation analysis, an important tool in fluorescence microscopy. We evaluate our system interface by means of a set of user trials and show that, despite inaccuracies associated with the hand tracking, it is the most productive and intuitive interface compared to the handheld controller. Applying the VR system to biological sample analysis, we subsequently calculate several key colocalisation metrics using both two-dimensionally and three-dimensionally derived super-resolved structured illumination-based data sets. 
Using a neuronal injury model, we investigate the change in colocalisation between two proteins of interest, Tau and acetylated α-tubulin, under control conditions as well as after 6 hours and again after 24 hours of neuronal injury. Applying the VR based system, we demonstrate the ability to perform precise ROI selections of 3D structures for subsequent colocalisation analysis. We demonstrate that performing colocalisation analysis in three dimensions enhances its sensitivity, leading to a greater number of statistically significant differences than could be established when using two-dimensionally based methods. Next, we propose a novel biological visual analysis method for the qualitative analysis of colocalisation. This method visualises the spatial distribution of the correlation between the underlying fluorescence channel intensities by using a colourmap. This method is evaluated using both synthetic data and biological fluorescence micrographs, demonstrating enhancement of the visualisation in a robust manner by indicating only truly colocalised regions. Mitochondrial fission, fusion and depolarisation events are important in cellular function and viability. However, the quantitative analysis linked to the localisation of each event in the three-dimensional context has not been accomplished. We extend the VR system to analyse fluorescence-based time-lapse sequences of mitochondria and propose a new method to automatically determine the location and quantity of the mitochondrial events. The detected mitochondrial event locations can then be superimposed on the fluorescence z-stacks. We apply this method both to control samples as well as cells that were treated with hydroxychloroquine sulphate (HCQ) and demonstrate how a subsequent quantitative description of the fission/fusion equilibrium as well as the extent of depolarisation can be determined. 
We conclude that virtual reality offers an attractive and powerful means to extend fluorescence-based microscopy sample navigation, visualisation and analysis. Three-dimensional VR-assisted ROI selection enables samples to be interrogated and assessed with greater precision, thereby exploiting the potential of fluorescence-based image analysis, such as colocalisation, in biomedical research. The automatic localisation and quantification of mitochondrial events can support research of mitochondrial function in healthy and diseased cells, where quantitative analysis of fission, fusion and depolarisation is of importance.
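A common colocalisation metric of the kind computed in this work is the Pearson correlation coefficient between the two fluorescence channel intensities within an ROI. A minimal sketch, assuming the channels arrive as two parallel intensity arrays (the actual pipeline and metrics in the thesis may differ):

```cpp
#include <cmath>
#include <cstddef>

// Pearson correlation between two channel-intensity arrays over an ROI.
// Returns a value in [-1, 1]; 1 means perfectly correlated intensities.
static double pearson(const double* a, const double* b, std::size_t n) {
    double ma = 0.0, mb = 0.0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;                       // channel means
    double num = 0.0, da = 0.0, db = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        num += (a[i] - ma) * (b[i] - mb);   // covariance term
        da  += (a[i] - ma) * (a[i] - ma);   // channel-a variance term
        db  += (b[i] - mb) * (b[i] - mb);   // channel-b variance term
    }
    return num / std::sqrt(da * db);
}
```

Restricting `a` and `b` to a 3D-selected ROI, rather than a 2D projection, is precisely what the VR selection described above enables.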

    Interactive simulation and rendering of fluids on graphics hardware

    Computational fluid dynamics can be used to reproduce the complex motion of fluids for use in computer graphics, but the simulation and rendering are both highly computationally intensive. In the past, performing these tasks on the CPU could take many minutes per frame, especially for large-scale scenes at high levels of detail, which limited their usage to offline applications such as in film and media. However, using the massive parallelism of GPUs, it is nowadays possible to produce fluid visual effects in real time for interactive applications such as games. We present such an interactive simulation using the CUDA GPU computing environment and OpenGL graphics API. Smoothed Particle Hydrodynamics (SPH) is a popular particle-based fluid simulation technique that has been shown to be well suited to acceleration on the GPU. Our work extends an existing GPU-based SPH implementation by incorporating rigid body interaction and rendering. Solid objects are represented using particles to accumulate hydrodynamic forces from surrounding fluid, while motion and collision handling are handled by the Bullet Physics library on the CPU. Our system demonstrates two-way coupling with multiple objects floating, displacing fluid and colliding with each other. For rendering we compare the performance and memory consumption of two approaches, splatting and raycasting, and we also describe the visual characteristics of each. In our evaluation we consider a target of between 24 and 30 fps to be sufficient for smooth interaction and aim to determine the performance impact of our new features. We begin by establishing a performance baseline and find that the original system runs smoothly up to 216,000 fluid particles, but after introducing rendering this drops to 27,000 particles, with the rendering taking up the majority of the frame time in both techniques. 
We find the most significant limiting factor to splatting performance to be the onscreen area occupied by fluid, while the raycasting performance is primarily determined by the resolution of the 3D texture used for sampling. Finally, we find that performing solid interaction on the CPU is a viable approach that does not introduce significant overhead unless solid particles vastly outnumber fluid ones.
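The SPH density estimate at the core of such a simulation is a mass-weighted kernel sum over neighbouring particles. A sketch using the standard poly6 smoothing kernel in 3D; particle masses, the smoothing radius, and the neighbour search are assumptions standing in for the actual implementation:

```cpp
#include <cmath>

// Standard 3D poly6 smoothing kernel: nonzero only within radius h.
static double poly6(double r, double h) {
    if (r >= h) return 0.0;                 // outside the support radius
    const double kPi = 3.14159265358979323846;
    double c = 315.0 / (64.0 * kPi * std::pow(h, 9.0));  // 3D normalisation
    double d = h * h - r * r;
    return c * d * d * d;
}

// Density at a particle: mass-weighted kernel sum over neighbour distances.
static double density(const double* dist, int n, double mass, double h) {
    double rho = 0.0;
    for (int i = 0; i < n; ++i) rho += mass * poly6(dist[i], h);
    return rho;
}
```

On the GPU, one thread typically evaluates this sum per particle, with a uniform grid providing the neighbour lists.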

    GPU-based volume deformation.


    Multilayer representation for geological information systems

    In this thesis we propose the use of the Stack-Based Representation of Terrains (SBRT) for volumetric geological data. This data structure encodes geological structures represented as stacks using a compact data representation. The SBRT is further formalized with a framework based on the geo-atom theory to provide a precise definition and determine its properties. We also introduce QuadStacks, a novel data structure that improves the compression results provided by the SBRT by exploiting, in its data arrangement, the redundancy often found in layered datasets. This thesis also provides direct visualization methods for the SBRT and QuadStacks based on the well-known raycasting algorithm. By keeping the whole dataset in GPU memory in a compact way, the methods are fast enough to provide real-time frame rates. Thesis, Universidad de Jaén, Departamento de Informática. Defended 19 September 2019.
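The idea behind a stack-based terrain representation can be illustrated with a single column query: each column stores ordered (top height, material) intervals instead of dense voxels, so sampling during raycasting is a short interval search. This is only an illustrative sketch; the actual SBRT and QuadStacks encodings differ:

```cpp
#include <vector>

// One layer of a stack-encoded column; the layer spans from the previous
// layer's top (or the column base) up to and including `top`.
struct Layer { float top; int material; };

// Return the material at a given height in a column whose layers are
// ordered by increasing top height, or -1 above the topmost layer.
static int material_at(const std::vector<Layer>& column, float height) {
    for (const Layer& l : column)
        if (height <= l.top) return l.material;
    return -1;  // above the terrain surface
}
```

A ray marcher over this structure steps between layer boundaries rather than fixed voxel steps, which is what keeps the data compact enough to stay resident in GPU memory.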

    GPU-friendly marching cubes.

    Xie, Yongming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 77-85). Abstracts in English and Chinese. Contents:
    1 Introduction: Isosurfaces; Graphics Processing Unit; Objective; Contribution; Thesis Organization
    2 Marching Cubes: Introduction; Marching Cubes Algorithm; Triangulated Cube Configuration Table; Summary
    3 Graphics Processing Unit: Introduction; History of Graphics Processing Unit (First to Fourth Generation GPUs); The Graphics Pipeline (Standard Graphics Pipeline; Programmable Graphics Pipeline; Vertex Processors; Fragment Processors; Frame Buffer Operations); GPU-CPU Analogy (Memory Architecture; Processing Model; Limitation of GPU; Input and Output; Data Readback; Framebuffer); Summary
    4 Volume Rendering: Introduction; History of Volume Rendering; Hardware Accelerated Volume Rendering (Hardware Acceleration Volume Rendering Methods; Proxy Geometry; Object-Aligned Slicing; View-Aligned Slicing); Summary
    5 GPU-Friendly Marching Cubes: Introduction; Previous Work; Traditional Method (Scalar Volume Data; Isosurface Extraction; Flow Chart; Transparent Isosurfaces); Our Method (Cell Selection; Vertex Labeling; Cell Indexing; Interpolation); Rendering Translucent Isosurfaces; Implementation and Results; Summary
    6 Conclusion; Bibliography
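The cell-classification step at the heart of marching cubes maps the 8 corner samples of a cell to an 8-bit index into the triangulation table. A minimal sketch of that step; corner ordering conventions vary between implementations, and the lookup table itself is omitted:

```cpp
#include <cstdint>

// Classify one cell: each corner below the isovalue sets one bit,
// yielding an index 0..255 into the triangulated-cube configuration table.
static uint8_t cube_index(const float corners[8], float isovalue) {
    uint8_t idx = 0;
    for (int i = 0; i < 8; ++i)
        if (corners[i] < isovalue) idx |= uint8_t(1u << i);
    return idx;
}
```

Index 0 and 255 (cell entirely outside or inside the surface) produce no triangles, which is why cell selection before classification pays off on the GPU.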

    Frame-to-frame coherent image-aligned sheet-buffered splatting

    Splatting is a classical volume rendering technique that has recently gained in popularity for the visualization of point-based surface models. Up to now, there have been few publications on its adaptation to time-varying data. In this paper, we propose a novel frame-to-frame coherent view-aligned sheet-buffer splatting of time-varying data, which tries to reduce the memory load and the rendering computations as much as possible by taking into account the similarity in the data and in the images at successive instants of time. The results presented in the paper are encouraging and show that the proposed technique may be useful to explore data through time.
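The frame-to-frame coherence idea, skipping work for sheets whose data did not change between consecutive time steps, can be sketched as follows. The per-sheet equality test here is a placeholder assumption for whatever change-detection the method actually uses:

```cpp
#include <cstddef>
#include <vector>

// True if a sheet's data differs from the previous time step.
static bool sheet_changed(const std::vector<float>& prev,
                          const std::vector<float>& cur) {
    return prev != cur;
}

// Count how many sheets must actually be re-splatted this frame;
// unchanged sheets can reuse their cached sheet-buffer image.
static int resplat_count(const std::vector<std::vector<float>>& prev,
                         const std::vector<std::vector<float>>& cur) {
    int n = 0;
    for (std::size_t i = 0; i < cur.size(); ++i)
        if (i >= prev.size() || sheet_changed(prev[i], cur[i])) ++n;
    return n;
}
```

When successive time steps are similar, most sheets are reused and the rendering cost drops roughly in proportion.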

    Natural ventilation design attributes application effect on indoor natural ventilation performance of a double storey, single unit residential building

    In establishing a good indoor thermal condition, air movement is one of the important parameters to be considered in providing indoor fresh air for occupants. Due to public awareness of environmental impact, people have become increasingly attentive to passive design in achieving good indoor building ventilation. Through case studies, significant building attributes were found to affect building indoor natural ventilation performance. The studies were categorized as vernacular houses, contemporary houses with vernacular elements, and contemporary houses. The indoor air movement of each space in the houses was compared with the outdoor air movement surrounding the houses to indicate the space’s indoor natural ventilation performance. Analysis found the wind catcher element to be the attribute contributing most to indoor natural ventilation. Wide openings were also found to be significant, especially those with louvers. Interestingly, indoor layout design also has a significant impact on the performance. The findings indicate that good indoor natural ventilation is not only dictated by having proper openings at proper locations of a building, but also by how the incoming air movement is managed throughout the interior spaces by proper layout. Understanding the air pressure distribution caused by indoor windward and leeward sides is important in directing the air flow to desired spaces to produce an overall good indoor natural ventilation performance.