23 research outputs found

    Interactive Multi-volume Visualization

    Full text link
    Abstract. This paper is concerned with the simultaneous visualization of two or more volumes, which may come from different imaging modalities or numerical simulations of the same subject of study. The main visualization challenge is to establish visual correspondences while maintaining distinctions among the volumes. One solution is to use a different rendering style for each volume. Interactive rendering is required so the user can easily choose an appropriate rendering style and its associated parameters for each volume. Rendering efficiency is maximized by utilizing commodity graphics cards. We demonstrate our preliminary results with two case studies.
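    The core mechanism described here, compositing co-registered volumes with per-volume transfer functions ("rendering styles"), can be sketched as follows. This is a minimal illustration of the general idea, not the paper's GPU implementation; all function names and parameters are invented for the example.

```python
# Minimal sketch: two co-registered volumes sampled along one ray, each
# classified by its own transfer function, then composited front-to-back.
import numpy as np

def sample_ray(volume, origin, direction, n_steps, step):
    """Nearest-neighbour sampling along a ray (kept simple for brevity)."""
    t = np.arange(n_steps) * step
    pts = origin + t[:, None] * direction                # (n_steps, 3)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]

def composite_two_volumes(samples_a, samples_b, tf_a, tf_b):
    """Front-to-back 'over' compositing of two classified sample streams."""
    color, alpha = np.zeros(3), 0.0
    for a, b in zip(samples_a, samples_b):
        for rgba in (tf_a(a), tf_b(b)):                  # one style per volume
            c, o = rgba[:3], rgba[3]
            color += (1.0 - alpha) * o * c               # standard over operator
            alpha += (1.0 - alpha) * o
        if alpha > 0.99:                                 # early ray termination
            break
    return color, alpha

vol_a = np.random.rand(32, 32, 32)                       # e.g. CT
vol_b = np.random.rand(32, 32, 32)                       # e.g. simulation field
tf_a = lambda s: np.array([s, s, s, 0.02 * s])           # faint greyscale style
tf_b = lambda s: np.array([s, 0.2, 0.2, 0.05 * s])       # denser reddish style
d = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
ray_a = sample_ray(vol_a, np.zeros(3), d, 48, 0.6)
ray_b = sample_ray(vol_b, np.zeros(3), d, 48, 0.6)
print(composite_two_volumes(ray_a, ray_b, tf_a, tf_b))
```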

    Validating Stereoscopic Volume Rendering

    Get PDF
    The evaluation of stereoscopic displays for surface-based renderings is well established in terms of accurate depth perception and tasks that require an understanding of the spatial layout of the scene. In comparison, direct volume rendering (DVR), which typically produces images with a large number of low-opacity, overlapping features, is only beginning to be critically studied on stereoscopic displays. The properties of the resulting images and the choice of parameters for DVR algorithms make assessing the effectiveness of stereoscopic displays for DVR particularly challenging, and as a result the existing literature is sparse and inconclusive. In this thesis, stereoscopic volume rendering is analysed for tasks that require depth perception, including stereo-acuity tasks, spatial search tasks and observer preference ratings. The evaluations focus on aspects of the DVR rendering pipeline and assess how the parameters of volume resolution, reconstruction filter and transfer function may alter task performance and the perceived quality of the produced images. The results of the evaluations suggest that the transfer function and the choice of reconstruction filter can affect performance on tasks with stereoscopic displays when all other parameters are kept consistent. Further, these were found to affect the sensitivity and bias of participants' responses. The studies also show that properties of the reconstruction filters such as post-aliasing and smoothing do not correlate well with either task performance or quality ratings. Included in the contributions are guidelines and recommendations on the choice of parameters for increased task performance and quality scores, as well as image-based methods of analysing stereoscopic DVR images.
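    One of the pipeline parameters studied here is the reconstruction filter used to resample the volume between grid points. A minimal sketch of the two most common filters, nearest-neighbour and trilinear, is given below; the thesis evaluates such filters perceptually, so this code is purely illustrative.

```python
# Two reconstruction filters at an off-grid sample point p.
import numpy as np

def nearest(volume, p):
    i, j, k = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
    return volume[i, j, k]

def trilinear(volume, p):
    f = np.floor(p).astype(int)
    f = np.clip(f, 0, np.array(volume.shape) - 2)   # keep the 2x2x2 cell in bounds
    x, y, z = p - f
    c = volume[f[0]:f[0]+2, f[1]:f[1]+2, f[2]:f[2]+2]
    c = c[0] * (1 - x) + c[1] * x                   # interpolate along each axis
    c = c[0] * (1 - y) + c[1] * y
    return c[0] * (1 - z) + c[1] * z

vol = np.random.rand(16, 16, 16)
p = np.array([7.3, 2.6, 9.1])
print(nearest(vol, p), trilinear(vol, p))  # smoother filters reduce post-aliasing
```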

    Implementation of a sort-last volume rendering using 3D textures

    Get PDF
    This thesis extends the Aura graphics library (GPL-licensed, developed at the Vrije Universiteit Amsterdam) with the components needed to perform distributed volume rendering following the sort-last paradigm and using 3D textures. The program was tested on a cluster of 9 PCs.
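    The compositing step that defines the sort-last paradigm can be sketched as follows: each cluster node renders its own brick of the volume into a partial RGBA image, and the partial images are merged in visibility order with the 'over' operator. This is a generic illustration, not Aura's actual API.

```python
# Sort-last compositing of per-node partial images (premultiplied alpha).
import numpy as np

def over(front, back):
    """'Over' operator for premultiplied-alpha RGBA images of shape (H, W, 4)."""
    a = front[..., 3:4]
    return front + (1.0 - a) * back

def composite_sort_last(partials_front_to_back):
    """Merge partial images in front-to-back visibility order."""
    result = partials_front_to_back[0]
    for img in partials_front_to_back[1:]:
        result = over(result, img)
    return result

# Three nodes, each contributing one partial image of the same viewport;
# the ordering would come from the view-space depth of each node's brick.
partials = [np.random.rand(4, 4, 4) * 0.3 for _ in range(3)]
print(composite_sort_last(partials).shape)
```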

    NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction

    Full text link
    This paper studies implicit surface reconstruction leveraging differentiable ray casting. Previous works such as IDR and NeuS overlook the spatial context in 3D space when predicting and rendering the surface, and may therefore fail to capture sharp local topologies such as small holes and structures. To mitigate this limitation, we propose a flexible neural implicit representation leveraging hierarchical voxel grids, namely Neural Deformable Anchor (NeuDA), for high-fidelity surface reconstruction. NeuDA maintains hierarchical anchor grids in which each vertex stores a 3D position (or anchor) instead of a direct embedding (or feature). We optimize the anchor grids so that different local geometry structures can be adaptively encoded. In addition, we investigate frequency encoding strategies and introduce a simple hierarchical positional encoding method for the hierarchical anchor structure to flexibly exploit the properties of high-frequency and low-frequency geometry and appearance. Experiments on both the DTU and BlendedMVS datasets demonstrate that NeuDA produces promising mesh surfaces. Comment: Accepted to CVPR 2023; project page: https://3d-front-future.github.io/neud
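    As a rough illustration of the flavour of hierarchical positional encoding described here, the sketch below assigns lower sinusoidal frequency bands to coarser anchor levels and higher bands to finer levels. The band assignment and the two-bands-per-level choice are assumptions made for the example, not NeuDA's exact scheme.

```python
# Hierarchical sinusoidal positional encoding over per-level anchor positions.
import numpy as np

def freq_encode(p, freqs):
    """gamma(p) = (sin(2^k * pi * p), cos(2^k * pi * p)) for each band k."""
    out = []
    for k in freqs:
        out.append(np.sin((2.0 ** k) * np.pi * p))
        out.append(np.cos((2.0 ** k) * np.pi * p))
    return np.concatenate(out)

def hierarchical_encode(anchor_positions):
    """anchor_positions: interpolated anchors, one per grid level (coarse to fine)."""
    codes = []
    for level, p in enumerate(anchor_positions):
        bands = range(level * 2, level * 2 + 2)   # two bands per level (assumed)
        codes.append(freq_encode(p, bands))
    return np.concatenate(codes)

# Three levels of deformable anchors interpolated at one query point.
anchors = [np.random.rand(3) for _ in range(3)]
print(hierarchical_encode(anchors).shape)   # 3 levels * 2 bands * 2 fns * 3 dims = (36,)
```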

    NERF FOR HERITAGE 3D RECONSTRUCTION

    Get PDF
    Conventional and learning-based 3D reconstruction methods from images have clearly shown their potential for 3D heritage documentation. Nevertheless, Neural Radiance Field (NeRF) approaches are currently revolutionising the way a scene can be rendered or reconstructed in 3D from a set of oriented images. This paper therefore reviews some of the latest NeRF methods applied to various cultural heritage datasets collected with smartphone videos, touristic approaches or reflex cameras. First, several NeRF methods are evaluated; Instant-NGP and Nerfacto achieve the best outcomes, significantly outperforming all other methods. Subsequently, qualitative and quantitative analyses are performed on various datasets, revealing the good performance of NeRF methods, in particular for areas with uniform texture or shiny surfaces, as well as for small datasets of lost artefacts. This opens new frontiers for the 3D documentation, visualization and communication of digital heritage.
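    All the NeRF variants compared here share the same underlying image-formation model: a pixel colour is obtained by quadrature of density and colour samples along the viewing ray. A minimal sketch of that standard computation, with synthetic inputs, is given below.

```python
# NeRF volume-rendering quadrature:
#   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
#   T_i = prod_{j<i} exp(-sigma_j * delta_j)
import numpy as np

def render_ray(sigma, rgb, deltas):
    """sigma: (N,) densities; rgb: (N, 3) colors; deltas: (N,) segment lengths."""
    alpha = 1.0 - np.exp(-sigma * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)                    # expected pixel color

sigma = np.random.rand(64) * 5.0
rgb = np.random.rand(64, 3)
deltas = np.full(64, 0.02)
print(render_ray(sigma, rgb, deltas))
```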

    Visualisierung zweidimensionaler Volumen

    Get PDF
    In this thesis, a new technique for the visualisation of two-dimensional volumes is presented. The term multi-dimensional volume is defined as a set of spatially three-dimensional data sets, each of which describes a different property (a physical quality, e.g. density or temperature) of the same object. Two-dimensional volumes thus describe two different properties of an object. They arise, for example, in biomedical imaging, where anatomical and functional data are examined jointly. First, the state of the art in the visualisation of two-dimensional volumes is presented. In the course of this, the following deficiencies of existing approaches become apparent: unsatisfactory 3D impression (it is difficult to mentally reconstruct the spatially three-dimensional object from the rendering) and difficult localisation of features (i.e. remarkable characteristics in the quantity of a property at a given location); restriction to data sets from particular origins or to particular combinations of data sets; and, by design, restriction of one property to only a few small regions inside the other property.
    Starting from these deficiencies, the requirements for a visualisation technique that overcomes these limitations are elaborated. These are then used to develop a new technique, called dependent rendering, which is based on the assumption that, when visualising two properties of an object, there is always one property that can serve as a spatial reference for the other. The other property is then visualised in dependence on this reference. Three implementations of the technique are presented: the first two are prototypes, the third is a specialised application for a biomedical visualisation platform. The implementations show that, compared to existing approaches, the presented technique stands out through the following features: precise localisation of features combined with a good 3D impression of the object (e.g. "Is it hot on the surface or only inside the object?"); both data sets may extend over the same region; and a general approach with no restriction to data sets from particular origins or to particular combinations of data sets. The presented technique therefore represents an important advancement in the joint visualisation of two properties of an object.
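    The core idea of dependent rendering can be sketched as follows: the reference property drives opacity (and hence geometry and localisation), while the dependent property is mapped to colour at exactly those locations. The transfer functions below are invented for the illustration and are not the thesis's implementation.

```python
# Dependent classification: opacity from the reference property, hue from
# the dependent property, producing per-voxel RGBA for any DVR compositor.
import numpy as np

def classify_dependent(reference, dependent):
    """Per-sample RGBA: reference drives visibility, dependent drives colour."""
    opacity = np.clip(reference, 0.0, 1.0) * 0.1      # geometry/localisation
    r = np.clip(dependent, 0.0, 1.0)                  # hot -> red
    b = 1.0 - r                                       # cold -> blue
    g = np.zeros_like(r)
    return np.stack([r, g, b, opacity], axis=-1)

density = np.random.rand(32, 32, 32)       # anatomical / geometric reference
temperature = np.random.rand(32, 32, 32)   # functional property, same grid
rgba = classify_dependent(density, temperature)
print(rgba.shape)                          # (32, 32, 32, 4)
```

    Because opacity localises the rendering via the reference while colour carries the second property, a question like "is it hot on the surface or inside the object?" becomes directly readable from the image.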

    Real-time GPU-accelerated Out-of-Core Rendering and Light-field Display Visualization for Improved Massive Volume Understanding

    Get PDF
    Nowadays huge digital models are increasingly available for a number of different applications, ranging from CAD and industrial design to medicine and the natural sciences. In the field of medicine in particular, data acquisition devices such as MRI or CT scanners routinely produce huge volumetric datasets. These datasets can easily reach dimensions of 1024^3 voxels, and even larger datasets are not uncommon. This thesis focuses on efficient methods for the interactive exploration of such large volumes using direct volume visualization techniques on commodity platforms. To reach this goal, specialized multi-resolution structures and algorithms that are able to directly render volumes of potentially unlimited size are introduced. The developed techniques are output-sensitive: their rendering costs depend only on the complexity of the generated images and not on the complexity of the input datasets. The advanced characteristics of modern GPGPU architectures are exploited and combined with an out-of-core framework in order to provide a more flexible, scalable and efficient implementation of these algorithms and data structures on single GPUs and on GPU clusters. To improve visual perception and understanding, the use of novel 3D display technology based on a light-field approach is introduced. This kind of device allows multiple naked-eye users to perceive virtual objects floating inside the display workspace, exploiting stereo and horizontal parallax. A set of specialized interactive illustrative techniques capable of providing different contextual information in different areas of the display is reported, as well as an out-of-core CUDA-based ray-casting engine with a number of improvements over current GPU volume ray-casters. The possibilities of the system are demonstrated by the multi-user interactive exploration of 64-GVoxel datasets on a 35-MPixel light-field display driven by a cluster of PCs.
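    The output-sensitive behaviour described here typically comes from a traversal that refines a multi-resolution structure only while a node's projected size exceeds a screen-space tolerance, so cost tracks the image rather than the dataset. The sketch below illustrates such a greedy refinement over an octree; the node layout, error metric, and budget handling are assumptions made for the example, not the thesis's exact scheme.

```python
# Greedy working-set selection over a multiresolution octree.
import heapq

class Node:
    def __init__(self, size, distance, children=()):
        self.size = size            # world-space edge length of the brick
        self.distance = distance    # distance from the viewpoint
        self.children = children

    def projected_error(self):
        return self.size / max(self.distance, 1e-6)   # crude screen-space estimate

def select_working_set(root, tolerance, budget):
    """Always split the node with the largest projected error, within a budget."""
    heap = [(-root.projected_error(), id(root), root)]
    selected = []
    while heap and len(selected) + len(heap) < budget:
        err, _, node = heapq.heappop(heap)
        if -err <= tolerance or not node.children:
            selected.append(node)                     # render this resolution level
        else:
            for c in node.children:
                heapq.heappush(heap, (-c.projected_error(), id(c), c))
    selected.extend(n for _, _, n in heap)            # budget hit: keep coarse nodes
    return selected

leaves = [Node(1.0, d) for d in (2.0, 3.0, 9.0, 12.0)]
root = Node(8.0, 5.0, children=leaves)
print(len(select_working_set(root, tolerance=0.2, budget=16)))
```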

    Ultrasound-Augmented Laparoscopy

    Get PDF
    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases of the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is shown on a display spatially separate from the one showing the laparoscopic video. Reasoning about the geometry of hidden targets therefore requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets in space acquired through such cognitive mediation may be error-prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with the camera-centred coordinate system, and to render the registered image information in the context of the camera view such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of target locations in space.

    Virtual reality assisted fluorescence microscopy data visualisation and analysis for improved understanding of molecular structures implicated in neurodegenerative diseases

    Get PDF
    Thesis (PhD)--Stellenbosch University, 2020. Confocal microscopy is one of the major imaging tools used in the molecular life sciences. It delivers detailed three-dimensional data sets and is instrumental in biological analysis and research where structures of interest are labelled using fluorescent probes. Usually, this three-dimensional data is rendered as a projection onto a two-dimensional display. This can, however, lead to ambiguity in the visual interpretation of the structures of interest in the sample. Furthermore, analysis and region of interest (ROI) selection are also most commonly performed two-dimensionally. This may inadvertently lead to either the exclusion of relevant or the inclusion of irrelevant data points, consequently affecting the accuracy of the analysis. We present a virtual reality (VR) based system that allows, firstly, precise region of interest selection and colocalisation analysis; secondly, the spatial visualisation of the correlation of colocalised fluorescence channels; and, thirdly, an analysis tool to automatically determine the localisation and presence of mitochondrial fission, fusion and depolarisation. The VR system allows the three-dimensionally reconstructed sample data set to be interrogated and analysed in a highly controlled and precise manner, using either fully immersive hand-tracking or a conventional handheld controller. We apply this system to the specific task of colocalisation analysis, an important tool in fluorescence microscopy. We evaluate our system interface by means of a set of user trials and show that, despite inaccuracies associated with the hand tracking, it is the most productive and intuitive interface compared to the handheld controller. Applying the VR system to biological sample analysis, we subsequently calculate several key colocalisation metrics using both two-dimensionally and three-dimensionally derived super-resolved structured-illumination-based data sets. Using a neuronal injury model, we investigate the change in colocalisation between two proteins of interest, Tau and acetylated α-tubulin, under control conditions as well as after 6 hours and again after 24 hours of neuronal injury. Applying the VR-based system, we demonstrate the ability to perform precise ROI selections of 3D structures for subsequent colocalisation analysis. We demonstrate that performing colocalisation analysis in three dimensions enhances its sensitivity, leading to a greater number of statistically significant differences than could be established using two-dimensionally based methods. Next, we propose a novel biological visual analysis method for the qualitative analysis of colocalisation. This method visualises the spatial distribution of the correlation between the underlying fluorescence channel intensities by using a colourmap. The method is evaluated using both synthetic data and biological fluorescence micrographs, and demonstrates a robust enhancement of the visualisation by indicating only truly colocalised regions. Mitochondrial fission, fusion and depolarisation events are important in cellular function and viability; however, quantitative analysis linked to the localisation of each event in its three-dimensional context has not previously been accomplished. We extend the VR system to analyse fluorescence-based time-lapse sequences of mitochondria and propose a new method to automatically determine the location and quantity of these mitochondrial events.
    The detected mitochondrial event locations can then be superimposed on the fluorescence z-stacks. We apply this method both to control samples and to cells treated with hydroxychloroquine sulphate (HCQ), and demonstrate how a subsequent quantitative description of the fission/fusion equilibrium as well as the extent of depolarisation can be determined. We conclude that virtual reality offers an attractive and powerful means to extend fluorescence-based microscopy sample navigation, visualisation and analysis. Three-dimensional VR-assisted ROI selection enables samples to be interrogated and assessed with greater precision, thereby exploiting the potential of fluorescence-based image analysis, such as colocalisation, in biomedical research. The automatic localisation and quantification of mitochondrial events can support research into mitochondrial function in healthy and diseased cells, where quantitative analysis of fission, fusion and depolarisation is of importance.
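    As an illustration of the kind of colocalisation metrics computed over VR-selected 3D ROIs, the sketch below evaluates Pearson's correlation and the Manders coefficients, both standard measures, on two synthetic channels restricted to a spherical ROI. The thresholds and the ROI shape are arbitrary choices for the example.

```python
# Pearson and Manders colocalisation over a 3D region of interest.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def manders(a, b, thresh_a=0.1, thresh_b=0.1):
    """M1: fraction of channel-A intensity where B is present, and vice versa."""
    m1 = a[b > thresh_b].sum() / a.sum()
    m2 = b[a > thresh_a].sum() / b.sum()
    return m1, m2

# Two synthetic fluorescence channels and a 3D spherical ROI selected "in VR".
shape = (64, 64, 64)
z, y, x = np.indices(shape)
roi = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
ch_a = np.random.rand(*shape)
ch_b = 0.6 * ch_a + 0.4 * np.random.rand(*shape)     # partially colocalised
print(pearson(ch_a[roi], ch_b[roi]), manders(ch_a[roi], ch_b[roi]))
```

    The colourmap-based visual method described above can be seen as mapping a local version of the same correlation onto each voxel rather than reporting a single scalar per ROI.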

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    Get PDF
    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Of these methods, raycasting is widely regarded as producing the highest quality renderings. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow sets of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering, and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research investigates the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms presents many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform.
    The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multiple-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery as a single high-resolution anatomical scan together with a set of low-resolution functional scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications were built to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations necessarily differ between the three platforms, the raycasting functionality and features are identical; the same fMRI dataset therefore results in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through the different time steps of the data. The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster built on NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4; the iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.
    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement for 3D mobile volume raycasting, which was previously only able to achieve under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
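    The thesis implements NIfTI input as custom C++ routines; purely as an illustration of the same data flow, the sketch below loads a 4D fMRI file and exposes one 3D volume per time step for animation or slider "scrubbing", using the nibabel Python library (a real library; the file path is a placeholder).

```python
# Load a 4D NIfTI file and index it per time step for playback or scrubbing.
import nibabel as nib
import numpy as np

img = nib.load("func_scan.nii.gz")         # placeholder path; 4D: (x, y, z, time)
data = img.get_fdata(dtype=np.float32)
n_steps = data.shape[3]

def volume_at(t):
    """One 3D volume per time step, e.g. to bind as a 3D texture for raycasting."""
    return data[..., t]

# A UI slider position in [0, 1] maps directly to a time-step index:
current = volume_at(min(int(0.5 * n_steps), n_steps - 1))
print(current.shape, n_steps)
```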