
    Rendering techniques for multimodal data

    Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on simultaneously rendering various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine, in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of Direct Multimodal Volume Rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be performed in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyzes how existing monomodal visualization algorithms can be extended to multiple datasets and it compares their efficiency and computational cost.
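
    To make the fusion-stage question above concrete, here is a minimal sketch, under assumed names and data layouts, of two common fusion points for a pair of co-registered voxel samples: fusing the scalar properties before classification versus classifying each modality with its own transfer function and fusing the resulting colors. The weighting scheme and lookup-table transfer functions are illustrative, not the strategies defined in the paper.

```python
import numpy as np

def classify(scalar, tf):
    """Map normalized scalar samples to RGBA via a 1D transfer-function lookup table."""
    idx = np.clip((np.asarray(scalar) * (len(tf) - 1)).astype(int), 0, len(tf) - 1)
    return tf[idx]

def fuse_properties(sample_a, sample_b, w, tf):
    """Property-level fusion: blend the two scalar values first, then classify once."""
    return classify(w * sample_a + (1.0 - w) * sample_b, tf)

def fuse_colors(sample_a, sample_b, w, tf_a, tf_b):
    """Color-level fusion: classify each modality with its own transfer function,
    then blend the resulting RGBA samples."""
    return w * classify(sample_a, tf_a) + (1.0 - w) * classify(sample_b, tf_b)
```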

    Interactive visualization tool for multi-channel confocal microscopy data in neurobiology research

    Confocal microscopy is widely used in neurobiology for studying the three-dimensional structure of the nervous system. Confocal image data are often multi-channel, with each channel resulting from a different fluorescent dye or fluorescent protein; one channel may have dense data, while another has sparse data; and there are often structures at several spatial scales: subneuronal domains, neurons, and large groups of neurons (brain regions). Even qualitative analysis can therefore require visualization using techniques and parameters fine-tuned to a particular dataset. Despite the plethora of volume rendering techniques that have been available for many years, the techniques in standard use in neurobiological research are somewhat rudimentary, such as looking at image slices or maximal intensity projections. Thus there is a real demand from neurobiologists, and biologists in general, for a flexible visualization tool that allows interactive visualization of multi-channel confocal data, with rapid fine-tuning of parameters to reveal the three-dimensional relationships of structures of interest. Together with neurobiologists, we have designed such a tool, choosing visualization methods to suit the characteristics of confocal data and a typical biologist's workflow. We use interactive volume rendering with intuitive settings for multidimensional transfer functions, multiple render modes and multi-views for multi-channel volume data, and embedding of polygon data into volume data for rendering and editing. As an example, we apply this tool to visualize confocal microscopy datasets of the developing zebrafish visual system.
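
    As an illustration of the kind of per-channel classification and compositing such a tool performs, below is a minimal sketch of front-to-back compositing of one ray through a multi-channel volume, with one transfer-function lookup table per channel. The function, the additive channel merge, and the early-ray-termination threshold are assumptions for illustration, not the tool's actual GPU implementation.

```python
import numpy as np

def composite_multichannel(ray_samples, channel_tfs):
    """Front-to-back alpha compositing of one ray through a multi-channel volume.

    ray_samples: array of shape (n_steps, n_channels), intensities in [0, 1].
    channel_tfs: one (256, 4) RGBA lookup table per channel.
    """
    color, alpha = np.zeros(3), 0.0
    for sample in ray_samples:
        # Classify every channel independently, then merge the RGBA samples.
        rgba = np.zeros(4)
        for value, tf in zip(sample, channel_tfs):
            rgba += tf[int(np.clip(value, 0.0, 1.0) * 255)]
        rgba = np.clip(rgba, 0.0, 1.0)
        # Standard front-to-back accumulation with early ray termination.
        color += (1.0 - alpha) * rgba[3] * rgba[:3]
        alpha += (1.0 - alpha) * rgba[3]
        if alpha > 0.99:
            break
    return color, alpha
```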

    Isosurface modelling of soft objects in computer graphics.

    There are many different modelling techniques used in computer graphics to describe a wide range of objects and phenomena. In this thesis, details of research into the isosurface modelling technique are presented. The isosurface technique is used in conjunction with more traditional modelling techniques to describe the objects needed in the different scenes of an animation. The isosurface modelling technique allows the description and animation of objects that would be extremely difficult or impossible to describe using other methods. The objects suitable for description using isosurface modelling are soft objects. Soft objects merge elegantly with each other, pull apart, bubble, ripple and exhibit a variety of other effects. The representation was studied in three phases of a computer animation project: modelling of the objects; animation of the objects; and the production of the images. The research clarifies and presents many algorithms needed to implement the isosurface representation in an animation system. The creation of a hierarchical computer graphics animation system implementing the isosurface representation is described. The scalar fields defining the isosurfaces are represented using a scalar field description language, created as part of this research, which is automatically generated from the hierarchical description of the scene. This language has many techniques for combining and building the scalar field from a variety of components. Surface attributes of the objects are specified within the graphics system. Techniques are described which allow the handling of these attributes along with the scalar field calculation. Many animation techniques specific to the isosurface representation are presented. By the conclusion of the research, a graphics system had been created which elegantly handles the isosurface representation in a wide variety of animation situations. This thesis establishes that isosurface modelling of soft objects is a powerful and useful technique which has wide application in the computer graphics community.
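
    For readers unfamiliar with the soft-object formulation, the following sketch shows a generic scalar field built as a sum of smooth per-component kernels, with the object's surface taken as an isosurface of that field. The Wyvill-style polynomial falloff and the threshold value are common choices assumed here for illustration; they are not the thesis's scalar field description language.

```python
import numpy as np

def soft_object_field(point, centers, radii):
    """Scalar field of a soft object: a sum of smooth per-component kernels.

    Each component contributes 1 at its center and falls to 0 at its radius of
    influence, using a Wyvill-style polynomial falloff (1 - d^2/r^2)^3.
    """
    field = 0.0
    for c, r in zip(centers, radii):
        d2 = float(np.sum((np.asarray(point, float) - np.asarray(c, float)) ** 2)) / (r * r)
        if d2 < 1.0:
            field += (1.0 - d2) ** 3
    return field

# The soft object's surface is the isosurface soft_object_field(p) == T for some
# threshold T (e.g. 0.5); nearby components raise each other's field values, so
# their surfaces blend smoothly and pull apart as the components move.
```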

    The Hybrid Octree: towards the definition of a multiresolution hybrid framework

    The Hybrid Octree (HO) is an octree-based representation scheme for coding, in a single model, an exact representation of surface and volume data. The HO is able to manipulate surface and volume data independently and efficiently. Moreover, it facilitates the visualization and composition of surface and volume data using graphics hardware. The HO definition and its construction algorithm are provided. Some examples are presented and the merits of the model are discussed.
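
    As a rough picture of what a hybrid surface/volume octree might look like, here is a minimal sketch of a node type whose leaves can carry a surface patch, a voxel block, or both. The field names and payload types are illustrative assumptions; the actual HO definition and construction algorithm are given in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HybridOctreeNode:
    """Illustrative octree node that mixes surface and volume payloads.

    Internal nodes hold eight children; leaves may carry a surface patch,
    a block of voxel samples, or both, so both representations can be
    stored and traversed through a single spatial structure.
    """
    children: Optional[List["HybridOctreeNode"]] = None  # eight children, or None for a leaf
    surface_patch: Optional[object] = None                # e.g. a triangle list clipped to the node
    voxel_block: Optional[object] = None                  # e.g. a small 3D array of samples

    def is_leaf(self) -> bool:
        return self.children is None
```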

    The Water Cycle at the Phoenix Landing Site, Mars

    The water cycle is critically important to understanding Mars system science, especially interactions between water and surface minerals or possible biological systems. In this thesis, the water cycle is examined at the Mars Phoenix landing site (68.2°N, 125.70°W), using data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), the High-Resolution Imaging Science Experiment (HiRISE), and the Phoenix Lander Surface Stereo Imager (SSI), and employing non-linear spectral mixing models. The landing site is covered for part of the year by the seasonal ice cap, a layer of CO2 and H2O ice that is deposited in mid-fall and sublimates in mid-spring. During the mid-summer, H2O ice is deposited on the surface at the Phoenix landing site. CO2 ice forms at the site during fall. The onset date of seasonal ices varies annually, perhaps due to variable levels of atmospheric dust. During fall and winter, the CO2 ice layer thickens and sinters into a slab of ice, ~30 cm thick. After the spring equinox, the CO2 slab breaks into smaller grains as it sublimates. Long before all of the CO2 ice is gone, H2O ice dominates the near-infrared spectra of the surface. Additional H2O ice is cold-trapped onto the surface of the CO2 ice deposit during this time. Sublimation during the spring is not uniform, and depends on the thermal inertia properties of the surface, including depth of ground ice. All of the seasonal ices have sublimated by mid-spring; however, a few permanent ice deposits remain throughout the summer. These are small water ice deposits on the north-facing slopes of Heimdal Crater and adjacent plateaus, and a small patch of mobile water ices that chases shadows in a small crater near the landing site. During the late spring and early summer, the site is free of surface ice. During this time, the water cycle is dominated by vapor exchange between the subsurface water ice deposits and the atmosphere. Two types of subsurface ice were found at the Phoenix landing site: a pore water ice that appears to be in diffusive equilibrium with the atmosphere, and an almost pure water ice deposit that is apparently not in equilibrium. In addition to vapor and solid phases of the water cycle, there is strong evidence of a liquid phase. Patches of concentrated perchlorate salt are observed in trenches dug by the lander. Perchlorate is believed to form at the landing site through atmospheric interactions, which deposit the salts on the surface. The salts are then dissolved and translocated to the subsurface by thin films of liquid water. These thin films may arise due to perchlorate interactions with the atmospheric water vapor or seasonal ices. It is possible that the winter CO2 ice slab may act as a greenhouse cap, trapping enough heat for the underlying fall-deposited water ice to react with the perchlorate to form thin films of brines. Alternatively, the brines may form when summertime water vapor interacts with perchlorate on the surface, when temperatures rise above the perchlorate brine eutectic.

    Landscape evolution and preservation of ice over one million years old quantified with cosmogenic nuclides 26Al, 10Be, and 21Ne, Ong Valley, Antarctica

    Antarctica has been glaciated for the past 35-40 million years (Denton et al., 1991) and evidence of periodic fluctuations of the Antarctic ice sheet (AIS) during the Cenozoic is recorded in the ice sheet itself, deep sea sediments, and glacial deposits on the continent (Ingólfsson, 2004). Quaternary continental records of AIS extent are limited to a few locations along the Transantarctic Mountains (TAM) and coastal continental boundaries (Denton et al., 1984; Denton et al., 1989). Records of atmospheric variation over time, glacial extents, and ice sheet responses to environmental changes are required to understand modern-day forces on climate and the environment and provide a context in which to relate modern observations to the past. In this framework, this paper evaluates the geomorphic stability of Ong Valley within the Central Transantarctic Mountains (CTM) and the preservation of Pleistocene-aged ice underneath an insulating lag deposit. Ong Valley in the Central Transantarctic Mountains (CTM) contains ancient buried glacier ice derived from past flow of the adjacent Argosy Glacier. The valley floor is covered with patterned ground and has three distinct glacial tills. Geomorphic and stratigraphic evidence shows that these deposits originate from sublimation of debris-laden glacier ice. Buried glacier ice is still present beneath the youngest two drifts, one of which is older than one million years. The tills above the ice record the repeated advances and stagnations of the Argosy Glacier. Cosmogenic exposure age dating of these tills provides ages, ice sublimation rates and regolith erosion rates that support the antiquity of the ice below. The oldest ice on Earth has an undisputed age of 800 Kya and is at the bottom of large ice sheets (Fischer et al., 2013). Access to it requires extensive drilling through kilometers of ice. Conversely, the ice in Ong Valley is preserved beneath only 1 m of till and is over 1 million years old. Geomorphically similar ice was found in Beacon Valley, but its age (~8.1 Mya) was inferred from dating of volcanic ash above the ice (Sugden et al., 1995). Subsequent analyses by other investigators suggest that the age of the ash may not be a good indicator for the age of the ice below it (Hindmarsh et al., 1998; Ng et al., 2005; Stone et al., 2000). Concentrations of cosmogenic 10Be, 26Al, and 21Ne were measured in regolith samples collected every ~10 cm in 1 m deep vertical transects through three tills in Ong Valley. Transects reach the buried ice surface in the two younger tills. Cosmogenic-nuclide concentrations in these transects are functions of: i) the age of the till and ice below it; ii) the rate of formation of the till by sublimation of underlying ice; and iii) the rate of surface erosion of the till. In general, a young till unit will have 26Al and 10Be concentrations that are primarily a function of till age; however, over time, 26Al and 10Be concentrations reach equilibrium with erosion, sublimation rates, and radioactive decay; thus, 26Al and 10Be concentrations in older tills primarily provide information about those rates. Conversely, the stable nuclide 21Ne only accumulates over time, which makes it useful in determining the age of older tills, whereas 26Al and 10Be provide minimum limiting ages. 26Al, 10Be, and 21Ne measurements in Ong Valley are consistent with a scenario in which tills are derived from progressive sublimation of glacial ice containing 10% by volume englacial debris.
26Al and 10Be concentrations in the youngest till constrain its emplacement age to 18.4 Kya. 21Ne nuclide concentrations in the two oldest tills are best explained by ice sublimation rates on the order of tens of m/Mya and surface erosion rates of the till on the order of m/Mya for at least 0.9 Mya to 1.5 Mya. Concentrations of nuclides in the bottom of the second drift suggest that local sublimation rates have increased slightly in the past 40-150 Kya. These observations imply that the ice below the middle drift is the oldest undisturbed glacier ice currently known on Earth and should provide ancient atmospheric records within one meter of Earth’s surface.
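
    For context on how such tills are dated, the sketch below gives the standard surface-concentration expression for a cosmogenic nuclide under constant exposure with steady surface erosion. The density and attenuation-length values are placeholder assumptions, and the model actually used for Ong Valley additionally accounts for till formation by sublimation of the underlying ice and for inherited nuclides, which this sketch omits.

```python
import numpy as np

def nuclide_concentration(t_yr, production_rate, decay_const, erosion_cm_yr,
                          density=2.0, attenuation=160.0):
    """Surface concentration (atoms/g) of a cosmogenic nuclide for a simple
    history of constant exposure with steady surface erosion.

    t_yr             exposure age, years
    production_rate  surface production rate, atoms/g/yr
    decay_const      radioactive decay constant, 1/yr (0 for stable 21Ne)
    erosion_cm_yr    surface erosion rate, cm/yr
    density          regolith density, g/cm^3 (placeholder value)
    attenuation      spallation attenuation length, g/cm^2 (placeholder value)
    """
    k = decay_const + density * erosion_cm_yr / attenuation
    if k == 0.0:
        return production_rate * t_yr  # stable nuclide, zero erosion
    return production_rate / k * (1.0 - np.exp(-k * t_yr))
```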

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most widely used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRIs can be used to capture any anatomical data over time, one of the more common uses of fMRI is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific as opposed to a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding number of different computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals’ interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient during application run-time, but they require different coding implementations for each platform.
The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting presents unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multiple-volume rendering. Additionally, real-time raycasting has never been successfully performed on a mobile device. Previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges will be addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data, it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and store functional imagery with a single high-resolution anatomical scan and a set of low-resolution functional scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications were built for the different platforms to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations necessarily differed between the three platforms, the raycasting functionality and features were identical. Therefore, the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications’ data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-node graphics cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4. The iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two different fMRI brain activity datasets with different voxel resolutions were used as test datasets.
Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement over previous 3D mobile volume raycasting, which achieved under one frame per second [2]. Both VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
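
    As a rough sketch of the data-handling steps described above (not the dissertation's custom C++ NIfTI reader or its GPU raycaster), the following Python example loads a 3D anatomical volume and a 4D functional series with the nibabel library and blends one functional time step over the anatomical grid. The nearest-neighbour upsampling and the fixed blend weight are simplifying assumptions; a real renderer would resample through the NIfTI affine transforms and composite per ray.

```python
import numpy as np
import nibabel as nib  # stand-in reader; the dissertation used custom C++ NIfTI routines

def load_fmri(anat_path, func_path):
    """Load a 3D anatomical volume and a 4D functional series from NIfTI files."""
    anat = nib.load(anat_path).get_fdata()   # shape (X, Y, Z)
    func = nib.load(func_path).get_fdata()   # shape (x, y, z, T)
    return anat, func

def composite_timestep(anat, func, t, func_weight=0.5):
    """Blend one functional time step over the anatomical volume.

    The low-resolution functional volume is nearest-neighbour upsampled onto
    the anatomical grid; a full renderer would resample via the NIfTI affines
    and composite the two volumes per ray instead of per voxel.
    """
    vol_t = func[..., t]
    scale = [a / f for a, f in zip(anat.shape, vol_t.shape)]
    idx = np.indices(anat.shape)
    up = vol_t[tuple((idx[i] / scale[i]).astype(int).clip(0, vol_t.shape[i] - 1)
                     for i in range(3))]
    return (1.0 - func_weight) * anat + func_weight * up
```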

    Time-varying volume visualization

    Volume rendering is a very active research field in Computer Graphics because of its wide range of applications in various sciences, from medicine to flow mechanics. In this report, we survey the state of the art in time-varying volume rendering. We state several basic concepts and then establish several criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, involved architectures, use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2D cross-sections of time-varying volume data. All the reviewed papers are classified into several tables based on these criteria and, finally, several conclusions are presented.
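
    To give a concrete flavor of the compression criterion used in the classification, here is a minimal, generic sketch of exploiting temporal coherence between consecutive time steps: a key frame plus thresholded per-step deltas. It is an illustrative scheme, not one of the specific techniques reviewed, and with a non-zero threshold the reconstruction is only approximate.

```python
import numpy as np

def delta_encode(volumes, threshold=0.01):
    """Encode a time-varying volume as a key frame plus sparse per-step deltas.

    Only voxels whose value changes by more than `threshold` between consecutive
    time steps are stored, exploiting temporal coherence.
    """
    key = np.asarray(volumes[0], dtype=float)
    deltas, prev = [], key
    for vol in volumes[1:]:
        diff = np.asarray(vol, dtype=float) - prev
        mask = np.abs(diff) > threshold
        deltas.append((np.flatnonzero(mask), diff[mask]))
        prev = np.asarray(vol, dtype=float)
    return key, deltas

def delta_decode(key, deltas):
    """Rebuild every time step from the key frame and the stored deltas.

    With threshold > 0 the reconstruction is approximate (lossy).
    """
    frames = [key.copy()]
    for idx, vals in deltas:
        frame = frames[-1].copy()
        frame.flat[idx] += vals
        frames.append(frame)
    return frames
```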