
    A Data-Virtualization System for Large Model Visualization

    Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources. In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts. Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. 
Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray casting approach facilitating correct visual interaction between distinct translucent model components, while increasing rendering efficiency by reducing the processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions between different data types without the application of costly culling techniques. The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability to the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
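
    The minimum-maximum quadtree at the heart of the height-field acceleration structure can be illustrated compactly. The sketch below is a simplified stand-in for the thesis implementation, not the implementation itself; the grid size, node layout, and skip test are our assumptions. It builds the tree bottom-up and shows the empty-space test that lets a ray skip an entire node when the ray segment stays above the node's maximum height.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct MinMax { float lo, hi; };

        struct MinMaxQuadtree {
            int leafDim;                              // height field is leafDim x leafDim (power of two)
            std::vector<std::vector<MinMax>> levels;  // levels[0] = 1x1 root, last = finest level

            void build(const std::vector<float>& h, int dim) {
                leafDim = dim;
                int nLevels = 1;
                for (int d = dim; d > 1; d /= 2) ++nLevels;
                levels.assign(nLevels, {});
                // The finest level stores the height samples themselves.
                auto& fine = levels.back();
                fine.resize(dim * dim);
                for (int i = 0; i < dim * dim; ++i) fine[i] = {h[i], h[i]};
                // Each coarser node takes the min/max of its four children.
                for (int l = nLevels - 2; l >= 0; --l) {
                    int d = 1 << l;
                    levels[l].resize(d * d);
                    for (int y = 0; y < d; ++y)
                        for (int x = 0; x < d; ++x) {
                            MinMax m = {1e30f, -1e30f};
                            for (int cy = 0; cy < 2; ++cy)
                                for (int cx = 0; cx < 2; ++cx) {
                                    MinMax c = levels[l + 1][(2 * y + cy) * 2 * d + (2 * x + cx)];
                                    m.lo = std::min(m.lo, c.lo);
                                    m.hi = std::max(m.hi, c.hi);
                                }
                            levels[l][y * d + x] = m;
                        }
                }
            }
            // A ray segment whose lowest point stays above the node's maximum
            // height cannot hit the surface inside that node, so the whole
            // node (and all its children) is skipped.
            bool canSkip(int level, int x, int y, float segMinHeight) const {
                int d = 1 << level;
                return segMinHeight > levels[level][y * d + x].hi;
            }
        };

        int main() {
            std::vector<float> h = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
            MinMaxQuadtree qt;
            qt.build(h, 4);
            std::printf("root max = %.1f, skip at h=20: %d\n",
                        qt.levels[0][0].hi, (int)qt.canSkip(0, 0, 0, 20.0f));
        }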

    QuadStack: An Efficient Representation and Direct Rendering of Layered Datasets

    We introduce QuadStack, a novel algorithm for volumetric data compression and direct rendering. Our algorithm exploits the data redundancy often found in layered datasets, which are common in science and engineering fields such as geology, biology, mechanical engineering, and medicine. QuadStack first compresses the volumetric data into vertical stacks, which are then compressed into a quadtree that identifies and represents the layered structures at the internal nodes. The associated data (color, material, density, etc.) and the shape of these layer structures are decoupled and encoded independently, leading to high compression rates (4× to 54× of the original voxel model memory footprint in our experiments). We also introduce an algorithm for value retrieval from the QuadStack representation and show that access has logarithmic complexity. Because of the fast access, QuadStack is suitable for efficient data representation and direct rendering. We show that our GPU implementation performs comparably in speed to state-of-the-art algorithms (18-79 MRays/s in our implementation), while maintaining a significantly smaller memory footprint.
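
    The logarithmic-access claim follows from the two nested searches the abstract describes: a quadtree descent over the (x, z) footprint, then a binary search within the vertical stack stored at the leaf. A minimal sketch of that lookup path follows; all names, the node layout, and the stack encoding are our assumptions, not the paper's.

        #include <algorithm>
        #include <cstdio>
        #include <memory>
        #include <vector>

        struct Stack {                    // one vertical stack of layer boundaries
            std::vector<float> tops;      // tops[i] = top height of layer i (ascending)
            std::vector<int>   material;  // material[i] = id of layer i
            int sample(float y) const {   // O(log #layers) via binary search
                auto it = std::upper_bound(tops.begin(), tops.end(), y);
                return material[it - tops.begin()];
            }
        };

        struct Node {
            std::unique_ptr<Node> child[4];  // all null in a leaf
            Stack stack;                     // shared stack for the whole leaf region
        };

        // Descend to the leaf covering (x, z), then sample the stack: O(log n) total.
        int sample(const Node& n, float x, float z, float y,
                   float x0, float z0, float size) {
            if (!n.child[0]) return n.stack.sample(y);
            float half = size * 0.5f;
            int ix = x >= x0 + half, iz = z >= z0 + half;
            return sample(*n.child[ix + 2 * iz], x, z, y,
                          x0 + ix * half, z0 + iz * half, half);
        }

        int main() {
            Node leaf;                          // degenerate single-leaf tree, two layers
            leaf.stack.tops = {0.5f, 1.0f};     // layer 0 up to y=0.5, layer 1 above
            leaf.stack.material = {0, 1};
            std::printf("material at y=0.25: %d\n",
                        sample(leaf, 0.1f, 0.1f, 0.25f, 0.0f, 0.0f, 1.0f));
        }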

    Ubiquitous volume rendering in the web platform

    176 p. The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that provide specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene; therefore, content developers do not need to know about the GPU. The thesis extends previous research on web-compatible volume data structures for WebGL, ray-casting hybrid surface and volumetric rendering, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with proposals to extend and improve the volume rendering component. The proposals are at an advanced stage towards their acceptance by the Web3D Consortium.
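
    The shader-generation idea can be illustrated with a toy sketch: walk the declared style nodes and concatenate a GLSL fragment for each, so content developers never touch GPU code. The node names and shader snippets below are invented for illustration and do not reproduce the thesis' actual X3D volume-style components or generated code.

        #include <cstdio>
        #include <string>
        #include <vector>

        // Assemble a fragment-shader body from the render-style nodes declared
        // in the scene; each recognized node contributes one GLSL snippet.
        std::string emitShader(const std::vector<std::string>& styleNodes) {
            std::string body = "  vec4 c = texture(uVolume, pos);\n";
            for (const auto& node : styleNodes) {
                if (node == "OpacityMapVolumeStyle")
                    body += "  c = texture(uTransferFn, c.r);\n";  // transfer-function lookup
                else if (node == "ShadedVolumeStyle")
                    body += "  c.rgb *= shade(pos);\n";            // hypothetical helper
            }
            return "void main() {\n" + body + "  accumulate(c);\n}\n";
        }

        int main() {
            std::puts(emitShader({"OpacityMapVolumeStyle", "ShadedVolumeStyle"}).c_str());
        }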

    Automated Digital Machining for Parallel Processors

    When a process engineer creates a tool path, a number of fixed decisions are made that inevitably produce sub-optimal results. This is because it is impossible to weigh all of the tradeoffs before generating the tool path. This research presents a methodology to support a process engineer's attempt to generate optimal tool paths by performing automated digital machining and analysis. The methodology automatically generates and evaluates tool paths based on parallel processing of digital part models and generalized cutting geometry. Digital part models are created by voxelizing STL files, and the resulting digital part surfaces are obtained by casting rays into the part model. Tool paths are generated from a general path template and updated based on generalized tool geometry and part surface information. The material removed by the generalized cutter as it follows the path is used to obtain path metrics. The paths are evaluated based on the path metrics of material removal rate, machining time, and amount of scallop. This methodology is a parallel-processing-accelerated framework suitable for generating tool paths in parallel, enabling the process engineer to rank and select the best tool path for the job.
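
    The evaluation loop described above, sweeping a generalized cutter along a candidate path over a voxelized part and scoring the removed material, can be sketched as follows. Every voxel test is independent, which is what makes the method amenable to parallel processors; the stock dimensions, ball-end cutter, and feed rate here are invented for illustration.

        #include <cstdio>
        #include <vector>

        constexpr int N = 64;            // voxels per axis
        constexpr float kVoxel = 0.5f;   // mm per voxel

        int main() {
            std::vector<char> part(N * N * N, 1);  // solid block of stock
            float r = 3.0f;                        // ball-end cutter radius, mm (assumed)
            long removed = 0;
            // Straight candidate path across the top of the stock, one step per voxel.
            for (int step = 0; step < N; ++step) {
                float cx = step * kVoxel, cy = N * kVoxel * 0.5f, cz = N * kVoxel - r;
                // Each voxel is tested independently against the cutter sphere;
                // this triple loop is the part a GPU would run in parallel.
                for (int z = 0; z < N; ++z)
                    for (int y = 0; y < N; ++y)
                        for (int x = 0; x < N; ++x) {
                            int i = (z * N + y) * N + x;
                            if (!part[i]) continue;
                            float dx = x * kVoxel - cx, dy = y * kVoxel - cy, dz = z * kVoxel - cz;
                            if (dx * dx + dy * dy + dz * dz < r * r) { part[i] = 0; ++removed; }
                        }
            }
            float feed = 100.0f;                    // mm/min, assumed
            float timeMin = (N * kVoxel) / feed;    // time to traverse the path
            float volume = removed * kVoxel * kVoxel * kVoxel;
            std::printf("removed %.1f mm^3, MRR %.1f mm^3/min\n", volume, volume / timeMin);
        }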

    Interactive high fidelity visualization of complex materials on the GPU

    Document submitted for peer review. To be published in Computers & Graphics, ISSN 0097-8493, 37:7 (Nov. 2013), p. 809–819. High fidelity interactive rendering is of major importance for footwear designers, since it allows experimenting with virtual prototypes of new products rather than producing expensive physical mock-ups. This requires capturing the appearance of complex materials by resorting to image-based approaches, such as the Bidirectional Texture Function (BTF), to allow subsequent interactive visualization while still maintaining the capability to edit the materials' appearance. However, interactive global illumination rendering of compressed editable BTFs with ordinary computing resources remains to be demonstrated. In this paper we demonstrate interactive global illumination by using a GPU ray tracing engine and the Sparse Parametric Mixture Model representation of BTFs, which is particularly well suited for BTF editing. We propose a rendering pipeline and data layout that allow for interactive frame rates, and we provide a scalability analysis with respect to the scene's complexity. We also include soft shadows from area light sources and approximate global illumination with ambient occlusion by resorting to progressive refinement, which quickly converges to a high-quality image while maintaining interactive frame rates by limiting the number of rays shot per frame. Acceptable performance is also demonstrated under dynamic settings, including camera movements, changing lighting conditions, and dynamic geometry. Work partially funded by QREN project nbr. 13114 TOPICShoe and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2011.
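
    The progressive-refinement scheme mentioned above, a fixed ray budget per frame averaged into an accumulation buffer, is easy to illustrate. The sketch below uses a stand-in random occlusion test in place of actual GPU ray tracing against the scene; the budget, pixel count, and visibility ratio are arbitrary assumptions.

        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            const int kPixels = 4, kRaysPerFrame = 8;   // tiny illustrative budget
            std::vector<float> accum(kPixels, 0.0f);    // per-pixel accumulation buffer
            std::mt19937 rng(42);
            std::uniform_real_distribution<float> u(0.0f, 1.0f);
            for (int frame = 0; frame < 100; ++frame) { // one small pass per rendered frame
                for (int p = 0; p < kPixels; ++p) {
                    float visible = 0.0f;
                    for (int rayIdx = 0; rayIdx < kRaysPerFrame; ++rayIdx)
                        visible += (u(rng) > 0.3f);     // stand-in occlusion test: 70% open sky
                    // Running mean over all frames so far: the image converges
                    // over time while each frame stays cheap.
                    accum[p] += (visible / kRaysPerFrame - accum[p]) / (frame + 1);
                }
            }
            std::printf("AO after 100 frames: %.3f (expected 0.700)\n", accum[0]);
        }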

    Smart Surgical Microscope based on Optical Coherence Domain Reflectometry

    Department of Biomedical Engineering. Over the past several decades, clinical needs have arisen that require advanced technologies in medicine. Optical coherence tomography (OCT), one of the more recently established medical imaging modalities, provides non-invasive, high-resolution cross-sectional images and is mainly used in ophthalmology. However, its limited penetration depth of 1-2 mm in biological samples restricts wider use. To integrate easily with existing medical tools and remain convenient for users, the OCT sample unit must be compact and simple. In this study, we developed a high-speed swept-source OCT (SS-OCT) system for advanced screening in otolaryngology. Signal sampling with a high-speed digitizer is synchronized using a clock signal from the swept laser source, whose trigger signal also synchronizes the movement of the scanning mirror. The SS-OCT system reliably provides high-throughput images, and two-axis scanning with galvanometer mirrors enables real-time acquisition of 3D data. A graphics processing unit (GPU) performs high-speed data processing through parallel programming and implements perspective-projection 3D OCT visualization with optimized ray casting techniques. In a clinical study in otolaryngology, OCT was applied to identify microscopic extrathyroidal extension (mETE) of papillary thyroid cancer (PTC). Whereas conventional ultrasonography detects mETE with an accuracy of around 60%, our approach improved detection accuracy to 84.1%. The mETE detection ratio was established by a pathologist analyzing the histologic images. In Chapter 3, we present a novel study combining an OCT system with a conventional surgical microscope. In the current surgical microscope set-up, the surgeon is provided only two-dimensional microscopic images through the eyepiece. Thus, image-guided surgery, which provides real-time image information about the tissues or organs, has been developed as an advanced surgical technique. This study presents a newly designed optical set-up for a smart surgical microscope that combines the OCT sample arm with an existing microscope. Specifically, we used a beam projector to overlay OCT images on the existing eyepiece view, demonstrating augmented reality images. In Chapter 4, optical coherence domain reflectometry (OCDR) is applied to develop novel microsurgical instruments. We introduce smart surgical forceps that use OCDR as a sensor providing high-speed, high-resolution distance information within the tissue. To attach the sensor to the forceps, a lensed fiber, a small and highly sensitive sensor, was fabricated; the results show it is only weakly affected by tilt angle. In addition, a piezo actuator compensates for hand tremor, reducing the 5 to 15 Hz component of human hand tremor. Finally, an M-mode OCT needle is proposed for microsurgical guidance in ophthalmic surgery. A stepwise transitional core (STC) fiber was used as a sensor to measure information within the tissue and was attached to a 26-gauge needle. We describe the modified OCT system, the position-guided needle design of the sample stage, and the algorithm of the M-mode OCT imaging software. The developed M-mode OCT needle was applied in animal studies using rabbit eyes, demonstrating big-bubble deep anterior lamellar keratoplasty (DALK) surgery for corneal transplantation.
Through this study, we propose a novel microsurgical instrument for lamellar keratoplasty and evaluate its feasibility against images from a conventional OCT system. In conclusion, as a foundational study toward augmented-reality-guided surgery with a smart surgical microscope, we expect OCT combined with a surgical microscope to find wide use. We demonstrated a novel microsurgical instrument that shares the light source and the various optical components of the integrated system. Information acquired through our integrated system could be a key means of meeting a wide range of clinical needs in the real world.
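
    As a rough illustration of the tremor-compensation idea (not the thesis' actual control loop), a first-order low-pass filter can separate slow intended motion from the 5-15 Hz tremor band; the residual is the correction a piezo actuator would apply. The sampling rate, cutoff frequency, and signal amplitudes below are assumptions.

        #include <cmath>
        #include <cstdio>

        int main() {
            const float kPi = 3.14159265f;
            const float fs = 1000.0f, fc = 2.0f;   // sample rate and cutoff (Hz), assumed
            const float alpha = 1.0f - std::exp(-2.0f * kPi * fc / fs);
            float filtered = 0.0f;
            for (int n = 0; n < 1000; ++n) {
                float t = n / fs;
                float intended = 0.5f * t;                                // slow intended motion (mm)
                float tremor = 0.05f * std::sin(2.0f * kPi * 10.0f * t);  // 10 Hz tremor component
                float raw = intended + tremor;                            // sensed tip position
                filtered += alpha * (raw - filtered);     // low-pass keeps the intended motion
                float correction = raw - filtered;        // tremor estimate for the actuator
                if (n % 250 == 0)
                    std::printf("t=%.2fs raw=%.3f correction=%.3f\n", t, raw, correction);
            }
        }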

    Efficient Ray Tracing For Mobile Devices

    The demand for mobile devices with higher graphics performance has increased substantially in the past few years. Most mobile applications that demand 3D graphics use commonly available frameworks, such as Unity and Unreal Engine. On mobile devices, these frameworks are built on top of OpenGL ES and use a graphics technique called rasterization, a simple approach that yields good performance without sacrificing graphic quality. However, rasterization cannot easily handle some physical phenomena of light (e.g., reflection and refraction). In order to support such effects, the graphics framework has to emulate them, leading to suboptimal results in terms of quality. Other techniques, such as ray tracing, do not require such emulation, as these phenomena of light are inherently accounted for. In this work, we first design and implement a 3D renderer that uses ray tracing to generate high-quality graphics for mobile devices. Our implementation yielded high-quality results but at a high computational cost, which impacted performance. To alleviate this problem, we then developed special algorithms and data structures to substantially improve the performance of the rendering engine. In our experiments, we achieved frame rates 7 to 15 times faster than the brute-force approach. Being able to render high-quality graphics with good performance can potentially revolutionize the mobile gaming industry. To the best of our knowledge, this has never before been implemented on commonly available devices, such as smartphones and tablets.
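
    A bounding volume hierarchy is one of the standard data structures behind speedups of this kind: each ray tests a few nested boxes instead of every object. The sketch below (a median-split BVH over spheres; the thesis' actual structures may differ) shows both build and traversal under those assumptions.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Vec { float x, y, z; };
        static float get(const Vec& v, int a) { return a == 0 ? v.x : a == 1 ? v.y : v.z; }
        static float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        struct Sphere { Vec c; float r; };
        struct Node { Vec lo, hi; int left = -1, right = -1, sphere = -1; };

        // Build a median-split BVH; each node stores the bounds of its subtree.
        static int build(std::vector<Node>& nodes, std::vector<Sphere>& s, int b, int e) {
            Node n{{1e30f, 1e30f, 1e30f}, {-1e30f, -1e30f, -1e30f}};
            for (int i = b; i < e; ++i) {
                n.lo.x = std::min(n.lo.x, s[i].c.x - s[i].r); n.hi.x = std::max(n.hi.x, s[i].c.x + s[i].r);
                n.lo.y = std::min(n.lo.y, s[i].c.y - s[i].r); n.hi.y = std::max(n.hi.y, s[i].c.y + s[i].r);
                n.lo.z = std::min(n.lo.z, s[i].c.z - s[i].r); n.hi.z = std::max(n.hi.z, s[i].c.z + s[i].r);
            }
            int id = (int)nodes.size();
            nodes.push_back(n);
            if (e - b == 1) { nodes[id].sphere = b; return id; }
            int mid = (b + e) / 2;   // median split on x only, for brevity
            std::nth_element(s.begin() + b, s.begin() + mid, s.begin() + e,
                             [](const Sphere& p, const Sphere& q) { return p.c.x < q.c.x; });
            int l = build(nodes, s, b, mid), r = build(nodes, s, mid, e);
            nodes[id].left = l; nodes[id].right = r;
            return id;
        }

        // Slab test: does the ray hit the node's box within [0, tMax)?
        static bool hitBox(const Node& n, Vec o, Vec d, float tMax) {
            float t0 = 0.0f, t1 = tMax;
            for (int a = 0; a < 3; ++a) {
                float inv = 1.0f / get(d, a);
                float tA = (get(n.lo, a) - get(o, a)) * inv;
                float tB = (get(n.hi, a) - get(o, a)) * inv;
                if (tA > tB) std::swap(tA, tB);
                t0 = std::max(t0, tA);
                t1 = std::min(t1, tB);
                if (t0 > t1) return false;
            }
            return true;
        }

        static float hitSphere(const Sphere& s, Vec o, Vec d) {   // d assumed unit length
            Vec oc = {o.x - s.c.x, o.y - s.c.y, o.z - s.c.z};
            float bq = dot(oc, d), cq = dot(oc, oc) - s.r * s.r, disc = bq * bq - cq;
            return disc < 0 ? -1.0f : -bq - std::sqrt(disc);
        }

        static void traverse(const std::vector<Node>& nodes, const std::vector<Sphere>& s,
                             int id, Vec o, Vec d, float& tBest) {
            const Node& n = nodes[id];
            if (!hitBox(n, o, d, tBest)) return;   // prune the whole subtree
            if (n.sphere >= 0) {
                float t = hitSphere(s[n.sphere], o, d);
                if (t > 0 && t < tBest) tBest = t;
                return;
            }
            traverse(nodes, s, n.left, o, d, tBest);
            traverse(nodes, s, n.right, o, d, tBest);
        }

        int main() {
            std::vector<Sphere> spheres;
            for (int i = 0; i < 100; ++i) spheres.push_back({{(float)i, 0, 5}, 0.4f});
            std::vector<Node> nodes;
            int root = build(nodes, spheres, 0, (int)spheres.size());
            float t = 1e30f;
            traverse(nodes, spheres, root, {0, 0, 0}, {0, 0, 1}, t);  // unit ray along +z
            std::printf("nearest hit t = %.2f\n", t);
        }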

    Highly Parallel Geometric Characterization and Visualization of Volumetric Data Sets

    Volumetric 3D data sets are being generated in many different application areas; examples include CAT scans and MRI data, 3D models of protein molecules represented by implicit surfaces, multi-dimensional numeric simulations of plasma turbulence, and stacks of confocal microscopy images of cells. The size of these data sets has been increasing, requiring the speed of analysis and visualization techniques to increase as well. Recent advances in processor technology have stopped increasing clock speed and instead begun increasing parallelism, resulting in multi-core CPUs and many-core GPUs. To take advantage of these new parallel architectures, algorithms must be explicitly written to exploit parallelism. In this thesis we describe several algorithms and techniques for volumetric data set analysis and visualization that are amenable to these modern parallel architectures. We first discuss modeling volumetric data with Gaussian Radial Basis Functions (RBFs). RBF representation of a data set has several advantages, including lossy compression, analytic differentiability, and analytic application of Gaussian blur. We also describe a parallel volume rendering algorithm that can create images of the data directly from the RBF representation. Next we discuss a parallel, stochastic algorithm for measuring the surface area of volumetric representations of molecules. The algorithm is suitable for implementation on a GPU and is also progressive, allowing it to return a rough answer almost immediately and refine that answer over time to the desired level of accuracy. After this we discuss the concept of Confluent Visualization, which allows the visualization of the interaction between a pair of volumetric data sets. The interaction is visualized through volume rendering, which is well suited to implementation on parallel architectures. Finally we discuss a parallel, stochastic algorithm for classifying stem cells as having been grown on a surface that induces differentiation or on one that does not. The algorithm takes as input 3D volumetric models of the cells generated from confocal microscopy. It builds on our algorithm for surface area measurement and, like that algorithm, is suitable for implementation on a GPU and is progressive.
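
    The analytic differentiability of the RBF representation is straightforward to demonstrate: a field expressed as a sum of Gaussians yields exact gradients (and hence surface normals) with no finite differences. The sketch below is a minimal illustration of that property; the centers, weights, and widths are arbitrary, not from the thesis.

        #include <cmath>
        #include <cstdio>

        struct Rbf { float cx, cy, cz, w, inv2s2; };   // center, weight, 1/(2*sigma^2)

        // Field value: sum of weighted Gaussians.
        static float eval(const Rbf* f, int n, float x, float y, float z) {
            float v = 0.0f;
            for (int i = 0; i < n; ++i) {
                float dx = x - f[i].cx, dy = y - f[i].cy, dz = z - f[i].cz;
                v += f[i].w * std::exp(-(dx * dx + dy * dy + dz * dz) * f[i].inv2s2);
            }
            return v;
        }

        // Analytic partial derivative d/dx (y and z are symmetric): each Gaussian
        // term contributes e * (-2 * k * dx), so no numerical differentiation is needed.
        static float gradX(const Rbf* f, int n, float x, float y, float z) {
            float g = 0.0f;
            for (int i = 0; i < n; ++i) {
                float dx = x - f[i].cx, dy = y - f[i].cy, dz = z - f[i].cz;
                float e = f[i].w * std::exp(-(dx * dx + dy * dy + dz * dz) * f[i].inv2s2);
                g += e * (-2.0f * f[i].inv2s2 * dx);
            }
            return g;
        }

        int main() {
            Rbf f[2] = {{0, 0, 0, 1.0f, 0.5f}, {1, 0, 0, 0.5f, 0.5f}};
            std::printf("value %.4f, d/dx %.4f\n",
                        eval(f, 2, 0.5f, 0, 0), gradX(f, 2, 0.5f, 0, 0));
        }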

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods. Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has improved to allow a set of scans capable of capturing anatomical movements like a beating heart. The capturing of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time. Academic research has advanced volume rendering and specifically fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or set of data, causing any advances to be problem-specific rather than a general capability. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones or tablets. This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices being used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field. Developing the same 4D volume rendering capabilities across dissimilar platforms poses many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at application run-time, but they require different coding implementations for each platform.
The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting poses unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multi-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research. The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform. Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. Therefore, the ability to correctly align and scale the volumes relative to each other was necessary. It was also necessary to develop a compositing method to combine data from both volumes into a single cohesive representation. Three prototype applications were built for the different platforms to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations had to differ between the three platforms, the raycasting functionality and features were identical; the same fMRI dataset therefore resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to “scrub” through the different time steps of the data. The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal. Real-time interaction was defined as achieving 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster comprised of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7” iPad Pro running iOS 9.3.4. The iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two different fMRI brain activity datasets with different voxel resolutions were used as test datasets.
Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently. This is a marked improvement over previous 3D mobile volume raycasting, which achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates, but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.
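
    The multi-volume compositing step described above can be sketched as a single front-to-back ray march that samples both the anatomical volume and the current functional time step at each position. The volume contents, blend rule, and transfer functions below are our assumptions for illustration, not the dissertation's implementation.

        #include <cstdio>
        #include <vector>

        struct Rgba { float r, g, b, a; };

        int main() {
            const int kSteps = 64;
            // Stand-ins for samples along one ray, already resampled into a
            // shared coordinate frame: anatomy density plus one functional time step.
            std::vector<float> anatomical(kSteps, 0.2f);
            std::vector<float> functional(kSteps, 0.0f);
            functional[30] = functional[31] = 0.9f;    // an "active" region at this time step
            Rgba out = {0, 0, 0, 0};
            for (int i = 0; i < kSteps && out.a < 0.99f; ++i) {  // front-to-back, early ray termination
                // Hypothetical transfer functions: anatomy in gray, activity in red.
                Rgba s = {anatomical[i], anatomical[i], anatomical[i], anatomical[i] * 0.05f};
                if (functional[i] > 0.5f) { s.r = 1.0f; s.g = s.b = 0.0f; s.a = 0.6f; }
                float w = (1.0f - out.a) * s.a;        // standard front-to-back compositing weight
                out.r += w * s.r; out.g += w * s.g; out.b += w * s.b; out.a += w;
            }
            std::printf("composited pixel: %.2f %.2f %.2f (alpha %.2f)\n",
                        out.r, out.g, out.b, out.a);
        }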