
    Internet-based Medical Data Rendering and Image Enhancement Using Webgl and Apache Server

    Internet-based medical data visualization has wide applications in distributed medical collaboration and treatment. It can be achieved through volume rendering, a key method for medical image exploration that has been applied in clinical fields such as disease diagnosis and image-guided interaction. In this project, we implement medical data processing and optical mapping methods for web-based medical data visualization and image enhancement. The Web Graphics Library (WebGL) is used with JavaScript for rendering 3D graphics in a web browser. WebGL supports GPU-based volume rendering, an efficient tool for visual analysis of medical data, built on vertex shaders and fragment shaders: the vertex shader provides space coordinates, and the fragment shader provides color. Network-based volume rendering is used to visualize data in 3D form. An image-processing method transfers the 3D dataset into multiple slices of 2D image data, and WebGL is employed to render the 3D medical data in web browsers. Volume rendering is accomplished using the volume ray casting algorithm implemented with WebGL2. We collect new medical data and process them to fit the web-based rendering environment. The submitted work explains the process of preparing and loading medical data suitable for rendering. All the visualized data can be enhanced with the developed methods to emphasize the image features of interest. We also add new control points for optical mapping and render medical data in a web browser in real time. The software platform runs on an Apache web server for network-based data visualization. The developed image enhancement and property control methods can improve medical data visualization in web browsers, which will be helpful for internet-based medical data analysis and exploration, as well as medical diagnosis and treatment.
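    The per-ray work that such a fragment shader performs can be sketched on the CPU as follows (a minimal Python sketch of front-to-back volume ray casting with a transfer function; the function names, nearest-neighbour sampling and step size are illustrative choices, not the project's actual shader code):

```python
import numpy as np

def ray_cast(volume, transfer_fn, origin, direction, step=0.5, max_steps=256):
    """Front-to-back compositing along one ray through a scalar volume.
    `transfer_fn` is the optical mapping: it maps a sample value to
    (rgb, alpha), the role the transfer-function control points play."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    color = np.zeros(3)
    alpha = 0.0
    for _ in range(max_steps):
        i, j, k = np.round(pos).astype(int)  # nearest-neighbour sampling for brevity
        if not (0 <= i < volume.shape[0]
                and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break                            # ray left the volume
        rgb, a = transfer_fn(volume[i, j, k])
        color += (1.0 - alpha) * a * np.asarray(rgb)  # front-to-back "over"
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                     # early ray termination
            break
        pos += step * d
    return color, alpha
```

    A real WebGL2 implementation evaluates this loop per fragment on the GPU, sampling a 3D texture with trilinear interpolation instead of the nearest-neighbour lookup used here.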

    Hybrid shear-warp rendering

    Shear-warp rendering is a fast and efficient method for visualizing a volume of sampled data, based on a factorization of the viewing transformation into a shear and a warp. In shear-warp rendering, the volume is resampled, composited and warped to obtain the final image. Many applications, however, require a mixture of polygonal and volumetric data to be rendered together in a single image. This paper describes a new approach for extending shear-warp rendering to simultaneously handle polygonal objects. A data structure, the zlist-buffer, is presented; it is essentially a multilayered z-buffer. With the zlist-buffer, an object-based scan conversion of polygons requires only a simple modification of the standard polygon scan-conversion algorithm. This paper shows how the scan conversion can be integrated with shear-warp rendering of run-length-encoded volume data to obtain quality images in real time. The utility and performance of the approach are also discussed using a number of test renderings.
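    The core idea of a multilayered z-buffer can be sketched like this (a hypothetical Python sketch: each pixel keeps a depth-sorted list of fragments rather than only the nearest one, so polygon fragments can later be composited in depth order together with volume samples; the class and method names are illustrative, not the paper's API):

```python
from collections import defaultdict

class ZListBuffer:
    """Multilayered z-buffer sketch: per-pixel fragment lists."""
    def __init__(self):
        self.frags = defaultdict(list)        # (x, y) -> [(z, rgba), ...]

    def insert(self, x, y, z, rgba):
        """Called by the (slightly modified) polygon scan converter."""
        self.frags[(x, y)].append((z, rgba))

    def composite(self, x, y):
        """Front-to-back 'over' compositing of one pixel's fragments."""
        color = [0.0, 0.0, 0.0]
        alpha = 0.0
        for z, (r, g, b, a) in sorted(self.frags[(x, y)]):
            w = (1.0 - alpha) * a
            color[0] += w * r
            color[1] += w * g
            color[2] += w * b
            alpha += w
            if alpha > 0.99:                  # fully opaque: stop early
                break
        return color, alpha
```

    In the hybrid renderer, resampled volume slices and scan-converted polygon fragments would both feed such per-pixel lists before the final warp.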

    GigaVoxels: ray-guided streaming for efficient and detailed voxel rendering

    Figure 1: Images show volume data consisting of billions of voxels rendered with our dynamic sparse-octree approach. Our algorithm achieves real-time to interactive rates on volumes far exceeding GPU memory capacity, thanks to efficient streaming based on a ray-casting solution. Essentially, the volume is used only at the resolution needed to produce the final image. Besides the gain in memory and speed, our rendering is inherently anti-aliased. We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation that depends on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly from information extracted during rendering. Our data structure exploits the fact that in CG scenes details are often concentrated on the interface between free space and clusters of density, and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), and of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps within a limited GPU memory budget. This is the author's version of the paper; the definitive version has been published in the I3D 2009 conference proceedings.
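    The "only at the resolution needed for the final image" rule boils down to a level-of-detail test: refine an octree node until one voxel projects to roughly one pixel. A simplified sketch (the real system also folds in occlusion feedback from the ray caster; names and the refinement criterion are illustrative):

```python
import math

def required_level(distance, fov_y, image_height, root_size, max_level):
    """Octree level at which one voxel projects to about one pixel."""
    # world-space extent covered by one pixel at this viewing distance
    pixel_world = 2.0 * distance * math.tan(fov_y / 2.0) / image_height
    # voxel size at level L is root_size / 2**L; refine until voxel <= pixel
    level = 0
    while level < max_level and root_size / (2 ** level) > pixel_world:
        level += 1
    return level
```

    Driving streaming with this test is what keeps the working set bounded by the image, not by the data set: distant or coarse regions never request their finest bricks.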

    3-D image segmentation and rendering

    Finding methods for detecting objects in computed tomography images has been an active area of research in the medical and industrial imaging communities. While the raw image can be readily displayed as 2-D slices, 3-D analysis and visualization require explicitly defined object boundaries when creating 3-D models. A basic task in 3-D image processing is the segmentation of an image, which classifies voxels/pixels into objects or groups. Segmentation is computationally intensive because of the huge volume of data. The objective of this research is to find an efficient way to identify, isolate and enumerate 3-D objects in a given data set consisting of tomographic cross-sections of a device under test. In this research, an approach to 3-D image segmentation and rendering of CT data has been developed. Objects are first segmented from the background and then separated from each other before 3-D rendering. In the first segmentation step, standard techniques of thresholding and image morphology provide a fast way to accomplish the work. In the second step, a new method based on the watershed transform has been developed to deal with objects with deep connections. The new method takes advantage of the similarity between consecutive cross-section images: the projections of the objects in the first image are taken as catchment basins for the second image, and only the pixels that differ in the second image are processed during segmentation. This not only saves the time needed to find catchment basins, but also splits objects with deep connections that cannot be separated by the watershed transform alone. A unique label is issued to each object after segmentation, so objects can be distinguished in each 2-D slice by their labels. This is a good preparation for 3-D rendering and quantitative analysis of each object. In this thesis, a novel 3-D rendering method has been developed using a surface rendering approach.
    A new and simpler rendering model has been devised under the assumptions that the light comes from the same side as the viewer and that both are situated at infinity. It works fast because only surface pixels are processed and interior pixels are left untouched. The surface intensity of the objects is attenuated by coefficients according to their distance from the viewer, and the objects are finally shown from top and side views. Volume rendering was demonstrated on sample images as well. In this research, the new method works several times faster than previous methods. After successful segmentation and rendering, the volume of each object can be easily calculated and the objects are recognizable in 3-D visualization. Keywords: 3-D Image Segmentation, 3-D Image Rendering, Watershed Transform, Surface Rendering, Thresholding, Morphological Transform
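    The slice-to-slice step can be sketched as a label-propagation pass (a minimal NumPy sketch, assuming binary foreground masks per slice: pixels overlapping a previously labelled object inherit its label, and only the remaining "different" pixels would be handed to the full watershed step; function and variable names are illustrative):

```python
import numpy as np

def propagate_labels(prev_labels, next_mask):
    """Use the segmented previous cross-section as catchment-basin seeds
    for the next one.

    prev_labels : int array, 0 = background, >0 = object label
    next_mask   : bool array, foreground of the next slice
    Returns (next_labels, unresolved): inherited labels, plus the mask of
    foreground pixels that still need the watershed transform."""
    next_labels = np.where(next_mask & (prev_labels > 0), prev_labels, 0)
    unresolved = next_mask & (next_labels == 0)   # pixels to process fully
    return next_labels, unresolved
```

    Because consecutive CT slices are similar, `unresolved` is typically a thin rim around each object, which is why seeding from the previous slice saves most of the basin-finding work.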

    Distributed Shared Memory for Roaming Large Volumes

    We present a cluster-based volume rendering system for roaming very large volumes. The system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global, software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
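    The cache-miss policy described above can be sketched as a three-tier lookup (a hypothetical Python sketch: local cache, then peer residency, then disk; class and method names are illustrative, not the paper's API):

```python
class DistributedPageCache:
    """Sketch of a node's view of the distributed shared memory:
    fetch from the local cache, else from a peer node that already
    holds the page, else (slowest) from local disk."""
    def __init__(self, peers, disk):
        self.local = {}        # page_id -> data (fast tier)
        self.peers = peers     # stand-ins for remote nodes' caches
        self.disk = disk       # page_id -> data (slowest tier)

    def fetch(self, page_id):
        if page_id in self.local:              # 1. local hit
            return self.local[page_id], "local"
        for peer in self.peers:                # 2. page resident on a peer?
            if page_id in peer.local:
                data = peer.local[page_id]
                self.local[page_id] = data     # keep a local copy
                return data, "peer"
        data = self.disk[page_id]              # 3. last resort: disk
        self.local[page_id] = data
        return data, "disk"
```

    The reported 4× speedup comes from tier 2: fetching a brick over two Gigabit links from a peer's memory is much faster than reading it from a local disk.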

    JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    BACKGROUND: Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. RESULTS: We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. CONCLUSION: We conclude that Java provides an appropriate environment for efficient development of these tools, and that techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.
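    Arbitrary re-sectioning amounts to sampling the volume on a plane spanned by two direction vectors. A minimal sketch of the idea (nearest-neighbour sampling in NumPy; this illustrates the geometry only and is not Woolz's or JAtlasView's actual implementation):

```python
import numpy as np

def oblique_slice(volume, origin, u, v, size):
    """Sample `volume` on the plane origin + r*u + c*v (r, c in [0, size)).
    u and v are the in-plane step vectors in voxel units."""
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    o = np.asarray(origin, float)
    out = np.zeros((size, size), dtype=volume.dtype)
    for r in range(size):
        for c in range(size):
            p = np.round(o + r * u + c * v).astype(int)  # nearest voxel
            if all(0 <= p[k] < volume.shape[k] for k in range(3)):
                out[r, c] = volume[tuple(p)]             # else leave 0
    return out
```

    With axis-aligned u and v this reduces to an ordinary orthogonal slice; tilting them yields the oblique sections the viewer exposes interactively.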

    COTS Cluster-based Sort-last Rendering: Performance Evaluation and Pipelined Implementation

    Sort-last parallel rendering is an efficient technique for visualizing huge datasets on COTS clusters. The dataset is subdivided and distributed across the cluster nodes. For every frame, each node renders a full-resolution image of its data using its local GPU, and the images are composited together using a parallel image-compositing algorithm. In this paper, we present a performance evaluation of standard sort-last parallel rendering methods and of the different improvements proposed in the literature. This evaluation is based on a detailed analysis of the different hardware and software components. We present a new implementation of sort-last rendering that fully overlaps CPU(s), GPU and network usage throughout the algorithm. We present experiments on a three-year-old 32-node PC cluster and on a 1.5-year-old 5-node PC cluster, both with Gigabit interconnect, showing volume rendering at 13 and 31 frames per second respectively, and polygon rendering at 8 and 17 frames per second respectively, on a 1024×768 render area. We show that our implementation outperforms or equals many other implementations and specialized visualization clusters.
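    The compositing stage for the polygon-rendering case can be sketched as per-pixel z-buffer compositing of the nodes' full-resolution images (a NumPy sketch of the depth test only; the volume-rendering case blends in depth order instead, and the paper's pipelined parallel distribution of this work is not shown):

```python
import numpy as np

def sort_last_composite(images, depths):
    """Each node rendered its data partition into a full-resolution
    image with a depth buffer; keep, per pixel, the sample nearest
    to the viewer."""
    images = np.stack(images)    # (nodes, H, W)
    depths = np.stack(depths)    # (nodes, H, W)
    nearest = np.argmin(depths, axis=0)               # winning node per pixel
    return np.take_along_axis(images, nearest[None], axis=0)[0]
```

    In practice the algorithms evaluated in the paper (e.g. binary swap or direct send) partition this reduction across the nodes so that compositing overlaps with rendering and network transfer rather than running as a serial final step.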