    Volumetric 3D Display System with Static Screen

    Current display technology relies on flat 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization based primarily on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of a displayed 3D image at its true (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to aid the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360° range of viewpoints without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology, dramatically changing the ways humans interact with computers and significantly improving the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within a cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically chosen such that each can be illuminated by a light ray from a high-resolution digital micromirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates the moving screen found in previous approaches, so there is no image jitter, and it provides an inherently parallel mechanism for 3D voxel addressing. High spatial resolution is achievable, and a full-color display is straightforward to implement. The system is low-cost and low-maintenance.
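
    A minimal sketch of how the voxel-placement step could be expressed, under assumed geometry: one engraved voxel per DMD mirror, placed along that mirror's projection ray at a staggered depth so every voxel is addressable by exactly one mirror. The grid size, chip width, focal length, and depth range are illustrative values, not the paper's parameters.

        import numpy as np

        DMD_W, DMD_H = 1024, 768      # mirror grid (assumed)
        DMD_MM = 14.0                 # active chip width in mm (assumed)
        FOCAL = 50.0                  # projection focal length in mm (assumed)
        Z_MIN, Z_MAX = 20.0, 80.0     # depth range inside the glass block, mm

        def voxel_positions(seed=0):
            """Return (N, 3) engraving targets, one voxel per DMD mirror."""
            rng = np.random.default_rng(seed)
            u, v = np.meshgrid(np.arange(DMD_W), np.arange(DMD_H))
            u = (u - DMD_W / 2) / DMD_W            # normalised mirror coordinates
            v = (v - DMD_H / 2) / DMD_H
            z = rng.uniform(Z_MIN, Z_MAX, size=u.shape)  # distinct depth per ray
            x = u * DMD_MM * z / FOCAL             # simple pinhole projection
            y = v * DMD_MM * z / FOCAL
            return np.stack([x, y, z], axis=-1).reshape(-1, 3)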

    Parallelized Ray Casting Volume Rendering and 3D Segmentation with Combinatorial Map

    Rapid development of digital technology has enabled the real-time volume rendering of scientific data, in particular large microscopy datasets. In general, volume rendering techniques project 3D discrete datasets onto 2D image planes; the generated views are transparent and use designated colors that are not necessarily the "real" colors. Volume rendering first requires a processing method that assigns colors and transparency coefficients to different regions; the final image is then determined by the viewpoint and the location of the dataset. Popular techniques include ray casting, splatting, shear warp, and texture-based volume rendering. Of particular interest is ray casting, as it permits the display of structures interior to a dataset and can render complex objects such as skeleton and muscle. However, ray casting requires large amounts of memory and suffers from long processing times. One way to address this is to parallelize its implementation on programmable graphics hardware. This thesis proposes a GPU-based ray casting algorithm that can render a 3D volume in real time. In addition to implementing volume rendering on programmable graphics hardware to decrease execution times, 3D image segmentation techniques can be used to increase execution speeds. In 3D image segmentation, the dataset is partitioned into smaller regions based on specific properties. By using a 3D segmentation method in volume rendering applications, users can extract individual objects from within the 3D dataset for rendering and further analysis. This thesis proposes a 3D segmentation algorithm with a combinatorial map that can be parallelized on graphics processing units.
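
    The per-ray inner loop that such a GPU kernel parallelizes (typically one thread per screen pixel) can be sketched on the CPU as follows; the transfer function and sampling parameters are illustrative stand-ins, not the thesis implementation.

        import numpy as np

        def transfer(sample):
            """Toy transfer function: map a scalar in [0, 1] to (rgb, alpha)."""
            rgb = np.array([sample, 0.5 * sample, 1.0 - sample])
            return rgb, sample ** 2

        def cast_ray(volume, origin, direction, step=0.5, max_steps=512):
            """Front-to-back compositing along one ray through a 3D scalar volume."""
            color, opacity = np.zeros(3), 0.0
            pos = np.asarray(origin, dtype=float)
            direction = np.asarray(direction, dtype=float)
            for _ in range(max_steps):
                idx = np.floor(pos).astype(int)
                if np.any(idx < 0) or np.any(idx >= volume.shape):
                    break                                # ray left the volume
                rgb, a = transfer(volume[tuple(idx)])
                color += (1.0 - opacity) * a * rgb       # front-to-back blend
                opacity += (1.0 - opacity) * a
                if opacity > 0.99:                       # early ray termination
                    break
                pos += step * direction
            return color, opacity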

    Recent results in rendering massive models on horizontal parallax-only light field displays

    In this contribution, we report on specialized out-of-core multiresolution real-time rendering systems able to render massive surface and volume models on a special class of horizontal parallax-only light field displays. The displays are based on a specially arranged array of projectors emitting light beams onto a holographic screen, which performs the optical transformation needed to compose these beams into a continuous 3D view. The rendering methods employ state-of-the-art out-of-core multiresolution techniques that correctly project geometry onto the display and dynamically adapt model resolution by taking into account the particular spatial accuracy characteristics of the display. The programmability of latest-generation graphics architectures is exploited to achieve interactive performance. As a result, multiple freely moving naked-eye viewers can inspect and manipulate virtual 3D objects that appear to them to float at fixed physical locations. The approach provides rapid visual understanding of complex multi-gigabyte surface models and volumetric datasets.
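
    A minimal sketch of the resolution-adaptation test such a renderer might apply: refine a multiresolution node only while its object-space error exceeds what the display can resolve at that node's depth. The linear accuracy model and its constants are assumptions, reflecting only the fact that these displays lose spatial accuracy away from the screen plane.

        def display_accuracy(depth_mm, s0=0.5, k=0.02):
            """Smallest feature (mm) resolvable at |depth| from the screen plane."""
            return s0 + k * abs(depth_mm)   # accuracy degrades away from the screen

        def needs_refinement(node_error_mm, node_depth_mm):
            """Refine a node while its geometric error is visible on the display."""
            return node_error_mm > display_accuracy(node_depth_mm)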

    A volumetric display for visual, tactile and audio presentation using acoustic trapping

    Science-fiction movies such as Star Wars portray volumetric systems that provide not only visual but also tactile and audible 3D content. Displays based on swept volume surfaces, holography, optophoretics, plasmonics, or lenticular lenslets can create 3D visual content without the need for glasses or additional instrumentation. However, they are slow, have limited persistence-of-vision (POV) capabilities, and, most critically, rely on operating principles that cannot also produce tactile and auditory content. Here, we present for the first time a Multimodal Acoustic Trap Display (MATD): a mid-air volumetric display that can simultaneously deliver visual, auditory, and tactile content, using acoustophoresis as the single operating principle. Our system acoustically traps a particle and illuminates it with red, green, and blue light to control its colour as it quickly scans through the display volume. Using time multiplexing with a secondary trap, amplitude modulation, and phase minimization, the MATD delivers simultaneous auditory and tactile content. The system demonstrates particle speeds of up to 8.75 m/s and 3.75 m/s in the vertical and horizontal directions, respectively, offering particle manipulation capabilities superior to other optical or acoustic approaches demonstrated to date. Beyond enabling simultaneous visual, tactile, and auditory content, our approach and techniques offer opportunities for non-contact, high-speed manipulation of matter, with applications in computational fabrication and biomedicine.
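
    A back-of-the-envelope sketch of what those peak speeds buy in persistence-of-vision terms: the path length the particle can trace per refresh. The speeds come from the abstract; the 10 Hz POV refresh rate is an assumed round number, not the paper's figure.

        V_VERT = 8.75    # m/s, vertical peak speed (from the abstract)
        V_HORZ = 3.75    # m/s, horizontal peak speed (from the abstract)
        POV_HZ = 10.0    # assumed refresh rate for a flicker-free image

        for name, v in [("vertical", V_VERT), ("horizontal", V_HORZ)]:
            # path the particle can trace during one POV frame, in centimetres
            print(f"{name}: {100 * v / POV_HZ:.1f} cm per {1 / POV_HZ:.2f} s frame")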

    Tomographic X‐ray scattering based on invariant reconstruction: analysis of the 3D nanostructure of bovine bone

    Small-angle X-ray scattering (SAXS) is an effective characterization technique for multi-phase nanocomposites. The structural complexity and heterogeneity of biological materials require the development of new techniques for the 3D characterization of their hierarchical structures. Emerging SAXS tomographic methods allow reconstruction of the 3D scattering pattern in each voxel but are costly in terms of synchrotron measurement time and computing time. To address this problem, an approach has been developed based on the reconstruction of SAXS invariants to allow for fast 3D characterization of nanostructured inhomogeneous materials. SAXS invariants are scalars replacing the 3D scattering patterns in each voxel, thus simplifying the 6D reconstruction problem to several 3D ones. Standard procedures for tomographic reconstruction can be directly adapted for this problem. The procedure is demonstrated by determining the distribution of the nanometric bone mineral particle thickness (T parameter) throughout a macroscopic 3D volume of bovine cortical bone. The T parameter maps display spatial patterns of particle thickness in fibrolamellar bone units. Spatial correlation between the mineral nanostructure and microscopic features reveals that the mineral particles are particularly thin in the vicinity of vascular channels.
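
    A minimal sketch of the kind of scalar that can replace a full scattering pattern in each voxel: a q²-weighted integral of the azimuthally averaged intensity, i.e. the classical Porod invariant. The paper's exact invariants may differ; this is illustrative only. Because each voxel then carries a single scalar, standard scalar tomographic reconstruction (e.g. filtered back-projection) applies directly, as the abstract notes.

        import numpy as np

        def saxs_invariant(q, intensity):
            """Integrate q^2 * I(q) over the measured q range (trapezoidal rule)."""
            return np.trapz(q**2 * intensity, q)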

    An Interactive Concave Volume Clipping Method Based on GPU Ray Casting with Boolean Operation

    Volume clipping techniques can display inner structures and avoid the difficulty of specifying an appropriate transfer function. We present an interactive concave volume clipping method that implements both rendering and Boolean operations on the GPU. Because common analytical convex objects, such as polyhedra and spheres, are fully determined by a few parameters, implementing concave volume clipping with Boolean operations on the GPU consumes very little video memory. The intersection, subtraction, and union operations are implemented on the GPU by converting the 3D Boolean operations into 1D Boolean operations. To enhance the visual effect, a pseudo-color-based rendering model is proposed and the Phong illumination model is applied to the clipped surfaces. Users can select a color scheme from several pre-defined or user-specified schemes to obtain clear views of inner anatomical structures. Finally, several experiments were performed on a standard PC with a GeForce FX8600 graphics card. Experimental results show that the three basic Boolean operations are performed correctly and that our approach can freely clip and visualize volumetric datasets at interactive frame rates.
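
    A minimal CPU sketch of the 3D-to-1D reduction the abstract describes: along a single viewing ray, each convex clip object contributes one parameter interval [t_in, t_out], so intersection, subtraction, and union of clip objects reduce to interval arithmetic on those spans. The function names are illustrative, not the paper's API.

        def intersect(a, b):
            """Intersection of two [t0, t1] intervals (None if disjoint)."""
            lo, hi = max(a[0], b[0]), min(a[1], b[1])
            return (lo, hi) if lo < hi else None

        def subtract(a, b):
            """a minus b: up to two remaining sub-intervals."""
            parts = []
            if b[0] > a[0]:
                parts.append((a[0], min(a[1], b[0])))
            if b[1] < a[1]:
                parts.append((max(a[0], b[1]), a[1]))
            return [p for p in parts if p[0] < p[1]]

        def union(a, b):
            """Union of two intervals: one merged span, or both if disjoint."""
            if intersect(a, b) or a[1] == b[0] or b[1] == a[0]:
                return [(min(a[0], b[0]), max(a[1], b[1]))]
            return sorted([a, b])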

    3D visualization of bioerosion in archaeological bone

    Palaeoradiology is increasingly used in the archaeological and forensic sciences as a minimally invasive alternative to traditional histological methods for investigating bone microanatomy and its destruction by diagenetic processes. Taphonomic studies using microCT scanning are gaining an ever more important role in the effort to better understand ancient mortuary practices. It was recently demonstrated that 2D virtual sections obtained by microCT scanning of intact samples are comparable to physical sections for the rating and diagnosis of bioerosion in archaeological bone. Importantly, volume image data obtained from tomographic methods also allow the rendering and analysis of 3D models. Building on these methods, we provide (1) detailed descriptions of bioerosion in 3D volume renderings, virtual sections, and traditional micrographs, and (2) accessible techniques for the visualization of bioerosion in skeletal samples. The dataset is based on twenty-eight cortical bone samples, including twenty femora (of which five are cremated), two ribs, two parietals, one mandibular ramus, one humerus, and two faunal long bones from five archaeological sites in Lower Austria dating from the Early Neolithic to the Late Iron Age. Notably, we reduce the need for time-consuming image segmentation by sequentially applying two noise-reducing, edge-preserving filters and using an image-display transfer function that visualizes bioerosion as well as Haversian and Volkmann canal structure and density in 3D. In doing so we are also able to visualize in 3D the invasion of canals by microbiota, which has previously only been reported in 2D sections. Unlike conventional thin sections, the 3D volume images shown here are easy to create and interpret, even for archaeologists inexperienced in histology, and readily facilitate the illustration and communication of microtaphonomic effects.
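
    A minimal sketch of such a segmentation-free pipeline: two sequential noise-reducing, edge-preserving filters, followed by an opacity transfer function for 3D display. The specific filters here (a median filter, then total-variation denoising) are stand-ins, since the abstract does not name the paper's choices.

        import numpy as np
        from scipy.ndimage import median_filter
        from skimage.restoration import denoise_tv_chambolle

        def preprocess(ct_volume):
            """Denoise a microCT volume while preserving canal and tunnel edges."""
            smoothed = median_filter(ct_volume, size=3)             # pass 1
            smoothed = denoise_tv_chambolle(smoothed, weight=0.05)  # pass 2
            return smoothed

        def opacity_tf(volume, lo, hi):
            """Piecewise-linear opacity ramp highlighting the chosen value range."""
            return np.clip((volume - lo) / (hi - lo), 0.0, 1.0)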

    A New Application for Displaying and Fusing Multimodal Data Sets

    A recently developed, freely available application specifically designed for the visualization of multimodal data sets is presented. The application allows multiple 3D data sets of the same subject, such as CT (X-ray computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), and SPECT (single-photon emission computed tomography), to be viewed simultaneously. This is done by keeping the spatial location being viewed synchronized across all modalities and by providing fused views in which multiple data sets are displayed as a single volume. Different options for the fused views are provided by plug-ins. Typical plug-ins include color overlays and interlacing, but more complex plug-ins, such as those based on different color spaces and on component analysis techniques, are also supported. Corrections are made for resolution differences and for user preferences in contrast and brightness. Pre-defined and custom color tables can be used to enhance the viewing experience. In addition to these essential capabilities, multiple options are provided for mapping 16-bit data sets onto an 8-bit display, including windowing, automatically and dynamically defined tone transfer functions, and histogram-based techniques. The 3D data sets can be viewed not only as a stack of images but also as three orthogonal cross sections through the volume. More advanced volumetric displays of both individual data sets and fused views are also provided, including the common MIP (maximum intensity projection), with and without depth correction, for both individual data sets and multimodal data sets created using a fusion plug-in.
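
    A minimal sketch of two of the described operations: classic window/level mapping of a 16-bit image onto 8 bits, and a simple color-overlay fusion of two registered modalities. The function names and the alpha blend are illustrative, not the application's plug-in API.

        import numpy as np

        def window(img16, center, width):
            """Map a 16-bit image to 8-bit via window centre and width."""
            lo = center - width / 2.0
            out = (img16.astype(float) - lo) / width
            return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

        def overlay(ct8, pet8, alpha=0.4):
            """Grey CT base with a red PET overlay (colour-overlay fusion)."""
            rgb = np.stack([ct8, ct8, ct8], axis=-1).astype(float)
            rgb[..., 0] = (1 - alpha) * rgb[..., 0] + alpha * pet8
            return rgb.astype(np.uint8)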

    Towards Automatic Feature-based Visualization

    Visualizations are well suited to communicating large amounts of complex data. With increasing resolution in the spatial and temporal domains, simple imaging techniques reach their limits, as it is difficult to display multiple variables in 3D or to analyze long video sequences. Feature detection techniques reduce the dataset to its essential structures and allow for a highly abstracted representation of the data. However, current feature detection algorithms commonly rely on a detailed description of each individual feature. In this paper, we present a feature-based visualization technique that is based solely on the data. Using concepts from computational mechanics and information theory, a measure called local statistical complexity is defined that extracts distinctive structures in the dataset. Local statistical complexity assigns each position in the (multivariate) dataset a scalar value indicating regions with extraordinary behavior. Local structures with high local statistical complexity form the features of the dataset. Volume rendering and iso-surfacing are used to visualize the automatically extracted features. To illustrate the ability of the technique, we use examples from diffusion and flow simulations in two and three dimensions.
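
    A toy sketch of a per-position measure in this spirit: gather a local neighbourhood around every sample, cluster the neighbourhoods (k-means here is a crude stand-in for the causal-state reconstruction the method actually uses), and score each position by the negative log-probability of its cluster, so rare local behaviour receives high values.

        import numpy as np
        from sklearn.cluster import KMeans

        def local_complexity(field, radius=2, n_states=16, seed=0):
            """Return a scalar 'surprise' map for a 2D field (toy version)."""
            r, (h, w) = radius, field.shape
            patches = np.array([
                field[i - r:i + r + 1, j - r:j + r + 1].ravel()
                for i in range(r, h - r) for j in range(r, w - r)
            ])
            labels = KMeans(n_clusters=n_states, random_state=seed,
                            n_init=4).fit_predict(patches)
            p = np.bincount(labels, minlength=n_states) / labels.size
            surprise = -np.log2(p[labels])       # rare clusters score high
            return surprise.reshape(h - 2 * r, w - 2 * r)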