    Analysis of (iso)surface reconstructions: Quantitative metrics and methods

    Due to sampling processes, volumetric data is inherently discrete, and knowledge of the underlying continuous model is most often unavailable. Surface rendering techniques attempt to reconstruct the continuous model from the discrete data using isosurfaces. It is therefore natural to ask how accurate the reconstructed isosurfaces are with respect to the underlying continuous model. A reconstructed isosurface may look impressive when rendered (photorealism), but how well does it reflect reality (physical realism)?

    Users of volume visualization packages must be aware of the shortcomings of the algorithms used to produce the images so that they may properly interpret, and interact with, what they see. However, very little work has been done to quantify the accuracy of volumetric data reconstructions. Most analysis to date has been qualitative, using simple visual inspection to determine whether characteristics known to exist in the real-world object are present in the rendered image. Our research suggests metrics and methods for quantifying the physical realism of reconstructed isosurfaces.

    Physical realism is a many-faceted notion; in fact, a different metric could be defined for each physical property one wishes to consider. We have defined four metrics: Global Surface Area Preservation (GSAP), Volume Preservation (VP), Point Distance Preservation (PDP), and Isovalue Preservation (IVP). We present experimental results for each of these metrics and discuss their validity with respect to those results.

    We also present the Reconstruction Quantification (sub)System (RQS), which provides a flexible framework for measuring physical realism and can be embedded in existing visualization systems with little modification of the system itself. Two types of analysis can be performed: reconstruction analysis, which allows users to determine the accuracy of individual surface reconstructions, and algorithm analysis, which allows developers of visualization systems to determine the efficacy of a visualization system based on several reconstructions.
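Two of these metrics have simple geometric analogues: GSAP compares the area of the reconstructed surface, and VP its enclosed volume, against reference values for the continuous model. A minimal sketch of both measurements for a triangle-mesh isosurface (illustrative only, not the RQS implementation):

```python
import numpy as np

def surface_area(vertices, faces):
    # Sum of triangle areas: 0.5 * |(b - a) x (c - a)| per face.
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

def enclosed_volume(vertices, faces):
    # Signed volume via the divergence theorem: sum the signed volumes of
    # tetrahedra formed by each face and the origin (mesh must be closed
    # and consistently oriented).
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0

# Unit cube triangulated into 12 outward-facing triangles.
V = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
              [0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
F = np.array([[0,2,1],[0,3,2], [4,5,6],[4,6,7],
              [0,1,5],[0,5,4], [1,2,6],[1,6,5],
              [2,3,7],[2,7,6], [3,0,4],[3,4,7]])
area = surface_area(V, F)    # 6.0 for the unit cube
vol  = enclosed_volume(V, F) # 1.0
```

Comparing `area` and `vol` against the known values for the continuous model (here, a cube) is the essence of the GSAP and VP comparisons.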

    Supporting Focus and Context Awareness in 3D Modelling Tasks Using Multi-Layered Displays

    Most 3D modelling software has been developed for conventional 2D displays and, as such, lacks support for true depth perception. This makes polygonal 3D modelling tasks challenging, particularly when models are complex and consist of a large number of overlapping components (e.g. vertices, edges) and objects (i.e. parts). Research has shown that users of 3D modelling software often encounter a range of difficulties, which collectively can be defined as focus and context awareness problems. These include maintaining position and orientation awareness, as well as recognizing distance between individual components and objects in 3D space. In this paper, we present five visualization and interaction techniques we have developed for multi-layered displays to better support focus and context awareness in 3D modelling tasks. The results of a user study we conducted show that three of these five techniques improve users' 3D modelling task performance.

    Surface Shape Perception in Volumetric Stereo Displays

    In complex volume visualization applications, understanding the displayed objects and their spatial relationships is challenging for several reasons. One of the most important obstacles is that these objects can be translucent and can overlap spatially, making it difficult to understand their spatial structures. However, in many applications, for example medical visualization, it is crucial to have an accurate understanding of the spatial relationships among objects. The addition of visual cues has the potential to help human perception in these visualization tasks. Descriptive line elements, in particular, have been found to be effective in conveying shape information in surface-based graphics, as they sparsely cover a geometrical surface while consistently following its geometry. We present two approaches for applying such line elements to a volume rendering process and for verifying their effectiveness in volume-based graphics. This thesis reviews our progress to date in this area and discusses its effects and limitations. Specifically, it examines the volume renderer implementation that formed the foundation of this research, the design of the pilot study conducted to investigate the effectiveness of this technique, and the results obtained. It further discusses improvements designed to address the issues revealed by the statistical analysis. The improved approach is able to handle visualization targets with general shapes, making it better suited to real visualization applications involving complex objects.
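The volume renderer that underlies such a study is typically a ray caster that composites samples front to back along each viewing ray. A minimal single-ray sketch of that compositing step, with an illustrative transfer function (not the thesis's actual renderer):

```python
import numpy as np

def composite_ray(samples, transfer):
    # Front-to-back alpha compositing of scalar samples along one ray.
    # `transfer` maps a scalar sample to an (r, g, b, alpha) tuple.
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer(s)
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination once nearly opaque
            break
    return color, alpha

# Toy transfer function: brighter and more opaque for higher densities.
tf = lambda s: (s, s, s, 0.3 * s)
rgb, a = composite_ray([0.2, 0.8, 0.5], tf)
```

Descriptive line elements would enter this loop as an extra colour or opacity term at samples near the lines; the sketch shows only the base compositing they modulate.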

    Medical Data Visual Synchronization and Information interaction Using Internet-based Graphics Rendering and Message-oriented Streaming

    Rapid technological advances in medical devices make possible the generation of vast amounts of data containing massive quantities of diagnostic information. Interactively accessing and sharing the acquired data on the Internet is critically important in telemedicine. However, due to the lack of efficient algorithms and the high computational cost, collaborative medical data exploration on the Internet remains a challenging task in clinical settings. We therefore develop a web-based medical image rendering and visual synchronization software platform, in which novel algorithms are created for parallel data computing and image feature enhancement, and the Node.js and Socket.IO libraries are used to establish bidirectional connections between server and clients in real time. In addition, we design a new methodology for streaming medical information among all connected users, whose identities and input messages can be automatically stored in a database and retrieved in web browsers. The presented software framework provides multiple medical practitioners with immediate visual feedback and interactive information in applications such as collaborative therapy planning, distributed treatment, and remote clinical health care.
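The visual-synchronization idea, clients exchanging view state through a server that rebroadcasts it, can be sketched independently of Node.js and Socket.IO. The hub below is a hypothetical in-memory stand-in for the server; names such as `SyncHub` and `publish` are invented for illustration and are not the platform's API:

```python
import json

class SyncHub:
    """In-memory stand-in for a Socket.IO-style hub: every view-state
    update from one client is broadcast to all other clients."""
    def __init__(self):
        self.clients = {}  # client_id -> last known view state

    def connect(self, client_id):
        self.clients[client_id] = {}

    def publish(self, sender, state):
        # Round-trip through JSON, as the state would travel on the wire.
        msg = json.dumps({"from": sender, "state": state})
        delivered = []
        for cid in self.clients:
            if cid != sender:
                self.clients[cid] = json.loads(msg)["state"]
                delivered.append(cid)
        return delivered

hub = SyncHub()
hub.connect("alice"); hub.connect("bob")
hub.publish("alice", {"zoom": 2.0, "slice": 42})
```

After the call, every client except the sender holds the sender's latest view state, which is the behaviour a real Socket.IO broadcast provides over persistent connections.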

    Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance.

    OBJECTIVE: During epilepsy surgery it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. Augmented Reality (AR) provides a solution for combining the real environment with virtual models. However, AR usually requires the use of specialized displays, and its effectiveness in the surgery still needs to be evaluated. The objective of this research was to develop an alternative approach to provide enhanced visualization by fusing a direct (photographic) view of the surgical field with the 3D patient model during image guided epilepsy surgery. MATERIALS AND METHODS: We correlated the preoperative plan with the intraoperative surgical scene, first by a manual landmark-based registration and then by an intensity-based perspective 3D-2D registration for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray-casting. The algorithm was validated by a phantom study and also in the clinical environment with a neuronavigation system. RESULTS: In the phantom experiment, the 3D Mean Registration Error (MRE) was 2.43 ± 0.32 mm with a success rate of 100%. In the clinical experiment, the 3D MRE was 5.15 ± 0.49 mm with 2D in-plane error of 3.30 ± 1.41 mm. A clinical application of our fusion method for enhanced and augmented visualization for integrated image and functional guidance during neurosurgery is also presented. CONCLUSIONS: This paper presents an alternative approach to a sophisticated AR environment for assisting in epilepsy surgery, whereby a real intraoperative scene is mapped onto the surface model of the brain. In contrast to the AR approach, this method needs no specialized display equipment. 
Moreover, it requires minimal changes to existing systems and workflow, and is therefore well suited to the OR environment. In the phantom and in vivo clinical experiments, we demonstrate that the fusion method can achieve a level of accuracy sufficient for the requirements of epilepsy surgery.
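The reported 3D Mean Registration Error is simply the mean Euclidean distance between corresponding landmarks after the estimated rigid transform is applied. A sketch, assuming point correspondences are already known (the function name is illustrative):

```python
import numpy as np

def mean_registration_error(fixed_pts, moving_pts, R, t):
    # 3D MRE: mean Euclidean distance between fixed landmarks and the
    # moving landmarks mapped by the estimated rigid transform (R, t).
    mapped = moving_pts @ R.T + t
    return np.linalg.norm(fixed_pts - mapped, axis=1).mean()

# Toy check: landmarks offset by 1 mm along z under the identity transform.
fixed  = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.]])
moving = fixed + np.array([0., 0., 1.])
mre = mean_registration_error(fixed, moving, np.eye(3), np.zeros(3))  # 1.0 mm
```

A transform that undoes the offset (here, t = (0, 0, -1)) drives the MRE to zero, which is how registration quality is scored against ground-truth landmarks.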

    Platform Independent Real-Time X3D Shaders and their Applications in Bioinformatics Visualization

    Since the introduction of programmable Graphics Processing Units (GPUs) and procedural shaders, hardware vendors have each developed their own real-time shading language standard, none of which is fully platform independent. Although real-time programmable shader technology can be used in 3D applications on a single system, this platform dependence keeps shader technology out of 3D Internet applications. The primary purpose of this dissertation is to design a framework for translating different shader formats into platform-independent shaders and embedding them in eXtensible 3D (X3D) scenes for 3D web applications. The framework includes a back-end core shader converter, which translates shaders among different shading languages through a middle XML layer, and a shader library containing a basic set of shaders that developers can load and extend. The framework is then applied to several applications in biomolecular visualization.
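The middle XML layer can be thought of as a platform-neutral shader description that each back end consumes. The schema below is hypothetical (element names invented for illustration, not the dissertation's actual format), and the sketch shows only one direction, XML to GLSL uniform declarations:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML intermediate form for one shader's parameter interface.
XML_IR = """
<shader name="phong">
  <uniform name="lightPos" type="vec3"/>
  <uniform name="shininess" type="float"/>
</shader>
"""

def to_glsl_uniforms(xml_text):
    # Emit GLSL uniform declarations from the XML middle layer.
    root = ET.fromstring(xml_text)
    return [f"uniform {u.get('type')} {u.get('name')};"
            for u in root.findall("uniform")]

decls = to_glsl_uniforms(XML_IR)
```

A second emitter targeting another shading language would read the same XML, which is the point of routing every translation through one intermediate layer instead of writing pairwise converters.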

    Integration of multiple data types in 3-D immersive virtual reality (VR) environments

    Intelligent sensors have begun to play a key part in the monitoring and maintenance of complex infrastructures. Sensors have the capability not only to provide raw data but also to provide information by indicating the reliability of their measurements. The effect of this added information is a voluminous increase in the total data gathered. If an operator is required to perceive the state of a complex system, novel methods must be developed for sifting through enormous data sets. Virtual reality (VR) platforms are proposed as ideal candidates for performing this task: a virtual world allows the user to experience a complex system that is gathering a multitude of sensor data. Such environments are referred to as Integrated Awareness models. This thesis presents techniques for visualizing such multiple data sets, specifically graphical, measurement, and health data, inside a 3-D VR environment. The focus of this thesis is to develop pathways for generating the required 3-D models without sacrificing visual fidelity. The tasks include creating the visual representation, integrating multi-sensor measurements, creating user-specific visualizations, and evaluating the performance of the completed virtual environment.

    Development and evaluation of a novel method for in-situ medical image display

    Three-dimensional (3D) medical imaging, including computed tomography (CT), magnetic resonance (MR), and other modalities, has become a standard of care for diagnosis of disease and guidance of interventional procedures. As the technology to acquire larger, more detailed, and more informative medical images advances, so too must the technology to display, interact with, and interpret these data. This dissertation concerns the development and evaluation of a novel method for interaction with 3D medical images called "grab-a-slice," a movable, tracked stereo display. It is the latest in a series of displays developed in our laboratory that we describe as in-situ, meaning that the displayed image is embedded in a physical 3D coordinate system. As the display is moved through space, a continuously updated tomographic slice of a 3D medical image is shown on the screen, corresponding to the position and orientation of the display. The act of manipulating the display through a "virtual patient" preserves the perception of 3D anatomic relationships in a way that is not possible with conventional, fixed displays. The further addition of stereo display capabilities permits augmentation of the tomographic image data with out-of-plane structures using 3D graphical methods. In this dissertation we describe the research and clinical motivations for such a device. We describe the technical development of grab-a-slice as well as psychophysical experiments to evaluate the hypothesized perceptual and cognitive benefits. We speculate on the advantages and limitations of the grab-a-slice display and propose future directions for its use in psychophysical research, clinical settings, and image analysis.
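The core operation behind such a display, resampling a tomographic slice at the tracked pose of the screen, amounts to sampling a volume over a plane defined by a centre point and two in-plane axes. A simplified nearest-neighbour sketch (function name and parameters are illustrative; a real implementation would interpolate trilinearly):

```python
import numpy as np

def extract_slice(volume, center, u, v, size, spacing=1.0):
    """Sample an oblique slice through `volume` (a 3-D array). The slice
    plane passes through `center` and is spanned by the orthonormal
    in-plane axes `u` and `v`, e.g. taken from display tracking."""
    half = size // 2
    img = np.zeros((size, size), dtype=volume.dtype)
    for i in range(size):
        for j in range(size):
            # World position of pixel (i, j) on the slice plane.
            p = center + spacing * ((i - half) * u + (j - half) * v)
            idx = np.round(p).astype(int)  # nearest-neighbour lookup
            if np.all(idx >= 0) and np.all(idx < volume.shape):
                img[i, j] = volume[tuple(idx)]
    return img

# Axis-aligned sanity check: this pose reproduces an ordinary z-slice.
vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
sl = extract_slice(vol, center=np.array([1.0, 1.0, 2.0]),
                   u=np.array([1.0, 0.0, 0.0]),
                   v=np.array([0.0, 1.0, 0.0]), size=3)
```

Tilting `u` and `v` away from the coordinate axes yields the oblique slices that make the tracked display useful.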

    HAPTIC AND VISUAL SIMULATION OF BONE DISSECTION

    Marco Agus
    In bone dissection virtual simulation, force restitution is the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr, which is being developed as a component of a training system for temporal bone surgery. A physically based model is used to describe the burr-bone interaction, including haptic force evaluation, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated using a custom experimental set-up consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact-force measuring apparatus. Psychophysical testing was also carried out to assess individual reactions to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone-burring tasks. The current implementation, operating directly on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multi-processing PC platform.
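The erosion side of such a burr-bone model can be sketched as a per-step update on the voxel grid: remove material inside the burr radius in proportion to local density, and derive a scalar force from the amount removed. This is a toy stand-in for illustration, not the calibrated contact model from the dissertation:

```python
import numpy as np

def erode(density, burr_center, burr_radius, rate, dt):
    """One step of a toy voxel erosion model: material within the burr
    radius is removed at a fixed rate, capped by the material present.
    Returns a simple force proxy: the total material removed this step."""
    zz, yy, xx = np.indices(density.shape)
    dist2 = ((xx - burr_center[0]) ** 2 +
             (yy - burr_center[1]) ** 2 +
             (zz - burr_center[2]) ** 2)
    inside = dist2 <= burr_radius ** 2
    removed = np.minimum(density, rate * dt) * inside
    density -= removed  # erode the voxel grid in place
    return removed.sum()

bone = np.ones((8, 8, 8))  # uniform-density voxel block
force = erode(bone, burr_center=(4, 4, 4), burr_radius=2.0, rate=0.5, dt=0.1)
```

Running the step repeatedly hollows out a sphere under the burr, and the decreasing per-step removal gives a crude density-dependent force signal of the kind a haptic loop would render.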