
    Developing student spatial ability with 3D software applications

    This paper reports on the design of a library of software applications for the teaching and learning of spatial geometry and visual thinking. The core objective of these applications is the development of a set of dynamic microworlds that enable students (i) to construct, observe and manipulate configurations in space, (ii) to study different solids and relate them to their corresponding nets, and (iii) to promote their visualization skills through the process of constructing dynamic visual images. During the development of the software applications, the key elements of spatial ability and visualization (mental images, external representations, processes, and abilities of visualization) were carefully taken into consideration.

    A Descriptive Framework for Temporal Data Visualizations Based on Generalized Space-Time Cubes

    We present the generalized space-time cube, a descriptive model for visualizations of temporal data. Visualizations are described as operations on the cube, which transform the cube's 3D shape into readable 2D visualizations. Operations include extracting subparts of the cube, flattening it across space or time, or transforming the cube's geometry and content. We introduce a taxonomy of elementary space-time cube operations and explain how these operations can be combined and parameterized. The generalized space-time cube has two properties: (1) it is purely conceptual without the need to be implemented, and (2) it applies to all datasets that can be represented in two dimensions plus time (e.g. geo-spatial, videos, networks, multivariate data). The proper choice of space-time cube operations depends on many factors, for example, density or sparsity of a cube. Hence, we propose a characterization of structures within space-time cubes, which allows us to discuss strengths and limitations of operations. We finally review interactive systems that support multiple operations, allowing a user to customize their view on the data. With this framework, we hope to facilitate the description, criticism and comparison of temporal data visualizations, as well as encourage the exploration of new techniques and systems. This paper is an extension of Bach et al.'s (2014) work.
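Two of the elementary operations the abstract names, time cutting (extracting one time step) and time flattening (collapsing the time axis), can be sketched in a few lines. This is a minimal illustration, assuming a cube stored as a nested list indexed `cube[t][y][x]`; the paper's model is purely conceptual and prescribes no data layout.

```python
# Minimal sketch of two elementary space-time cube operations.
# Assumed (hypothetical) layout: cube[t][y][x] holds a scalar per cell.

def time_cutting(cube, t):
    """Extract the 2D spatial slice at a single time step."""
    return cube[t]

def time_flattening(cube, aggregate=max):
    """Collapse the time axis, aggregating each (y, x) cell across time."""
    ny, nx = len(cube[0]), len(cube[0][0])
    return [[aggregate(cube[t][y][x] for t in range(len(cube)))
             for x in range(nx)]
            for y in range(ny)]

# A 3-step cube over a 2x2 spatial grid:
cube = [[[0, 1],
         [2, 3]],
        [[4, 0],
         [1, 2]],
        [[0, 5],
         [0, 1]]]

print(time_cutting(cube, 1))   # [[4, 0], [1, 2]]
print(time_flattening(cube))   # [[4, 5], [2, 3]]
```

Other operations in the taxonomy (space cutting, geometry transformations) would follow the same pattern: a function from one cube, or sub-cube, to a lower-dimensional view.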

    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in general surgery.

    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support manipulations of bones and their segments during the planning of a surgical operation in a virtual environment using a haptic interface. To perform an effective dental surgery, it is important to have all the operation-related information about the patient available beforehand in order to plan the operation and avoid any complications. A haptic interface with a virtual and accurate patient model to support the planning of bone cuts is therefore critical, useful and necessary for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: it uses the haptic tool to determine and define the bone-cutting plane, and this new approach creates three new meshes from the original model. Using this approach, the use of computational power is optimized and real-time feedback can be achieved during all bone manipulations. During the mesh-cutting movement, a friction profile predefined in the haptic system simulates the force-feedback feel of different densities in the bone.
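The core geometric step, partitioning a triangle mesh by a user-defined cutting plane, can be sketched as below. This is an illustrative reading of the abstract, assuming the three resulting meshes are the triangles above the plane, below it, and straddling it; the paper's actual implementation may retriangulate the crossing faces instead.

```python
# Hedged sketch: classify mesh triangles against a cutting plane defined by
# a point p0 and a normal n (plane parameters would come from the haptic tool).

def signed_distance(v, p0, n):
    """Signed distance of vertex v from the plane through p0 with normal n."""
    return sum((a - b) * c for a, b, c in zip(v, p0, n))

def split_mesh(vertices, triangles, p0, n):
    """Partition triangles into three sets: above, below, and crossing the plane."""
    above, below, crossing = [], [], []
    for tri in triangles:
        d = [signed_distance(vertices[i], p0, n) for i in tri]
        if all(x >= 0 for x in d):
            above.append(tri)
        elif all(x <= 0 for x in d):
            below.append(tri)
        else:
            crossing.append(tri)
    return above, below, crossing

# Two triangles on either side of the plane z = 0, and one crossing it:
verts = [(0, 0, 1), (1, 0, 1), (0, 1, 1),     # above
         (0, 0, -1), (1, 0, -1), (0, 1, -1),  # below
         (0, 0, -1), (1, 0, 1), (0, 1, 1)]    # straddling
tris = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
print(split_mesh(verts, tris, p0=(0, 0, 0), n=(0, 0, 1)))
# → ([(0, 1, 2)], [(3, 4, 5)], [(6, 7, 8)])
```

Classifying whole triangles rather than re-meshing every frame is one way the computational cost can stay low enough for real-time haptic feedback.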

    Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.

    Determination of critical factors for fast and accurate 2D medical image deformation

    The advent of medical imaging technology enabled physicians to study patient anatomy non-invasively and revolutionized the medical community. As medical images have become digitized and the resolution of these images has increased, software has been developed to allow physicians to explore their patients' image studies in an increasing number of ways by allowing viewing and exploration of reconstructed three-dimensional models. Although this has been a boon to radiologists, who specialize in interpreting medical images, few software packages exist that provide fast and intuitive interaction for other physicians. In addition, although the users of these applications can view their patient data at the time the scan was taken, the placement of the tissues during a surgical intervention is often different due to the position of the patient and methods used to provide a better view of the surgical field. None of the commonly available medical image packages allow users to predict the deformation of the patient's tissues under those surgical conditions. This thesis analyzes the performance and accuracy of a less computationally intensive yet physically based deformation algorithm: the extended ChainMail algorithm. The proposed method allows users to load DICOM images from medical image studies, interactively classify the tissues in those images according to their properties under deformation, deform the tissues in two dimensions, and visualize the result. The method was evaluated using data provided by the Truth Cube experiment, where a phantom made of material with properties similar to liver under deformation was placed under varying amounts of uniaxial strain. CT scans were taken before and after the deformations. The deformation was performed on a single DICOM image from the study that had been manually classified, as well as on data sets generated from that original image.
These generated data sets were ideally segmented versions of the phantom images that had been scaled to varying fidelities in order to evaluate the effect of image size on the algorithm's accuracy and execution time. Two variations of the extended ChainMail algorithm parameters were also implemented for each of the generated data sets in order to examine the effect of the parameters. The resultant deformations were compared with the actual deformations as determined by the Truth Cube experimenters. For both variations of the algorithm parameters, the predicted deformations at 5% uniaxial strain had an RMS error of a similar order of magnitude to the errors in a finite element analysis performed by the Truth Cube experimenters for the deformations at 18.25% strain. The average error was reduced by approximately 10-20% for the lower-fidelity data sets through the use of one of the parameter schemes, although the benefit decreased as the image size increased. When the algorithm was evaluated under 18.25% strain, the average errors were more than 8 times those of the errors in the finite element analysis. Qualitative analysis of the deformed images indicated differing degrees of accuracy across the ideal image set, with the largest displacements estimated closer to the initial point of deformation. This is hypothesized to be a result of the order in which deformation was processed for points in the image. The algorithm execution time was examined for the varying generated image fidelities. For a generated image that was approximately 18.5% of the size of the tissue in the original image, the execution time was less than 15 seconds. In comparison, the algorithm processing time for the full-scale image was over 3 hours. The analysis of the extended ChainMail algorithm for use in medical image deformation emphasizes the importance of the choice of algorithm parameters on the accuracy of the deformations and of data set size on the processing time.
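The constraint-propagation idea behind ChainMail, on which the thesis builds, can be illustrated on a 1D chain of elements: each element's distance to its neighbour must stay within a [d_min, d_max] band, and moving one element pushes or pulls its neighbours just enough to restore the constraints, stopping as soon as a link is already satisfied. This is a didactic sketch with parameter names of our own choosing, not the thesis's extended 2D algorithm, which adds per-tissue material properties.

```python
# 1D ChainMail-style constraint propagation (illustrative sketch).
# d_min/d_max bound the allowed gap between neighbouring elements;
# these names and defaults are hypothetical, not from the thesis.

def chainmail_1d(positions, moved_idx, new_pos, d_min=0.5, d_max=1.5):
    pos = list(positions)
    pos[moved_idx] = new_pos
    # Propagate to the right until a link's constraint is already satisfied.
    for i in range(moved_idx + 1, len(pos)):
        gap = pos[i] - pos[i - 1]
        if gap > d_max:
            pos[i] = pos[i - 1] + d_max   # neighbour pulled along
        elif gap < d_min:
            pos[i] = pos[i - 1] + d_min   # neighbour pushed ahead
        else:
            break                         # no violation: stop propagating
    # Propagate to the left symmetrically.
    for i in range(moved_idx - 1, -1, -1):
        gap = pos[i + 1] - pos[i]
        if gap > d_max:
            pos[i] = pos[i + 1] - d_max
        elif gap < d_min:
            pos[i] = pos[i + 1] - d_min
        else:
            break
    return pos

chain = [0.0, 1.0, 2.0, 3.0]
print(chainmail_1d(chain, 0, 2.0))  # [2.0, 2.5, 3.0, 3.5]
```

The early-exit on a satisfied link is what makes the method cheap compared with finite element analysis: a small displacement touches only a small neighbourhood, which is consistent with the thesis's observation that the largest estimated displacements cluster near the initial point of deformation.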