
    Virtual Hand Representations to Support Natural Interaction in Immersive Environment

    Immersive Computing Technology (ICT) offers designers the unique ability to evaluate human interaction with product design concepts through the use of stereo viewing and 3D position tracking. These technologies provide designers with opportunities to create virtual simulations for numerous applications. To support the immersive experience of a virtual simulation, it is necessary to employ interaction techniques that are appropriately mapped to specific tasks. Numerous methods for interacting in virtual applications have been developed using wands, game controllers, and haptic devices. However, if the intent of the simulation is to gather information on how a person would interact in an environment, more natural interaction paradigms are needed. The use of 3D hand models coupled with position-tracked gloves provides intuitive interaction in virtual environments. This paper presents several methods of representing a virtual hand model in the virtual environment to support natural interaction.

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in the case of minimally invasive surgery, where restricted access to the operative field, in conjunction with a limited field of view, necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and to design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was therefore motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, stereoendoscopes facilitate detection of the basilar artery on the surface of the third ventricle, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions.
    The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationships between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training in planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly novices', in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully immersive simulation environments on surgeons' non-technical skills in performing a vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    Research on real-time physics-based deformation for haptic-enabled medical simulation

    This study developed a versatile and effective visuo-haptic surgical engine that handles a variety of surgical manipulations in real time. Soft tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room.
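The abstract does not detail the deformation models used. As a generic, hedged illustration of the kind of real-time update loop a haptic-enabled simulation runs, here is one explicit integration step for a 1D mass-spring chain; this is a deliberately simpler stand-in for the continuum-mechanics models the study describes, and all names and parameter values are ours:

```python
# Illustrative only: one explicit time step for a 1D chain of mass nodes
# connected by springs, a common simplified model for soft-tissue motion.
# The study itself uses biomechanically calibrated continuum-mechanics
# models; this sketch just shows the shape of a real-time update loop.

def step(pos, vel, rest, k=10.0, damping=0.5, mass=1.0, dt=0.01):
    """Advance node positions one time step under spring and damping forces."""
    forces = [0.0] * len(pos)
    for i in range(len(pos) - 1):                 # springs between neighbours
        stretch = (pos[i + 1] - pos[i]) - rest    # deviation from rest length
        f = k * stretch                           # Hooke's law
        forces[i] += f
        forces[i + 1] -= f
    # semi-implicit Euler: update velocities (with simple damping), then positions
    new_vel = [(v + dt * f / mass) * (1.0 - damping * dt)
               for v, f in zip(vel, forces)]
    new_pos = [p + dt * v for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

# Two nodes stretched past their rest length are pulled toward each other:
pos, vel = step([0.0, 2.0], [0.0, 0.0], rest=1.0)
```

In a haptics loop this step would run at high frequency (commonly around 1 kHz) with the device position feeding the boundary nodes.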

    Creation of Interactive VR Application that Supports Reasoning Skills in Anatomy Education

    For our creative work thesis, we developed a virtual reality (VR) program that allows a user to view and interact with the muscles and nerves of a canine leg, helping students understand the relationships between nerves and muscles. Using an industry-style pipeline, we developed anatomically accurate models of canine muscles and nerves, which we textured, rigged, and animated for use in an educational virtual reality platform. The end goal of the project is to create and measure the efficacy of a visually dynamic experience for the user, allowing them to explore canine limb anatomy in general and, specifically, to visualize deficits in muscle movement produced by user interaction with the canine nervous system. This tool explores the possibilities of virtual reality and seeks to improve upon existing methods of higher-level anatomy education. Traditionally, higher-level anatomy education is taught through cadaver dissections, two-dimensional anatomical diagrams, and didactic lectures. However, these traditional methods have many limitations and are not enough to build a visual-spatial understanding of anatomical structures. Virtual reality is a powerful tool that allows students to directly manipulate anatomical models and observe movements in three-dimensional space. While the literature is filled with VR applications that aim to fill this need, many existing tools offer only a static model that the user explores by rotating it, adding and subtracting layers, and viewing labels. We seek to increase the level of dynamic interaction by allowing the user's touch to change the animation and movement of the three-dimensional models in their environment. Our outcome is a VR learning tool with potential for further exploration in higher-level anatomy education.
    Our creative work employs the methodologies of "art-based research", which can be defined as the systematic use of the artistic process, the actual making of artistic expressions, as a primary way of understanding. The project was created iteratively while working with content experts, specifically anatomy experts from the Dept. of Veterinary Sciences at Texas A&M University. Implementing anatomy education using virtual reality and developing a universal pipeline for asset creation gives us the freedom to dynamically build on our application, meaning that our tool can accommodate the addition of new muscles and nerves. By continuing to develop our virtual reality application in future work, we can expand the breadth of knowledge a user can gain from interacting with it.

    Creation of a Virtual Atlas of Neuroanatomy and Neurosurgical Techniques Using 3D Scanning Techniques

    Neuroanatomy is one of the most challenging and fascinating topics within human anatomy, due to the complexity and interconnection of the entire nervous system. The gold standard for learning neurosurgical anatomy is cadaveric dissection. Nevertheless, it has a high cost (requiring a laboratory, the acquisition of cadavers, and fixation), is time-consuming, and is limited by sociocultural restrictions. Because of these disadvantages, other tools have been investigated to improve neuroanatomy learning. Three-dimensional modalities have gradually begun to supplement traditional two-dimensional representations of dissections and illustrations. Volumetric models (VMs) are the new frontier for neurosurgical education and training. Different workflows have been described to create these VMs: photogrammetry (PGM) and structured-light scanning (SLS). In this study, we aimed to describe and use the currently available 3D scanning techniques to create a virtual atlas of neurosurgical anatomy. Dissections of post-mortem human heads and brains were performed at the skull base laboratories of Stanford University (NeuroTraIn Center) and the University of California, San Francisco (SBCVL, skull base and cerebrovascular laboratory). VMs were then created following either the SLS or the PGM workflow. Fiber tract reconstructions were also generated from DICOM data using DSI-studio and incorporated into the VMs from dissections. Moreover, models available under Creative Commons licenses were used to simplify the understanding of specific anatomical regions. Both methods yielded VMs with suitable clarity and structural integrity for anatomical education, surgical illustration, and procedural simulation. We describe the roadmap of SLS and PGM for creating volumetric models, including the required equipment and software, and provide step-by-step procedures for how users can post-process and refine these models according to their specifications.
    The VMs generated were used in several publications, to describe specific neurosurgical approaches step by step and to enhance the understanding of anatomical regions and their function. These models were used in neuroanatomical education and research (workshops and publications). VMs offer a new, immersive, and innovative way to accurately visualize neuroanatomy. Given the straightforward workflow, the techniques described here may serve as a reference point for an entirely new way of capturing and depicting neuroanatomy, and offer new opportunities for the application of VMs in education, simulation, and surgical planning. The virtual atlas, divided into specific areas covering different neurosurgical approaches (such as skull base, cortex and fiber tracts, and spine operative anatomy), will increase the viewer's understanding of neurosurgical anatomy. The described atlas is the first surgical collection of VMs from cadaveric dissections available in the medical field and could be used as a reference for the future creation of analogous collections in other medical subspecialties.

    Stereoscopic bimanual interaction for 3D visualization

    Virtual environments (VEs) have been widely used for several decades in research fields such as 3D visualization, education, training, and games. VEs have the potential to enhance visualization and to act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and the monocular and binocular depth cues they provide, for human depth perception of volumetric (non-polygonal) datasets. In addition, the lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation focuses on evaluating the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques that automatically adjust stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface that combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
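Simultaneous 7-DOF navigation of the kind the abstract mentions (x, y, z + yaw, pitch, roll + scale) is often driven by a two-handed "grab-the-space" mapping. A minimal sketch of one common variant follows; the function names are ours, and restricting rotation to yaw about the vertical axis is our simplification, not necessarily the dissertation's implementation:

```python
import math

# Illustrative two-handed view manipulation: translation follows the motion
# of the midpoint between the hands, uniform scale follows the ratio of
# hand separations, and yaw follows the rotation of the inter-hand axis
# in the horizontal (x-z) plane. Positions are (x, y, z) tuples.

def two_hand_delta(l0, r0, l1, r1):
    """Return (translation, scale, yaw) from old/new left/right hand positions."""
    mid0 = [(a + b) / 2 for a, b in zip(l0, r0)]
    mid1 = [(a + b) / 2 for a, b in zip(l1, r1)]
    translation = [b - a for a, b in zip(mid0, mid1)]

    # Spreading the hands apart scales the scene up (zooms in).
    scale = math.dist(l1, r1) / math.dist(l0, r0)

    def heading(p, q):  # orientation of the inter-hand axis in the x-z plane
        return math.atan2(q[2] - p[2], q[0] - p[0])

    yaw = heading(l1, r1) - heading(l0, r0)
    return translation, scale, yaw

# Both hands move right by 2 units while doubling their separation:
t, s, y = two_hand_delta((0, 0, 0), (2, 0, 0), (1, 0, 0), (5, 0, 0))
# t == [2.0, 0.0, 0.0], s == 2.0, y == 0.0
```

Applied each frame, these deltas compose into the continuous multi-scale navigation the abstract describes, with scale naturally centered on the point between the hands.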

    Determination of critical factors for fast and accurate 2D medical image deformation

    The advent of medical imaging technology enabled physicians to study patient anatomy non-invasively and revolutionized the medical community. As medical images have become digitized and their resolution has increased, software has been developed that allows physicians to explore their patients' image studies in an increasing number of ways, including viewing and exploring reconstructed three-dimensional models. Although this has been a boon to radiologists, who specialize in interpreting medical images, few software packages provide fast and intuitive interaction for other physicians. In addition, although the users of these applications can view their patient data as it was at the time of the scan, the placement of the tissues during a surgical intervention is often different, due to the position of the patient and the methods used to provide a better view of the surgical field. None of the commonly available medical image packages allows users to predict the deformation of the patient's tissues under those surgical conditions. This thesis analyzes the performance and accuracy of a less computationally intensive yet physically based deformation algorithm: the extended ChainMail algorithm. The proposed method allows users to load DICOM images from medical image studies, interactively classify the tissues in those images according to their properties under deformation, deform the tissues in two dimensions, and visualize the result. The method was evaluated using data provided by the Truth Cube experiment, in which a phantom made of material with properties similar to liver under deformation was placed under varying amounts of uniaxial strain. CT scans were taken before and after the deformations. The deformation was performed on a single DICOM image from the study that had been manually classified, as well as on data sets generated from that original image.
    These generated data sets were ideally segmented versions of the phantom images, scaled to varying fidelities in order to evaluate the effect of image size on the algorithm's accuracy and execution time. Two variations of the extended ChainMail algorithm parameters were also implemented for each of the generated data sets in order to examine the effect of the parameters. The resulting deformations were compared with the actual deformations as determined by the Truth Cube experimenters. For both variations of the algorithm parameters, the predicted deformations at 5% uniaxial strain had an RMS error of a similar order of magnitude to the errors in a finite element analysis performed by the Truth Cube experimenters for the deformations at 18.25% strain. The average error was reduced by approximately 10-20% for the lower-fidelity data sets through the use of one of the parameter schemes, although the benefit decreased as the image size increased. When the algorithm was evaluated under 18.25% strain, the average errors were more than 8 times those of the finite element analysis. Qualitative analysis of the deformed images indicated differing degrees of accuracy across the ideal image set, with the largest displacements estimated closer to the initial point of deformation. This is hypothesized to be a result of the order in which deformation was processed for points in the image. The algorithm execution time was examined for the varying generated image fidelities. For a generated image that was approximately 18.5% of the size of the tissue in the original image, the execution time was less than 15 seconds; in comparison, the processing time for the full-scale image was over 3 hours. This analysis of the extended ChainMail algorithm for medical image deformation emphasizes the importance of the choice of algorithm parameters for the accuracy of the deformations, and of data set size for the processing time.
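The core idea of ChainMail deformation can be shown in one dimension: each element may drift freely within distance bounds of its neighbours, and moving one element propagates to neighbours only while those bounds are violated, which is what keeps the cost low compared with mass-spring or finite element methods. This is a minimal sketch, not the thesis's extended 2D implementation with per-tissue stretch and shear parameters; the function name and bound values are ours:

```python
# Minimal 1D ChainMail-style propagation (illustrative). Each pair of
# neighbouring elements must stay between d_min and d_max apart; a moved
# element drags or pushes neighbours only while a bound is violated, and
# propagation stops as soon as the constraints are satisfied again.

def chainmail_1d(positions, index, new_x, d_min=1.0, d_max=2.0):
    """Move element `index` to new_x, then relax neighbours outward."""
    pts = list(positions)
    pts[index] = new_x
    for i in range(index + 1, len(pts)):       # propagate to the right
        gap = pts[i] - pts[i - 1]
        if gap > d_max:
            pts[i] = pts[i - 1] + d_max        # neighbour pulled along
        elif gap < d_min:
            pts[i] = pts[i - 1] + d_min        # neighbour pushed ahead
        else:
            break                              # constraints met: stop early
    for i in range(index - 1, -1, -1):         # propagate to the left
        gap = pts[i + 1] - pts[i]
        if gap > d_max:
            pts[i] = pts[i + 1] - d_max
        elif gap < d_min:
            pts[i] = pts[i + 1] - d_min
        else:
            break
    return pts

# Moving the first element right pushes the whole chain once the minimum
# spacing of 1.0 is violated:
print(chainmail_1d([0.0, 1.0, 2.0, 3.0], 0, 2.5))  # [2.5, 3.5, 4.5, 5.5]
```

The early `break` is the key property the thesis exploits: in soft regions where constraints are loose, a move touches only a few nearby elements, while processing order determines where the largest displacements accumulate, consistent with the qualitative results reported above.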