    Interacting with the biomolecular solvent accessible surface via a haptic feedback device

    Background: Since the 1950s, computer-based renderings of molecules have been produced to aid researchers in their understanding of biomolecular structure and function. A major consideration for any molecular graphics software is the ability to visualise the three-dimensional structure of the molecule. Traditionally, this was accomplished via stereoscopic pairs of images and later realised with three-dimensional display technologies. Using a haptic feedback device in combination with molecular graphics has the potential to enhance three-dimensional visualisation. Although haptic feedback devices have been used to feel the interaction forces during molecular docking, they have not been used explicitly as an aid to visualisation. Results: A haptic rendering application for biomolecular visualisation has been developed that allows the user to gain three-dimensional awareness of the shape of a biomolecule. By using a water molecule as the probe, modelled as an oxygen atom having hard-sphere interactions with the biomolecule, the process of exploration has the further benefit of identifying regions of the molecular surface that are accessible to the solvent. This gives insight into how difficult it is for a water molecule to gain access to, or escape from, channels and cavities, indicating possible entropic bottlenecks. In the case of liver alcohol dehydrogenase bound to the inhibitor SAD, it was found that there is a channel just wide enough for a single water molecule to pass through. Placing the probe coincident with crystallographic water molecules suggests that they are sometimes located within small pockets that provide a sterically stable environment irrespective of hydrogen-bonding considerations. Conclusion: By using the software, named HaptiMol ISAS (available from http://www.haptimol.co.uk), one can explore the accessible surface of biomolecules with a three-dimensional input device, gaining insights into the shape and water accessibility of the biomolecular surface that cannot be so easily attained with conventional molecular graphics software.
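    The central test described above, checking whether a hard-sphere water probe fits at a given position, can be sketched as follows. This is a minimal illustration, not HaptiMol's implementation; the van der Waals radii, the 1.4 Å probe radius and the spring-penalty force are standard textbook choices assumed here for the example.

```python
import numpy as np

# Typical van der Waals radii in angstroms (textbook values; an assumption,
# not necessarily the parameter set used by HaptiMol ISAS).
VDW_RADII = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52, "S": 1.80}

WATER_PROBE_RADIUS = 1.4  # conventional radius of a water (oxygen) probe


def probe_is_clear(probe_pos, atom_positions, atom_radii,
                   probe_radius=WATER_PROBE_RADIUS):
    """Hard-sphere test: True if a water probe centred at probe_pos does not
    penetrate any atom of the biomolecule, i.e. the position is accessible."""
    dist = np.linalg.norm(atom_positions - probe_pos, axis=1)
    return bool(np.all(dist >= atom_radii + probe_radius))


def penalty_force(probe_pos, atom_positions, atom_radii,
                  stiffness=200.0, probe_radius=WATER_PROBE_RADIUS):
    """Generic spring-penalty force pushing the probe out of any overlapping
    atoms (illustrative only; not the force model used in the paper)."""
    offsets = probe_pos - atom_positions
    dist = np.linalg.norm(offsets, axis=1)
    overlap = (atom_radii + probe_radius) - dist
    force = np.zeros(3)
    for i in np.where(overlap > 0)[0]:
        force += stiffness * overlap[i] * offsets[i] / max(dist[i], 1e-9)
    return force


# Example: a lone carbon atom at the origin; a probe 3.2 A away is clear,
# one 2.0 A away overlaps the atom's van der Waals sphere.
atoms = np.array([[0.0, 0.0, 0.0]])
radii = np.array([VDW_RADII["C"]])
print(probe_is_clear(np.array([3.2, 0.0, 0.0]), atoms, radii))  # True
print(probe_is_clear(np.array([2.0, 0.0, 0.0]), atoms, radii))  # False
```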

    Virtual reality training and assessment in laparoscopic rectum surgery

    Background: Virtual reality (VR) based simulation techniques offer an efficient and low-cost alternative to conventional surgery training. This article describes a VR training and assessment system for laparoscopic rectum surgery. Methods: To give a realistic visual rendering of the interaction between membrane tissue and surgical tools, a generalized-cylinder-based collision detection method and a multi-layer mass-spring model are presented. A dynamic assessment model is also designed for hierarchical training evaluation. Results: With this simulator, trainees can operate on the virtual rectum with simultaneous visual and haptic feedback. The system also gives surgeons instructions in real time when improper manipulation occurs. The simulator has been tested and evaluated by ten subjects. Conclusions: This prototype system has been verified by colorectal surgeons through a pilot study. They consider the visual performance and the tactile feedback realistic. It exhibits the potential to effectively improve the surgical skills of trainee surgeons and significantly shorten their learning curve. © 2014 John Wiley & Sons, Ltd.
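    A multi-layer mass-spring model of the kind mentioned above treats the tissue as point masses connected by damped springs and integrates their motion every simulation step. The sketch below shows one explicit-Euler update for a single layer; the data layout, parameter values and integrator are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest_len, stiffness,
                     mass, damping, ext_force, dt):
    """One explicit-Euler step of a mass-spring tissue layer.

    pos, vel, ext_force : (N, 3) arrays of node positions, velocities, forces
    springs             : (M, 2) integer array of node index pairs
    rest_len, stiffness : (M,) per-spring rest lengths and spring constants
    """
    forces = ext_force.copy()
    for (i, j), l0, k in zip(springs, rest_len, stiffness):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-9
        f = k * (length - l0) * d / length      # Hooke's law along the spring
        forces[i] += f
        forces[j] -= f
    forces -= damping * vel                     # simple viscous damping
    vel = vel + dt * forces / mass
    pos = pos + dt * vel
    return pos, vel


# Example: two nodes joined by one spring, stretched along x.
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
springs = np.array([[0, 1]])
pos, vel = mass_spring_step(pos, vel, springs, rest_len=np.array([1.0]),
                            stiffness=np.array([50.0]), mass=0.01,
                            damping=0.05, ext_force=np.zeros_like(pos),
                            dt=1e-3)
```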

    Multimodal virtual reality versus printed medium in visualization for blind people

    In this paper, we describe a study comparing the strengths of a multimodal Virtual Reality (VR) interface against traditional tactile diagrams in conveying information to visually impaired and blind people. The multimodal VR interface consists of a force feedback device (SensAble PHANTOM), synthesized speech and non-speech audio. The potential advantages of VR technology are well known; however, its real usability in comparison with the conventional paper-based medium is seldom investigated. We have addressed this issue in our evaluation. The experimental results show benefits from using the multimodal approach, in that users obtained more accurate information about the graphs.

    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support manipulation of bones and bone segments during the planning of a surgical operation in a virtual environment using a haptic interface. To perform an effective dental surgery, it is important to have all the operation-related information about the patient available beforehand in order to plan the operation and avoid complications. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical, useful and necessary for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: the haptic tool is used to determine and define the bone-cutting plane, and this approach creates three new meshes from the original model. In this way the computational cost is kept low and real-time feedback can be achieved during all bone manipulations. During the cutting movement, a predefined friction profile in the haptic system simulates the force-feedback feel of different densities in the bone.
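    Splitting a bone mesh with a tool-defined cutting plane, as described above, amounts to classifying each triangle against that plane. The sketch below partitions the triangles into the two sides of the plane plus those the plane intersects (three candidate sub-meshes); it is a hypothetical illustration of the idea, not the authors' algorithm, and the actual splitting of the intersected triangles is omitted.

```python
import numpy as np

def classify_against_plane(vertices, triangles, plane_point, plane_normal):
    """Partition a bone mesh by a cutting plane taken from the haptic tool
    pose. Returns triangle indices on the positive side, on the negative
    side, and those straddling the plane (three candidate sub-meshes)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed = (vertices - plane_point) @ n        # signed distance per vertex
    above, below, cut = [], [], []
    for t, tri in enumerate(triangles):
        s = signed[tri]
        if np.all(s >= 0.0):
            above.append(t)
        elif np.all(s <= 0.0):
            below.append(t)
        else:
            cut.append(t)                        # intersected by the plane
    return above, below, cut


# Example: a single triangle cut by the plane z = 0.5.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tris = np.array([[0, 1, 2]])
print(classify_against_plane(verts, tris, np.array([0.0, 0.0, 0.5]),
                             np.array([0.0, 0.0, 1.0])))  # ([], [], [0])
```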

    Haptic Interaction with 3D oriented point clouds on the GPU

    Real-time point-based rendering and interaction with virtual objects is gaining popularity and importance as different haptic devices and technologies increasingly provide the basis for realistic interaction. Haptic interaction is used for a wide range of applications such as medical training, remote robot operation, tactile displays and video games. The main focus is virtual object visualization and interaction using haptic devices; this process involves several steps: data acquisition, graphic rendering, haptic interaction and data modification. This work presents a framework for haptic interaction using the GPU as a hardware accelerator, and includes an approach for enabling the modification of data during interaction. The results demonstrate the limits and capabilities of these techniques in the context of volume rendering for haptic applications. The use of dynamic parallelism to scale the number of accelerator threads to the interaction requirements is also studied, allowing the editing of data sets of up to one million points at interactive haptic frame rates.
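    The per-point work that such a framework offloads to the GPU can be shown with a CPU sketch: for every oriented point (a surface sample with an outward normal) near the haptic proxy, a penalty force is accumulated whenever the proxy sits below the local surface. In the accelerated version each point would map to a GPU thread; the radius, stiffness and force model below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def haptic_force_from_point_cloud(proxy_pos, points, normals,
                                  radius=0.01, stiffness=500.0):
    """CPU sketch of the per-point force evaluation that a GPU kernel would
    run with one thread per point. `points` are surface samples and `normals`
    their outward unit normals; radius and stiffness are illustrative."""
    offsets = proxy_pos - points
    dist = np.linalg.norm(offsets, axis=1)
    nearby = dist < radius                      # only local points contribute
    force = np.zeros(3)
    for p, n in zip(points[nearby], normals[nearby]):
        depth = np.dot(proxy_pos - p, n)        # height above local surface
        if depth < 0.0:                         # proxy is below the surface
            force += -stiffness * depth * n     # push back along the normal
    return force


# Example: one oriented point with an upward normal; the proxy sits 2 mm
# below the local surface and is pushed back up.
pts = np.array([[0.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0]])
print(haptic_force_from_point_cloud(np.array([0.0, 0.0, -0.002]), pts, nrm))
```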

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with substantial functional overlap. However, there is little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to understand the whole rather than only the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations generate increasingly complex, multi-layer 3D ocean flow models. However, most researchers still use traditional 2D visualizations to view these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models, but the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that combines stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, with multi-touch interaction, to allow natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated by a highly interactive toolset that leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
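    A smart particle system with dye-emitting tools typically seeds particles at the emitter and advects them through the sampled velocity field. The sketch below shows this pipeline with classical fourth-order Runge-Kutta integration and a toy swirling flow standing in for the ocean model; the emitter placement, the `velocity_at` interpolation routine and all parameter values are assumptions for illustration, not the system described in the paper.

```python
import numpy as np

def advect_dye_particles(particles, velocity_at, dt, steps):
    """Advect dye particles through a 3D flow field with classical
    fourth-order Runge-Kutta. `velocity_at(p)` returns the (interpolated)
    flow velocity at position p; its implementation is left to the caller."""
    trajectory = [particles.copy()]
    for _ in range(steps):
        advected = []
        for p in particles:
            k1 = velocity_at(p)
            k2 = velocity_at(p + 0.5 * dt * k1)
            k3 = velocity_at(p + 0.5 * dt * k2)
            k4 = velocity_at(p + dt * k3)
            advected.append(p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
        particles = np.array(advected)
        trajectory.append(particles.copy())
    return trajectory


# Example: seed a small blob of dye at an emitter position (as if placed by
# a multi-touch gesture) and advect it through a toy swirling flow.
emitter = np.array([0.5, 0.5, 0.1])
seeds = emitter + 0.01 * np.random.randn(100, 3)
swirl = lambda p: np.array([-p[1], p[0], 0.05])   # placeholder velocity field
paths = advect_dye_particles(seeds, swirl, dt=0.05, steps=50)
```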