
    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in a general surgery

    SOFA: A Multi-Model Framework for Interactive Physical Simulation

    SOFA (Simulation Open Framework Architecture) is an open-source C++ library primarily targeted at interactive computational medical simulation. SOFA facilitates collaborations between specialists from various domains, by decomposing complex simulators into components designed independently and organized in a scenegraph data structure. Each component encapsulates one of the aspects of a simulation, such as the degrees of freedom, the forces and constraints, the differential equations, the main loop algorithms, the linear solvers, the collision detection algorithms or the interaction devices. The simulated objects can be represented using several models, each of them optimized for a different task such as the computation of internal forces, collision detection, haptics or visual display. These models are synchronized during the simulation using a mapping mechanism. CPU and GPU implementations can be transparently combined to exploit the computational power of modern hardware architectures. Thanks to this flexible yet efficient architecture, SOFA can be used as a test-bed to compare models and algorithms, or as a basis for the development of complex, high-performance simulators.
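
    The component-and-mapping decomposition described above can be illustrated with a short sketch. The C++ fragment below is not the actual SOFA API; it only shows, with assumed names (MechanicalModel, VisualModel, Mapping), how a coarse mechanical state and a denser visual surface might be kept synchronized by a mapping applied after each simulation step.

```cpp
// Minimal sketch (not the actual SOFA API): one simulated object split into a
// coarse mechanical model and a dense visual model, kept in sync by a mapping
// that interpolates mechanical positions onto visual vertices.
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

struct MechanicalModel {              // degrees of freedom used by the solver
    std::vector<Vec3> positions;
};

struct VisualModel {                  // dense surface used only for display
    std::vector<Vec3> vertices;       // assumed pre-sized to match the mapping
};

// Barycentric-style mapping: each visual vertex is a weighted combination of
// mechanical nodes (weights would normally come from an embedding mesh).
struct Mapping {
    struct Entry { std::size_t node; double weight; };
    std::vector<std::vector<Entry>> influences;   // one list per visual vertex

    void apply(const MechanicalModel& mech, VisualModel& vis) const {
        for (std::size_t v = 0; v < influences.size(); ++v) {
            Vec3 p{0.0, 0.0, 0.0};
            for (const Entry& e : influences[v])
                for (int k = 0; k < 3; ++k)
                    p[k] += e.weight * mech.positions[e.node][k];
            vis.vertices[v] = p;
        }
    }
};

// After every integration step the mapping propagates the new mechanical
// state to the visual (and, symmetrically, collision or haptic) models.
void simulationStep(MechanicalModel& mech, VisualModel& vis, const Mapping& map) {
    // ... integrate forces, constraints, and ODEs on mech.positions ...
    map.apply(mech, vis);
}
```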

    Development of a Reality-Based, Haptics-Enabled Simulator for Tool-Tissue Interactions

    The advent of complex surgical procedures has driven the need for finite-element-based surgical training simulators which provide realistic visual and haptic feedback throughout the surgical task. The foundation of a simulator stems from the use of accurate, reality-based models for the global tissue response as well as the tool-tissue interactions. To that end, ex vivo and in vivo tests were conducted for soft-tissue probing, and in vivo tests were conducted for soft-tissue cutting, for the purpose of model development. In formulating a surgical training system, there is a desire to replicate the surgical task as accurately as possible for haptic and visual realism. However, for many biological tissues, there is a discrepancy between the mechanical characteristics of ex vivo and in vivo tissue. The efficacy of utilizing an ex vivo model for simulation of in vivo probing tasks on porcine liver was evaluated by comparing the simulated probing task to an identical in vivo probing experiment. The models were then further improved upon to better replicate the in vivo response. During the study of cutting modeling, in vivo cutting experiments were performed on porcine liver to derive the force-displacement response of the tissue to a scalpel blade. Using this information, a fracture-mechanics-based approach was applied to develop a fully defined cohesive zone model governing the separation properties of the liver directly in front of the scalpel blade. Further, a method of scaling the cohesive zone parameters was presented to minimize the computational expense, in an effort to apply the cohesive-zone-based cutting approach to real-time simulators. The development of the models for the global tissue response and local tool-tissue interactions for probing and cutting of soft tissue provided the framework for real-time simulation of basic surgical skills training. Initially, a pre-processing approach was used for the development of reality-based, haptics-enabled simulators for probing and cutting of soft tissue. Then a real-time finite-element-based simulator was developed to simulate the probing task without the need to know the tool path prior to simulation.
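
    As a rough illustration of what a cohesive zone model specifies, the sketch below implements a generic bilinear traction-separation law in C++. The parameter names and the law itself are placeholders for illustration; they are not the calibrated liver properties or the specific cohesive formulation identified in this work.

```cpp
// Illustrative sketch of a bilinear cohesive (traction-separation) law of the
// kind used to model tissue separation ahead of a blade. The parameters below
// (peak traction, opening at peak, opening at full failure) are placeholders.
struct CohesiveLaw {
    double sigma_max;   // peak traction [Pa]
    double delta_0;     // separation at peak traction [m]
    double delta_f;     // separation at complete failure [m]

    // Traction transmitted across the cohesive zone for a given opening.
    double traction(double delta) const {
        if (delta <= 0.0)    return 0.0;
        if (delta < delta_0) return sigma_max * (delta / delta_0);                        // loading ramp
        if (delta < delta_f) return sigma_max * (delta_f - delta) / (delta_f - delta_0);  // softening
        return 0.0;                                                                       // fully separated
    }

    // Fracture energy = area under the traction-separation curve.
    double fractureEnergy() const { return 0.5 * sigma_max * delta_f; }
};
```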

    Volumetric Lattice Boltzmann Method for Wall Stresses of Image-Based Pulsatile Flows

    Image-based computational fluid dynamics (CFD) has become a new capability for determining wall stresses of pulsatile flows. However, a computational platform that directly connects image information to pulsatile wall stresses is lacking. Prevailing methods rely on manual crafting of a hodgepodge of multidisciplinary software packages, which is usually laborious and error-prone. We present a new computational platform to compute wall stresses in image-based pulsatile flows using the volumetric lattice Boltzmann method (VLBM). The novelty includes: (1) a unique image-processing step to extract the flow domain and local wall normality, (2) a seamless connection between image extraction and the VLBM, (3) an en-route calculation of the strain-rate tensor, and (4) GPU acceleration (not included here). We first generalize the streaming operation in the VLBM and then conduct application studies to demonstrate its reliability and applicability. A benchmark study considers laminar and turbulent pulsatile flows in an image-based pipe (Reynolds number: 10 to 5000). The computed pulsatile velocity and shear stress are in good agreement with Womersley's analytical solutions for laminar pulsatile flows and with concurrent laboratory measurements for turbulent pulsatile flows. An application study quantifies the pulsatile hemodynamics in image-based human vertebral and carotid arteries, including velocity vector, pressure, and wall-shear stress. The computed velocity vector fields are in reasonably good agreement with MRA (magnetic resonance angiography) measurements. This computational platform is well suited for image-based CFD with medical applications and for pore-scale porous media flows in various natural and engineering systems.
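
    The "en-route" strain-rate idea can be illustrated with the standard BGK lattice Boltzmann relation, in which the strain-rate tensor is recovered locally from the non-equilibrium part of the distributions. The D2Q9 sketch below shows only that standard relation, in lattice units; it does not reproduce the volumetric streaming step or any other specifics of the VLBM itself.

```cpp
// Standard BGK lattice-Boltzmann recovery of the strain-rate tensor from the
// non-equilibrium distributions (D2Q9, lattice units, c_s^2 = 1/3).
#include <array>

constexpr int Q = 9;
constexpr double cs2 = 1.0 / 3.0;
constexpr int ex[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
constexpr int ey[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };

// Strain-rate tensor S from the populations f, their equilibria feq,
// the local density rho, and the relaxation time tau.
std::array<double, 4> strainRate(const double f[Q], const double feq[Q],
                                 double rho, double tau) {
    double Pxx = 0.0, Pxy = 0.0, Pyy = 0.0;
    for (int i = 0; i < Q; ++i) {
        const double fneq = f[i] - feq[i];      // non-equilibrium part
        Pxx += fneq * ex[i] * ex[i];
        Pxy += fneq * ex[i] * ey[i];
        Pyy += fneq * ey[i] * ey[i];
    }
    const double c = -1.0 / (2.0 * rho * cs2 * tau);   // S = -Pi_neq / (2 rho cs^2 tau)
    return { c * Pxx, c * Pxy, c * Pxy, c * Pyy };      // {Sxx, Sxy, Syx, Syy}
}

// Wall shear stress then follows by projecting 2 * mu * S onto the local wall
// normal and tangent, with mu = rho * cs2 * (tau - 0.5) in lattice units.
```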

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
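
    Two of the acceleration schemes mentioned, frustum culling and distance-based runtime LoD selection, fit in a few lines. The sketch below is a generic illustration with assumed types (Character, Camera) and thresholds, not a system taken from the survey; real frustum tests would use the six view-frustum planes rather than the simplified distance check shown here.

```cpp
// Per-frame culling and LoD selection for a crowd of animated characters.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

enum class LoD { Mesh, Impostor, Point, Culled };   // polygon-, image-, point-based

struct Character {
    Vec3 position;
    LoD  lod = LoD::Culled;
};

struct Camera {
    Vec3  position;
    float farPlane;
    // Stand-in for a full six-plane frustum test; a distance check is enough
    // to show where culling slots into the per-frame loop.
    bool inFrustum(const Vec3& p) const { return distance(position, p) < farPlane; }
};

// Cull characters outside the (simplified) frustum, then pick a
// representation from the distance to the camera.
void selectLoD(std::vector<Character>& crowd, const Camera& cam,
               float meshRange, float impostorRange) {
    for (Character& c : crowd) {
        if (!cam.inFrustum(c.position)) { c.lod = LoD::Culled; continue; }
        const float d = distance(cam.position, c.position);
        if      (d < meshRange)     c.lod = LoD::Mesh;      // full animated, skinned mesh
        else if (d < impostorRange) c.lod = LoD::Impostor;  // image-based impostor
        else                        c.lod = LoD::Point;     // point-based representation
    }
}
```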

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes. This is because it is difficult to accurately deform massive amounts of volume elements and reconstruct the scene hierarchy in real-time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real-time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
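
    The progressive-resolution hierarchy referred to above can be sketched as a sparse octree whose nodes hold small filtered bricks of voxel data. The C++ below illustrates only that storage-and-traversal idea, with a placeholder Brick payload and node layout; it says nothing about the deformation or skinning scheme developed in the thesis.

```cpp
// Sparse octree of volume bricks: empty regions allocate no children, and a
// query descends only as far as the resolution it needs, falling back to the
// coarser brick when a region is empty or already fine enough.
#include <array>
#include <memory>
#include <vector>

struct Brick {                       // small dense block of voxels at this node's resolution
    std::vector<float> density;      // e.g. 8x8x8 samples
};

struct OctreeNode {
    int depth = 0;                   // 0 = root; deeper = finer resolution
    std::unique_ptr<Brick> brick;    // filtered data for this level (null if empty space)
    std::array<std::unique_ptr<OctreeNode>, 8> children;   // sparse: most remain null
};

bool isLeaf(const OctreeNode& node) {
    for (const auto& c : node.children) if (c) return false;
    return true;
}

// Query a point (x, y, z) in [0,1)^3 at a target depth (region of interest);
// distant or empty regions are answered from coarse bricks.
const Brick* sample(const OctreeNode& node, float x, float y, float z, int targetDepth) {
    if (node.depth >= targetDepth || isLeaf(node))
        return node.brick.get();
    const int ix = (x >= 0.5f) ? 1 : 0;             // pick the octant containing the point
    const int iy = (y >= 0.5f) ? 1 : 0;
    const int iz = (z >= 0.5f) ? 1 : 0;
    const auto& child = node.children[ix + 2 * iy + 4 * iz];
    if (!child) return node.brick.get();            // empty region: fall back to coarse data
    return sample(*child, 2 * x - ix, 2 * y - iy, 2 * z - iz, targetDepth);
}
```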