13,956 research outputs found

    Meshless Mechanics and Point-Based Visualization Methods for Surgical Simulations

    Computer-based modeling and simulation practices have become an integral part of medical education. For surgical simulation applications, realistic constitutive modeling of soft tissue is considered one of the most challenging aspects of the problem, because biomechanical soft-tissue models need to reflect the correct elastic response, have to be efficient enough to run at interactive simulation rates, and must support operations such as cuts and sutures. Mesh-based solutions, where the connections between the individual degrees of freedom (DoF) are defined explicitly, have been the traditional choice for these problems. However, when the problem under investigation contains a discontinuity that disrupts the connectivity between the DoFs, the underlying mesh structure has to be reconfigured to handle the newly introduced discontinuity correctly. For mesh-based techniques this reconfiguration is typically called dynamic remeshing, and it is most often the performance bottleneck of the simulation. In this dissertation, the efficiency of point-based meshless methods is investigated both for constitutive modeling of elastic soft tissues and for visualization of simulation objects, where arbitrary discontinuities/cuts are applied to the objects in the context of surgical simulation. The point-based deformable object modeling problem is examined in three functional aspects: modeling continuous elastic deformations, handling discontinuities, and visualizing a point-based object. Algorithmic and implementation details of the presented techniques are discussed in the dissertation. The presented point-based techniques are implemented as separate components and integrated into the open-source software framework SOFA. The presented meshless continuum mechanics model of elastic tissue was verified by comparing it to Hertzian non-adhesive frictionless contact theory: virtual experiments were set up with a point-based deformable block and a rigid indenter, and the force-displacement curves obtained from the virtual experiments were compared to the theoretical solutions. The meshless mechanics model of soft tissue and the integrated novel discontinuity treatment technique discussed in this dissertation allow cuts of arbitrary shape to be handled. The implemented enrichment technique not only modifies the internal mechanics of the soft-tissue model, but also updates the point-based visual representation efficiently, avoiding costly dynamic remeshing operations.
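    For reference, the Hertzian relation that such a verification compares against can be written out explicitly. This is a minimal sketch assuming the classical case of a rigid spherical indenter of radius R pressed to depth δ into an elastic half-space with Young's modulus E and Poisson's ratio ν; the abstract does not state the indenter geometry.

```latex
% Hertz contact, non-adhesive and frictionless: rigid sphere of
% radius R indenting an elastic half-space to depth \delta.
% Only the tissue's elastic constants enter E^* because the
% indenter is rigid.
F = \tfrac{4}{3}\, E^{*} \sqrt{R}\, \delta^{3/2},
\qquad
E^{*} = \frac{E}{1-\nu^{2}}
```

    Plotting F against δ for the simulated indentation and overlaying this curve is exactly the kind of force-displacement comparison the abstract describes.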

    Haptic and Visual Simulation of Bone Dissection

    Marco Agus
    In virtual simulation of bone dissection, force restitution represents the key to realistically mimicking a patient-specific operating environment. The force is rendered using haptic devices controlled by parametrized mathematical models that represent the bone-burr contact. This dissertation presents and discusses a haptic simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. A physically based model was used to describe the burr-bone interaction, including the evaluation of haptic forces, the bone erosion process, and the resulting debris. The model was experimentally validated and calibrated using a custom experimental setup consisting of a force-controlled robot arm holding a high-speed rotating tool and a contact-force measuring apparatus. Psychophysical testing was also carried out to assess individual reactions to the haptic environment. The results suggest that the simulator is capable of rendering the basic material differences required for bone-burring tasks. The current implementation, operating directly on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time haptic and visual feedback on a low-end multiprocessing PC platform.
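    The abstract does not spell out the calibrated contact model, so the following is only a toy penalty-style sketch of such a simulation step: voxels penetrated by the spherical burr head contribute a repulsive force, and their density is eroded in proportion to tool speed and penetration. All names and constants are illustrative assumptions, written in Python rather than the system's implementation language.

```python
import numpy as np

def burr_step(bone, center, radius, omega, dt, k_c=2e3, k_e=1e-4):
    """One timestep of a toy burr-bone interaction model.

    bone   -- 3D array of per-voxel bone density (0 = removed)
    center -- burr-head position in voxel coordinates
    radius -- burr-head radius in voxels
    omega  -- burr rotation speed; k_c, k_e are made-up gains
    """
    center = np.asarray(center, dtype=float)
    coords = np.indices(bone.shape).reshape(3, -1).T  # (N, 3) voxel coords
    dist = np.linalg.norm(coords - center, axis=1)

    flat = bone.ravel()                      # view into the volume
    inside = (dist < radius) & (flat > 0)    # voxels in contact
    penetration = radius - dist[inside]

    # Penalty contact force magnitude, to be rendered haptically.
    force = k_c * penetration.sum()

    # Erode material proportionally to speed and penetration depth.
    flat[inside] = np.maximum(0.0, flat[inside] - k_e * omega * dt * penetration)
    return force
```

    A real implementation would also return a force direction and, as the abstract notes, calibrate its parameters against robot-arm measurements.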

    Development of a Virtual Laboratory for the Study of Complex Human Behavior

    The study of human perception has evolved from examining simple tasks executed under reduced laboratory conditions to the examination of complex, real-world behaviors. Virtual environments represent the next evolutionary step by allowing full stimulus control and repeatability for human subjects, and by providing a testbed for evaluating models of human behavior. Visual resolution varies dramatically across the visual field, dropping orders of magnitude from central to peripheral vision. Humans move their gaze about a scene several times every second, projecting task-critical areas of the scene onto the central retina. These eye movements are made even when the immediate task does not require high spatial resolution. Such “attentionally-driven” eye movements are important because they provide an externally observable marker of the way subjects deploy their attention while performing complex, real-world tasks. Tracking subjects’ eye movements while they perform complex tasks in virtual environments thus provides a window into perception. In addition to the ability to track subjects’ eyes in virtual environments, concurrent EEG recording provides a further indicator of cognitive state. We have developed a virtual reality laboratory in which head-mounted displays (HMDs) are instrumented with infrared video-based eyetrackers to monitor subjects’ eye movements while they perform a range of complex tasks, such as driving and manual tasks requiring careful eye-hand coordination. A go-kart mounted on a 6-DOF motion platform provides kinesthetic feedback to subjects as they drive through a virtual town; a dual-haptic interface consisting of two SensAble Phantom extended-range devices allows free motion and realistic force feedback within a 1 m³ volume.

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid-shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently supports only static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where deformation stretches the elements across more than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example, a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; the control structure must therefore also capture features hierarchically, according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
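    To make the store-at-progressive-resolutions idea concrete, here is a minimal sketch assuming one pre-filtered density value per octree node; the thesis's GPU-oriented layout will differ, and all names here are illustrative.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OctreeNode:
    origin: tuple      # min corner of the node's cube
    size: float        # cube edge length
    value: float       # pre-filtered density at this level of detail
    children: Optional[List[Optional["OctreeNode"]]] = None  # 8 octants

def sample(node: OctreeNode, p, footprint: float) -> float:
    """Read the volume at point p, descending only until the node is
    at least as fine as the screen-space footprint, so only the
    region of interest is processed at full resolution."""
    if node.children is None or node.size <= footprint:
        return node.value
    cx, cy, cz = (node.origin[k] + node.size / 2 for k in range(3))
    octant = (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)
    child = node.children[octant]
    return sample(child, p, footprint) if child else node.value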

    A Data-Virtualization System for Large Model Visualization

    Interactive scientific visualizations are widely used for the visual exploration and examination of physical data resulting from measurements or simulations. Driven by technical advancements of data acquisition and simulation technologies, especially in the geo-scientific domain, large amounts of highly detailed subsurface data are generated. The oil and gas industry is particularly pushing such developments as hydrocarbon reservoirs are increasingly difficult to discover and exploit. Suitable visualization techniques are vital for the discovery of the reservoirs as well as their development and production. However, the ever-growing scale and complexity of geo-scientific data sets result in an expanding disparity between the size of the data and the capabilities of current computer systems with regard to limited memory and computing resources. In this thesis we present a unified out-of-core data-virtualization system supporting geo-scientific data sets consisting of multiple large seismic volumes and height-field surfaces, wherein each data set may exceed the size of the graphics memory or possibly even the main memory. Current data sets fall within the range of hundreds of gigabytes up to terabytes in size. Through the mutual utilization of memory and bandwidth resources by multiple data sets, our data-management system is able to share and balance limited system resources among different data sets. We employ multi-resolution methods based on hierarchical octree and quadtree data structures to generate level-of-detail working sets of the data stored in main memory and graphics memory for rendering. The working set generation in our system is based on a common feedback mechanism with inherent support for translucent geometric and volumetric data sets. This feedback mechanism collects information about required levels of detail during the rendering process and is capable of directly resolving data visibility without the application of any costly occlusion culling approaches. A central goal of the proposed out-of-core data management system is an effective virtualization of large data sets. Through an abstraction of the level-of-detail working sets, our system allows developers to work with extremely large data sets independent of their complex internal data representations and physical memory layouts. Based on this out-of-core data virtualization infrastructure, we present distinct rendering approaches for specific visualization problems of large geo-scientific data sets. We demonstrate the application of our data virtualization system and show how multi-resolution data can be treated exactly the same way as regular data sets during the rendering process. An efficient volume ray casting system is presented for the rendering of multiple arbitrarily overlapping multi-resolution volume data sets. Binary space-partitioning volume decomposition of the bounding boxes of the cube-shaped volumes is used to identify the overlapping and non-overlapping volume regions in order to optimize the rendering process. We further propose a ray casting-based rendering system for the visualization of geological subsurface models consisting of multiple very detailed height fields. The rendering of an entire stack of height-field surfaces is accomplished in a single rendering pass using a two-level acceleration structure, which combines a minimum-maximum quadtree for empty-space skipping and sorted lists of depth intervals to restrict ray intersection searches to relevant height fields and depth ranges. 
Ultimately, we present a unified rendering system for the visualization of entire geological models consisting of highly detailed stacked horizon surfaces and massive volume data. We demonstrate a single-pass ray-casting approach facilitating correct visual interaction between distinct translucent model components, while increasing rendering efficiency by reducing the processing overhead of potentially invisible parts of the model. The combination of image-order rendering approaches and the level-of-detail feedback mechanism used by our out-of-core data-management system inherently accounts for occlusions between different data types without the application of costly culling techniques. The unified out-of-core data-management and virtualization infrastructure considerably facilitates the implementation of complex visualization systems. We demonstrate its applicability to the visualization of large geo-scientific data sets using output-sensitive rendering techniques. As a result, the magnitude and multitude of data sets that can be interactively visualized is significantly increased compared to existing approaches.
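    As an illustration of the empty-space-skipping half of the two-level acceleration structure described above, the sketch below shows the core test: a ray segment crossing a quadtree cell can only hit a height field there if the heights it sweeps through overlap the cell's stored min-max band. The node layout and names are assumptions, not the thesis's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MinMaxNode:
    zmin: float    # lowest height of any surface under this cell
    zmax: float    # highest height of any surface under this cell
    children: Optional[List["MinMaxNode"]] = None  # 4 quadrants

def may_hit(node: MinMaxNode, z_enter: float, z_exit: float) -> bool:
    """True if the ray's height interval across this cell overlaps
    the cell's [zmin, zmax] band; if not, the entire subtree (and
    every height field under it) is skipped with one comparison."""
    lo, hi = sorted((z_enter, z_exit))
    return hi >= node.zmin and lo <= node.zmax
```

    Only cells passing this test are refined further; only there are the sorted depth-interval lists searched for actual intersections.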

    An Architecture Approach for 3D Render Distribution using Mobile Devices in Real Time

    Nowadays, video games such as Massively Multiplayer Online Games (MMOGs) have become cultural mediators, and mobile games account for a large number of downloads and potential profits in the applications market. Although the processing power of mobile devices and available transmission bandwidth keep increasing, poor network connectivity can bottleneck Gaming as a Service (GaaS). To enhance performance in this digital ecosystem, processing tasks are distributed between thin-client devices and robust servers. This research is based on a divide-and-conquer approach: volumetric surfaces in a game's sequence of scenes are subdivided using a KD-tree, reducing each surface to small sets of points. Reconstruction efficiency is improved because data searches are performed in small, local regions. Processes are modeled through a finite set of states built using Hidden Markov Models, with domains configured by heuristics. Six tests that control the states of each heuristic, including the number of intervals, were carried out to validate the proposed model. This validation concludes that the proposed model optimizes the frames-per-second response over a sequence of interactions.
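    A minimal sketch of the subdivision step described above: recursively splitting the point set along its widest axis yields small local regions, so later reconstruction searches touch only a few points. The leaf size and dict-based layout are illustrative assumptions.

```python
import numpy as np

def build_kdtree(points: np.ndarray, leaf_size: int = 32):
    """Recursively partition an (N, 3) point cloud into small local
    regions by splitting along the axis of widest extent."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    pts = points[np.argsort(points[:, axis])]   # sort along split axis
    mid = len(pts) // 2                         # median split
    return {
        "axis": axis,
        "split": float(pts[mid, axis]),
        "left": build_kdtree(pts[:mid], leaf_size),
        "right": build_kdtree(pts[mid:], leaf_size),
    }
```

    In the distributed setting described above, each such local region can then be reconstructed or streamed independently.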

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded within full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas for rendering the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.