21 research outputs found

    Meshless Mechanics and Point-Based Visualization Methods for Surgical Simulations

    Computer-based modeling and simulation have become an integral part of medical education. For surgical simulation applications, realistic constitutive modeling of soft tissue is considered one of the most challenging aspects of the problem, because biomechanical soft-tissue models need to reflect the correct elastic response, have to be efficient enough to run at interactive simulation rates, and must support operations such as cuts and sutures. Mesh-based solutions, where the connections between the individual degrees of freedom (DoF) are defined explicitly, have been the traditional choice for these problems. However, when the problem under investigation contains a discontinuity that disrupts the connectivity between the DoFs, the underlying mesh structure has to be reconfigured to handle the newly introduced discontinuity correctly. This reconfiguration, typically called dynamic remeshing, is most often the performance bottleneck in the simulation. In this dissertation, the efficiency of point-based meshless methods is investigated for both constitutive modeling of elastic soft tissues and visualization of simulation objects to which arbitrary discontinuities/cuts are applied in the context of surgical simulation. The point-based deformable object modeling problem is examined in three functional aspects: modeling continuous elastic deformations, handling discontinuities, and visualizing a point-based object. Algorithmic and implementation details of the presented techniques are discussed in the dissertation. The presented point-based techniques are implemented as separate components and integrated into the open-source software framework SOFA. The presented meshless continuum mechanics model of elastic tissue was verified by comparing it to Hertzian non-adhesive frictionless contact theory: virtual experiments were set up with a point-based deformable block and a rigid indenter, and the force-displacement curves obtained from these experiments were compared to the theoretical solutions. The meshless mechanics model of soft tissue and the integrated novel discontinuity treatment technique discussed in this dissertation allow handling cuts of arbitrary shape. The implemented enrichment technique not only modifies the internal mechanics of the soft-tissue model but also updates the point-based visual representation efficiently, avoiding costly dynamic remeshing operations.
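    As an illustration of the verification setup, the following is a minimal sketch of the theoretical Hertz force-displacement curve for a rigid spherical indenter on an elastic half-space, the reference a simulated curve would be compared against (parameter values are hypothetical; this is not code from the dissertation):

```python
import numpy as np

def hertz_force(d, R, E, nu):
    """Theoretical Hertz contact force for a rigid spherical indenter
    of radius R pressed a depth d into an elastic half-space:

        F = (4/3) * E_eff * sqrt(R) * d**1.5,  E_eff = E / (1 - nu**2)
    """
    E_eff = E / (1.0 - nu**2)                  # effective modulus
    return (4.0 / 3.0) * E_eff * np.sqrt(R) * np.asarray(d) ** 1.5

# Hypothetical soft-tissue-like parameters, for illustration only.
depths = np.linspace(0.0, 5e-3, 50)            # indentation depth [m]
F = hertz_force(depths, R=5e-3, E=10e3, nu=0.45)  # E ~ 10 kPa, nu ~ 0.45
# A simulated force-displacement curve would be compared point-wise to F.
```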

    Point Primitives Based Virtual Surgery System

    In order to achieve a high degree of visual realism in surgery simulation, we present a virtual surgery system framework based on point primitives for both rendering of the virtual surgery scene and biomechanical calculation of the soft tissue. To demonstrate the advantages of this framework, two virtual surgery systems we developed on top of it are exhibited in this paper. Six critical functional modules were selected as representative of basic and advanced virtual surgery skills: 1) point-based texture mapping; 2) deformation simulation; 3) cutting simulation; 4) tearing simulation; 5) dynamic texture mapping; and 6) 3-D display. Each module is elaborated in terms of its working principle, execution process, and algorithmic performance. The experimental results show that point-primitive-based virtual surgery systems achieve higher computational efficiency and better rendering quality than traditional mesh-based virtual surgery systems.

    An interactive meshless cutting model for nonlinear viscoelastic soft tissue in surgical simulators

    In this paper, we present a novel interactive cutting simulation model for soft tissue based on a meshless framework. Unlike most existing methods, which treat the cutting of soft tissue in an oversimplified manner, the presented model is able to simulate the complete cutting process in three stages: deformation before cutting open (DBCO), cutting open (CO), and deformation after cutting open (DACO). To characterize the complicated physical and mechanical properties of soft tissue, both nonlinearity and viscoelasticity were incorporated into the equations governing the motion of soft tissue. A line contact model was used to simulate the cutting process, after analyzing the two major types of surgical instruments, i.e., the scalpel and the electrostick. Cutting speed and angle were taken into account in order to improve haptic rendering. Biomechanical tests and simulation experiments verified the validity of the introduced model. Specifically, the displacement vs. cutting force curves can be divided into three segments corresponding to the three stages of the cutting process. The results were also applied in a liver-cutting simulation system, and satisfactory visual effects and haptic feedback were achieved.
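    The abstract does not give the governing equations; as a hedged stand-in, the sketch below shows a 1-D nonlinear Kelvin-Voigt element, one common way to combine nonlinearity and viscoelasticity (parameters are hypothetical and this is not the paper's actual constitutive law):

```python
import numpy as np

def kelvin_voigt_nonlinear(x, x_dot, k1=200.0, k3=5.0e4, c=2.0):
    """Illustrative 1-D nonlinear Kelvin-Voigt element (NOT the paper's
    constitutive law): a cubic elastic spring in parallel with a linear
    viscous damper.

        F(x, x_dot) = k1*x + k3*x**3 + c*x_dot
    """
    return k1 * x + k3 * x**3 + c * x_dot

# Quasi-static indentation at constant speed: force rises nonlinearly
# with displacement (the "deformation before cutting open" segment).
t = np.linspace(0.0, 1.0, 100)
speed = 0.01                                   # cutting speed [m/s], hypothetical
x = speed * t
F = kelvin_voigt_nonlinear(x, np.full_like(x, speed))
```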

    An efficient active B-spline/NURBS model for virtual sculpting

    This thesis presents an efficient active B-spline/NURBS model for virtual sculpting. In spite of the ongoing rapid development of computer graphics and computer-aided design tools, 3D graphics designers still rely on non-intuitive modelling procedures for the creation and manipulation of freeform virtual content. The 'Virtual Sculpting' paradigm is a well-established mechanism for shielding designers from the complex mathematics that underpin freeform shape design. The premise is to emulate familiar elements of traditional clay sculpting within the virtual design environment. Purely geometric techniques can mimic some physical properties, while more exact energy-based approaches struggle to do so at interactive rates. This thesis establishes a unified approach for the representation of physically aware, energy-based, deformable models across the domains of Computer Graphics, Computer-Aided Design and Computer Vision, and formalises the theoretical relationships between them. A novel reformulation of the computer vision approach of Active Contour Models (ACMs) is proposed for the domain of Virtual Sculpting. The proposed ACM-based model offers novel interaction behaviours and strikes a compromise between purely geometric and more exact energy-based approaches, facilitating physically plausible results at interactive rates. Predefined shape primitives provide features of interest, acting like sculpting tools, such that the overall deformation of an Active Surface Model is analogous to traditional clay modelling. The thesis develops a custom approach to provide full support for B-splines, the de facto industry-standard representation of freeform surfaces, which have not previously benefited from the seamless embodiment of a true Virtual Sculpting metaphor. A novel, generalised, computationally efficient mathematical framework for the energy minimisation of an Active B-Spline Surface is established. The resulting algorithm is shown to significantly reduce computation times and has broader applications across the domains of Computer-Aided Design, Computer Graphics, and Computer Vision. A prototype 'Virtual Sculpting' environment encapsulating each of the outlined approaches is presented, demonstrating their effectiveness in addressing the long-standing need for a computationally efficient and intuitive solution to the problem of interactive computer-based freeform shape design.
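    For orientation, here is one semi-implicit minimisation step of a classical Active Contour Model in the style of Kass et al.; this is a simplified 2D stand-in for the thesis' Active B-Spline Surface framework, not its actual algorithm:

```python
import numpy as np

def acm_step(pts, ext_force, alpha=0.1, beta=0.05, gamma=1.0):
    """One semi-implicit step of a classical Active Contour Model:
    internal energy uses tension (alpha) and rigidity (beta) terms,
    and the system (gamma*I + A) is solved once per step.
    """
    n = pts.shape[0]
    # Pentadiagonal internal-energy matrix for a closed contour.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    # Semi-implicit Euler update: (gamma*I + A) x_new = gamma*x + f_ext.
    return np.linalg.solve(gamma * np.eye(n) + A, gamma * pts + ext_force(pts))

# Usage: a circle of control points relaxing under zero external force.
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
contour = np.column_stack([np.cos(theta), np.sin(theta)])
contour = acm_step(contour, lambda p: np.zeros_like(p))
```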

    Development and Evaluation of Sensor Concepts for Ageless Aerospace Vehicles: Report 3 - Design of the Concept Demonstrator

    This report outlines the essential features of a Structural Health Monitoring Concept Demonstrator (CD) that will be constructed during the next eight months. It is emphasized that the design cannot be considered complete, and that design work will continue in parallel with construction and testing. A major advantage of the modular design is that small modules of the system can be developed, tested and modified before a commitment is made to full system development. The CD is expected to develop and evolve for a number of years after its initial construction. This first stage will, of necessity, be relatively simple and have limited capabilities. Later developments will improve all aspects of the system's functionality, including sensing, processing, communications, intelligence and response. The report indicates the directions this later development will take.

    Volumetric particle modeling

    This dissertation presents a robust method of modeling objects and forces for computer animation, in which both objects and forces are represented as particles. As in most modeling systems, the movement of objects is driven by physically based forces. The use of particles, however, allows more artistically motivated behavior to be achieved and also allows the modeling of heterogeneous objects and objects in different state phases: solid, liquid or gas. By using invisible particles to propagate forces through the modeling environment, complex behavior is achieved through the interaction of relatively simple components; in sum, 'macroscopic' behavior emerges from 'microscopic' modeling. We present a newly developed modeling framework, expanding on related work, that allows objects and forces to be modeled using particle representations and provides the details of how objects are created, how they interact, and how they may be displayed. We present examples to demonstrate the viability and robustness of the developed method: they illustrate the breaking and fracturing of solids, the interaction of objects in different phase states, and the achievement of a reasonable balance between artistic and physically based behaviors.
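    As a sketch of how force propagation between particles can work in principle (a hypothetical spring-like kernel, not the dissertation's actual formulation):

```python
import numpy as np

def pairwise_forces(pos, rest=0.1, k=50.0, cutoff=0.2):
    """Minimal sketch of particle-to-particle force propagation:
    particles within `cutoff` push or pull each other toward a rest
    separation, so macroscopic behavior emerges from many simple
    local interactions.
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if 1e-9 < dist < cutoff:
                f = k * (dist - rest) * (d / dist)  # Hooke-like restoring force
                forces[i] += f
                forces[j] -= f                      # Newton's third law
    return forces

# Usage: forces on a small random cloud of particles.
rng = np.random.default_rng(0)
p = rng.random((20, 3)) * 0.3
f = pairwise_forces(p)
```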

    Performance measurements of test subjects: frame rate upconversion and latency compensation using image-based rendering

    Traditionally in computer graphics, complex 3D scenes are represented as a collection of primitive geometric surfaces, and this geometric representation is then rendered into a 2D raster image suitable for display devices. Image-based rendering is an interesting addition to geometry-based rendering: its performance is constrained only by display resolution, not by scene geometry complexity or shader complexity. When used together with a geometry-based renderer, an image-based renderer can extrapolate additional frames into an animation sequence based on geometrically rendered frames. Existing research into image-based rendering methods is surveyed in the context of interactive computer graphics. An image-based renderer is also implemented to run on a modern GPU shader architecture. Finally, it is used in a first-person shooter game experiment to measure task performance under frame rate upconversion. Image-based rendering is found to be promising both for frame rate upconversion and for latency compensation. An implementation of an image-based renderer is found to be feasible on modern GPUs. The experiment results show considerable improvement in test subject hit rates when using frame rate upconversion with latency compensation.
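    To make the extrapolation step concrete, here is a minimal CPU sketch of forward reprojection: each pixel of a geometrically rendered frame is unprojected with its depth and reprojected with the extrapolated camera. Function and parameter names are assumptions; the thesis implementation runs as a GPU shader:

```python
import numpy as np

def extrapolate_frame(color, depth, inv_vp_old, vp_new, w, h):
    """Forward-reproject a rendered frame (color + depth buffer) into
    the view given by the new view-projection matrix vp_new."""
    out = np.zeros_like(color)
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel centers to normalized device coordinates in [-1, 1].
    ndc = np.stack([2 * (xs + 0.5) / w - 1,
                    1 - 2 * (ys + 0.5) / h,
                    2 * depth - 1,
                    np.ones_like(depth)], axis=-1)
    world = ndc @ inv_vp_old.T                  # unproject (homogeneous)
    clip = world @ vp_new.T                     # reproject
    clip /= clip[..., 3:4]                      # perspective divide
    px = ((clip[..., 0] + 1) * 0.5 * w).astype(int)
    py = ((1 - clip[..., 1]) * 0.5 * h).astype(int)
    ok = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    out[py[ok], px[ok]] = color[ok]             # naive splat; holes remain
    return out
```

    A production renderer would additionally fill disocclusion holes and resolve depth conflicts between splatted pixels; this sketch omits both for brevity.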

    Dynamic data structures and saliency-influenced rendering

    With the increasing heterogeneity of modern hardware, different requirements for 3d applications arise. Although real-time rendering of photo-realistic images is possible using today's graphics cards, it still requires large computational effort, and smartphones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, the detail of a scene is usually reduced so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts, which can distract a human spectator gazing at the display and thus reduce the visual quality of the presented scene. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods utilize geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels of Detail (LODs), which are made available to the rendering system; the system then reduces the detail of an object using the precalculated LODs, e.g. when it is moved into the background of the scene. The appropriate LOD is selected using a metric and exchanged with the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition; otherwise the exchange introduces discontinuities, which are easily noticed by a human spectator. After completion of the transition, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human. Human vision is limited in many ways, ranging from the inability to distinguish colors under varying illumination to the ability to focus on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of an applied compression; popular methods of vision-based compression include MPEG and JPEG. For example, JPEG compression exploits the reduced sensitivity of humans to color and encodes colors at a lower resolution. Other fields, such as auditory perception, also allow the exploitation of human limitations: MP3 compression, for example, reduces the quality of stored frequencies if they are masked by other frequencies. Various computer models of perception exist. In our rendering scenario, a model is advantageous that cannot be influenced by the human spectator, such as visual salience, or saliency. Saliency is a notion from psychophysics that describes how strongly an object "pops out" from its surroundings. These outstanding objects (or features) are important for human vision and are directly evaluated by our Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows an identification of regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering methods. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted if only small or imperceptible errors are expected to occur. Yet saliency is commonly applied to 2d images, and an extension towards 3d objects has only partially been presented.
Some issues need to be addressed to accomplish a complete transfer. In this work, we present a smart rendering system that not only utilizes a 3d visual salience model but also applies the reduction in detail directly during rendering. As opposed to normal LOD methods, this detail reduction is not limited to a predefined set of levels; rather, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3d object is presented. The definition of this function allows object-related visual salience information to be precalculated and stored. This stored data is then applicable in any illumination scenario and allows the identification of regions of interest on the surface of a 3d object. Unlike preprocessed methods, which generate a view-independent LOD, this identification includes information about the scene as well; thus, we are able to define a perception-based, view-specific LOD. Performance measures of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have proven the validity of the reduction. The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations that define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, a cut is stored and re-traversal of the tree during rendering is avoided. Due to the explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes a node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed locally, requiring only direct successor or predecessor information (see the sketch below). Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. These strategies evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of a cut while being restricted in some way, e.g. in size. The latter evolves the cut to match a certain distribution, and is applied in cases where a prioritization of nodes is not applicable. Both evaluation strategies operate with linear time complexity with respect to the size of the current TreeCut. The data layout separates rendering data from the hierarchy to enable multi-threaded evaluation and display: the object is adapted over multiple frames while rendering is not interrupted by the evaluation strategy. Due to this design, the overhead imposed by the TreeCut data structure does not influence rendering performance, and linear time complexity for rendering is retained.
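A minimal sketch of the two core operations, following the description above (illustrative only, not the thesis' implementation):

```python
class Node:
    """Node of a multi-resolution hierarchy: leaves carry full detail,
    inner nodes approximate their subtree."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []

class TreeCut:
    """The cut is stored explicitly, so rendering never re-traverses
    the tree, and only two local operations alter it."""
    def __init__(self, root):
        self.cut = {root}                 # start at the coarsest level

    def refine(self, node):
        """Replace a cut node with its children (more detail)."""
        if node in self.cut and node.children:
            self.cut.remove(node)
            self.cut.update(node.children)

    def coarse(self, node):
        """Replace a node and all its siblings with their parent
        (less detail); valid only if every sibling is on the cut."""
        parent = node.parent
        if parent and all(c in self.cut for c in parent.children):
            self.cut.difference_update(parent.children)
            self.cut.add(parent)

# Usage: start at the root, add detail, then remove it again.
root = Node()
root.children = [Node(root), Node(root)]
cut = TreeCut(root)
cut.refine(root)               # cut = both children
cut.coarse(root.children[0])   # cut = {root} again
```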
The TreeCut is not limited to altering the geometrical detail of an object: it has also successfully been applied to create a non-photo-realistic stippling display, which draws the object with equally sized points of varying density. In this case the bucket-based evaluation strategy is utilized, which determines the distribution of the cut based on local illumination information. As an alternative, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon; a combination of external priorities is used to derive the appropriate icon version. An application for this mechanism is a messaging system that accounts for the current user situation. When optimizing an object or scene, perceptual methods allow human limitations to be accounted for or exploited. To this end, visual salience approaches derive a saliency map, which encodes regions of interest in a 2d map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. aborting a recursion when the current location is unsalient. Visual salience depends on multiple factors, including the view and the illumination of the scene. We extend the existing definition of 2d saliency and propose a universal function for 3d visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting saliency from a 2d image and approximating 3d information, we compute this information directly from the 3d data. We derive a list of equivalent features for the 3d scenario and add them to the BSWDF. As the BSWDF is universal, 2d images are also covered, and the calculation of the important regions within images is possible. To extract the individual features that contribute to visual salience, the capabilities of modern graphics cards are utilized in combination with an accumulation method for rendering. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and compared with their surround to determine whether they "pop out". These operations are performed with a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3d data, which increases processing speed because no transfer of the data is required. After computation, these object-specific features can be combined to derive a saliency map for the object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored on disk. We define a sampling scheme to determine the views that need to be evaluated for each object. With these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. These sampling schemes compose a set of images in the form of a lookup table, similar to existing rendering techniques that extract illumination information from a lookup. Since the images are of equal size, the size of the lookup table grows only with the number of samples or the image size used for creation; thus, the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information of the object. Once the data is available, the saliency values are computed using a specialized illumination model, and a priority for each primitive is extracted.
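To illustrate the center-surround comparison described above, here is a minimal per-surfel saliency sketch; it demonstrates the "pop-out" principle only and is not the actual BSWDF:

```python
import numpy as np

def center_surround_saliency(features, neighbors):
    """A surfel is salient when its feature vector differs strongly
    from the mean of its surround.

    features  : (n, k) per-surfel feature vectors (e.g. color, curvature)
    neighbors : list of index arrays, neighbors[i] = surround of surfel i
    """
    saliency = np.zeros(len(features))
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        surround = features[nbrs].mean(axis=0)
        saliency[i] = np.linalg.norm(features[i] - surround)  # "pop-out"
    return saliency / (saliency.max() + 1e-12)                # normalize
```

In the thesis this comparison runs in a GPU shader with direct access to the 3d data; the resulting per-primitive values serve as priorities for the TreeCut evaluation.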
If the GPU is used, the calculated data has to be transferred back from the graphics card; we therefore use the "transform feedback" capabilities, which allow high transfer rates and preserve the order of processed primitives. In this way, an identification of regions of interest based on the currently used primitives is achieved, and the TreeCut evaluation strategies are able to optimize the representation in a perception-based manner. As the adaptation utilizes information from the current scene, each change to an object can result in new visual salience information. A self-optimizing system is thus defined: the Feedback System. The output generated by this system converges towards a perception-optimized solution. To prove that the saliency information is useful, user tests were performed on the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression to a purely geometrical approach common for LOD generation. One result of the tests is that saliency information allows compression to be increased even further than is possible with purely geometrical methods: the participants were not able to distinguish between objects even when the saliency-based compression had only 60% of the size of the geometrically reduced object. At greater size ratios, saliency-based compression was rated with a higher score on average, and statistical tests showed these results to be highly significant. The Feedback System extends a 3d object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation that utilizes a Software Development Kit (SDK) for physical simulations, chosen on the one hand to show the universal applicability of the proposed system, and on the other hand to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design. In this case, the TreeCut operations alter not only geometrical but also simulation detail. This increases calculation performance because both the renderer and the SDK operate on less data after the reduction has been completed. The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connections; an example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited to point-based simulations. Any of these simulations scales with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in static parts of the simulation. Thus, the detail of the simulation is preserved while fewer nodes are simulated. The incorporation of perception into real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived; these models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented to change the LOD of an object in a dynamic and continuous manner. No definition of the individual levels is required in advance, and the transitions are performed locally.
Furthermore, in combination with the identification of important regions by the BSWDF, a perceptual evaluation of a 3d object is achieved. As opposed to existing methods, which approximate data from 2d images, the perceptual information is acquired directly from the 3d data. Some of this data can be preprocessed if necessary, to defer additional computations during rendering. The Feedback System, created by combining the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved on modern hardware, and we have proven the validity of the reductions through several user tests. However, the presented system only focuses on specific aspects, and more research is required to capture even more of the capabilities that a perception-based rendering system can provide.
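As a closing illustration, one iteration of the Feedback System loop might look like the following hypothetical sketch, which reuses the TreeCut and saliency sketches above; names and the node budget are illustrative, not taken from the thesis:

```python
def feedback_step(tree_cut, node_priority, budget=8):
    """One Feedback System iteration: rank the current cut nodes by
    their saliency-based priority, refine the most salient ones, and
    coarsen the least salient ones, so the representation converges
    towards a perception-optimized state over successive frames.

    node_priority : dict mapping each cut node to a saliency priority,
                    e.g. derived from center_surround_saliency above.
    """
    ranked = sorted(tree_cut.cut, key=lambda n: node_priority.get(n, 0.0))
    for node in ranked[-budget:]:
        tree_cut.refine(node)      # add detail in highly salient regions
    for node in ranked[:budget]:
        tree_cut.coarse(node)      # remove detail in unsalient regions
```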

    Design and development of a VR system for exploration of medical data using haptic rendering and high quality visualization

    [no abstract]

    Inference-based Geometric Modeling for the Generation of Complex Cluttered Virtual Environments

    As the use of simulation increases across many different application domains, the need for high-fidelity three-dimensional virtual representations of real-world environments has never been greater. This need has driven the research and development of both faster and easier methodologies for creating such representations. In this research, we present two different inference-based geometric modeling techniques that support the automatic construction of complex cluttered environments. The first method is a surface reconstruction-based approach capable of reconstructing solid models from a point cloud capture of a cluttered environment. Our algorithm can identify objects of interest within a cluttered scene and reconstruct complete representations of these objects even in the presence of occluded surfaces. This approach incorporates a predictive modeling framework that uses a set of user-provided models as prior knowledge and applies this knowledge to the iterative identification and construction process. Our approach uses a local-to-global construction process guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several synthetic and real-world datasets containing heavy clutter and occlusion. The second method is a generative modeling-based approach that can construct a wide variety of diverse models based on user-provided templates. This technique leverages an inference-based construction algorithm for developing solid models from these template objects. The algorithm samples and extracts surface patches from the input models and develops a Petri net structure that our algorithm uses to fit these patches together in a consistent fashion (a minimal sketch follows below). Our approach uses this generated structure, along with a defined parameterization (either user-defined through a simple sketch-based interface or algorithmically defined through various methods), to automatically construct objects of varying sizes and configurations. These variations can include arbitrary articulation, as well as repetition and interchanging of parts sampled from the input models. Finally, we affirm our motivation by showing an application of these two approaches: we demonstrate how the constructed environments can easily be used within a physically based simulation capable of supporting many different application domains.
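    As a minimal illustration of how a Petri net can gate patch placement (hypothetical structure and names; the paper's actual construction is more elaborate):

```python
class PetriNet:
    """Places hold tokens, and a transition may fire only when all of
    its input places are marked. Here a transition could represent
    attaching one surface patch, enabled only once its prerequisite
    patches are placed."""
    def __init__(self):
        self.marking = {}            # place -> token count
        self.transitions = {}        # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place and produce one in
        each output place; refuse to fire if not enabled."""
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Usage: a leg patch can attach only after the seat patch is placed.
net = PetriNet()
net.marking = {"seat_placed": 1}
net.add_transition("attach_leg", ["seat_placed"], ["seat_placed", "leg_placed"])
assert net.fire("attach_leg")
```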