251 research outputs found

    Fast Volume Rendering and Deformation Algorithms

    Volume rendering is a technique for the simultaneous visualization of surfaces and inner structures of objects. However, the huge number of volume primitives (voxels) in a volume leads to a high computational cost. In this dissertation I developed two algorithms for the acceleration of volume rendering and volume deformation. The first algorithm accelerates ray casting of volumes. Previous ray casting acceleration techniques such as space-leaping and early-ray-termination are only efficient when most voxels in a volume are either opaque or transparent; when many voxels are semi-transparent, the rendering time increases considerably. Our new algorithm improves the performance of ray casting of semi-transparently mapped volumes by exploiting the opacity coherency in object space, leading to a speedup factor between 1.90 and 3.49 when rendering semi-transparent volumes. The acceleration is realized with the help of pre-computed coherency distances. We developed an efficient algorithm to encode the coherency information, which requires less than 12 seconds for data sets with about 8 million voxels. The second algorithm is for volume deformation. Unlike traditional methods, our method incorporates the two stages of volume deformation, i.e. deformation and rendering, into a unified process. Instead of deforming each voxel to generate an intermediate deformed volume, the algorithm follows inversely deformed rays to generate the desired deformation. The calculations and memory for generating the intermediate volume are thus saved. Deformation continuity is achieved by adaptive ray division, which matches the amplitude of the local deformation. We propose approaches for shading and opacity adjustment which guarantee the visual plausibility of the deformation results. We achieve an additional deformation speedup factor of 2.34 to 6.58 by incorporating early-ray-termination, space-leaping and the coherency acceleration technique into the new deformation algorithm.
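
    As a rough illustration of how pre-computed coherency distances can work together with early ray termination, the Python sketch below composites one ray front-to-back and takes a single long step across each opacity-coherent run of voxels. The array names, data layout and shading are placeholders for this illustration, not the dissertation's actual implementation.

        import numpy as np

        def composite_ray(opacity, color, coherency, origin, direction,
                          step=1.0, termination=0.99):
            """Front-to-back compositing of one ray through a regular volume.

            Illustrative inputs (assumed layout, not from the dissertation):
              opacity[z, y, x]     per-voxel opacity after transfer-function mapping
              color[z, y, x, :]    per-voxel RGB after shading
              coherency[z, y, x]   pre-computed number of samples over which the
                                   opacity is approximately constant
            """
            acc_rgb = np.zeros(3)
            acc_a = 0.0
            t, t_max = 0.0, float(np.linalg.norm(opacity.shape))  # conservative exit
            while t < t_max and acc_a < termination:              # early ray termination
                p = origin + t * direction
                idx = tuple(np.clip(p, 0, np.array(opacity.shape) - 1).astype(int))
                a = float(opacity[idx])
                d = max(1.0, float(coherency[idx]))               # coherent run length
                # Composite the whole coherent segment in one step instead of
                # sampling it d times: d identical samples transmit (1 - a)**d.
                seg_a = 1.0 - (1.0 - a) ** d
                acc_rgb += (1.0 - acc_a) * seg_a * color[idx]
                acc_a += (1.0 - acc_a) * seg_a
                t += d * step
            return acc_rgb, acc_a

    The segment opacity 1 - (1 - a)^d is exactly what compositing d identical samples would produce, so the leap is lossless under the assumption that opacity really is constant over the coherency distance.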

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning and dynamic key-pose caching, which benefit from current graphics hardware. We further address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
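
    One common runtime LoD-selection criterion in this setting is projected screen-space size. The Python sketch below is a minimal illustration of that idea only; the Agent type, the pixel thresholds and the LoD levels are hypothetical, not taken from any surveyed system.

        import math
        from dataclasses import dataclass

        @dataclass
        class Agent:
            position: tuple      # world-space (x, y, z)
            radius: float = 0.5  # bounding-sphere radius in metres

        def select_lod(agent, camera_pos, fov_y, screen_height,
                       pixel_thresholds=(250, 60, 12)):
            """Pick an LoD index (0 = full mesh, higher = coarser, last = impostor)
            from the character's approximate projected size in pixels.
            fov_y is the vertical field of view in radians."""
            dist = math.dist(agent.position, camera_pos)
            if dist < 1e-6:
                return 0
            # projected diameter of the agent's bounding sphere, in pixels
            pixels = (2.0 * agent.radius / dist) * screen_height / (2.0 * math.tan(fov_y / 2.0))
            for lod, threshold in enumerate(pixel_thresholds):
                if pixels >= threshold:
                    return lod
            return len(pixel_thresholds)  # smallest representation, e.g. an impostor

    In practice such a selector is usually combined with hysteresis or temporal filtering so that characters near a threshold do not flicker between levels.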

    Meshless Mechanics and Point-Based Visualization Methods for Surgical Simulations

    Computer-based modeling and simulation practices have become an integral part of medical education. For surgical simulation applications, realistic constitutive modeling of soft tissue is considered one of the most challenging aspects of the problem, because biomechanical soft-tissue models need to reflect the correct elastic response, have to be efficient enough to run at interactive simulation rates, and must support operations such as cuts and sutures. Mesh-based solutions, where the connections between the individual degrees of freedom (DoF) are defined explicitly, have been the traditional choice for these problems. However, when the problem under investigation contains a discontinuity that disrupts the connectivity between the DoFs, the underlying mesh structure has to be reconfigured in order to handle the newly introduced discontinuity correctly. For mesh-based techniques this reconfiguration is typically called dynamic remeshing, and it is most often the performance bottleneck of the simulation. In this dissertation, the efficiency of point-based meshless methods is investigated for both the constitutive modeling of elastic soft tissues and the visualization of simulation objects, where arbitrary discontinuities/cuts are applied to the objects in the context of surgical simulation. The point-based deformable object modeling problem is examined in three functional aspects: modeling continuous elastic deformations of a point-based object, handling discontinuities in it, and visualizing it. Algorithmic and implementation details of the presented techniques are discussed in the dissertation. The presented point-based techniques are implemented as separate components and integrated into the open-source software framework SOFA. The presented meshless continuum mechanics model of elastic tissue was verified by comparing it to the Hertzian non-adhesive frictionless contact theory: virtual experiments were set up with a point-based deformable block and a rigid indenter, and the force-displacement curves obtained from the virtual experiments were compared to the theoretical solutions. The meshless mechanics model of soft tissue and the integrated novel discontinuity treatment technique discussed in this dissertation allow cuts of arbitrary shape to be handled. The implemented enrichment technique not only modifies the internal mechanics of the soft-tissue model, but also updates the point-based visual representation efficiently, avoiding costly dynamic remeshing operations.
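
    The Hertz solution used for this kind of verification has a simple closed form. The sketch below evaluates the theoretical force-displacement curve for a rigid spherical indenter pressed into an elastic half-space; the indenter radius and material parameters are placeholders, not the values used in the dissertation's experiments.

        import numpy as np

        def hertz_force(depth, radius, E, nu):
            """Hertzian non-adhesive frictionless contact: force of a rigid sphere of
            radius `radius` indenting an elastic half-space (Young's modulus E,
            Poisson ratio nu) to depth `depth`:
                F = (4/3) * E_eff * sqrt(R) * d**1.5,  with E_eff = E / (1 - nu**2).
            """
            E_eff = E / (1.0 - nu ** 2)
            return (4.0 / 3.0) * E_eff * np.sqrt(radius) * np.asarray(depth) ** 1.5

        # Placeholder soft-tissue-like parameters: E = 10 kPa, nu = 0.45, R = 5 mm.
        depths = np.linspace(0.0, 5e-3, 50)                    # indentation depth [m]
        theory = hertz_force(depths, radius=5e-3, E=10e3, nu=0.45)

    A simulated force-displacement curve from the meshless model would then be compared point-wise against `theory` to quantify the agreement.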

    Deep-Learning-Based 3-D Surface Reconstruction—A Survey

    In the last decade, deep learning (DL) has significantly impacted industry and science. Initially driven largely by computer vision tasks on 2-D imagery, the field's focus has shifted toward 3-D data analysis. In particular, 3-D surface reconstruction, i.e., reconstructing a 3-D shape from sparse input, is of great interest to a wide variety of application fields. DL-based approaches show promising quantitative and qualitative surface reconstruction performance compared to traditional computer vision and geometric algorithms. This survey provides a comprehensive overview of these DL-based methods for 3-D surface reconstruction. To this end, we first discuss input data modalities, such as volumetric data, point clouds, and RGB, single-view, multiview, and depth images, along with corresponding acquisition technologies and common benchmark datasets. For practical purposes, we also discuss evaluation metrics that enable us to judge the reconstructive performance of different methods. The main part of the document introduces a methodological taxonomy ranging from point- and mesh-based techniques to volumetric and implicit neural approaches. Recent research trends, both methodological and application-driven, are highlighted, pointing toward future developments.
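
    As an example of the kind of evaluation metric such surveys cover, the snippet below computes a symmetric Chamfer distance between a reconstructed and a reference point set. Whether the survey uses this exact formulation (squared distances, mean aggregation) is an assumption made for illustration.

        import numpy as np

        def chamfer_distance(P, Q):
            """Symmetric Chamfer distance between point sets P (N, 3) and Q (M, 3):
            the mean nearest-neighbour squared distance in both directions.
            Brute-force O(N*M) version; fine for small clouds, use a k-d tree otherwise.
            """
            d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # (N, M)
            return d2.min(axis=1).mean() + d2.min(axis=0).mean()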

    Vertex classification for non-uniform geometry reduction.

    Complex models created from isosurface extraction or CAD, and highly accurate 3D models produced by high-resolution scanners, are useful, for example, in medical simulation, Virtual Reality and entertainment. Models generally require some manual editing before they can be incorporated into a walkthrough, simulation, computer game or movie. The visualization challenges of a 3D editing tool may be regarded as similar to those of other applications that include an element of visualization, such as Virtual Reality. However, the rendering and interaction requirements of each of these applications vary according to their purpose. For rendering photo-realistic images in movies, computer farms can render uninterrupted for weeks, whereas a 3D editing tool requires fast access to a model's fine data. In Virtual Reality, rendering acceleration techniques such as level of detail (LoD) can temporarily render parts of a scene with alternative lower-complexity versions in order to meet a frame rate tolerable for the user. These alternative versions can be dynamic increments of complexity or static models that were uniformly simplified across the model by minimizing some cost function. Scanners typically have a fixed sampling rate for the entire model being scanned, and therefore may generate large amounts of data in areas that are not of much interest or that contribute little to the application at hand. It is therefore desirable to simplify such models non-uniformly. Features such as areas of very high curvature or borders can be detected automatically and simplified differently from other areas without any interaction or visualization. However, a problem arises when one wishes to manually select features of interest in the original model to preserve, and to create stand-alone, non-uniformly reduced versions of large models, for example for medical simulation. To inspect and view such models, the memory requirements of LoD representations can be prohibitive and prevent storage of a model in main memory. Furthermore, although asynchronous rendering of a simplified base model ensures a frame rate tolerable to the user whilst detail is paged, no guarantees can be made that what the user is selecting is at the original resolution of the model, or at an appropriate LoD, owing to disk lag or the complexity of a particular view selected by the user. This thesis presents an interactive method, in the context of a 3D editing application, for feature selection from any model that fits in main memory. We present a new compression/decompression technique for triangle normals and colour which does not require dedicated hardware, achieves an 87.4% memory reduction so that larger models fit in main memory and can be viewed interactively, and introduces at most 1.3/2.5 degrees of error on triangle normals. To address scale and the available hardware resources, we reference a hierarchy of volumes of different sizes. The distances of the volumes at each level of the hierarchy to the intersection point of the line of sight with the model are calculated and sorted. At startup, an appropriate level of the tree is chosen automatically by separating the time required for rendering from that required for sorting and constraining the latter according to the resources available.
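
    The abstract does not spell out the normal-compression scheme itself, so the sketch below shows a generic alternative, octahedral quantisation of unit normals, purely to illustrate how the bit budget trades off against angular error; it is not the thesis's technique.

        import numpy as np

        def _sign(v):
            # sign convention that maps 0 to +1, as usual for octahedral encodings
            return np.where(v >= 0.0, 1.0, -1.0)

        def oct_encode(n, bits=16):
            """Quantise a unit normal to two unsigned `bits`-bit integers."""
            n = np.asarray(n, dtype=float)
            p = n[:2] / np.abs(n).sum()
            if n[2] < 0.0:                              # fold the lower hemisphere
                p = (1.0 - np.abs(p[::-1])) * _sign(p)
            return np.round((p * 0.5 + 0.5) * ((1 << bits) - 1)).astype(np.uint32)

        def oct_decode(q, bits=16):
            """Recover an approximate unit normal from its octahedral code."""
            p = (q / ((1 << bits) - 1)) * 2.0 - 1.0
            z = 1.0 - np.abs(p).sum()
            if z < 0.0:                                 # unfold the lower hemisphere
                p = (1.0 - np.abs(p[::-1])) * _sign(p)
            v = np.array([p[0], p[1], z])
            return v / np.linalg.norm(v)

    Measuring the angle between an input normal and its decoded counterpart over a model's triangles gives the same kind of error figure (degrees of deviation) that the thesis reports for its own scheme.
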
    A clustered navigation skin and a depth-buffer strategy allow for the interactive visualisation of models of any size, ensuring that triangles from the closest volumes are rendered over the navigation skin even when the clustered skin may be closer to the viewer than the original model. We show results with scanned models, CAD models, textured models and an isosurface. This thesis also addresses numerical issues arising from the optimisation of cost functions in LoD algorithms, and presents a semi-automatic solution for selecting the threshold on the condition number of the matrix to be inverted for the optimal placement of the new vertex created by an edge collapse. We show that the units in which a model is expressed may inadvertently affect the condition of these matrices, hence affecting the evaluation of different LoD methods with different solvers. We use the same solver with an automatically calibrated threshold to evaluate different uniform geometry reduction techniques. We then present a framework for the non-uniform reduction of regular scanned models that can be used in conjunction with a variety of LoD algorithms. The benefits of non-uniform reduction are presented in the context of an animation system. (Abstract shortened by UMI.)
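
    A minimal sketch of the kind of condition-number guard described above, written in the style of quadric-error-metric simplification: solve for the optimal collapse position only when the 3x3 system is well conditioned, and otherwise fall back to the endpoints or their midpoint. The threshold value and the quadric layout are assumptions for illustration, not the thesis's calibrated settings.

        import numpy as np

        def collapse_position(Q, v1, v2, cond_threshold=1e3):
            """Place the vertex created by collapsing the edge (v1, v2).

            Q is the 4x4 combined quadric of the two endpoints.  The optimal
            position solves A x = -b with A = Q[:3, :3] and b = Q[:3, 3]; if A is
            ill-conditioned (for instance because of the units in which the model
            is expressed), fall back to the cheapest of v1, v2 and their midpoint.
            """
            A, b = Q[:3, :3], -Q[:3, 3]
            if np.linalg.cond(A) < cond_threshold:
                return np.linalg.solve(A, b)
            cost = lambda v: np.append(v, 1.0) @ Q @ np.append(v, 1.0)
            return min((v1, v2, 0.5 * (v1 + v2)), key=cost)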

    Representation and coding of 3D video data

    Deliverable D4.1 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.1 of the project.