
    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, so massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently supports only static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and gaps occur where deformation stretches the elements more than one discrete location apart. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example, a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; the control structure must therefore also capture features hierarchically, according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
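
    The tree structure described here is essentially a sparse voxel octree: each node stores a filtered sample for its cube and is subdivided only where the volume contains detail, so a query can stop at whatever resolution a region of interest requires. The following is a minimal sketch of that idea; the class layout and names are illustrative assumptions, not the thesis's implementation.

    ```python
    import numpy as np

    class OctreeNode:
        """One cube of the volume; children exist only where data is present."""
        def __init__(self, center, half_size, value=None):
            self.center = np.asarray(center, dtype=float)
            self.half_size = half_size          # half the edge length of this cube
            self.value = value                  # coarse (filtered) sample for this cube
            self.children = [None] * 8          # sparse: absent octants stay None

    def sample(node, point, max_depth):
        """Descend toward `point`, stopping at max_depth or where the tree is sparse."""
        if max_depth == 0 or all(c is None for c in node.children):
            return node.value
        # Pick the octant containing the query point.
        offset = point >= node.center            # three booleans -> octant index
        index = int(offset[0]) | int(offset[1]) << 1 | int(offset[2]) << 2
        child = node.children[index]
        if child is None:                        # empty space: coarse value suffices
            return node.value
        return sample(child, point, max_depth - 1)
    ```

    Because empty octants are simply absent, storage scales with occupied detail rather than with the full voxel grid.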

    Visual Data Representation using Context-Aware Samples

    The rapid growth in the complexity of geometry models has necessitated the revision of several conventional techniques in computer graphics. At the heart of this trend is the representation of geometry with locally constant approximations using independent sample primitives. This generally leads to a higher sampling rate and thus a high cost of representation, transmission, and rendering. We advocate an alternative approach involving context-aware samples that capture the local variation of the geometry. We detail two approaches: one based on differential geometry and the other based on statistics. Our differential-geometry-based approach captures the context of the local geometry using an estimate of the local Taylor series expansion. We render such samples on programmable Graphics Processing Units (GPUs) by fast approximation of the geometry in screen space. The benefits of this representation can also be seen in other applications, such as the simulation of light transport. In our statistics-based approach we capture the context of the local geometry using Principal Component Analysis (PCA). This allows us to achieve hierarchical detail by modeling the geometry non-deterministically as a hierarchical probability distribution. We approximate the geometry and its attributes using quasi-random sampling. Our results show a significant rendering speedup and savings in geometric bandwidth when compared to current approaches.
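
    As a rough illustration of the statistics-based approach, the "context" of a sample can be derived from a PCA of its neighborhood: the smallest-variance axis approximates the surface normal, and the eigenvalues summarize the local extent of the geometry. This is a hedged sketch assuming k-nearest-neighbor input, not the authors' actual pipeline.

    ```python
    import numpy as np

    def local_pca(neighbors):
        """PCA of a k-nearest-neighbor patch: returns its mean, principal axes,
        and per-axis variances (the 'context' of the local geometry)."""
        pts = np.asarray(neighbors, dtype=float)      # shape (k, 3)
        mean = pts.mean(axis=0)
        cov = np.cov((pts - mean).T)                  # 3x3 covariance matrix
        variances, axes = np.linalg.eigh(cov)         # eigenvalues in ascending order
        # The smallest-variance axis approximates the surface normal; the other
        # two span the tangent plane, so one sample carries orientation and extent.
        normal = axes[:, 0]
        return mean, axes, variances, normal
    ```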

    Saliency-guided Graphics and Visualization

    In this dissertation, we show how principles of saliency can be used to enhance depiction, manage visual attention, and increase interactivity for 3D graphics and visualization. Current mesh saliency approaches are inspired by low-level human visual cues but had not previously been validated. Our eye-tracking-based user study shows that the current computational model of mesh saliency approximates human eye movements well. Artists, illustrators, photographers, and cinematographers have long used the principles of contrast and composition to guide visual attention. We present a visual-saliency-based operator to draw visual attention to selected regions of interest, and we have observed that it is more successful at eliciting viewer attention than the traditional Gaussian enhancement operator for visualizing both volume datasets and 3D meshes. Mesh saliency can be measured in various ways. The previous model computes saliency by identifying the uniqueness of curvature. Another way to identify uniqueness is to look for non-repeating structure amid repeating structure, and we have developed a system to detect repeating patterns in 3D point datasets. We introduce the idea of creating vertex and transformation streams that represent large point datasets via their interaction. This dramatically improves arithmetic intensity and addresses the input-geometry bandwidth bottleneck for interactive 3D graphics applications. Fast previewing of time-varying datasets is important for summarization and abstraction. We compute the salient frames in molecular dynamics simulations through subspace analysis of the protein's residue orientations. We first compute an affinity matrix for each frame i of the simulation based on the similarity of the orientations of the protein's backbone residues. Eigenanalysis of the affinity matrix gives us the subspace that best represents the conformation of frame i, and we use this subspace to represent the frames ahead of and behind frame i. The more accurately the subspace of frame i represents its neighbors, the less salient the frame is. Taken together, the tools and techniques developed in this dissertation are likely to provide the building blocks for next-generation visual analysis, reasoning, and discovery environments.
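
    The frame-saliency step can be sketched as follows: build a per-frame affinity matrix from residue orientations, take its leading eigenvectors as the frame's subspace, and score saliency by how poorly that subspace reconstructs the affinity matrices of neighboring frames. The cosine-similarity kernel, window size, and subspace dimension below are assumptions for illustration, not the dissertation's exact choices.

    ```python
    import numpy as np

    def frame_subspace(orientations, dim=3):
        """Leading eigenvectors of the affinity matrix of one frame's residue
        orientations (unit vectors, shape (n_residues, 3))."""
        affinity = orientations @ orientations.T      # cosine similarities
        _, eigvecs = np.linalg.eigh(affinity)         # eigenvalues ascending
        return eigvecs[:, -dim:]                      # top-`dim` subspace

    def frame_saliency(frames, i, window=5, dim=3):
        """Score frame i by how poorly its subspace reconstructs the affinity
        matrices of nearby frames; a high residual marks a salient frame."""
        basis = frame_subspace(frames[i], dim)
        projector = basis @ basis.T
        residuals = []
        for j in range(max(0, i - window), min(len(frames), i + window + 1)):
            affinity_j = frames[j] @ frames[j].T
            residuals.append(
                np.linalg.norm(affinity_j - projector @ affinity_j @ projector))
        return float(np.mean(residuals))
    ```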

    Generation and Editing of Geometric Textures Represented by Point Sets (Génération et édition de textures géométriques représentées par des ensembles de points)

    Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.

    Point based graphics rendering with unified scalability solutions.

    Standard real-time 3D graphics rendering algorithms use brute-force polygon rendering, with complexity linear in the number of polygons and little regard for limiting processing to data that contributes to the image. Modern hardware can now render smaller scenes to pixel levels of detail, relaxing surface-connectivity requirements. Sub-linear scalability optimizations are typically self-contained, requiring specific data structures without shared functions and data. A new point-based rendering algorithm, 'Canopy', is investigated that combines multiple typically sub-linear scalability solutions using a small core of data structures. Specifically, locale management, hierarchical view-volume culling, backface culling, occlusion culling, level of detail, and depth ordering are addressed. To demonstrate versatility further, shadows and collision detection are examined. Polygon models are voxelized with interpolated attributes to provide points. A scene tree is constructed, based on a BSP tree of points, with compressed attributes. The scene tree is embedded in a compressed, partitioned, procedurally based scene-graph architecture that mimics conventional systems with groups, instancing, inlines, and basic read-on-demand rendering from backing store. Hierarchical scene-tree refinement constructs an image-space equivalent, the image tree, in which projected object-space scene nodes form image-node equivalents. An image graph of image nodes is maintained, describing image- and object-space occlusion relationships; it is hierarchically refined in front-to-back order to a specified threshold while occlusion culling with occluder fusion. Visible nodes at medium levels of detail are refined further to rasterization scales. Occlusion culling defines a set of visible nodes that can support caching for temporal coherence. The occlusion culling is approximate and may not suit critical applications. Quality and performance are tested against standard rendering. Although the algorithm has an O(f) upper bound in the scene size f, it is shown to scale sub-linearly in practice. Scenes that would conventionally contain several hundred billion polygons are rendered at interactive frame rates with minimal graphics-hardware support.
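
    A generic sketch of the front-to-back hierarchical refinement loop such a system runs: nodes are pulled nearest-first from a priority queue, culled against the view, drawn when their projected size falls below a pixel threshold, and split otherwise. Occlusion culling with occluder fusion is omitted for brevity, and the bounding-sphere cone test here is a stand-in assumption rather than Canopy's actual machinery.

    ```python
    import heapq
    import numpy as np

    class Node:
        def __init__(self, center, radius, children=()):
            self.center = np.asarray(center, dtype=float)
            self.radius = radius                  # bounding-sphere radius
            self.children = list(children)

    def refine(root, eye, fov_cos=0.5, pixel_threshold=0.01):
        """Front-to-back refinement with crude cone culling and a
        projected-size LOD test (placeholders for richer culling stages)."""
        view = np.array([0.0, 0.0, -1.0])         # assumed fixed view direction
        heap = [(np.linalg.norm(root.center - eye), id(root), root)]
        visible = []
        while heap:
            dist, _, node = heapq.heappop(heap)   # nearest node first
            to_node = (node.center - eye) / max(dist, 1e-9)
            if np.dot(to_node, view) < fov_cos and dist > node.radius:
                continue                          # outside the view cone: cull
            if not node.children or node.radius / max(dist, 1e-9) < pixel_threshold:
                visible.append(node)              # small enough: draw at this LOD
            else:
                for child in node.children:       # split, keeping near-to-far order
                    heapq.heappush(
                        heap, (np.linalg.norm(child.center - eye), id(child), child))
        return visible
    ```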

    Efficient Level-of-Details for Point Based Rendering. In Proceedings IASTED Computer Graphics and Imaging, pages –, 2003.

    In this paper we present techniques for the efficient generation of a level-of-detail (LOD) data structure for large-scale point-based surface representation and rendering. Our approach generates a spatial partitioning hierarchy of irregular point samples in 3D space, and we provide an efficient point-octree LOD generation algorithm. Using the concept of transformation-invariant homogeneous covariance matrices, we show how bounding-ellipsoid information can be computed efficiently for all LODs. Furthermore, we present an efficient data structure for the representation of the LOD hierarchy.
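
    The covariance trick can be illustrated as follows: the 4x4 scatter matrix of homogeneous points p_hat = (x, y, z, 1) is additive across octree children and maps as A M Aᵀ under an affine transform A, so bounding-ellipsoid parameters for every LOD fall out of simple sums with no re-scan of the points. This sketch is one plausible reading of that idea, not the paper's exact formulation.

    ```python
    import numpy as np

    def homogeneous_scatter(points):
        """4x4 scatter matrix M = sum of p_hat p_hat^T for p_hat = (x, y, z, 1).
        Additive across children; maps to A @ M @ A.T under an affine A."""
        hom = np.hstack([points, np.ones((len(points), 1))])
        return hom.T @ hom

    def bounding_ellipsoid(scatter):
        """Recover mean and covariance from M, then ellipsoid axes by eigenanalysis."""
        n = scatter[3, 3]                           # point count
        mean = scatter[:3, 3] / n
        cov = scatter[:3, :3] / n - np.outer(mean, mean)
        radii_sq, axes = np.linalg.eigh(cov)        # axis directions and variances
        return mean, axes, np.sqrt(np.maximum(radii_sq, 0.0))

    # An internal octree node's scatter is just the sum of its children's
    # scatters, so every LOD's ellipsoid comes from one matrix addition.
    ```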