
    GigaVoxels: ray-guided streaming for efficient and detailed voxel rendering

    Figure 1: Images show volume data consisting of billions of voxels rendered with our dynamic sparse-octree approach. Our algorithm achieves real-time to interactive rates on volumes far exceeding GPU memory capacity, thanks to efficient streaming based on a ray-casting solution: the volume is only used at the resolution needed to produce the final image. Besides the gains in memory and speed, our rendering is inherently anti-aliased. We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation, depending on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly from information extracted during rendering. Our data structure exploits the fact that, in CG scenes, details are often concentrated at the interface between free space and clusters of density, and shows that volumetric models may become a valuable alternative rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), or of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps within a limited GPU memory budget. This is the author's version of the paper; the final version has been published in the I3D 2009 conference proceedings.
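    The "volume used only at the resolution needed" idea amounts to matching voxel size to the on-screen pixel footprint. A minimal sketch of such a level-of-detail pick (illustrative only; the function name and parameters are our own, not from the paper):

```python
import math

def octree_depth(distance, fov_y, image_height, root_size, max_depth):
    """Choose the octree subdivision depth at which one voxel projects
    to roughly one pixel (hypothetical helper, not GigaVoxels code)."""
    # World-space size covered by one pixel at this viewing distance.
    pixel_footprint = 2.0 * distance * math.tan(fov_y / 2.0) / image_height
    # Depth at which a voxel of the root volume shrinks to that footprint.
    depth = math.ceil(math.log2(root_size / pixel_footprint))
    return max(0, min(max_depth, depth))
```

    Rays that hit empty or occluded regions never request deeper nodes, which is what keeps the streamed working set within the GPU memory budget.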

    Synthesizing Verdant Landscape Using Volumetric Textures

    In this paper, we turn the volumetric-texture representation into a real tool: animation was dealt with in EWAS'95; here we deal with texel construction, mapping, color, etc. We apply these methods to synthesize complex natural scenes.

    Efficient evaluation of functionally represented volumetric objects.

    There are several approaches to representing shapes in computer graphics. One way to describe objects and operations is Function Representation (FRep). In FRep, a geometric object is defined by a single continuous real-valued function of point coordinates. Geometric modelling is generally conducted in order to achieve a visual outcome. In FRep, the transformation of a function into a visual representation relies on extensive sampling of the function. The computational cost of this sampling can adversely affect an application at run time. In this thesis the problem of efficient evaluation of the defining function is discussed. An observation is made on a wide range of operations and primitives within FRep and their suitability for parallelization. Furthermore, a novel method is proposed to distribute FRep's computational workloads across parallel hardware devices such as graphics processing units and multi-core processors.
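    As a hedged illustration of FRep (not code from the thesis; the names and the particular R-function are our own), an object is defined by a real-valued function that is non-negative inside, combined through analytic set operations, and visualized by dense sampling in which every sample is independent, hence the suitability for parallel hardware:

```python
import math

def sphere(cx, cy, cz, r):
    """FRep primitive: f >= 0 inside, f = 0 on the surface, f < 0 outside."""
    return lambda x, y, z: r * r - ((x - cx)**2 + (y - cy)**2 + (z - cz)**2)

def union(f, g):
    """Set-theoretic union via an R-function: stays continuous everywhere."""
    return lambda x, y, z: (f(x, y, z) + g(x, y, z)
                            + math.sqrt(f(x, y, z)**2 + g(x, y, z)**2))

def occupancy(f, n, lo=-2.0, hi=2.0):
    """Count inside samples on an n^3 grid -- the costly sampling step;
    each evaluation is independent, so the loop parallelizes trivially."""
    step = (hi - lo) / (n - 1)
    return sum(1 for i in range(n) for j in range(n) for k in range(n)
               if f(lo + i*step, lo + j*step, lo + k*step) >= 0.0)
```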

    A General and Multiscale Model for Volumetric Textures

    This paper presents an important extension of the volumetric textures introduced by Kajiya in 1989. Volumetric textures are used to model complex geometries (such as foliage, fur, ...) in a textural way, by mapping a 'thick skin' made of a repetitive pattern (the texel) onto a simple surface. As in Kajiya's implementation, our model encodes the reference volume with voxels, which store an illumination model in addition to density. (Even seen from afar, corrugated iron does not look like a flat sheet: the shape can be approximated, but the reflectance behaviour remains different.) The extension lies in a fairly generic encoding of illumination and, above all, in the ability of the model to be 'filtered', so that the representation is the volumetric equivalent of mip-mapped 2D textures. It thus becomes possible to render very detailed scenes fairly quickly and with little aliasing, the details being represented adaptively.
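    The 'filtered' representation described above is the volumetric analogue of a 2D mip-map. A minimal sketch of such a pyramid over a density volume (illustrative only; the paper's texels also store an illumination model, which is omitted here):

```python
def downsample(vol):
    """One mip level: average each 2x2x2 block of a cubic density volume."""
    n = len(vol) // 2
    return [[[sum(vol[2*i + di][2*j + dj][2*k + dk]
                  for di in (0, 1) for dj in (0, 1) for dk in (0, 1)) / 8.0
              for k in range(n)]
             for j in range(n)]
            for i in range(n)]

def build_pyramid(vol):
    """Pyramid down to a single voxel; a renderer samples the level whose
    voxel size matches the on-screen footprint, which keeps aliasing low."""
    levels = [vol]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels
```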

    Real-time fur modeling with simulation of physical effects

    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Master's), Bilkent University, 2012. Includes bibliographical references (leaves 51-54). Fur is one of the important visual aspects of animals, and it is quite challenging to model in computer graphics, because rendering and animating large amounts of geometry takes excessive time on personal computers. Thus, in computer games most animals are shown without fur or covered with a single layer of texture. These methods do not provide realism: even if the rendering in the game is otherwise realistic, the fur is omitted. Several models have been proposed to render fur, but the methods that incorporate physical simulation do not run in real time, while most real-time methods omit many natural aspects, such as texture, lighting, shadowing and animation. The outcome is therefore not sufficient for a realistic gaming experience. In this thesis we propose a real-time fur representation that can be applied to 3D objects. Moreover, we demonstrate how to render, animate and burn this real-time fur. Arıyürek, Sinan. M.S.

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    We propose a solution for generating dynamic heightmap data to simulate deformations of soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation. Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to the underlying mesh deformations in a physically plausible way. Various methods have been explored to synthesize wrinkles directly in the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists through wrinkle maps and tension maps provides control but may fall short for complex deformations or where greater realism is required. Our research presents the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended to other soft materials beyond skin.
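    As a hedged sketch of the general idea (not the thesis' actual synthesis; the compression input and all names are hypothetical), a dynamic wrinkle heightmap can be driven by a per-texel compression value so that wrinkles only appear where the underlying mesh is compressed:

```python
import math

def wrinkle_heightmap(width, height, compression, frequency=8.0, amplitude=1.0):
    """Dynamic mesostructure sketch: a sinusoidal wrinkle pattern whose
    amplitude grows with local compression (0 = at rest, 1 = fully compressed).
    'compression(u, v)' stands in for a tension value derived from the mesh."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            u, v = x / (width - 1), y / (height - 1)
            c = max(0.0, min(1.0, compression(u, v)))
            # Wrinkle ridges run perpendicular to the compression direction.
            row.append(amplitude * c * 0.5 * (1.0 + math.sin(2.0 * math.pi * frequency * u)))
        rows.append(row)
    return rows
```

    A static procedural microstructure layer (e.g. pores) would simply be added on top of this dynamic term.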

    Actas do 10º Encontro Português de Computação Gráfica

    Proceedings of the 10th Encontro Português de Computação Gráfica, Lisbon, 1-3 October 2001. Research, development and teaching in the field of Computer Graphics are, in Portugal, a positive reality with long traditions. The Encontro Português de Computação Gráfica (EPCG), held within the activities of the Grupo Português de Computação Gráfica (GPCG), has regularly brought together, ever since the 1st EPCG, also held in Lisbon but back in July 1988, all those who work in this broad field with countless applications. For the first time in the history of these meetings, the 10th EPCG was organized in close connection with the Image Processing and Computer Vision communities, through the Associação Portuguesa de Reconhecimento de Padrões (APRP), thus highlighting the growing collaboration and convergence between those two areas and Computer Graphics. This is the proceedings volume of the 10th EPCG.

    Hardware accelerated volume texturing.

    The emergence of volume graphics, a subfield of computer graphics, has been evident for the last 15 years. Growing out of scientific visualization problems, volume graphics has established itself as an important field in general computer graphics. However, the general graphics community still favours the established surface graphics techniques, owing to their well-founded methods and a complete pipeline through software onto display hardware. This enables real-time applications to be constructed with ease and used by a wide range of end users, thanks to the readily available graphics hardware adopted by many computer manufacturers. Volume graphics has traditionally been restricted to high-end systems due to the complexity involved in rendering volume datasets: either specialised graphics hardware or powerful computers were required to generate images, many of them not in real time. Although there have been specialised hardware solutions to the volume rendering problem, the adoption of the volume dataset as a primitive relies on end users with commodity hardware being able to display images at interactive rates. The recent emergence of programmable consumer-level graphics hardware now allows these platforms to compute volume rendering at interactive rates, though most work in this field is directed towards scientific visualisation. The work in this thesis addresses the issues in providing real-time volume graphics techniques to the general graphics community using commodity graphics hardware. Real-time texturing of volumetric data is explored as an important set of techniques for delivering volume datasets as a general graphics primitive.
    The main contributions of this work are: the introduction of efficient acceleration techniques; interactive display of amorphous phenomena modelled outside an object defined in a volume dataset; interactive procedural texture synthesis for volume data; 2D texturing techniques and extensions for volume data in real time; and a flexible surface-detail mapping algorithm that removes many previous restrictions. Parts of this work have been presented at the 4th International Workshop on Volume Graphics and also published in Volume Graphics 2005.

    Hybrid modelling of heterogeneous volumetric objects.

    Heterogeneous multi-material volumetric modelling is an emerging and rapidly developing field. A heterogeneous object is a volumetric object with an interior structure over which different physically-based attributes are defined. The attributes can be of different natures: material distributions, density, microstructures, optical properties and others. Heterogeneous objects are widely used where the presence of interior structures is an important part of the model. Computer-aided design (CAD), additive manufacturing, physical simulation, visual effects, medical visualisation and computer art are examples of such applications. In particular, digital fabrication employing multi-material 3D printing techniques is becoming omnipresent. However, specific methods and tools for the representation, modelling, rendering, animation and fabrication of multi-material volumetric objects with attributes are only starting to emerge, and the need for an adequate unifying theoretical and practical framework has become obvious. Developing adequate representational schemes for heterogeneous objects is at the core of research in this area. The most widely used representations for defining heterogeneous objects are boundary representation, distance-based representations, function representation and voxels. These representations work well for modelling homogeneous (solid) objects, but they all have significant drawbacks when dealing with heterogeneous objects. In particular, boundary representation, while maintaining its prevailing role in computer graphics and geometric modelling, is not inherently natural for heterogeneous objects, especially in the context of additive manufacturing and 3D printing, where multi-material properties are paramount, and in physical simulation, where an exact rather than approximate representation can be important.
    In this thesis, we introduce and systematically describe a theoretical and practical framework for modelling volumetric heterogeneous objects on the basis of a novel unifying functionally-based hybrid representation called HFRep. It builds on the function representation (FRep) and several distance-based representations, namely signed distance fields (SDFs), adaptively sampled distance fields (ADFs) and interior distance fields (IDFs), embracing the advantages and circumventing the disadvantages of the initial representations. A mathematically substantiated theoretical description of HFRep, with an emphasis on the defining functions for HFRep objects' geometry and attributes, is provided. This mathematical framework serves as the basis for efficient algorithms to generate HFRep objects taking into account both their geometry and attributes. To make the proposed approach practical, a detailed description of efficient algorithmic procedures has been developed. This required employing a number of novel techniques of different natures, separately and in combination. In particular, an extension of the fast iterative method (FIM) for numerically solving the eikonal equation on hierarchical grids was developed, allowing efficient computation of smooth distance-based attributes. To prove the concept, the main elements of the framework have been implemented and used in several applications of different natures. It was experimentally shown that the developed methods and tools can be used for generating objects with complex interior structures, e.g. microstructures, and different attributes. Special consideration has been devoted to applications of a dynamic nature: a novel heterogeneous space-time blending (HSTB) method with automatic control for the metamorphosis of heterogeneous objects with textures, both in 2D and 3D, has been introduced, algorithmised and implemented, and applied in the context of the '4D Cubism' project.
    There are plans to use the developed methods and tools in many other applications.
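    HFRep itself is not reproduced here, but its core ingredient, distance fields carrying both geometry and attributes, can be sketched generically (the names and the polynomial smooth-union blend are our own choices, not the thesis' formulation):

```python
import math

def sd_sphere(px, py, pz, cx, cy, cz, r):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist((px, py, pz), (cx, cy, cz)) - r

def smooth_union(d1, d2, k):
    """Polynomial smooth-min of two signed distances; the blend factor h
    is reused below as a material attribute, so geometry and attribute
    come from the same distance-based definition."""
    h = max(0.0, min(1.0, 0.5 + 0.5 * (d2 - d1) / k))
    d = d2 + (d1 - d2) * h - k * h * (1.0 - h)
    return d, h

def material_at(p):
    """Heterogeneous sample: blended SDF for the shape, blend factor as
    the fraction of material A versus material B at point p."""
    d1 = sd_sphere(*p, -0.5, 0.0, 0.0, 1.0)
    d2 = sd_sphere(*p,  0.5, 0.0, 0.0, 1.0)
    return smooth_union(d1, d2, 0.3)
```

    Anywhere the blended distance is negative, the point is inside the object and the attribute varies smoothly across the blend region, which is the kind of continuous material distribution the thesis targets.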