
    GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering

    Figure 1: Volume data consisting of billions of voxels, rendered with our dynamic sparse octree approach. Our algorithm achieves real-time to interactive rates on volumes far exceeding GPU memory capacity, thanks to efficient streaming based on a ray-casting solution. The volume is only used at the resolution needed to produce the final image. Besides the gains in memory and speed, our rendering is inherently anti-aliased.
    We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled to an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly based on information extracted during rendering. Our data structure exploits the fact that in CG scenes, details are often concentrated on the interface between free space and clusters of density, and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), and of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps within a limited GPU memory budget. This is the author's version of the paper; the definitive version has been published in the I3D 2009 conference proceedings.
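    The central idea, descending a sparse voxel octree along each ray only as deep as the pixel footprint requires and letting the rays report missing data back to the streaming system, can be pictured with a small sketch. The node layout, the child-selection shortcut, and the LOD heuristic below are illustrative assumptions, not the paper's actual data structure or traversal.

```cpp
// Illustrative sketch (not the paper's implementation): ray-guided descent of a
// sparse voxel octree, stopping at the LOD matching the pixel footprint and
// recording missing bricks so the host can stream them in.
#include <cmath>
#include <cstdint>
#include <vector>

struct Node {
    int32_t firstChild = -1;   // index of first of 8 children, -1 if leaf
    int32_t brick      = -1;   // index of voxel brick in GPU cache, -1 if not resident
};

struct OctreeSketch {
    std::vector<Node> nodes;          // node pool (assumed layout)
    std::vector<int>  streamRequests; // bricks requested by rays this frame

    // Voxel size that projects to roughly one pixel at distance t.
    static float neededVoxelSize(float t, float pixelAngle) {
        return t * std::tan(pixelAngle);
    }

    // Descend until the node's voxel size matches the footprint or data is missing.
    // Returns the brick to sample, or -1 if it must be streamed first.
    int selectBrick(float t, float pixelAngle, float rootVoxelSize) {
        int   nodeIdx   = 0;
        float voxelSize = rootVoxelSize;
        const float target = neededVoxelSize(t, pixelAngle);
        while (voxelSize > target && nodes[nodeIdx].firstChild >= 0) {
            // A real traversal would pick the child containing the sample point;
            // we take child 0 here purely to keep the sketch short.
            nodeIdx    = nodes[nodeIdx].firstChild;
            voxelSize *= 0.5f;
        }
        if (nodes[nodeIdx].brick < 0) {
            streamRequests.push_back(nodeIdx); // rendering drives streaming
            return -1;                         // caller falls back to coarser data
        }
        return nodes[nodeIdx].brick;
    }
};
```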

    Transition Contour Synthesis with Dynamic Patch Transitions

    In this article, we present a novel approach for modulating the shape of transitions between terrain materials to produce detailed and varied contours where blend resolution is limited. Whereas texture splatting and blend mapping add detail to transitions at the texel level, our approach addresses the broader shape of the transition by introducing intermittency and irregularity. Our results show that enriched detail of the blend contour can be achieved with performance competitive with existing approaches, without additional texture or geometry resources or asset preprocessing. We achieve this by compositing blend masks on the fly, subdividing texture space into differently sized patches to produce irregular contours from minimal artistic input. Our approach is of particular importance for applications where GPU resources or artistic input are limited or impractical.
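    One way to picture this patch-based modulation is to offset the smooth blend weight with a per-patch pseudo-random value at a few patch sizes, so the 0.5 contour becomes intermittent and irregular. The patch counts, hash function, and offset amplitudes below are assumptions for illustration only, not the article's exact formulation.

```cpp
// Illustrative sketch: modulating a material blend weight per texture-space
// patch to break up a smooth transition contour.
#include <cmath>
#include <cstdint>

// Cheap integer hash mapped to [0,1); stands in for any per-patch random source.
static float hash01(int x, int y, int level) {
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u
               + static_cast<uint32_t>(level) * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFFFu) / 16777216.0f;
}

// Composite per-patch offsets at a few patch sizes onto the smooth blend weight,
// producing an irregular, intermittent contour where the weight crosses 0.5.
float modulatedBlend(float u, float v, float smoothBlend) {
    float blend = smoothBlend;
    for (int level = 0; level < 3; ++level) {
        float patchSize = 1.0f / float(8 << level);       // differently sized patches
        int px = int(std::floor(u / patchSize));
        int py = int(std::floor(v / patchSize));
        float offset = (hash01(px, py, level) - 0.5f) * 0.3f;
        blend += offset / float(level + 1);                // finer patches add less
    }
    return blend < 0.0f ? 0.0f : (blend > 1.0f ? 1.0f : blend);
}
```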

    Shadow Mapping: Shadow Filtering in OpenGL

    This bachelor thesis discusses the generation and filtering of shadows in 3D applications. It describes techniques for suppressing discretization artifacts and documents the artifacts typical of each technique. The reader will also learn about their performance and quality differences, and will thereby be able to evaluate the offered implementations.
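    Percentage-closer filtering (PCF) is one of the standard shadow-map filters such a thesis typically compares, and it illustrates how filtering suppresses the blocky discretization of hard shadow mapping. The CPU-side sketch below, including the shadow-map layout and the bias value, is an illustrative assumption rather than the thesis's implementation.

```cpp
// Minimal CPU-side sketch of percentage-closer filtering (PCF).
#include <algorithm>
#include <vector>

struct ShadowMap {
    int size;
    std::vector<float> depth; // size*size depths from the light's point of view

    float at(int x, int y) const {
        x = std::clamp(x, 0, size - 1);
        y = std::clamp(y, 0, size - 1);
        return depth[y * size + x];
    }
};

// Average a 3x3 neighbourhood of binary depth comparisons instead of a single
// test; this softens the blocky discretization artifacts of hard shadow mapping.
float pcfShadow(const ShadowMap& sm, float u, float v, float fragDepth,
                float bias = 0.002f) {
    int cx = int(u * sm.size);
    int cy = int(v * sm.size);
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            lit += (fragDepth - bias <= sm.at(cx + dx, cy + dy)) ? 1.0f : 0.0f;
    return lit / 9.0f; // fraction of samples that are lit
}
```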

    Linear Efficient Antialiased Displacement and Reflectance Mapping

    We present Linear Efficient Antialiased Displacement and Reflectance (LEADR) mapping, a reflectance filtering technique for displacement-mapped surfaces. Similarly to LEAN mapping, it employs two mipmapped texture maps, which store the first two moments of the displacement gradients. During rendering, the projection of this data over a pixel is used to compute a non-centered anisotropic Beckmann distribution using only simple, linear filtering operations. The distribution is then injected into a new, physically based, rough-surface microfacet BRDF model that includes masking and shadowing effects for both diffuse and specular reflection under directional, point, and environment lighting. Furthermore, our method is compatible with animation and deformation, making it extremely general and flexible. Combined with an adaptive meshing scheme, LEADR mapping provides the first seamless and hardware-accelerated multi-resolution representation for surfaces. To demonstrate its effectiveness, we render highly detailed production models in real time on a commodity GPU, with quality matching supersampled ground-truth images.
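    The moment bookkeeping behind this kind of filtering can be sketched as follows: store the first and second moments of the slopes in mipmapped maps, then recover the mean slope and (co)variance of an anisotropic, non-centered lobe from the linearly filtered values. The struct layout and function names are assumptions; texture fetching and the full BRDF evaluation are omitted.

```cpp
// Illustrative sketch of LEAN/LEADR-style moment filtering.
struct SlopeMoments {
    // First moments: E[sx], E[sy]; second moments: E[sx*sx], E[sy*sy], E[sx*sy].
    float ex, ey, exx, eyy, exy;
};

struct BeckmannLobe {
    float meanX, meanY;        // mean slope (non-centered lobe direction)
    float varX, varY, covXY;   // anisotropic slope (co)variance
};

// Per-texel moments are built from the displacement gradient (sx, sy).
SlopeMoments momentsFromGradient(float sx, float sy) {
    return { sx, sy, sx * sx, sy * sy, sx * sy };
}

// Because moments filter linearly, this recovery stays valid for any linear
// combination (mipmapping, anisotropic filtering) of SlopeMoments texels.
BeckmannLobe lobeFromMoments(const SlopeMoments& m) {
    BeckmannLobe b;
    b.meanX = m.ex;
    b.meanY = m.ey;
    b.varX  = m.exx - m.ex * m.ex;  // Var[x]   = E[x^2] - E[x]^2
    b.varY  = m.eyy - m.ey * m.ey;
    b.covXY = m.exy - m.ex * m.ey;  // Cov[x,y] = E[xy] - E[x]E[y]
    return b;
}
```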

    Real-time Terrain Rendering using Smooth Hardware Optimized Level of Detail

    We present a method for real-time level-of-detail reduction that is able to display high-complexity polygonal surface data. A compact and efficient regular grid representation is used. The method is optimized for modern, low-end consumer 3D graphics cards. We avoid sudden changes of the geometry when reducing it, also known as 'popping', by exploiting low-level hardware programmability while maintaining interactive frame rates. Terrain models are repolygonized in order to minimize the visible error. Furthermore, the method minimizes CPU usage during rendering and requires minimal pre-processing. We believe that this is the first time that a smooth level of detail has been implemented on commodity hardware.
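    Smooth LOD transitions of this kind are commonly realized by "geomorphing" in the vertex program: each vertex is blended between its fine-level height and the height it would have on the coarser grid, so the mesh already matches the coarser level by the time the LOD switches. The distance thresholds and data layout below are assumptions for illustration, not the paper's exact scheme.

```cpp
// Illustrative sketch of vertex geomorphing to avoid LOD popping.
#include <algorithm>

struct TerrainVertex {
    float x, z;
    float fineHeight;    // height at this LOD
    float coarseHeight;  // height interpolated from the parent (coarser) grid
};

// Morph factor: 0 well inside the LOD ring, ramping to 1 near the switch distance.
float morphFactor(float viewDistance, float lodStart, float lodEnd) {
    float t = (viewDistance - lodStart) / (lodEnd - lodStart);
    return std::clamp(t, 0.0f, 1.0f);
}

// What the vertex program would output: a smoothly morphed height.
float morphedHeight(const TerrainVertex& v, float viewDistance,
                    float lodStart, float lodEnd) {
    float t = morphFactor(viewDistance, lodStart, lodEnd);
    return v.fineHeight * (1.0f - t) + v.coarseHeight * t;
}
```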

    Towards the optimization of resolution and rendering issues in the context of contemporary environmental design computer modelling

    This dissertation sets out to find a pragmatic solution for optimizing both resolution and rendering issues in the context of contemporary environmental design computer modelling. In this regard the following issues are addressed. Firstly, determining, with reference to a limited selection of existing 3DS Max software and plugins (i.e. VRay and Mental-Ray), which specific piece of software produces the best compromise as far as visual accuracy is concerned whilst still offering the designer the best scope for further design manipulation. Secondly, establishing design techniques which can increase the speed of model making as well as reduce rendering time without having an adverse effect on issues such as resolution and image quality. Lastly, ascertaining the least number of surfaces for a typical geometrical shape (e.g. chair, table, ornament) that can be used without losing visual veracity, by manipulating the design itself. The research strongly supports the notion that VRay is the best overall software to employ as a base before applying any design solutions. In this latter regard a number of solutions became evident as means to both save memory and cut down on rendering time: using a spotlight rather than an omni light when rendering, because omni-light calculations include the generation of needless shadows; employing 'target direct' light and reducing the area of light in order to decrease the shadow calculation; eliminating objects which do not need shadows from the lighting calculation; and shutting off the reverberation and refraction factors before rendering. It was also confirmed that a black-and-white mipmap is better than a colour mipmap as far as saving the system's memory is concerned.

    Comparative Evaluation of Glossy Surface Lighting Results between Object-Space Lighting and Screen-Space Lighting

    The field of computer graphics places a premium on achieving an optimal balance between the fidelity of visual representation and rendering performance. For traditional shading techniques that operate in screen space, the level of fidelity is generally tied to the screen resolution and thus to the number of pixels rendered. Special application areas, such as stereo rendering for virtual-reality head-mounted displays, demand high output update rates and pixel resolutions, which can lead to significant performance penalties. It would therefore be beneficial to use a rendering technique that can be decoupled from the output update rate and resolution without severely affecting rendering quality. One technique capable of meeting this goal is to perform a 3D model's surface shading in an object-specific space. In this thesis we implement such a shading method: the lighting computations over a model's surface are done on a model-specific, uniquely parameterized texture map we call a light map. As shading is computed per light-map texel, its cost does not depend on the output resolution or update rate. Additionally, we use the texture-sampling hardware built into the graphics processing units ubiquitous in modern computing systems to obtain high-quality anti-aliasing of the shading results. The resulting surface appearance is theoretically expected to be close to that of highly supersampled screen-space shading techniques. In addition to the object-space lighting technique, we also implemented a traditional screen-space version of our shading algorithm. Both techniques were used in a user study we organized to test this theoretical expectation. The results indicated that the object-space shaded images are perceptually close to identical to heavily supersampled screen-space images.
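    The decoupling can be sketched as a single shading pass over the light-map texels, whose cost depends only on the light-map resolution; the screen pass then just samples the result with hardware filtering. The data layout and the Lambert-only lighting model below are assumptions for illustration, not the thesis implementation.

```cpp
// Illustrative sketch of shading in a model-specific light map.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct LightMapTexel {
    Vec3 position;  // surface position from the unique parameterization
    Vec3 normal;    // surface normal at that texel
};

// One shading pass over the light map: the cost depends only on the light-map
// resolution, not on the output resolution or refresh rate.
std::vector<float> shadeLightMap(const std::vector<LightMapTexel>& texels,
                                 Vec3 lightDir) {
    Vec3 l = normalize(lightDir);
    std::vector<float> radiance(texels.size());
    for (std::size_t i = 0; i < texels.size(); ++i) {
        float ndotl = dot(texels[i].normal, l);
        radiance[i] = ndotl > 0.0f ? ndotl : 0.0f; // simple Lambert term
    }
    return radiance; // uploaded as a texture and sampled with mipmap/aniso filtering
}
```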

    Filtering Non-Linear Transfer Functions on Surfaces

    Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry is a common strategy used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer-function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (such as noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color-mapped microsurface details, our filtering is view- and light-dependent and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physically based rendering quantities. We propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. Our results match ground truth, and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material), is high performance, and has a negligible memory footprint.
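    The core of such filtering can be illustrated as follows: if the attribute driving a color map is approximately Gaussian over a pixel footprint (mean and standard deviation obtained from filtered first and second moments), the filtered color is the color map integrated against that Gaussian. The simple quadrature below is an illustrative assumption and does not reflect the paper's on-the-fly sampling of specialized distributions or its view- and light-dependent terms.

```cpp
// Illustrative sketch: filter a 1D color map through a Gaussian attribute
// distribution over a pixel footprint.
#include <cmath>
#include <functional>

struct Color { float r, g, b; };

// Numerically integrate colormap(x) * N(x; mu, sigma) over +-3 sigma.
Color filterColorMap(const std::function<Color(float)>& colormap,
                     float mu, float sigma, int samples = 64) {
    Color acc{0.0f, 0.0f, 0.0f};
    float weightSum = 0.0f;
    for (int i = 0; i < samples; ++i) {
        float t = -3.0f + 6.0f * (i + 0.5f) / samples;  // offset in units of sigma
        float w = std::exp(-0.5f * t * t);              // Gaussian weight
        Color c = colormap(mu + t * sigma);
        acc.r += w * c.r; acc.g += w * c.g; acc.b += w * c.b;
        weightSum += w;
    }
    return { acc.r / weightSum, acc.g / weightSum, acc.b / weightSum };
}
```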

    Streaming narrow-band algorithm: interactive computation and visualization of level sets

    Deformable isosurfaces, implemented with level-set methods, have demonstrated great potential in visualization and computer graphics for applications such as segmentation, surface processing, and physically based modeling. Their usefulness has been limited, however, by their high computational cost and reliance on significant parameter tuning. This paper presents a solution to these challenges by describing graphics processor (GPU) based algorithms for solving and visualizing level-set solutions at interactive rates. The proposed solution is based on a new, streaming implementation of the narrow-band algorithm. The new algorithm packs the level-set isosurface data into 2D texture memory via a multidimensional virtual memory system. As the level set moves, this texture-based representation is dynamically updated via a novel GPU-to-CPU message passing scheme. By integrating the level-set solver with a real-time volume renderer, a user can visualize and intuitively steer the level-set surface as it evolves. We demonstrate the capabilities of this technology for interactive volume segmentation and visualization.
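    The narrow-band principle underlying the streaming solver can be pictured with a small CPU sketch: only grid cells within a band around the zero level set are updated each iteration, and the band is rebuilt as the surface moves. The grid layout, the simplified central-difference update, and the speed-function interface below are assumptions for illustration; the 2D texture packing and GPU-to-CPU message passing of the actual system are omitted.

```cpp
// Illustrative CPU sketch of a 2D narrow-band level-set update.
#include <cmath>
#include <vector>

struct NarrowBandSolver {
    int nx, ny;
    float bandWidth;            // half-width of the active band (in phi units)
    std::vector<float> phi;     // level-set values, nx*ny
    std::vector<int>   band;    // linear indices of active cells

    void rebuildBand() {
        band.clear();
        for (int i = 0; i < nx * ny; ++i)
            if (std::fabs(phi[i]) < bandWidth)
                band.push_back(i);           // only cells near the zero level set
    }

    // One explicit update step; 'speed' (nx*ny values) plays the role of the
    // level-set speed function, e.g. derived from image data for segmentation.
    void step(const std::vector<float>& speed, float dt) {
        std::vector<float> next = phi;
        for (int idx : band) {
            int x = idx % nx, y = idx / nx;
            if (x == 0 || y == 0 || x == nx - 1 || y == ny - 1) continue;
            // Central-difference gradient magnitude (upwinding simplified away).
            float dphidx = 0.5f * (phi[idx + 1] - phi[idx - 1]);
            float dphidy = 0.5f * (phi[idx + nx] - phi[idx - nx]);
            float gradMag = std::sqrt(dphidx * dphidx + dphidy * dphidy);
            next[idx] = phi[idx] - dt * speed[idx] * gradMag;
        }
        phi.swap(next);
        rebuildBand();                        // the band follows the moving surface
    }
};
```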