
    Synthesizing Verdant Landscapes using Volumetric Textures

    Volumetric textures can represent complex repetitive data such as foliage, fur, and forests by storing one sample of geometry in a volumetric texel, which is then mapped onto a surface. This volume consists of samples of densities and reflectances stored in voxels. The texel can be prefiltered, similarly to the mip-mapping algorithm, giving efficient ray-traced rendering with low aliasing using a single ray per pixel. Our general purpose is to extend the volumetric texture method into a convenient and efficient tool for modeling, animating, and rendering highly complex scenes in ray tracing. We illustrate our method with verdant landscapes such as forests and lawns. In our previous work, we dealt with the multiscale volume representation and texel animation. In this paper, we show how to convert usual 3D models into texels, and how to render texels mapped onto any mesh type. Solving these two issues makes the method usable by a designer.
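    The prefiltering the abstract compares to mip-mapping can be sketched as a 3D pyramid of averaged voxel densities. This is a minimal illustration, not the paper's implementation: the function name is hypothetical, the grid is assumed to have power-of-two resolution, and reflectances are ignored in favor of densities alone.

    ```python
    import numpy as np

    def prefilter_texel(density, levels):
        """Build a mip-style pyramid over a cubic voxel grid of densities.

        Each coarser level averages disjoint 2x2x2 blocks of the finer
        level, so a ray-tracer can pick the level whose voxel size matches
        the pixel footprint and shoot a single ray per pixel with little
        aliasing.
        """
        pyramid = [density]
        for _ in range(levels):
            d = pyramid[-1]
            # Average disjoint 2x2x2 blocks (assumes power-of-two resolution).
            d = d.reshape(d.shape[0] // 2, 2,
                          d.shape[1] // 2, 2,
                          d.shape[2] // 2, 2).mean(axis=(1, 3, 5))
            pyramid.append(d)
        return pyramid

    # A uniform 8^3 density field stays uniform at every coarser level.
    pyr = prefilter_texel(np.full((8, 8, 8), 0.5), 3)
    ```

    Averaging densities (rather than, say, taking maxima) is what makes a coarse voxel a fair estimate of the light-attenuating mass inside it.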


    Synthesizing Verdant Landscape Using Volumetric Textures

    In this paper, we turn the representation into a real tool: having dealt with animation in EWAS'95, we address here texel construction, mapping, color handling, etc. We apply these methods to synthesize complex natural scenes.

    Interactive Vegetation Rendering with Slicing and Blending

    Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification exploits the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha- or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the nearest two slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with quality perceptually similar to the original model.
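    The view-dependent blending between the nearest two slicings can be sketched as follows. The abstract does not give a weighting formula, so the regular angular placement, the 180-degree reuse of a slicing seen from behind, and the linear cross-fade are all assumptions of this sketch, as is the function name.

    ```python
    def slicing_blend(view_angle_deg, num_slicings=6):
        """Pick the two slicings nearest a view direction and their weights.

        Slicings are assumed to be placed at regular angular intervals
        around the tree (every 180/num_slicings degrees, since a planar
        slicing viewed from behind reuses the same textures). Weights fall
        off linearly so the cross-fade stays seamless as the viewer orbits.
        """
        step = 180.0 / num_slicings        # angular spacing between slicings
        a = view_angle_deg % 180.0
        i = int(a // step)                 # nearest slicing "below" the view
        t = (a - i * step) / step          # fractional position between them
        j = (i + 1) % num_slicings         # wrap around the circle
        return (i, 1.0 - t), (j, t)

    # Halfway between slicings 0 and 1 (step is 30 degrees for 6 slicings),
    # the blend is an even 50/50 mix of the two.
    (i, wi), (j, wj) = slicing_blend(15.0)
    ```

    Because the two weights always sum to one, the transition between slicings never brightens or darkens the tree as the camera moves.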

    Procedural Generation of 3D Caves for Games on the GPU

    Procedural Content Generation in Games (PCG) is a thriving field of research and application. Recently presented examples range from levels, stories, and race tracks to complete rulesets for games. However, there is little research to date on procedural 3D modeling of caves and similar enclosed natural spaces. In this paper, we present a modular pipeline to procedurally generate underground caves in real time, to be used as part of larger landscapes in game worlds. We propose a three-step approach, which can be fully implemented using General-Purpose Computing on Graphics Processing Units (GPGPU) technology: 1) an L-system to emulate the expanded cracks and passages which form cave structures in nature, 2) a noise-perturbed metaball approach for virtual 3D carving, and 3) a rendering component for isosurface extraction of the modeled voxel data and further mesh enhancement through shader programming. We demonstrate how the interaction between these components produces results comparable to real-world caves, and show that the solution is viable for video game environments. For this, we present the findings of a user study we conducted among indie-game developers and players using our results.
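    Step 2 of the pipeline, the noise-perturbed metaball carving, can be illustrated as a scalar density field. This is a generic sketch under stated assumptions, not the authors' shader code: the function name, the classic (1 - r²/R²)² falloff, and the radius-perturbation scheme are all illustrative choices.

    ```python
    def carve_density(point, metaballs, noise=lambda p: 0.0):
        """Sum noise-perturbed metaball contributions at a 3D point.

        Each metaball is a (center, radius) pair; the smooth falloff
        (1 - r^2/R^2)^2 lets neighboring balls blend into continuous
        tunnels. Isosurface extraction (e.g. marching cubes) at a chosen
        threshold then yields the cave walls.
        """
        total = 0.0
        for (cx, cy, cz), radius in metaballs:
            px, py, pz = point
            r2 = (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2
            # Perturb the effective radius with a noise term to roughen
            # the otherwise perfectly smooth cave walls.
            r_eff = radius * (1.0 + noise(point))
            if r2 < r_eff ** 2:
                total += (1.0 - r2 / r_eff ** 2) ** 2
        return total

    # The field peaks at a ball's center and falls to zero at its radius.
    center_val = carve_density((0, 0, 0), [((0, 0, 0), 2.0)])
    edge_val = carve_density((2, 0, 0), [((0, 0, 0), 2.0)])
    ```

    Placing metaballs along the branches grown by the L-system of step 1, and feeding a 3D noise function into `noise`, is the natural way the two pipeline stages would connect.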

    Real-time fur modeling with simulation of physical effects

    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Master's), Bilkent University, 2012. Includes bibliographical references, leaves 51-54. Fur is one of the important visual aspects of animals, and it is quite challenging to model in computer graphics, because rendering and animating large amounts of geometry takes excessive time on personal computers. Thus, in computer games most animals appear without fur or covered with a single layer of texture; these methods do not provide realism, and even when the rest of the rendering is realistic, the fur is omitted. Several models exist for rendering fur, but the methods with convincing rendering are not real-time, while most real-time methods omit many natural aspects, such as texture, lighting, shadow, and animation, so the outcome is not sufficient for a realistic gaming experience. In this thesis we propose a real-time fur representation that can be used on 3D objects. Moreover, we demonstrate how to render, animate, and burn this real-time fur. Arıyürek, Sinan. M.S.

    Stereological techniques for synthesizing solid textures from images of aggregate materials

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (leaves 121-130). When creating photorealistic digital scenes, textures are commonly used to depict complex variation in surface appearance. For materials that have spatial variation in three dimensions, such as wood or marble, solid textures offer a natural representation. Unlike 2D textures, which can be easily captured with a photograph, a 3D material volume can be difficult to obtain. This thesis addresses the challenge of extrapolating tileable 3D solid textures from images of aggregate materials such as concrete, asphalt, terrazzo, or granite. The approach introduced here is inspired by and builds on prior work in stereology, the study of 3D properties of a material based on 2D observations. Unlike ad hoc methods for texture synthesis, this approach has rigorous mathematical foundations that allow for reliable, accurate material synthesis with well-defined assumptions. The algorithm is also driven by psychophysical constraints to ensure that slices through the synthesized volume have a perceptually similar appearance to the input image. The texture synthesis algorithm uses a variety of techniques to independently solve for the shape, distribution, and color of the embedded particles, as well as the residual noise. To approximate particle shape, I consider four methods, including two algorithms of my own contribution. I compare these methods under a variety of input conditions using automated, perceptually motivated metrics as well as a carefully controlled psychophysical experiment. In addition to assessing the relative performance of the four algorithms, I also evaluate the reliability of the automated metrics in predicting the results of the user study. To solve for the particle distribution, I apply traditional stereological methods. I first illustrate this approach for aggregate materials of spherical particles and then extend the technique to apply to particles of arbitrary shapes. The particle shape and distribution are used in conjunction to create an explicit 3D material volume using simulated annealing. Particle colors are assigned using a stochastic method, and high-frequency noise is replicated with the assistance of existing algorithms. The data representation is suitable for high-fidelity rendering and physical simulation. I demonstrate the effectiveness of the approach with side-by-side comparisons of real materials and their synthetic counterparts derived from the application of these techniques. By Robert Carl Jagnow. Ph.D.
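    The simulated-annealing step that builds the explicit material volume can be sketched in miniature as overlap relaxation between spherical particles. The thesis optimizes against a measured stereological distribution; this toy version only penalizes pairwise overlap, and the function name, energy, jitter size, and cooling schedule are all illustrative assumptions.

    ```python
    import math
    import random

    def anneal_particles(positions, radius, steps=2000, temp=1.0,
                         cooling=0.995):
        """Relax sphere centers in a unit cube until they stop overlapping.

        Energy sums squared pairwise overlap depths; a random jitter of one
        particle is accepted if it lowers the energy, or with Boltzmann
        probability exp(-delta/temp) otherwise, and the temperature decays
        geometrically each step.
        """
        def energy(pts):
            e = 0.0
            for i in range(len(pts)):
                for j in range(i + 1, len(pts)):
                    d = math.dist(pts[i], pts[j])
                    if d < 2 * radius:
                        e += (2 * radius - d) ** 2
            return e

        pts = [list(p) for p in positions]
        e = energy(pts)
        for _ in range(steps):
            k = random.randrange(len(pts))
            old = pts[k][:]
            # Jitter one particle, clamped to the unit cube.
            pts[k] = [min(1.0, max(0.0, c + random.uniform(-0.05, 0.05)))
                      for c in old]
            e_new = energy(pts)
            delta = e_new - e
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                e = e_new          # accept the move
            else:
                pts[k] = old       # reject and restore
            temp *= cooling
        return pts, e

    # Two coincident spheres of radius 0.2 drift apart until overlap-free.
    random.seed(0)
    pts, e_final = anneal_particles([(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)], 0.2)
    ```

    In the thesis the energy would additionally compare slice statistics of the candidate volume against the stereologically estimated particle distribution; only the accept/reject machinery is shown here.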

    Real-time Realistic Rendering Of Nature Scenes With Dynamic Lighting

    Rendering of natural scenes has long interested the scientific community due to its numerous applications. The goal is to create images similar to what a viewer would see in real life with his/her own eyes. The main obstacle is complexity: real-life nature scenes contain a huge number of small details that are hard to model, take a long time to render, and require more memory than current computers provide. This complexity comes mainly from geometry and lighting. The goal of our research is to overcome this complexity and achieve real-time rendering of nature scenes while providing visually convincing dynamic global illumination. Our work focuses on grass and trees, as they are commonly visible in everyday life. We handle both geometry and lighting complexity for grass, rendering millions of grass blades interactively with dynamic lighting. For trees, we address lighting complexity by proposing a lighting model that handles indirect lighting in real time. Our work makes extensive use of the current generation of Graphics Processing Units (GPUs) to meet the real-time requirement and to leave the CPU free to carry out other tasks.