11 research outputs found

    Real-time subsurface scattering on the GPU

    We present a GPU algorithm that computes subsurface light transport in real time on arbitrary animated meshes. We evaluate both single scattering and multiple scattering by using piecewise-linear and ring-based approximations of the surface in the fragment shader. We demonstrate our technique on animated meshes at 60 fps.
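    The abstract does not spell out the piecewise-linear and ring-based approximations, but real-time multiple-scattering methods of this kind typically build on the classical dipole diffusion profile. Below is a minimal Python sketch of that dipole diffuse reflectance R_d(r) (Jensen et al. 2001), not the paper's shader code; the function and parameter names, and the example material coefficients, are illustrative assumptions.

```python
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d(r) of the classical dipole model for a
    distance r between the points where light enters and exits the surface.
    sigma_a / sigma_s_prime: absorption and reduced scattering coefficients."""
    sigma_t_prime = sigma_a + sigma_s_prime              # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coefficient

    # Diffuse Fresnel reflectance and internal-reflection parameter A
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)

    z_r = 1.0 / sigma_t_prime            # depth of the real source
    z_v = z_r * (1.0 + 4.0 * A / 3.0)    # depth of the mirrored virtual source
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)

    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3
    )

# Radial falloff for milk-like coefficients (illustrative values, per mm)
for r in (0.1, 0.5, 1.0, 2.0):
    print(r, dipole_Rd(r, sigma_a=0.0014, sigma_s_prime=0.70))
```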

    Rendu de matériaux semi-transparents hétérogènes en temps réel

    Nature offers many semi-transparent materials such as marble, jade, or skin, as well as liquids such as milk or juice. Whether for digital film or interactive entertainment, rendering these materials efficiently remains an important goal. Although a large body of previous work simulates light scattering inside semi-transparent materials convincingly, few of these methods are interactive. This thesis presents a new method for simulating light scattering inside heterogeneous semi-transparent objects in real time. The core of the method relies on a voxelization of the geometric model, used as a simplified diffusion domain. The technique solves the diffusion equation with iterative methods to obtain a fast and efficient simulation. It stands out mainly through its fully dynamic execution, which requires no precomputation and therefore allows complete deformation of the geometric mesh.
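    As a rough illustration of the approach described above (solving the diffusion equation on a voxel grid with an iterative method), here is a minimal Jacobi-style solver in Python. It assumes a regular grid with uniform spacing, treats the diffusion coefficient as constant within each voxel, and uses simple zero (Dirichlet) boundaries; the thesis's actual discretization, boundary handling, and dynamic GPU implementation are not reproduced, and all names are illustrative.

```python
import numpy as np

def jacobi_diffusion(D, sigma_a, Q, h=1.0, iterations=200):
    """Minimal Jacobi solver for the steady-state diffusion equation
        div(D grad(phi)) - sigma_a * phi = -Q
    on a regular voxel grid (7-point stencil, constant spacing h).
    D, sigma_a, Q are 3-D arrays of per-voxel coefficients and sources."""
    phi = np.zeros_like(Q, dtype=float)
    for _ in range(iterations):
        # Sum of the six neighbour fluences (zero outside the grid)
        neigh = np.zeros_like(phi)
        neigh[1:, :, :] += phi[:-1, :, :]
        neigh[:-1, :, :] += phi[1:, :, :]
        neigh[:, 1:, :] += phi[:, :-1, :]
        neigh[:, :-1, :] += phi[:, 1:, :]
        neigh[:, :, 1:] += phi[:, :, :-1]
        neigh[:, :, :-1] += phi[:, :, 1:]
        # Jacobi update: phi = (D/h^2 * sum(neighbours) + Q) / (6*D/h^2 + sigma_a)
        phi = (D / h**2 * neigh + Q) / (6.0 * D / h**2 + sigma_a)
    return phi

# Usage: a 32^3 block of homogeneous material lit through one boundary voxel
n = 32
D = np.full((n, n, n), 0.5)
sigma_a = np.full((n, n, n), 0.01)
Q = np.zeros((n, n, n)); Q[n // 2, n // 2, 0] = 1.0
phi = jacobi_diffusion(D, sigma_a, Q)
```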

    Towards Interactive Photorealistic Rendering


    Interactive Rendering of Scattering and Refraction Effects in Heterogeneous Media

    In this dissertation, we investigate the problem of interactive and real-time visualization of single scattering, multiple scattering, and refraction effects in heterogeneous volumes. Our proposed solutions span a variety of use scenarios, from a very fast yet physically based approximation to a physically accurate simulation of microscopic light transmission. We add to the state of the art by introducing a novel precomputation and sampling strategy, a system for efficiently parallelizing the computation of different volumetric effects, and a new, faster version of the Discrete Ordinates Method. Finally, we also present complementary work on real-time 3D acquisition devices.
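    The abstract does not detail the precomputation, parallelization, or Discrete Ordinates machinery; as a baseline, the sketch below ray-marches the single-scattering integral through a heterogeneous extinction grid, which is the kind of computation such methods accelerate. It is a CPU sketch with assumed nearest-voxel lookups, an isotropic phase function, and a distant unit-radiance light; all names are illustrative.

```python
import numpy as np

def single_scatter(sigma_t, albedo, origin, direction, light_dir,
                   step=1.0, light_step=1.0):
    """Minimal ray marcher for single scattering in a heterogeneous volume.
    sigma_t: 3-D array of per-voxel extinction (grid units); albedo: scalar;
    origin/direction: ray in grid coordinates; light_dir: unit vector from
    the volume toward a distant light."""
    shape = np.array(sigma_t.shape)

    def lookup(p):
        i = np.round(p).astype(int)
        if np.any(i < 0) or np.any(i >= shape):
            return 0.0                                   # outside the grid: vacuum
        return float(sigma_t[tuple(i)])

    def transmittance(p, d, ds):
        tau, q = 0.0, p.copy()
        for _ in range(int(shape.max() / ds)):
            tau += lookup(q) * ds
            q = q + d * ds
        return np.exp(-tau)

    radiance, T = 0.0, 1.0
    p = np.asarray(origin, dtype=float).copy()
    d = np.asarray(direction, dtype=float)
    ldir = np.asarray(light_dir, dtype=float)
    for _ in range(int(shape.max() / step)):
        s = lookup(p)
        if s > 0.0:
            # single scattering: light attenuated to p, scattered once toward the eye
            L_i = transmittance(p, ldir, light_step)
            radiance += T * albedo * s * (1.0 / (4.0 * np.pi)) * L_i * step
            T *= np.exp(-s * step)
        p = p + d * step
    return radiance
```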

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images become ever more realistic, the need to model computer graphics content grows with it. Achieving photorealism requires detailed scenes that must often be modeled with a significant amount of manual labour. Interdisciplinary research combining the fields of Computer Graphics, Computer Vision, and Scientific Computing has led to the development of (semi-)automatic modeling tools that free the user from labour-intensive modeling tasks. Modeling animated content is especially challenging: realistic motion is necessary to convince the audience of computer games, movies with mixed-reality content, and augmented-reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke, and fluid flows.

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images that are radiometrically correct renditions of reality, is a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions with large financial impact based on the simulated images. Unfortunately, generating predictive imagery is still an unsolved problem for many reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms cannot produce radiometrically correct images. Third, current display devices need to convert rendered images into a low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. It first briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Existing techniques targeting these steps are then presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture- and BTF-synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global-illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that further problems must be solved to achieve truly predictive image generation.
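    The abstract mentions compressing BTF data for real-time rendering without naming the scheme; a common approach is a low-rank (PCA/SVD) factorization of the BTF matrix. The sketch below shows that generic idea, not the thesis's specific compression; the function names, the choice of k components, and the random stand-in data are illustrative assumptions.

```python
import numpy as np

def compress_btf(btf, k=16):
    """Low-rank (PCA-style) compression of a BTF.
    btf: 2-D array, one row per texel, one column per (view, light) sample.
    Returns per-texel weights and a shared basis so that btf ~= weights @ basis + mean."""
    mean = btf.mean(axis=0)
    U, S, Vt = np.linalg.svd(btf - mean, full_matrices=False)
    weights = U[:, :k] * S[:k]       # stored per texel (k floats each)
    basis = Vt[:k, :]                # stored once per material
    return weights, basis, mean

def eval_btf(weights, basis, mean, texel, sample):
    """Reconstruct the reflectance of one texel for one (view, light) sample."""
    return weights[texel] @ basis[:, sample] + mean[sample]

# Random data standing in for measured per-texel reflectance samples
btf = np.random.rand(1024, 81 * 81)          # 1024 texels, 81 view x 81 light directions
w, b, m = compress_btf(btf, k=16)
print(eval_btf(w, b, m, texel=0, sample=0), btf[0, 0])
```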

    Interactive Rendering of Translucent Deformable Objects

    Realistic rendering of materials such as milk, fruit, wax, and marble requires the simulation of subsurface scattering of light. This paper presents an algorithm for the plausible reproduction of subsurface scattering effects.

    Seventh Biennial Report : June 2003 - March 2005


    Sixth Biennial Report : August 2001 - May 2003


    Interactive Rendering of Translucent Deformable Objects (Eurographics Symposium on Rendering 2003, Per Christensen and Daniel Cohen-Or, Editors)

    Realistic rendering of materials such as milk, fruit, wax, and marble requires the simulation of subsurface scattering of light. This paper presents an algorithm for the plausible reproduction of subsurface scattering effects. Unlike previously proposed work, our algorithm allows interactive changes to lighting, viewpoint, subsurface scattering properties, and object geometry. The key idea of our approach is to use a hierarchical boundary element method to solve the integral describing subsurface scattering when using a recently proposed analytical BSSRDF model. Our approach is inspired by hierarchical radiosity with clustering. Its success is in part due to a semi-analytical integration method that allows us to compute the needed point-to-patch form-factor-like transport coefficients efficiently and accurately where other methods fail. Our experiments show that high-quality renderings of translucent objects consisting of tens of thousands of polygons can be obtained from scratch in fractions of a second. An incremental update algorithm further speeds up rendering after material or geometry changes.
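    To make the hierarchical gathering idea concrete, here is a heavily simplified Python sketch: a cluster tree over surface patches is traversed, and a whole cluster is used as a single point-to-cluster contribution when it is small relative to its distance, otherwise it is refined. The radial kernel is a stand-in; the paper uses the analytical dipole BSSRDF with a semi-analytical point-to-patch integration, and all names and the refinement criterion here are illustrative assumptions.

```python
import math

class Cluster:
    """Node of a patch hierarchy: either a leaf patch or a group of patches."""
    def __init__(self, center, area, irradiance, children=()):
        self.center, self.area, self.irradiance = center, area, irradiance
        self.children = children
        self.radius = math.sqrt(area / math.pi)   # rough cluster extent

def kernel(r, sigma_tr=1.0):
    """Stand-in radially decaying diffusion kernel (the paper uses the dipole BSSRDF)."""
    return math.exp(-sigma_tr * r) / max(r * r, 1e-6)

def gather(x, node, eps=0.1):
    """Hierarchical gather of subsurface transport at point x: use a whole cluster
    when it is small compared to its distance, otherwise refine into its children."""
    d = math.dist(x, node.center)
    if not node.children or node.radius < eps * d:
        # point-to-cluster approximation (form-factor-like transport coefficient)
        return node.irradiance * node.area * kernel(d)
    return sum(gather(x, c, eps) for c in node.children)

# Example: two leaf patches grouped into one cluster
leaves = [Cluster((0, 0, 0), 1.0, 0.8), Cluster((1, 0, 0), 1.0, 0.2)]
root = Cluster((0.5, 0, 0), 2.0, 0.5, children=leaves)
print(gather((5.0, 0.0, 0.0), root))
```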