8 research outputs found

    Fluids real-time rendering

    In this thesis, existing methods for realistic real-time visualization of fluids are reviewed. Correct handling of the interaction of light with a fluid surface greatly increases the realism of the rendering; therefore, methods for physically accurate rendering of reflections and refractions will be used. The light-fluid interaction does not stop at the surface but continues inside the fluid volume, causing caustics and beams of light. The simulation of fluids requires extremely time-consuming processes to achieve physical accuracy and will not be explored, although the main concepts will be given. The main goals of this work are therefore: (1) study and review the existing methods for rendering fluids in real time; (2) find a simplified physical model of light interaction, because a complete physically correct model would not run in real time; (3) develop an application that uses these methods and the light interaction model.
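    The abstract does not spell out the simplified light-interaction model. A common choice in real-time water rendering is Schlick's approximation of the Fresnel term, which decides how much reflected versus refracted light reaches the eye at each surface point. The sketch below is illustrative only (the refractive indices and the blend logic are assumptions, not taken from the thesis):

```python
def schlick_fresnel(cos_theta: float, n1: float = 1.0, n2: float = 1.33) -> float:
    """Schlick's approximation of the Fresnel reflectance.

    cos_theta: cosine of the angle between the view ray and the surface normal.
    n1, n2: refractive indices of the two media (air -> water by default).
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2          # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def shade_surface(reflected_color, refracted_color, cos_theta):
    """Blend reflected and refracted radiance by the Fresnel factor."""
    f = schlick_fresnel(cos_theta)
    return tuple(f * r + (1.0 - f) * t for r, t in zip(reflected_color, refracted_color))

# Looking straight down (cos_theta = 1), only about 2% of the light is reflected
# from a water surface; at grazing angles (cos_theta -> 0) nearly all of it is.
print(round(schlick_fresnel(1.0), 4))   # ~0.0201
print(round(schlick_fresnel(0.0), 4))   # 1.0
```

    In a shader, the two input colors would come from a reflection map and a refracted scene lookup; the blend above is the cheap stand-in for the full Fresnel equations that makes real-time rates feasible.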

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images become ever more realistic, the need for modeling computer graphics content grows stronger. To achieve photo-realism, detailed scenes have to be modeled, often with a significant amount of manual labour. Interdisciplinary research combining the fields of Computer Graphics, Computer Vision and Scientific Computing has led to the development of (semi-)automatic modeling tools that free the user from labour-intensive modeling tasks. The modeling of animated content is especially challenging: realistic motion is necessary to convince the audience of computer games, movies with mixed-reality content, and augmented-reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke and fluid flows.

    A system for modelling deformable procedural shapes.

    This thesis presents a new procedural paradigm for modelling. The method combines the compact object descriptions of procedural modelling with the real-time interaction of interactive modelling techniques. The three main components of this paradigm are geometry generators (the creation of basic object shapes), selectors (the specification of a selection volume), and modifiers (the object transformation functions). The user interacts with the object in real time and has complete control over the object formation process. Each interaction is stored as a node in a creation-history list, which can be replayed or partially replayed at any time during the creation process. The parameters associated with each interaction are stored within the node and remain available for editing at any time. In most modelling software, arbitrary editing of object parameters is destructive, in the sense that changing the parameter of one node may cause the object to behave unpredictably; the concepts presented here remove this problem, and do so in real time rather than off-line. In some cases, real-time interaction is made possible by trading visual quality for rendering speed; the object is then rendered at lower quality, so decisions on whether its parameters need adjustment may be based on a poor representation of the object. The work presented herein attempts to bridge the divide between the two approaches by providing the user with a powerful and descriptive procedural modelling language that is generated entirely through real-time interaction with the geometric object via an intuitive user interface. The main contributions of this work are that: procedural objects are specified interactively; modelling takes place independently of representation (the user bases the modelling on the shape they see, not on the underlying mesh representation); and changes to the object are coherent and non-destructive.
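    The generator/selector/modifier history described above can be sketched minimally as follows. All names and operations here are illustrative assumptions, not the thesis's actual language; the point is that editing a stored parameter and replaying the history is non-destructive:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One interaction in the creation history: a modifier applied to a selection."""
    op: str            # modifier name, e.g. "translate"
    params: dict       # editable at any time; replay picks up the new values
    selector: callable # returns True for vertices inside the selection volume

def generate_quad():
    """Geometry generator: the basic object shape."""
    return [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]

def apply(node, verts):
    """Modifier: transform only the vertices the selector picks out."""
    out = []
    for v in verts:
        if node.selector(v) and node.op == "translate":
            out.append([v[0] + node.params["dx"], v[1] + node.params["dy"]])
        else:
            out.append(list(v))
    return out

def replay(history, upto=None):
    """Non-destructive editing: rebuild the object from the generator and the
    history list, instead of mutating the mesh in place."""
    verts = generate_quad()
    for node in history[:upto]:
        verts = apply(node, verts)
    return verts

history = [Node("translate", {"dx": 0.0, "dy": 2.0}, selector=lambda v: v[1] > 0.5)]
print(replay(history))          # the top two vertices are moved up by 2
history[0].params["dy"] = 5.0   # edit a past parameter...
print(replay(history))          # ...and replay: the change propagates cleanly
```

    Partial replay (`upto`) corresponds to scrubbing back through the creation history; because the object is always rebuilt from the generator, a parameter edit cannot leave the mesh in an inconsistent state.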

    Physically-based simulation of liquids in interactive environments (Physikalisch-basierte Simulation von Flüssigkeiten in interaktiven Umgebungen)

    Realistic, physically-based visual representations of liquids require three main steps: simulation, surface extraction and rendering. Since each of these steps is in itself a computationally complex task, realistic representation of liquids is a time-consuming endeavor, generally not feasible in real time. As a result, approaches that aim for real-time rendering and interactivity usually have to trade quality for rendering speed. The goal of this thesis is to accelerate each of these steps, in order to increase the level of detail of virtual liquids in real-time environments and achieve real-time frame rates. This is accomplished by devising novel, dedicated acceleration techniques for each of the above-mentioned steps.
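    As a toy illustration of the three-step pipeline named above (not the thesis's actual algorithms), a 1D wave-equation heightfield can stand in for the simulation step, with trivial extraction and rendering stages; the grid resolution is the knob that trades quality for speed:

```python
def simulate(heights, velocities, dt=0.1, c=1.0):
    """One explicit step of the 1D wave equation on a heightfield --
    a common cheap stand-in for full fluid simulation in real-time settings."""
    n = len(heights)
    for i in range(n):
        left = heights[i - 1] if i > 0 else heights[i]
        right = heights[i + 1] if i < n - 1 else heights[i]
        accel = c * (left + right - 2.0 * heights[i])   # discrete Laplacian
        velocities[i] += accel * dt
    return [h + v * dt for h, v in zip(heights, velocities)]

def extract_surface(heights, dx=1.0):
    """Surface extraction: turn the heightfield into (x, height) samples."""
    return [(i * dx, h) for i, h in enumerate(heights)]

def render(surface):
    """Stand-in for rendering: just report the surface's vertical extent."""
    return (min(h for _, h in surface), max(h for _, h in surface))

# Per-frame loop. Halving n roughly halves per-frame cost at lower detail --
# the quality/performance compromise the abstract refers to.
n = 16
heights = [0.0] * n
heights[n // 2] = 1.0            # initial splash
velocities = [0.0] * n
for _ in range(10):
    heights = simulate(heights, velocities)
    frame = render(extract_surface(heights))
```

    In the thesis's actual setting each stage (particle simulation, surface extraction, rendering) is a far heavier computation; the sketch only shows how the three stages compose inside a per-frame loop.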

    Modeling and real-time rendering of participating media using the GPU

    This thesis deals with modeling, illuminating and rendering participating media in real time using graphics hardware. In a first part, we develop a method to render heterogeneous layers of fog for outdoor scenes. The medium is modeled horizontally using a 2D Haar or linear/quadratic B-Spline function basis, whose coefficients can be loaded from a fogmap, i.e. a grayscale density image. To give the fog its vertical thickness, it is provided with a coefficient parameterizing the extinction of the density as the altitude above the fog increases. To prepare the rendering step, we apply a wavelet transform to the fog's density map and extract a coarse approximation (B-Spline function basis) and a series of layers of details (B-Spline wavelet bases). These details are ordered by frequency and, when summed back together, reconstitute the original density map. Each of these 2D function bases can be viewed as a grid of coefficients. At the rendering step on the GPU, each of these grids is traversed step by step, cell by cell, from the viewer's position to the nearest solid surface. Thanks to our separation of the different frequencies of details at the precomputation step, we can optimize the rendering by visualizing only the details that contribute most to the final image, aborting the grid traversal at a distance that depends on the grid's frequency. We then present other works dealing with the same type of fog: the use of the wavelet transform to represent the fog's density in a non-uniform grid, the automatic generation of density maps and their animation based on Julia fractals, and finally a beginning of single-scattering illumination of the fog, in which we are able to simulate shadows cast by the medium and the geometry. In a second part, we deal with modeling, illuminating and rendering full 3D single-scattering sampled media such as smoke (without physical simulation) on the GPU. Our method is inspired by light propagation volumes (LPV), a technique originally intended only to propagate fully diffuse indirect lighting after a first bounce on the geometry. We adapt it to direct lighting and to the illumination of both surfaces and participating media. The medium is provided as a set of radial bases (blobs) and is then transformed into a set of voxels, together with the solid surfaces, so that both entities can be handled in a common form. By analogy to the LPV, we introduce an occlusion propagation volume, which we use to compute the integral of the optical density between each light source and every other cell containing a voxel generated either from the medium or from a surface. This step is integrated into the rendering loop, which allows participating media and light sources to be animated without any further constraint. We simulate all types of shadows: cast by the medium or by surfaces, onto the medium or onto surfaces.
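    The wavelet preprocessing described for the fog density map can be sketched with a one-level 2D Haar transform. The thesis uses B-Spline wavelets; Haar is used here only as the simplest stand-in, and the 4x4 fogmap values are made up for the example:

```python
def haar2d_level(img):
    """One level of the 2D Haar transform: split an even-sized grayscale grid
    into a half-resolution coarse approximation and three detail sub-bands."""
    h, w = len(img), len(img[0])
    coarse   = [[0.0] * (w // 2) for _ in range(h // 2)]
    detail_h = [[0.0] * (w // 2) for _ in range(h // 2)]   # horizontal details
    detail_v = [[0.0] * (w // 2) for _ in range(h // 2)]   # vertical details
    detail_d = [[0.0] * (w // 2) for _ in range(h // 2)]   # diagonal details
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j],     img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            coarse  [i // 2][j // 2] = (a + b + c + d) / 4.0
            detail_h[i // 2][j // 2] = (a - b + c - d) / 4.0
            detail_v[i // 2][j // 2] = (a + b - c - d) / 4.0
            detail_d[i // 2][j // 2] = (a - b - c + d) / 4.0
    return coarse, detail_h, detail_v, detail_d

def reconstruct(coarse, dh, dv, dd):
    """Summing the coarse approximation and all detail layers restores the map."""
    h, w = len(coarse) * 2, len(coarse[0]) * 2
    img = [[0.0] * w for _ in range(h)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            s, x, y, z = (coarse[i // 2][j // 2], dh[i // 2][j // 2],
                          dv[i // 2][j // 2], dd[i // 2][j // 2])
            img[i][j],     img[i][j + 1]     = s + x + y + z, s - x + y - z
            img[i + 1][j], img[i + 1][j + 1] = s + x - y + z, s - x - y - z
    return img

fog = [[0, 2, 4, 6],    # a made-up 4x4 grayscale fogmap
       [1, 3, 5, 7],
       [2, 4, 6, 8],
       [3, 5, 7, 9]]
coarse, dh, dv, dd = haar2d_level(fog)
assert reconstruct(coarse, dh, dv, dd) == fog   # coarse + details = original map
```

    The renderer's optimization then follows: the coarse grid is always traversed, while each detail sub-band (higher frequency, lower visual contribution at distance) can be dropped earlier along the ray.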