
    Modeling and real-time rendering of participating media using the GPU

    This thesis deals with modeling, illuminating and rendering participating media in real time using the GPU. In the first part, we develop a method for rendering heterogeneous layers of fog in outdoor scenes. The medium is modeled horizontally in a 2D basis of Haar or linear/quadratic B-spline functions, whose coefficients can be loaded from a fogmap, i.e. a grayscale density image. To give the fog its vertical thickness, it is assigned an extinction coefficient parameterizing how quickly the density decreases with altitude. To prepare the real-time rendering step, we apply a wavelet transform to the fog's density map, extracting a coarse approximation (B-spline function basis) and a series of detail layers (B-spline wavelet bases) ordered by frequency; summed back together, these layers reconstitute the original density map. Each of these 2D function bases can be viewed as a grid of coefficients. During rendering on the GPU, each grid is traversed step by step, cell by cell, from the viewer's position to the nearest solid surface. Because the detail frequencies are separated during precomputation, we can optimize rendering by visualizing only the details that contribute most to the final image, aborting the grid traversal at a distance that depends on the grid's frequency. We then present further work on the same type of fog: using the wavelet transform to represent the fog's density on a non-uniform grid, automatically generating density maps and animating them with Julia fractals, and a first step toward real-time single-scattering illumination of the fog, in which we simulate shadows cast by both the medium and the geometry. In the second part, we address the modeling, single-scattering illumination and real-time rendering of fully 3D sampled media such as smoke (without physical simulation) on the GPU. Our method is inspired by light propagation volumes (LPV), a technique originally intended only to propagate fully diffuse indirect lighting after a first bounce off the geometry. We adapt it to direct lighting and to the illumination of both surfaces and participating media in single scattering. The medium is supplied as a set of radial basis functions (blobs) and is then transformed into a set of voxels, together with the solid surfaces, so that both can be handled in a common representation. By analogy with the LPV, we introduce an occlusion propagation volume, which we use to compute the integral of optical density between each light source and every other cell containing a voxel generated either from the medium or from a surface. This step is integrated into the rendering loop, which allows the participating medium and the light sources to be animated without particular constraints. We simulate all types of shadows: cast by the medium or by surfaces, onto the medium or onto surfaces
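
    To make the decomposition concrete, here is a minimal sketch in Python/NumPy of the general idea: a plain Haar transform standing in for the thesis's B-spline wavelets, and a naive CPU marching loop standing in for the GPU grid traversal with its frequency-dependent abort. All names, the grid layout, and the cutoff scheme are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def haar_step(x):
    """One level of a 2D Haar transform: returns the half-size coarse
    approximation (LL) and three detail sub-bands (LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0          # vertical pair averages
    d = (x[0::2] - x[1::2]) / 2.0          # vertical pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def decompose(density, levels):
    """Split a fogmap (dimensions divisible by 2**levels) into a coarse
    approximation plus detail sub-bands, finest frequency first."""
    approx, details = density.astype(float), []
    for _ in range(levels):
        approx, bands = haar_step(approx)
        details.append(bands)
    return approx, details

def optical_depth(grids, origin, direction, step, max_dist):
    """Accumulate optical depth along a 2D view ray through a stack of
    coefficient grids; each grid carries its own cutoff distance so that
    high-frequency detail grids are abandoned sooner."""
    tau = 0.0
    for grid, cell, cutoff in grids:       # (coefficients, cell size, cutoff)
        t = 0.0
        while t < min(max_dist, cutoff):
            p = origin + t * direction
            i, j = int(p[1] // cell), int(p[0] // cell)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                tau += grid[i, j] * step
            t += step
    return tau
```

    Per the abstract, the cutoff distances shorten as the frequency rises, so distant fog is integrated from the coarse grid only while nearby fog also receives the fine detail layers.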

    Efficient Many-Light Rendering of Scenes with Participating Media

    We present several approaches based on virtual lights that aim to capture the full light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media
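
    For context, a minimal sketch of the basic many-light estimator that such approaches build on (the clamping constant and all names are assumptions, not the paper's reformulated scheme): radiance at a surface point is accumulated as a sum over virtual point lights.

```python
import numpy as np

def shade_with_vpls(x, n, albedo, vpls, visible, clamp=0.05):
    """Sum the contributions of virtual point lights at surface point x
    with normal n. `vpls` holds (position, normal, flux) triples and
    `visible` is a visibility oracle; the geometry term is clamped to
    suppress the singularity of very close VPLs."""
    L = np.zeros(3)
    for p, pn, flux in vpls:
        w = p - x
        d2 = float(w @ w)
        w = w / np.sqrt(d2)
        g = max(float(n @ w), 0.0) * max(float(pn @ -w), 0.0) / d2
        if g > 0.0 and visible(x, p):
            L += (albedo / np.pi) * flux * min(g, 1.0 / clamp)
    return L
```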

    Image based analysis of visibility in smoke laden environments

    This study investigates visibility in a smoke-laden environment. For many years, researchers and engineers in fire safety have criticized the inadequacy of existing theory in describing the effects of factors such as colour, viewing angle and environmental lighting on the visibility of an emergency sign. In the current study, the author has raised the fundamental question of the concept of visibility and how it should be measured in fire safety engineering, and has tried to address the problem by redefining visibility based on the perceived image of a target sign. New algorithms have been created during this study to utilise modern hardware and software technology in simulating the human-perceived image of an object, in both experiment and computer modelling. Unlike the traditional threshold of visual distance, visibility in the current study has been defined as a continuous function ranging from clearly discernible to completely invisible. This allows the comparison of visibility under various conditions, not just at the threshold. The current experiments have revealed that different conditions may result in the same visual threshold but follow very different paths on the way to that threshold. The new definition of visibility has made the quantification of visibility in pre-threshold conditions possible. Such quantification can help to improve the performance of fire evacuation, since most evacuees will experience the pre-threshold condition. With the current measurement of visibility, all the influential factors such as colour and viewing angle can be tested in experiments and simulated in a numerical model. Based on the newly introduced definition of visibility, a set of experiments has been carried out in a purpose-built smoke tunnel. Digital camera images of various illuminated signs were taken under different illumination, colour and smoke conditions. Using an algorithm developed by the author in this study, the digital camera images were converted into simulated human-perceived images. The visibility of a target sign is measured against the quality of its acquired image. Conclusions have been drawn by comparing visibility under different conditions. One of them is that signs illuminated with red and green lights have similar visibility, far better than that with blue light. This is the first time this seemingly obvious conclusion has been quantified. In the simulation of visibility in participating media, the author has introduced an algorithm that combines irradiance caching in 3D space with Monte Carlo ray tracing. It can calculate the distribution of scattered radiation with good accuracy, without the high cost typically associated with the zonal method or the limitations of the discrete ordinates method. The algorithm has been combined with a two-pass solution method to produce high-resolution images without introducing an excessive number of rays from the light source. The convergence of the implemented iterative solution procedure has been proven theoretically. The accuracy of the model is demonstrated by comparison with the analytical solution for a point radiant source in 3D space. Further validation of the simulation model has been carried out by comparing the model's predictions with the data from the smoke tunnel experiments. The output of the simulation model is presented in the form of an innovative floor map of visibility (FMV), which helps the fire safety designer to identify regions of poor visibility at a glance and will prove a very useful tool in performance-based fire safety design
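
    A minimal sketch of the kind of continuous visibility measure described above, assuming simple Beer-Lambert attenuation with a uniform veiling "airlight" term; the thesis itself works on full perceived images, so this is only an illustrative reduction with assumed names.

```python
import numpy as np

def perceived_contrast(sign_lum, bg_lum, k_ext, dist, airlight):
    """Weber contrast of a sign against its background seen through
    smoke: direct luminance is attenuated by Beer-Lambert transmittance
    while scattered airlight is added in along the path."""
    t = np.exp(-k_ext * dist)                  # transmittance
    ls = sign_lum * t + airlight * (1.0 - t)   # perceived sign luminance
    lb = bg_lum * t + airlight * (1.0 - t)     # perceived background
    return abs(ls - lb) / max(lb, 1e-6)
```

    Visibility then varies continuously with distance and extinction coefficient, rather than switching at a single threshold distance.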

    Interactive Rendering of Scattering and Refraction Effects in Heterogeneous Media

    In this dissertation we investigate the problem of interactive and real-time visualization of single scattering, multiple scattering and refraction effects in heterogeneous volumes. Our proposed solutions span a variety of use scenarios: from a very fast yet physically-based approximation to a physically accurate simulation of microscopic light transmission. We add to the state of the art by introducing a novel precomputation and sampling strategy, a system for efficiently parallelizing the computation of different volumetric effects, and a new and fast version of the Discrete Ordinates Method. Finally, we also present complementary work on real-time 3D acquisition devices
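
    As an illustration of single scattering in a heterogeneous volume, here is a brute-force ray-marching sketch with assumed names, a directional light, and an isotropic phase function; it shows the quantity being computed, not the dissertation's accelerated method.

```python
import numpy as np

def single_scatter(ro, rd, density, light_dir, li, sigma_s, sigma_t,
                   n=64, dt=0.1):
    """March the eye ray through the volume; at each sample, march a
    secondary ray toward the light to estimate its transmittance, then
    accumulate in-scattered radiance (isotropic phase, 1/(4*pi))."""
    L, T = 0.0, 1.0
    for i in range(n):
        p = ro + (i + 0.5) * dt * rd
        rho = density(p)
        T *= np.exp(-sigma_t * rho * dt)           # eye-ray transmittance
        Tl = 1.0                                   # light-ray transmittance
        for j in range(n // 4):
            q = p + (j + 0.5) * dt * light_dir
            Tl *= np.exp(-sigma_t * density(q) * dt)
        L += T * Tl * li * sigma_s * rho * dt / (4.0 * np.pi)
    return L

# usage sketch: a soft spherical blob of smoke at the origin
# L = single_scatter(np.array([0., 0., -3.]), np.array([0., 0., 1.]),
#                    lambda p: max(0.0, 1.0 - np.linalg.norm(p)),
#                    np.array([0., 1., 0.]), li=5.0, sigma_s=0.5, sigma_t=0.6)
```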

    Photorealistic physically based render engines: a comparative study

    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797

    Hardware-supported cloth rendering

    Many computer graphics applications involve rendering humans and their natural surroundings, which inevitably requires displaying textiles. To accurately reproduce the appearance of e.g. clothing or furniture, reflection models are needed which are capable of modeling the highly complex reflection effects exhibited by textiles. This thesis focuses on generating realistic, high-quality images of textiles by developing suitable reflection models and introducing algorithms for the illumination computation of cloth surfaces. As efficiency is essential for illumination computation, we additionally place great importance on exploiting graphics hardware to achieve high frame rates. To this end, we present a variety of hardware-accelerated methods to compute the illumination of textile micro geometry. We begin by showing how indirect illumination and shadows can be efficiently accounted for in heightfields, parametric surfaces, and triangle meshes. Using these methods, we can considerably speed up the computation of data structures like tabular bidirectional reflectance distribution functions (BRDFs) and bidirectional texture functions (BTFs), and also efficiently illuminate heightfield geometry and bump maps. Furthermore, we develop two shading models which account for all important reflection properties exhibited by textiles. While the first model is suited for rendering textiles with general micro geometry, the second, based on volumetric textures, is specially tailored for rendering knitwear. To apply the second model, e.g. to the triangle mesh of a garment, we finally introduce a new rendering algorithm for displaying semi-transparent volumetric textures at high interactive rates
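
    To illustrate the tabular BRDF idea mentioned above, a sketch under an assumed two-angle parameterization; real cloth BRDFs also depend on the azimuthal difference angle, and the thesis's data layout may differ.

```python
import numpy as np

def eval_tabular_brdf(table, theta_in, theta_out):
    """Evaluate a BRDF tabulated on a regular (theta_in, theta_out) grid
    over [0, pi/2] by bilinear interpolation (assumed layout)."""
    n_i, n_o = table.shape
    u = np.clip(theta_in / (np.pi / 2) * (n_i - 1), 0, n_i - 1 - 1e-6)
    v = np.clip(theta_out / (np.pi / 2) * (n_o - 1), 0, n_o - 1 - 1e-6)
    i0, j0 = int(u), int(v)
    fu, fv = u - i0, v - j0
    return ((1 - fu) * (1 - fv) * table[i0, j0]
            + fu * (1 - fv) * table[i0 + 1, j0]
            + (1 - fu) * fv * table[i0, j0 + 1]
            + fu * fv * table[i0 + 1, j0 + 1])
```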

    Appearance Changes due to Light Exposure

    The fading of materials due to light exposure over time is a major contributor to the overall aged appearance of man-made objects. Although much attention has been devoted to the modeling of aging and weathering phenomena over the last decade, comparatively little attention has been paid to fading effects. In this dissertation, we present a theoretical framework for the physically-based simulation of time-dependent spectral changes induced by absorbed radiation. This framework relies on the general volumetric radiative transfer theory, and it employs a physicochemical approach to account for variations in the absorptive properties of colourants. Employing this framework, a layered fading model that can be readily integrated into existing rendering systems is developed using the Kubelka-Munk theory. We evaluate its correctness through comparisons of measured and simulated fading results. Challenges in the acquisition of reliable measurements are discussed. The performance characteristics of the proposed model are analysed, and techniques for improving the runtime cost are outlined. Finally, we demonstrate the effectiveness of this model through renderings depicting typical fading scenarios
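
    For context, a minimal sketch of the Kubelka-Munk relation that such a layered model builds on; the exponential fading law here is an assumed first-order placeholder, not the dissertation's physicochemical model.

```python
import numpy as np

def km_reflectance(K, S):
    """Kubelka-Munk reflectance of an opaque (infinitely thick) layer
    with absorption coefficient K and scattering coefficient S:
    R_inf = 1 + K/S - sqrt((K/S)**2 + 2*K/S)."""
    r = K / S
    return 1.0 + r - np.sqrt(r * r + 2.0 * r)

def faded_reflectance(K0, S, dose, rate=0.1):
    """Toy fading law: colourant absorption decays exponentially with
    absorbed radiant dose (assumed first-order kinetics)."""
    return km_reflectance(K0 * np.exp(-rate * dose), S)
```

    Because fading removes colourant, K falls with accumulated dose and the predicted reflectance rises toward 1, i.e. the material bleaches toward the substrate colour.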

    Automated inverse-rendering techniques for realistic 3D artefact compositing in 2D photographs

    The process of acquiring images of a scene and modifying the defining structural features of the scene through the insertion of artefacts is known in the literature as compositing. The process can take effect in the 2D domain (where the artefact originates from a 2D image and is inserted into a 2D image), or in the 3D domain (the artefact is defined as a dense 3D triangulated mesh, with textures describing its material properties). Compositing originated as a solution for enhancing, repairing, and more broadly editing photographs and video data alike in the film industry as part of the post-production stage. This is generally thought of as carrying out operations in a 2D domain (a single image with a known width, height, and colour data). The operations involved are sequential and entail separating the foreground from the background (matting), or identifying features from contours (feature matching and segmentation), with the purpose of introducing new data into the original. Since then, compositing techniques have gained more traction in the emerging fields of Mixed Reality (MR), Augmented Reality (AR), robotics and machine vision (scene understanding, scene reconstruction, autonomous navigation). When focusing on the 3D domain, compositing can be translated into a pipeline (here meaning a software solution formed of stand-alone modules, where the flow of execution runs in a single direction, each module addresses a single type of problem, takes an input set and outputs data for the following stage, and has the potential to be used on its own as part of other solutions): the incipient stage acquires the scene data, which then undergoes a number of processing steps aimed at inferring structural properties that ultimately allow for the placement of 3D artefacts anywhere within the scene, rendering a plausible and consistent result with regard to the physical properties of the initial input. This generic approach becomes challenging in the absence of user annotation and labelling of scene geometry, light sources and their respective magnitude and orientation, as well as a clear object segmentation and knowledge of surface properties. A single image, a stereo pair, or even a short image stream may not hold enough information regarding the shape or illumination of the scene; however, increasing the input data will only incur an extensive time penalty, which is an established challenge in the field. Recent state-of-the-art methods address the difficulty of inference in the absence of data; nonetheless, they do not attempt to solve the challenge of compositing artefacts between existing scene geometry, or cater for the inclusion of new geometry behind complex surface materials such as translucent glass or in front of reflective surfaces. The present work focuses on compositing in the 3D domain and brings forth a software framework that contributes solutions to a number of challenges encountered in the field, including the ability to render physically accurate soft shadows in the absence of user-annotated scene properties or RGB-D data. Another contribution consists in the timely manner in which the framework achieves a believable result, compared to other compositing methods that rely on offline rendering. The availability of proprietary hardware and user expertise are two of the main factors that are not required to achieve fast and reliable results within the current framework
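
    One common baseline for the final compositing step is differential rendering, where the photograph is modified by the difference between renderings of a local scene model with and without the inserted artefact. This is an assumed baseline for illustration, not necessarily the framework's own method.

```python
import numpy as np

def differential_composite(photo, render_with, render_without, mask):
    """Differential rendering: add to the photograph the change the
    inserted artefact causes in the rendered local scene model (shadows,
    interreflections); `mask` is 1 where the artefact itself is visible.
    Images are float arrays of shape (H, W, 3), mask is (H, W)."""
    delta = render_with - render_without        # lighting changes only
    comp = photo + delta                        # apply them to the photo
    comp = np.where(mask[..., None] > 0, render_with, comp)
    return np.clip(comp, 0.0, 1.0)
```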

    The Disappearing Frame: A practice-based investigation into composing virtual environment artworks

    Through creative art-making practice, this research seeks to contribute a body of knowledge to an under-researched area by examining how key concepts germane to computer-based, interactive, three-dimensional virtual environment artworks might be explicated, potential compositional issues characterised, and possible production strategies identified and/or proposed. Initial research summarises a range of classifications pertaining to the function of interactivity within virtual space, leading to an identification and analysis of a predominant model for composing virtual environment media, characterised as the "world as model": a methodological approach to devising interactive and spatial contexts employing visual and behavioural modes based on the physical world. Following this, alternative forms of environmental organisation are examined through the development of a series of artworks, beginning with Bodies and Bethlem and culminating with Reconnoitre: a networked environment, spatially manifest through performative user input. Theoretical corollaries to the project are identified, placing it within a wider critical context concerned with distinguishing between the virtual as a condition of simulation, a representation of something pre-existing, and the virtual as potential structure, a phenomenon in itself requiring creative actualisation and orientated toward change. This distinction is further developed through an analysis of some existing typologies of interactive computer-based art, and used to generalise two base conditions between which various possibilities for practice might be situated: the "fluid" and the "formatted" virtual