11 research outputs found

    Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    Get PDF
    The field of volume visualization has undergone rapid development in recent years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to DVR as well. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license.
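
    As a hedged illustration of the kind of stochastic sampling such MCRT-based volume rendering relies on (a generic sketch, not code from Exposure Render), the following shows delta (Woodcock) tracking, a standard way to sample free-flight distances in a heterogeneous medium; sigma_t and sigma_t_max are caller-supplied assumptions.

import math
import random

def delta_tracking_free_path(sigma_t, sigma_t_max, origin, direction, t_max):
    """Sample a free-flight distance through a heterogeneous medium using delta
    (Woodcock) tracking. sigma_t(p) returns the extinction coefficient at point p,
    and sigma_t_max must be a majorant of sigma_t along the ray (both assumed to be
    supplied by the caller). Returns a distance in (0, t_max), or None if the ray
    leaves the medium without a real collision."""
    t = 0.0
    while True:
        # Tentative collision distance in the homogenized (majorant) medium.
        t -= math.log(1.0 - random.random()) / sigma_t_max
        if t >= t_max:
            return None
        p = [origin[i] + t * direction[i] for i in range(3)]
        # Accept with probability sigma_t(p) / sigma_t_max; otherwise it is a null collision.
        if random.random() < sigma_t(p) / sigma_t_max:
            return t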

    Metropolis light transport for participating media

    Get PDF
    We show how Metropolis light transport can be extended, both in the underlying theoretical framework and in the algorithmic implementation, to incorporate volumetric scattering. We present a generalization of the path integral formulation that handles anisotropic scattering in non-homogeneous media. Based on this framework we introduce a new mutation strategy that is specifically designed for participating media. Our algorithm includes effects such as volume caustics and multiple volume scattering, is not restricted to certain classes of geometry and scattering models, and has minimal memory requirements. Furthermore, it is unbiased and robust, in the sense that it produces satisfactory results for a wide range of input scenes and lighting situations within acceptable time bounds.
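
    For reference, the volumetric light transport that such a path integral formulation generalizes can be summarized by the standard radiative transfer equation for a non-homogeneous, anisotropically scattering medium; the notation below is ours, not the paper's.

% Radiative transfer equation (emission omitted); f_p is an anisotropic phase
% function such as Henyey--Greenstein, and all coefficients vary spatially.
(\vec{\omega} \cdot \nabla)\, L(x, \vec{\omega})
    = -\sigma_t(x)\, L(x, \vec{\omega})
    + \sigma_s(x) \int_{\mathcal{S}^2} f_p(x, \vec{\omega}', \vec{\omega})\, L(x, \vec{\omega}')\, \mathrm{d}\vec{\omega}',
\qquad \sigma_t(x) = \sigma_a(x) + \sigma_s(x)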

    Simulación de nubes volumétricas

    Get PDF
    Simulating a realistic atmosphere means accounting for a phenomenon we can see in the sky almost every day: clouds. This phenomenon, composed of water and/or ice particles, may seem easy to represent, but it hides a fractal geometry that complicates matters. For this reason, over the years various studies have tried to find a model for it. Another important aspect is illumination: when a ray of light passes through a cloud, its photons can be scattered or absorbed by the cloud's particles, and simulating this behavior is a difficult task. This bachelor's thesis studies one of the existing techniques for simulating volumetric clouds. The studied model uses several 2D and 3D textures created with procedural noise (Perlin and Worley). These textures define where the clouds appear and with what shape and level of detail. To visualize them, an algorithm called ray marching is used, which computes the density and illumination of the cloud at each iteration. The illumination is approximated with two functions: Beer's law and the Henyey-Greenstein phase function. In addition, an application has been implemented that visualizes the clouds and allows changing their coverage in the sky, their density, their altitude, their illumination, and the position of the sun. It is also possible to make them move in a given direction at a fixed speed.
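
    The pipeline described above maps naturally onto a short ray-marching loop. The sketch below is a minimal illustration, not the thesis' actual implementation: density_at stands in for the Perlin/Worley-based textures, and the secondary march toward the sun is omitted for brevity.

import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

def march_cloud(density_at, ray_origin, ray_dir, sun_dir, steps=64, step=0.1,
                sigma_t=1.0, g=0.6, sun_intensity=1.0):
    """Accumulate radiance along a view ray with Beer's law transmittance and
    single scattering toward the sun. density_at(p) is a placeholder for the
    noise-based density field."""
    transmittance = 1.0
    radiance = 0.0
    cos_theta = sum(a * b for a, b in zip(ray_dir, sun_dir))
    p = list(ray_origin)
    for _ in range(steps):
        d = density_at(p)
        if d > 0.0:
            step_trans = math.exp(-sigma_t * d * step)   # Beer's law over this step
            light = sun_intensity * hg_phase(cos_theta, g)
            radiance += transmittance * (1.0 - step_trans) * light
            transmittance *= step_trans
            if transmittance < 1e-3:
                break   # early termination: remaining samples are practically invisible
        p = [p[i] + ray_dir[i] * step for i in range(3)]
    return radiance, transmittance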

    Optimization techniques for computationally expensive rendering algorithms

    Get PDF
    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, covering both solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work studies the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis also analyzes two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method takes advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly, and this process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of rendering budgets.

    Efficient Many-Light Rendering of Scenes with Participating Media

    Get PDF
    We present several approaches based on virtual lights that aim to capture light transport without compromising quality while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
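
    To make "virtual lights" concrete, the sketch below shows the basic many-light gathering step: summing clamped contributions from virtual point lights at a shading point. It is a generic illustration under assumed inputs (brdf, visible, and the VPL list), not the paper's reformulated integration scheme.

def shade_with_vpls(x, n, brdf, vpls, visible, clamp_dist=0.1):
    """Gather contributions from virtual point lights (VPLs) at shading point x
    with surface normal n. Each VPL is (position, normal, intensity); brdf(x, wi)
    and visible(a, b) are caller-supplied assumptions (BRDF value and shadow test).
    The squared distance is clamped to limit the usual near-VPL singularity."""
    total = 0.0
    for light_pos, light_n, intensity in vpls:
        d = [light_pos[i] - x[i] for i in range(3)]
        dist2 = max(sum(c * c for c in d), clamp_dist * clamp_dist)
        dist = dist2 ** 0.5
        wi = [c / dist for c in d]                       # direction toward the VPL
        cos_x = max(0.0, sum(n[i] * wi[i] for i in range(3)))
        cos_l = max(0.0, -sum(light_n[i] * wi[i] for i in range(3)))
        if cos_x > 0.0 and cos_l > 0.0 and visible(x, light_pos):
            total += brdf(x, wi) * intensity * cos_x * cos_l / dist2
    return total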

    Actas do 10º Encontro Português de Computação Gráfica

    Get PDF
    Proceedings of the 10th Encontro Português de Computação Gráfica, Lisbon, 1-3 October 2001. Research, development, and teaching in the field of computer graphics are, in Portugal, a positive reality with long traditions. The Encontro Português de Computação Gráfica (EPCG), held within the activities of the Grupo Português de Computação Gráfica (GPCG), has regularly brought together, since the 1st EPCG, also held in Lisbon but back in July 1988, everyone working in this broad field and its countless applications. For the first time in the history of these meetings, the 10th EPCG was organized in close connection with the image processing and computer vision communities, through the Associação Portuguesa de Reconhecimento de Padrões (APRP), thus highlighting the increased collaboration, and convergence, between those two areas and computer graphics. This is the proceedings volume of the 10th EPCG.

    Lancer de photons multi-passes et écrasement de photons pour le rendu optronique

    Get PDF
    Much research has been done on global illumination simulation. First used in the visible spectrum, simulation is now increasingly applied to infrared rendering; the union of these two domains is called optronics. The main problem of current global illumination methods comes from the complexity of light scattering phenomena, both for surfaces and for participating media. These methods offer satisfactory results for simple scenes, but performance collapses as complexity grows. In the first part of this thesis, we show why scattering phenomena must be taken into account for optronic simulation. In the second part, we state the equations that unify global illumination methods, i.e. the rendering equation and the volume radiative transfer equation. The state of the art presented in the third part shows that the Photon Mapping method currently offers the best compromise between performance and quality. Nevertheless, the quality of the results obtained with this method depends on the number of photons that can be stored, and therefore on the available memory. In the fourth part, we propose an evolution of the method, called Multipass Photon Mapping, which removes this memory dependency and thus achieves high accuracy without requiring a costly hardware configuration. Another problem inherent to Photon Mapping is the large rendering time needed for participating media. In the fifth and last part of this thesis, we propose a method, called Volume Photon Splatting, which takes advantage of density estimation to efficiently reconstruct volume radiance from the photon map. Our idea is to separate the computation of emission, absorption, and out-scattering from the computation of in-scattering, and to use a dual approach to density estimation to optimize the latter, as it is the most computationally expensive part. Our method extends Photon Splatting, which optimizes the computation time of Photon Mapping for surface rendering, to participating media, and thereby considerably reduces participating media rendering times. Although our method is already faster than Photon Mapping at equal quality, we also propose a GPU-based optimization of our algorithm.
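
    As context for the density-estimation step, the sketch below shows a brute-force volume photon gather: phase-weighted photon flux inside a sphere, divided by its volume. It is a generic illustration, not the thesis' splatting method, and normalization conventions (e.g. the 1/sigma_s factor) vary between formulations.

import math

def in_scattered_radiance(x, w_out, photons, phase, sigma_s, radius):
    """Brute-force volume photon gather: kernel density estimate of in-scattered
    radiance at x. photons is a list of (position, incoming_direction, flux);
    phase(w_in, w_out) and sigma_s(x) are caller-supplied assumptions. In practice
    a kd-tree replaces the linear scan."""
    r2 = radius * radius
    sphere_volume = (4.0 / 3.0) * math.pi * radius ** 3
    total = 0.0
    for pos, w_in, flux in photons:
        if sum((pos[i] - x[i]) ** 2 for i in range(3)) <= r2:
            total += phase(w_in, w_out) * flux
    return total / (sigma_s(x) * sphere_volume)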

    LightSkin: Globale Echtzeitbeleuchtung für Virtual und Augmented Reality

    Get PDF
    In nature, each interaction of light is bound to a global context; thus, every observable natural light phenomenon is the result of global illumination. It arises from manifold laws of absorption, reflection, and refraction, which are mostly too complex to simulate under the real-time constraints of interactive applications. Therefore, many interactive applications still do not simulate these global illumination phenomena, which results in unrealistic and synthetic-looking renderings. This becomes a particular problem in virtual reality and augmented reality applications, where the user should experience the simulation as realistically as possible. In this thesis we present a novel approach called LightSkin that calculates global illumination phenomena in real time. The approach was developed specifically for virtual reality and augmented reality applications and satisfies several constraints imposed by those applications. As part of the approach we introduce a novel interpolation scheme that calculates realistic indirect illumination from a small number of supporting points (caches) distributed on model surfaces. Each supporting point creates its own proxy light sources, which represent the entire indirect illumination for this point in a compact manner. These proxy light sources are then linearly interpolated to obtain dense results for the whole visible scene. Thanks to an efficient GPU implementation, the method is very fast and supports complex and dynamic scenes. Based on the approach, it is possible to simulate diffuse and glossy indirect reflections, soft shadows, and multiple subsurface scattering phenomena without neglecting filigree surface details. Furthermore, the method can be adapted to augmented reality applications, providing mutual global illumination effects between dynamic real and virtual objects using an active RGB-D sensor. In contrast to existing interactive global illumination approaches, our approach supports all kinds of animations, handles them more efficiently, requires no extra calculations for them, and does not produce disturbing temporal artifacts. This thesis contains all the information needed to understand, implement, and evaluate the novel LightSkin approach and also provides a comprehensive overview of the related field of research.
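
    As a rough, hypothetical illustration of the cache-and-interpolate idea (the weighting and data layout below are assumptions, not the thesis' actual scheme), proxy light sources cached at surface points could be blended at a shading point like this:

def interpolate_proxy_lights(x, caches, k=3):
    """Blend the proxy light sources of the k nearest surface caches to obtain an
    indirect-light representation at shading point x. Each cache is
    (position, proxy_lights), where proxy_lights is a list of (direction, intensity)
    pairs of the same length across caches (an assumption of this sketch)."""
    def dist2(cache):
        return sum((cache[0][i] - x[i]) ** 2 for i in range(3))

    nearest = sorted(caches, key=dist2)[:k]
    weights = [1.0 / (dist2(c) + 1e-6) for c in nearest]  # inverse-distance weights
    total_w = sum(weights)
    weights = [w / total_w for w in weights]

    blended = []
    for j in range(len(nearest[0][1])):
        intensity = sum(w * c[1][j][1] for w, c in zip(weights, nearest))
        direction = nearest[0][1][j][0]   # simplification: keep the nearest cache's direction
        blended.append((direction, intensity))
    return blended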

    A Rendering Algorithm for Discrete Volume Density Objects

    No full text
    We present a new algorithm for simulating the effect of light travelling through volume objects. Such objects (haze, fog, clouds, ...) are usually modeled by voxel grids which define their density distribution in a discrete three-dimensional space. The method we propose is a two-pass Monte Carlo ray-tracing algorithm that makes no restrictive assumptions either about the characteristics of the objects (arbitrary density distributions and phase functions are allowed) or about the physical phenomena included in the rendering process (multiple scattering is accounted for). The driving idea of the algorithm is to use the phase function for Monte Carlo sampling, in order to modify the direction of the ray during scattering. Keywords: Monte Carlo Ray Tracing, Participating Media, Volume Density Objects, Multiple Scattering. 1 Introduction: Rendering methods for objects defined by their density distribution in a three-dimensional space can be divided into view-dependent and view-independent…
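
    The driving idea, sampling the scattered direction from the phase function, is easiest to see for the Henyey-Greenstein model (one example of the arbitrary phase functions the method allows); the sketch below inverts its CDF and is an illustrative example, not the paper's code.

import math
import random

def sample_henyey_greenstein(g):
    """Sample a scattering direction from the Henyey-Greenstein phase function by
    inverting its CDF. Returns (cos_theta, phi) relative to the incoming ray
    direction; near-isotropic g is handled separately to avoid dividing by ~0."""
    u1, u2 = random.random(), random.random()
    if abs(g) < 1e-3:
        cos_theta = 1.0 - 2.0 * u1                      # isotropic fallback
    else:
        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u1)
        cos_theta = (1.0 + g * g - s * s) / (2.0 * g)
    phi = 2.0 * math.pi * u2
    return cos_theta, phi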

    Simulación visual de materiales : teoría, técnicas, análisis de casos

    Get PDF
    Resource description: 29 January 2016. The simulation of materials is of great importance, both theoretical and practical, from multiple points of view and professional applications. It is a fundamental requirement for the creation of virtual scenarios and is interwoven with the design process itself, since colors, textures, reflections, and transparencies modify the forms and spaces we perceive. The possibilities opened up by the development of new virtual interaction resources create avenues that we have only begun to assimilate in recent years. This book, published in parallel with another on the visual simulation of lighting (Simulación visual de la iluminación), covers everything involved in this subject: from a theoretical and conceptual point of view in its first part, and, in its second part, through a detailed explanation of the main techniques available today, providing relevant examples for different applications, mainly in architecture and design.