32 research outputs found

    Atmospheric cloud representation methods in computer graphics: A review

    Cloud representation is one of the key components of an atmospheric cloud visualization system. The lack of review papers on cloud representation methods in computer graphics has made it difficult for researchers to identify appropriate solutions. This paper therefore provides a comprehensive review of the atmospheric cloud representation methods proposed in the computer graphics domain, covering both classical and current state-of-the-art approaches. The review was conducted by searching, selecting, and analyzing prominent articles collected from online digital libraries and search engines. We present a taxonomic classification of the existing cloud representation methods and how they address atmospheric cloud-related problems. Finally, research issues and directions in the area of cloud representation and visualization are discussed. This review should help researchers form a clear picture of the existing methods and choose the approach best suited to their future research and development.

    Efficient rendering of atmospheric phenomena

    Rendering of atmospheric bodies involves modeling the complex interaction of light throughout a highly scattering medium of water and air particles. Scattering by these particles creates many well-known atmospheric optical phenomena, including rainbows, halos, the corona, and the glory. Unfortunately, most radiative transport approximations in computer graphics are ill-suited to rendering complex angularly dependent effects in the presence of multiple scattering at reasonable frame rates. This paper therefore introduces a multiple-model lighting system that efficiently captures these essential atmospheric effects. We solve the rendering of fine angularly dependent effects in the presence of multiple scattering by designing a lighting approximation based on multiple-scattering phase functions. The model captures the gradual blurring of chromatic atmospheric optical phenomena by handling the gradual angular spreading of sunlight as it undergoes multiple scattering events with anisotropic scattering particles. It is designed to take advantage of modern graphics hardware and is thus capable of rendering these effects at near-interactive frame rates.
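    The paper's specific multiple-scattering phase-function model is not reproduced in the abstract, but the angular spreading it exploits can be illustrated with the standard Henyey-Greenstein phase function: the n-fold convolution of HG lobes is again approximately HG with asymmetry g^n, so each scattering order blurs the lobe further. The Python sketch below uses that order-n approximation as an assumption (it is not the authors' model) to show how the forward peak flattens with scattering order.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the sphere."""
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def blurred_phase(cos_theta, g, order):
    """Stand-in for an order-n multiple-scattering phase function: the n-fold
    convolution of HG lobes is again roughly HG with asymmetry g**n, so higher
    scattering orders give a progressively more isotropic (blurred) lobe."""
    return henyey_greenstein(cos_theta, g ** order)

angles = np.linspace(0.0, np.pi, 181)
for n in (1, 2, 4, 8):
    p = blurred_phase(np.cos(angles), 0.85, n)
    print(f"order {n}: forward peak {p[0]:.3f}, backscatter {p[-1]:.5f}")
```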

    Rendering of light shaft and shadow for indoor environments enhancing technique

    Ray marching has become the most attractive approach to rendering realistic light scattering effects in participating media, and it has attracted significant attention from the scientific community. Full-resolution ray marching is well suited to evaluating scattering effects such as volumetric shadows and light shafts in realistic scenes, but it is expensive to render. Encouraging results have therefore been achieved by down-sampling the ray marching to accelerate rendering. However, down-sampled methods are inherently prone to artifacts, aliasing, and incorrect boundaries because of the reduced number of sample points along the view rays. This study proposes a new enhancement technique for rendering light shafts and shadows that integrates light shafts, volumetric shadows, and surface shadows for indoor environments. The research has three major phases covering the effects addressed in this thesis. The first phase is a soft volumetric shadow creation technique called Soft Bilateral Filtering Volumetric Shadows (SoftBiF-VS). The soft shadows are created with a new algorithm called Soft Bilateral Filtering Shadow (SBFS), which builds on an algorithm called Imperfect Multi-View Soft Shadows (IMVSSs) based on down-sampled multiple point lights (DMPLs) and multiple depth maps, processed with bilateral filtering to obtain soft shadows. A down-sampled light scattering model is then combined with SBFS to create volumetric shadows, which are refined with a cross-bilateral filter to obtain soft volumetric shadows. In the second phase, soft light shafts are generated with a new technique called Realistic Real-Time Soft Bilateral Filtering Light Shafts (realTiSoftLS), which computes the light shafts from a down-sampled volumetric light model and a depth test and interpolates them with bilateral filtering. The third phase is an enhancement technique that integrates all of these effects. The performance of the new technique was evaluated quantitatively and qualitatively using a standard dataset. The experiments showed that 63% of the participants gave strongly positive responses regarding the improvement in realism. The quantitative evaluation revealed that the technique clearly outperforms the state-of-the-art techniques, reaching 74 fps for indoor environments.
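    As a rough illustration of the down-sampling idea (not the thesis's GPU pipeline; the SBFS and realTiSoftLS details are not given in the abstract), the sketch below ray-marches single scattering with a small number of samples and then upsamples a half-resolution result with a depth-aware cross-bilateral weight. The function `in_shadow`, the coefficients, and the 2x2 upsampling footprint are illustrative assumptions.

```python
import numpy as np

def march_scattering(origin, direction, max_dist, in_shadow,
                     sigma_s=0.02, sigma_t=0.04, steps=16):
    """Down-sampled single-scattering ray march: accumulate in-scattered light
    along the view ray with few samples, skipping segments the shadow test
    reports as occluded.  in_shadow(p) stands in for a shadow-map lookup."""
    dt = max_dist / steps
    radiance, transmittance = 0.0, 1.0
    for i in range(steps):
        p = origin + (i + 0.5) * dt * direction
        transmittance *= np.exp(-sigma_t * dt)          # Beer-Lambert extinction
        if not in_shadow(p):
            radiance += transmittance * sigma_s * dt    # unit sun radiance assumed
    return radiance

def cross_bilateral_upsample(low_res, low_depth, full_depth, sigma_d=0.5):
    """Depth-aware (cross-bilateral) upsampling of a half-resolution scattering
    buffer: each full-resolution pixel blends nearby low-resolution samples,
    down-weighting those whose depth disagrees, which limits halo artifacts
    at silhouettes."""
    h, w = full_depth.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ly = min(y // 2, low_res.shape[0] - 2)
            lx = min(x // 2, low_res.shape[1] - 2)
            acc = wsum = 0.0
            for dy in (0, 1):
                for dx in (0, 1):
                    wgt = np.exp(-abs(low_depth[ly + dy, lx + dx] - full_depth[y, x]) / sigma_d)
                    acc += wgt * low_res[ly + dy, lx + dx]
                    wsum += wgt
            out[y, x] = acc / wsum
    return out

# toy usage: a hypothetical occluder shadows everything below y = 0
L = march_scattering(np.zeros(3), np.array([0.0, 0.3, 1.0]), 50.0,
                     in_shadow=lambda p: p[1] < 0.0)
print("in-scattered radiance:", round(L, 4))
```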

    A Lattice Boltzmann automaton for modeling optical diffusion in translucent materials

    Probing translucent objects with laser light in the near-infrared range is a technique for gathering tomographic information that is increasingly used in medical diagnosis and industrial inspection. This work presents a strategy for simulating the diffusion of visible light in translucent materials based on the Lattice Boltzmann method (LBM). LBM is a cellular automaton that simulates macroscopic transport phenomena through a mesoscopic representation; it is very easy to implement and highly parallelizable. In our case, photon transport in matter is modeled by a collision and absorption matrix defined at each cell of the simulated spatial domain. The supporting grid is three-dimensional, and the results are visualized by superimposing the elements of a triangular mesh. The model was validated against experimental data measured on a laboratory phantom. Possible applications of the automaton in a visualization engine are also presented.
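    For concreteness, here is a minimal 2D (D2Q5) Lattice Boltzmann sketch of this kind of photon-diffusion automaton: a BGK collision relaxes the per-cell distributions toward equilibrium, a per-cell absorption factor removes photons, and streaming shifts the distributions to neighbouring cells. The simulation described above is three-dimensional and uses a full collision-absorption matrix per cell; the grid size, coefficients, scalar absorption, and periodic boundaries below are simplifying assumptions.

```python
import numpy as np

# D2Q5 lattice: a rest direction plus the four axis-aligned neighbours.
E = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])
W = np.array([1 / 3, 1 / 6, 1 / 6, 1 / 6, 1 / 6])

def lbm_photon_diffusion(mu_a, source, tau=1.0, steps=200):
    """BGK Lattice Boltzmann sketch of light diffusion in a translucent slab.
    mu_a is a per-cell absorption map and source a per-cell photon injection;
    the lattice diffusivity is D = (tau - 0.5) / 3 in lattice units.
    Boundaries are periodic (np.roll) purely for brevity."""
    f = np.zeros((5,) + mu_a.shape)
    for _ in range(steps):
        rho = f.sum(axis=0) + source                # photon density + injection
        feq = W[:, None, None] * rho                # zero-velocity equilibrium
        f = f - (f - feq) / tau                     # BGK collision (relaxation)
        f = f * (1.0 - mu_a)                        # absorption in each cell
        for i in range(1, 5):                       # stream to the neighbours
            f[i] = np.roll(f[i], shift=(E[i, 1], E[i, 0]), axis=(0, 1))
    return f.sum(axis=0)                            # resulting fluence map

# toy run: weakly absorbing 64x64 medium with a point source at the centre
mu = np.full((64, 64), 0.01)
src = np.zeros_like(mu)
src[32, 32] = 1.0
print(lbm_photon_diffusion(mu, src)[32, 28:37].round(3))
```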

    Atmospheric cloud modeling methods in computer graphics: A review, trends, taxonomy, and future directions

    The modeling of atmospheric clouds is one of the crucial elements in a natural phenomena visualization system. Over the years, a wide range of approaches has been proposed on this topic to deal with the challenging issues of visual realism and performance. However, the lack of recent review papers on the atmospheric cloud modeling methods available in computer graphics makes it difficult for researchers and practitioners to understand and choose well-suited solutions for developing an atmospheric cloud visualization system. Hence, we conducted a comprehensive review to identify, analyze, classify, and summarize the existing atmospheric cloud modeling solutions. We selected 113 research studies from recognized data sources and analyzed the research trends on this topic. We defined a taxonomy that categorizes the atmospheric cloud modeling methods by their shared characteristics and summarized each method. Finally, we underline several research issues and directions for potential future work. The review provides an overview and general picture of the atmospheric cloud modeling methods that should be beneficial for researchers and practitioners.

    Modeling and real-time rendering of participating media using the GPU

    This thesis deals with modeling, illuminating, and rendering participating media in real time using graphics hardware. In the first part, we develop a method to render heterogeneous layers of fog for outdoor scenes. The medium is modeled horizontally in a 2D basis of Haar or linear/quadratic B-Spline functions, whose coefficients can be loaded from a fogmap, i.e. a grayscale density image. To give the fog its vertical thickness, it is assigned an altitude-dependent extinction coefficient that controls how quickly the density falls off with height above the medium. To prepare the rendering step, we apply a wavelet transform to the fog's density map, extracting a coarse approximation and a series of detail layers (B-Spline wavelet bases) ordered by frequency; summed together, they reconstitute the original density map. Each of these 2D function bases can be viewed as a grid of coefficients. During rendering on the GPU, each grid is traversed step by step, cell by cell, from the viewer's position to the nearest solid surface. Because the different frequencies of detail were separated during precomputation, we can optimize the rendering by visualizing only the details that contribute most to the final image, aborting the grid traversal at a distance that depends on the grid's frequency. We then present further work on the same type of fog: the use of the wavelet transform to represent the fog's density on a non-uniform grid, the automatic generation of density maps and their animation based on Julia fractals, and finally initial work on single-scattering illumination of the fog, in which we can simulate shadows cast by the medium and by the geometry. In the second part, we deal with modeling, illuminating, and rendering fully 3D single-scattering sampled media such as smoke (without physical simulation) on the GPU. Our method is inspired by light propagation volumes (LPV), a technique originally intended only to propagate fully diffuse indirect lighting after a first bounce off the geometry. We adapt it to direct lighting and to the single-scattering illumination of both surfaces and participating media. The medium is provided as a set of radial bases (blobs) and is then converted, together with the solid surfaces, into a set of voxels, so that both can be handled in a common representation. By analogy to the LPV, we introduce an occlusion propagation volume, which we use to compute the integral of the optical density between each light source and every other cell containing a voxel generated from either the medium or a surface. This step is integrated into the rendering loop, which allows the participating media and the light sources to be animated without any particular constraint. We simulate all types of shadows: cast by the medium or by surfaces, onto the medium or onto surfaces.
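    To make the frequency-dependent traversal concrete, the sketch below performs one level of a 2D Haar decomposition of a density map and accumulates optical depth along a ray, dropping the detail layer beyond a cutoff distance. The B-Spline bases, the GPU grid traversal, and the actual per-frequency cutoff heuristic of the thesis are replaced here by simplified assumptions.

```python
import numpy as np

def haar_decompose(density):
    """One level of a 2D Haar transform of the fog density map: a half-resolution
    coarse approximation plus the full-resolution detail residual."""
    coarse = (density[0::2, 0::2] + density[0::2, 1::2]
              + density[1::2, 0::2] + density[1::2, 1::2]) / 4.0
    detail = density - np.kron(coarse, np.ones((2, 2)))
    return coarse, detail

def optical_depth(coarse, detail, cells, cell_size=1.0, detail_cutoff=32):
    """Accumulate optical depth cell by cell along a view ray (a list of (row, col)
    indices).  The coarse layer is integrated over the whole ray, while the
    high-frequency detail layer is dropped beyond detail_cutoff cells, mimicking
    the frequency-dependent early abort described above."""
    tau = 0.0
    for k, (r, c) in enumerate(cells):
        tau += coarse[r // 2, c // 2] * cell_size
        if k < detail_cutoff:
            tau += detail[r, c] * cell_size
    return tau

density = np.random.rand(64, 64)                 # stand-in fogmap
coarse, detail = haar_decompose(density)
ray = [(10, c) for c in range(64)]               # axis-aligned ray for simplicity
print("transmittance:", np.exp(-optical_depth(coarse, detail, ray)))
```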

    Efficient From-Point Visibility for Global Illumination in Virtual Scenes with Participating Media

    Visibility determination is one of the fundamental building blocks of photorealistic image synthesis. However, because visibility is extremely expensive to compute, almost all of the rendering time is spent on it. In this work we present new methods for storing, computing, and approximating visibility in scenes with scattering media that speed up the computation considerably while still delivering high-quality, artifact-free results.