758 research outputs found

    Space colonization for the procedural generation of lightning

    Get PDF
    Master's dissertation in Informatics Engineering. The procedural generation of geometry in computer graphics has been a topic of study for quite some time, one that benefits from a more unpredictable brand of randomness. Similarly, the exploration of lightning as a phenomenon within virtual space is a field of comparable age. Despite this, the focus has fallen mostly on the phenomenon's effects after impact: studies have been conducted on mitigating the damage lightning causes to aircraft fuselages and on analysing the impact of strikes on critical structures, but there is a surprising lack of research on emulating the phenomenon itself beyond its interactions with the world. Most implementations of procedurally generated lightning in video games are based on randomized data trees; when lightning is part of the skybox, 2D meshes or textures are randomly selected from a pre-made pool. There are, however, methods based entirely on the dielectric breakdown model, which use approximations to solve a Laplace equation. This dissertation presents an alternative approach to the randomized, procedural generation of lightning bolts, based on the Space Colonization algorithm. Although the algorithm was first conceived for botanical applications, modeling the growth of biological structures, the similarity between the results produced by the dielectric breakdown model and by botanic modeling algorithms, coupled with the visual likeness between a lightning bolt and certain trees, makes for solid groundwork on which to establish this approach. As such, this work is largely a first step into this particular realm, showing that Space Colonization is a suitable algorithm for this specific purpose. That said, a large portion of the development time was spent iterating on, modifying and experimenting with ideas that were either discarded or adapted, an effort primarily dedicated to controlling and stifling the growth of branches by means other than reducing the number of attractors. The original algorithm was altered, with particular focus on creating a single channel at a time, mixing findings from previous research with the work done on manipulating Space Colonization. Instead of the venation patterns observed in the original work, the stifling of growth means that each node has a chance, when created, of sprouting a branch, and each branch is in turn a different, modified instance of the same underlying concept, providing an additional level of control. Effort was equally placed on showcasing properties inherent to a lightning strike, such as its iterative construction as it descends from its origin. In the rendering stage, along with recreating the bloom and glow effects seen in previous works, particular attention was given to recreating the strobing observed in slow-motion footage of lightning bolts, which became the main focus of that stage. In addition, generation parameters were combined with a waypoint system to allow a great degree of freedom when generating new bolts.
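    To ground the description, here is a minimal sketch of a Space Colonization loop adapted in the way the abstract outlines: nodes grow toward nearby attractors, and each new node has a small chance of sprouting a side branch. The parameter names and values (step, influence_r, kill_r, branch_chance) are illustrative assumptions, not the dissertation's actual implementation.

        import random
        import numpy as np

        def space_colonization_bolt(attractors, root, step=0.05,
                                    influence_r=0.5, kill_r=0.1,
                                    branch_chance=0.03, max_iters=400):
            """Grow a lightning-like channel by space colonization.

            attractors: list of 3D points seeded between cloud and ground.
            root: 3D starting point of the bolt (the cloud origin).
            Returns node positions and each node's parent index.
            """
            nodes = [np.asarray(root, dtype=float)]
            parents = [-1]
            attractors = [np.asarray(a, dtype=float) for a in attractors]

            for _ in range(max_iters):
                if not attractors:
                    break
                # Each attractor pulls on its nearest node within the influence radius.
                pull = {}                        # node index -> summed unit directions
                for a in attractors:
                    dists = [np.linalg.norm(a - n) for n in nodes]
                    i = int(np.argmin(dists))
                    if dists[i] < influence_r:
                        d = (a - nodes[i]) / dists[i]
                        pull[i] = pull.get(i, np.zeros(3)) + d
                if not pull:
                    break
                # Grow a new node from every pulled node, in the averaged direction.
                for i, d in pull.items():
                    direction = d / np.linalg.norm(d)
                    nodes.append(nodes[i] + step * direction)
                    parents.append(i)
                    # Adaptation for lightning: each new node may sprout a side branch.
                    if random.random() < branch_chance:
                        jitter = direction + 0.5 * np.random.randn(3)
                        nodes.append(nodes[i] + step * jitter / np.linalg.norm(jitter))
                        parents.append(i)
                # Remove attractors that a node has reached.
                attractors = [a for a in attractors
                              if min(np.linalg.norm(a - n) for n in nodes) > kill_r]
            return nodes, parents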

    Real-time Realistic Rain Rendering

    Get PDF
    Artistic outdoor filming and rendering need specific weather conditions to properly trigger the audience's reaction; rain, for instance, one of the most common conditions, is usually employed to convey a sense of unrest. Synthetic methods to recreate weather are an important avenue for simplifying and cheapening filming, but the simulations are a challenging problem due to the variety of phenomena that need to be computed: rain alone involves raindrops, splashes on the ground, fog, clouds, lightning, etc. We propose a new rain rendering algorithm that uses and extends present state-of-the-art approaches in this field. The aim of our method is to achieve real-time rendering of rain streaks and splashes on the ground, while considering complex illumination effects and allowing artistic direction of drop placement. Our algorithm takes as input an artist-defined rain distribution and density, and then creates particles in the scene following these indications. No restrictions are imposed on the dimensions of the rain area, so direct rendering approaches could rapidly overwhelm current computational capabilities with huge numbers of particles. To solve this, we propose techniques that, at rendering time, adaptively sample the generated particles so that only those in regions that really need to be simulated and rendered are selected. Particle simulation is executed entirely on the graphics hardware. The algorithm places the particles at their updated coordinates, then checks whether each particle is still falling as a rain streak, has reached the ground and become a splash, or should be discarded because it has entered a solid object of the scene. A different rendering technique is used for each case. Complex illumination parameters are computed for rain streaks to select matching textures. These textures are generated in a preprocessing step and realistically simulate light interacting with the optical properties of the water drops.
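    The per-particle update and classification step the abstract describes (streak / splash / discard) can be sketched as follows. This is a CPU-side Python illustration of the logic, which the paper runs entirely on the graphics hardware; the scene queries ground_height and inside_solid are hypothetical stand-ins for the depth lookups a real implementation would use.

        from dataclasses import dataclass
        from enum import Enum

        class State(Enum):
            STREAK = 1   # still falling: render as a rain streak
            SPLASH = 2   # just reached the ground: render as a splash
            DEAD = 3     # inside solid geometry: discard

        @dataclass
        class Drop:
            x: float; y: float; z: float
            vy: float                      # fall speed (negative = downward)
            state: State = State.STREAK

        def update(drop, dt, ground_height, inside_solid):
            """Advance one particle and classify it for rendering."""
            drop.y += drop.vy * dt                       # fall under constant velocity
            if inside_solid(drop.x, drop.y, drop.z):
                drop.state = State.DEAD                  # entered a solid object: discard
            elif drop.y <= ground_height(drop.x, drop.z):
                drop.y = ground_height(drop.x, drop.z)
                drop.state = State.SPLASH                # reached the ground: splash
            return drop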

    Photorealistic physically based render engines: a comparative study

    Full text link
    Pérez Roig, F. (2012). Photorealistic physically based render engines: a comparative study. http://hdl.handle.net/10251/14797

    Rendering of light shaft and shadow for indoor environments enhancing technique

    Get PDF
    Ray marching has become the most attractive method for realistically rendering the effects of light scattering in participating media across numerous applications, and it has attracted significant attention from the scientific community. Ray marching at full sampling rates is suitable for evaluating light-scattering effects such as volumetric shadows and light shafts in realistic scenes, but it is very costly to render. Encouraging results have therefore been achieved by down-sampling the ray marching to accelerate rendering. However, down-sampled methods are inherently prone to artifacts, aliasing and incorrect boundaries due to the reduced number of sample points along the view rays. This study proposed a new enhancing technique to render light shafts and shadows, taking into consideration the integration of light shafts, volumetric shadows, and shadows for indoor environments. The research has three major phases that cover the effects addressed in this thesis. The first phase is a soft volumetric shadow creation technique called Soft Bilateral Filtering Volumetric Shadows (SoftBiF-VS). The soft shadow was created using a new algorithm called Soft Bilateral Filtering Shadow (SBFS). This began with an algorithm called Imperfect Multi-View Soft Shadows (IMVSSs), based on down-sampled multiple point lights (DMPLs) and multiple depth maps, which are processed using bilateral filtering to obtain soft shadows. A down-sampled light-scattering model was then used with SBFS to create volumetric shadows, which were improved with a cross-bilateral filter to obtain soft volumetric shadows. In the second phase, soft light shafts were generated using a new technique called Realistic Real-Time Soft Bilateral Filtering Light Shafts (realTiSoftLS). This technique computed the light shafts from a down-sampled volumetric light model and a depth test, interpolated by bilateral filtering to obtain soft light shafts. The third phase of the research was an enhancing technique for integrating all of these effects. The performance of the new enhanced technique was evaluated quantitatively and qualitatively, and measured using a standard dataset. The experiments showed that 63% of the participants gave strongly positive responses to this technique's improvement of realism. The quantitative evaluation revealed that the technique dramatically outpaced state-of-the-art techniques, improving performance for indoor environments at a speed of 74 fps.
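    The core trick the abstract relies on, marching at a reduced sample count and then restoring edges with a (cross-)bilateral filter, can be sketched as below. The scattering weights and the depth-only bilateral term are simplified assumptions for illustration, not the thesis's exact formulation; light_vis is a hypothetical shadow-map visibility helper.

        import numpy as np

        def march_scattering(depth, light_vis, n_samples=8):
            """Cheap single-scattering estimate with few samples per view ray.

            depth: (H, W) scene depth. light_vis(d) returns per-pixel
            visibility (1 lit, 0 shadowed) at distance d along each ray.
            """
            scatter = np.zeros_like(depth)
            for i in range(n_samples):
                t = (i + 0.5) / n_samples            # stratified positions along the ray
                scatter += light_vis(t * depth) * np.exp(-1.5 * t)  # simple extinction
            return scatter / n_samples

        def bilateral_upsample(low, depth_low, depth_full, sigma_d=0.1):
            """Depth-aware upsampling of the low-res scattering buffer.

            Each full-res pixel blends the four nearest low-res samples,
            weighted by depth similarity, so shaft and shadow boundaries
            stay crisp at geometry edges instead of smearing across them.
            """
            Hf, Wf = depth_full.shape
            Hl, Wl = low.shape
            out = np.zeros((Hf, Wf))
            for y in range(Hf):
                for x in range(Wf):
                    ly, lx = y * Hl // Hf, x * Wl // Wf
                    acc = wsum = 0.0
                    for dy in (0, 1):
                        for dx in (0, 1):
                            sy, sx = min(ly + dy, Hl - 1), min(lx + dx, Wl - 1)
                            w = np.exp(-((depth_low[sy, sx] - depth_full[y, x]) ** 2)
                                       / (2 * sigma_d ** 2)) + 1e-6
                            acc += w * low[sy, sx]
                            wsum += w
                    out[y, x] = acc / wsum
            return out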

    The physics of streamer discharge phenomena

    Get PDF
    In this review we describe a transient type of gas discharge commonly called a streamer discharge, as well as a few related phenomena in pulsed discharges. Streamers are propagating ionization fronts with self-organized field enhancement at their tips that can appear in gases at (or close to) atmospheric pressure. They are the precursors of other discharges like sparks and lightning, but they also occur in, for example, corona reactors or plasma jets, which are used for a variety of plasma-chemical purposes. When enough space is available, streamers can also form at much lower pressures, as in the case of sprite discharges high up in the atmosphere. We explain the structure and basic underlying physics of streamer discharges, and how they scale with gas density. We discuss the chemistry and applications of streamers, and describe their two main stages in detail: inception and propagation. We also look at some other topics, like interaction with flow and heat, related pulsed discharges, and electron runaway and high-energy radiation. Finally, we discuss streamer simulations and diagnostics in quite some detail. This review is written with two purposes in mind: first, to describe recent results on the physics of streamer discharges, with a focus on the work performed in our groups, along with recent developments in streamer diagnostics and simulations; second, to provide background information on the above-mentioned aspects of streamers. The review can therefore be used as a tutorial by researchers starting to work in the field of streamer physics.
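    The density scaling mentioned in the abstract is the standard similarity (Townsend) scaling from the general streamer literature, stated here as background rather than quoted from the review's text: at a fixed reduced field E/N, discharges at different gas number densities N are rescaled copies of one another.

        \begin{align*}
          \ell &\propto 1/N  &&\text{(characteristic lengths, e.g.\ channel radius)}\\
          \tau &\propto 1/N  &&\text{(characteristic times)}\\
          E    &\propto N    &&\text{(electric fields, so } E/N \text{ is invariant)}\\
          v    &= \ell/\tau \propto 1 &&\text{(propagation velocities are density-independent)}
        \end{align*}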

    Research reports: 1990 NASA/ASEE Summer Faculty Fellowship Program

    Get PDF
    Reports on the research projects performed under the NASA/ASEE Summer Faculty Fellowship Program are presented. The program was conducted by The University of Alabama and MSFC during the period from June 4, 1990 through August 10, 1990. Some of the topics covered include: (1) Space Shuttles; (2) Space Station Freedom; (3) information systems; (4) materials and processes; (5) Space Shuttle main engine; (6) aerospace sciences; (7) mathematical models; (8) mission operations; (9) systems analysis and integration; (10) systems control; (11) structures and dynamics; (12) aerospace safety; and (13) remote sensing.

    Spin-scanning Cameras for Planetary Exploration: Imager Analysis and Simulation

    Get PDF
    In this thesis, a novel approach to spaceborne imaging is investigated, building upon the scan-imaging technique in which camera motion is used to construct an image. The thesis investigates its use with wide-angle (≥90° field of view) optics mounted on spin-stabilised probes for large-coverage imaging of planetary environments, and focusses on two instruments. The first is a descent camera concept for a planetary penetrator. The imaging geometry of the instrument is analysed: image resolution is highest at the penetrator's nadir and lowest at the horizon, whilst any point on the surface is imaged at the highest possible resolution when the camera's altitude equals that point's radius from nadir. Image simulation is used to demonstrate the camera's images and investigate analysis techniques, and a study of stereophotogrammetric measurement of surface topography using pairs of descent images is conducted; measurement accuracies and optimum stereo geometries are presented. The second instrument is EnVisS (Entire Visible Sky), under development for the Comet Interceptor mission. The camera's imaging geometry, coverage and exposure times are calculated and used to model the expected signal and noise in EnVisS observations. It is found that the camera's images will suffer from low signal, and four methods for mitigating this – binning, coaddition, time-delay integration and repeat sampling – are investigated and described. Use of these methods will be essential if images of sufficient signal are to be acquired, particularly for conducting polarimetry, the performance of which is modelled using Monte Carlo simulation. Methods of simulating planetary cameras' images are developed to facilitate the study of both cameras. These methods enable the accurate simulation of planetary surfaces and cometary atmospheres, are based on Python libraries commonly used in planetary science, and are intended to be readily modified and expanded to facilitate the study of a variety of planetary cameras.
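    As a rough illustration of why binning and coaddition recover signal, here is a sketch under a simple shot-noise-plus-read-noise model; the noise model and the example numbers are generic assumptions, not EnVisS values.

        import numpy as np

        def snr(signal_e, read_noise_e, n_coadd=1, bin_px=1):
            """SNR for a pixel under a shot-noise + read-noise model.

            signal_e: photoelectrons per native pixel per exposure.
            bin_px:   pixels summed on-chip (binning adds signal, one read).
            n_coadd:  exposures co-added off-chip (adds signal and reads).
            """
            s = signal_e * bin_px * n_coadd          # total collected signal
            shot = s                                 # shot-noise variance equals signal
            read = (read_noise_e ** 2) * n_coadd     # one read per co-added frame
            return s / np.sqrt(shot + read)

        # Low-signal example: 20 e-/pixel per frame, 5 e- read noise.
        print(snr(20, 5))                 # single frame: SNR ~ 3.0
        print(snr(20, 5, bin_px=4))       # 2x2 binning: 4x signal, one read: SNR ~ 7.8
        print(snr(20, 5, n_coadd=16))     # 16 co-adds: SNR grows ~ sqrt(16) in the shot-noise limit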

    Efficient multi-bounce lightmap creation using GPU forward mapping

    Get PDF
    Computer graphics can nowadays produce images in real time that are hard to distinguish from photos of a real scene. One of the most important aspects in achieving this is the interaction of light with materials in the virtual scene. The lighting computation can be separated into two parts. The first part concerns the direct illumination applied to all surfaces lit by a light source; the related algorithms have improved greatly over the last decades and, together with improvements in graphics hardware, can now produce realistic effects. The second part concerns indirect illumination, which describes the multiple reflections of light from each surface. In reality, light that hits a surface is never fully absorbed; it is instead reflected back into the scene, and even this reflected light is reflected again and again until its energy is depleted. These multiple reflections make indirect illumination very computationally expensive. The first problem regarding indirect illumination is therefore how to simplify it so that it can be computed faster. Another question is where to compute it. It can either be computed in the fixed image created when rendering the scene, or it can be stored in a light map. The drawback of the first approach is that the results need to be recomputed for every frame in which the camera changes. The second approach, on the other hand, has been in use for a long time: once a static scene has been set up, the lighting situation is computed regardless of the time it takes, and the result is stored in a light map. This is a texture atlas for the scene in which each surface point of the virtual scene maps to exactly one point in the 2D texture atlas. When displaying the scene with this approach, the indirect illumination does not need to be recomputed but is simply sampled from the light map. The main contribution of this thesis is a technique that computes the indirect illumination solution for a scene at interactive rates and stores the result in a light atlas for visualization. To achieve this, we overcome two main obstacles. First, we need to quickly project data from any given camera configuration into the parts of the texture that are currently used for visualizing the 3D scene. Since our approach to computing and storing indirect illumination requires a huge number of these projections, each needs to be as fast as possible; we therefore introduce a technique that performs this projection entirely on the graphics card with a single draw call. Second, the reflections of light into the scene need to be computed quickly. We therefore separate the computation into two steps: one that quickly approximates the spreading of light into the scene, and a second that computes the visually smooth final result using the aforementioned projection technique. The final technique computes the indirect illumination at interactive rates even for big scenes. It is furthermore flexible enough to let the user choose between high-quality results and fast computation, which allows the method to be used to quickly edit the lighting situation with high-speed previews and then compute the final result in perfect quality, still at interactive rates.
    The technique introduced for projecting data into the texture atlas is in itself highly flexible, and also allows for fast painting onto objects and projecting data onto them, considering all perspective distortions and self-occlusions.
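    A schematic of the iterative multi-bounce idea, accumulating one reflection per pass into a light map, might look like the following. The atlas and visibility helpers (texels, form_factor) are hypothetical stand-ins for the paper's GPU forward-mapping step, and the gather loop is a generic radiosity-style iteration rather than the thesis's exact method.

        import numpy as np

        def bake_lightmap(texels, direct, albedo, form_factor, n_bounces=3):
            """Accumulate indirect light into a light map, one bounce per pass.

            texels:      texel ids covering the scene's texture atlas.
            direct:      direct illumination per texel (bounce 0).
            albedo:      surface reflectance per texel (all values < 1).
            form_factor: form_factor(i, j) -> geometric coupling between
                         texels i and j, including visibility.
            """
            lightmap = direct.copy()        # bounce 0: direct lighting
            bounce = direct.copy()          # light added in the previous pass
            for _ in range(n_bounces):
                nxt = np.zeros_like(bounce)
                for i in texels:
                    # Gather the previous bounce's light from every other texel.
                    gathered = sum(form_factor(i, j) * bounce[j]
                                   for j in texels if j != i)
                    nxt[i] = albedo[i] * gathered
                lightmap += nxt             # energy shrinks each pass, so the sum converges
                bounce = nxt
            return lightmap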